| text (string, lengths 67–1.03M) | metadata (dict) |
|---|---|
# Notebook from rishiCSE17/py_Maths
Path: System Automation API using Python .ipynb
# Python Basics_____no_output_____## Variables
Python variables are dynamically typed, i.e. no datatype declaration is required to define a variable._____no_output_____
<code>
x=10 # static allocation _____no_output_____print(x) # to print a variable 10
</code>
Sometimes variables are created dynamically at runtime from user input. Python not only creates the new variable on demand, it also assigns the corresponding type. _____no_output_____
<code>
y=input('Enter something : ')
print(y)Enter something : hello there...
hello there...
</code>
To check the type of a variable, use the built-in `type()` function. _____no_output_____
<code>
type(x)_____no_output_____type(y)_____no_output_____
</code>
Any input read in Python is of string type by default. You may use the different __typecasting__ constructors to change it. _____no_output_____
<code>
# without typecasting
y=input('Enter a number... ')
print(f'type of y is {type(y)}') # string formatting
# with typecasting into integer
y=int(input('Enter a number... '))
print(f'type of y is {type(y)}') # string formattingEnter a number... 25
type of y is <class 'str'>
Enter a number... 25
type of y is <class 'int'>
</code>
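Typecasting can fail at runtime: `int()` raises a `ValueError` when the text cannot be parsed. Below is a minimal sketch of guarding against that (the prompt text is only illustrative)._____no_output_____
<code>
# a minimal sketch: wrap the typecast in try/except when input may be malformed
raw = input('Enter a number... ')
try:
    y = int(raw)
    print(f'parsed {y} of type {type(y)}')
except ValueError:
    print(f'"{raw}" is not a valid integer')
</code>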
How do we check whether an existing variable is of a given type? Use `isinstance()`._____no_output_____
<code>
x=10
print(isinstance(x,int))
print(isinstance(x,float))True
False
</code>
## Control Flow Structures _____no_output_____### if-elif-else _____no_output_____
<code>
name = input ('Enter name...')
age = int(input('Enter age... '))
if age in range(0,150):
if age < 18 :
print(f'{name} is a minor')
elif age >= 18 and age < 60:
print(f'{name} is a young person')
else:
print(f'{name} is an elderly person')
else:
print('Invalid age')Enter name...Rishi
Enter age... 30
Rishi is a young person
</code>
### For-loop_____no_output_____
<code>
print('Table Generator\n************************')
num = int(input('Enter a number... '))
for i in range(1,11):
print(f'{num} x {i} \t = {num*i}')Table Generator
************************
Enter a number... 5
5 x 1 = 5
5 x 2 = 10
5 x 3 = 15
5 x 4 = 20
5 x 5 = 25
5 x 6 = 30
5 x 7 = 35
5 x 8 = 40
5 x 9 = 45
5 x 10 = 50
</code>
### While-loop_____no_output_____
<code>
print('Table Generator\n************************')
num = int(input('Enter a number...'))
i = 1
while i<11:
print(f'{num} x {i} \t = {num * i}')
i += 1Table Generator
************************
Enter a number...6
6 x 1 = 6
6 x 2 = 12
6 x 3 = 18
6 x 4 = 24
6 x 5 = 30
6 x 6 = 36
6 x 7 = 42
6 x 8 = 48
6 x 9 = 54
6 x 10 = 60
</code>
## Primitive Data Structures _____no_output_____### List
A list is a heterogeneous, ordered, mutable collection in Python._____no_output_____
<code>
lst = [1,2,'a','b'] #creating a list_____no_output_____lst_____no_output_____type(lst)_____no_output_____loc=2
print(f'item at location {loc} is {lst[loc]}') # reading item by locationitem at location 2 is a
lst[2]='abc' # updating an item in a list
lst_____no_output_____lst.insert(2,'a') # inserting into a specific location
lst_____no_output_____lst.pop(2) # deleting from a specific location
lst_____no_output_____len(lst) # length of a list_____no_output_____lst.reverse() # reversing a list
lst_____no_output_____test_list=[1,5,7,8,10] # sorting a list
test_list.sort(reverse=False)
test_list_____no_output_____
</code>
### Set _____no_output_____
<code>
P = {2,3,5,7} # Set of single digit prime numbers
O = {1,3,5,7} # Set of single digit odd numbers
E = {0,2,4,6,8} # Set of single digit even numbers_____no_output_____type(P)_____no_output_____P.union(O) # odd or prime_____no_output_____P.intersection(E) # even and prime_____no_output_____P-E # Prime but not even _____no_output_____
</code>
Finding the distinct numbers in a list by typecasting it into a set and back: _____no_output_____
<code>
lst = [1,2,4,5,6,2,1,4,5,6,1]
print(lst)
lst = list(set(lst)) # List --> Set --> List
print(lst)[1, 2, 4, 5, 6, 2, 1, 4, 5, 6, 1]
[1, 2, 4, 5, 6]
</code>
### Dictionary
An unordered, keyed collection, i.e. values are indexed by alphanumeric identifiers called keys. _____no_output_____
<code>
import random as rnd
test_d = {
'name' : 'Something', # key : value
'age' : rnd.randint(18,60),
'marks' : {
'Physics' : rnd.randint(0,100),
'Chemistry' : rnd.randint(0,100),
'Mathematics' : rnd.randint(0,100),
'Biology' : rnd.randint(0,100),
}
}_____no_output_____test_d_____no_output_____
</code>
A list of dictionaries forms a tabular structure: each key becomes a column and the corresponding value fills that row at that column (a short sketch follows the next cell). _____no_output_____
<code>
test_d['marks'] # reading a value by key_____no_output_____test_d['name'] = 'anything' # updating a value by its key_____no_output_____test_d_____no_output_____for k in test_d.keys(): # reading values iteratively by its key
print(f'value at key {k} is {test_d[k]} of type {type(test_d[k])}')value at key name is anything of type <class 'str'>
value at key age is 44 of type <class 'int'>
value at key marks is {'Physics': 56, 'Chemistry': 0, 'Mathematics': 10, 'Biology': 58} of type <class 'dict'>
</code>
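Building on the note above that a list of dictionaries forms a tabular structure, here is a minimal sketch (the records are illustrative, not from the data above)._____no_output_____
<code>
# each dictionary is one row; each key is one column
records = [
    {'name': 'a', 'age': 21},
    {'name': 'b', 'age': 34},
]
for row in records:
    print(f"{row['name']}\t{row['age']}")
</code>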
### Tuples
An immutable, ordered collection of heterogeneous data. _____no_output_____
<code>
tup1 = ('a',1,2)_____no_output_____tup1_____no_output_____type(tup1)_____no_output_____tup1[1] # reading from index_____no_output_____tup1[1] = 3 # immutable collection, so updating an item is not possible (raises TypeError)_____no_output_____lst1 = list(tup1) # typecast into list
lst1_____no_output_____
</code>
## Serialization
### Theory
Computer networks are defined as collections of interconnected autonomous systems. The connections (edges) between network devices (nodes) are described by the network's topology, which is modelled using graph-theoretic principles, and the computing models, i.e. the algorithms, are designed on distributed-systems principles. The connections are inherently FIFO (sequential) in nature, so they cannot carry non-linear data structures directly. However, during RPC communication it is not realistic to limit procedures to linear structures only, especially when using objects, since objects are stored in the memory heap. Therefore, data held in a non-linear data structure must be converted into a linear format (a byte stream) before transmission, in such a way that the receiver can reconstruct the source data structure and retrieve the original data. This transformation is called Serialization. All modern programming languages, such as Java and Python, support serialization._____no_output_____### Serializing primitive ADTs _____no_output_____
<code>
test_d = {
'name' : 'Something', # key : value
'age' : rnd.randint(18,60),
'marks' : {
'Physics' : rnd.randint(0,100),
'Chemistry' : rnd.randint(0,100),
'Mathematics' : rnd.randint(0,100),
'Biology' : rnd.randint(0,100),
},
'optionals' : ['music', 'Mechanics']
}_____no_output_____test_d_____no_output_____import json # default serialization library commonly used in RESTful APIs_____no_output_____# Step 1
ser_dat = json.dumps(test_d) # Serialization
print(ser_dat)
print(type(ser_dat)){"name": "Something", "age": 20, "marks": {"Physics": 89, "Chemistry": 55, "Mathematics": 89, "Biology": 25}, "optionals": ["music", "Mechanics"]}
<class 'str'>
# Step 2
bs_data = ser_dat.encode() # Encoding into ByteStream
print(bs_data)
print(type(bs_data))b'{"name": "Something", "age": 20, "marks": {"Physics": 89, "Chemistry": 55, "Mathematics": 89, "Biology": 25}, "optionals": ["music", "Mechanics"]}'
<class 'bytes'>
# Step 3
ser_data2 = bytes.decode(bs_data) # Decoding strings from ByteStream
print(ser_data2)
print(type(ser_data2)){"name": "Something", "age": 20, "marks": {"Physics": 89, "Chemistry": 55, "Mathematics": 89, "Biology": 25}, "optionals": ["music", "Mechanics"]}
<class 'str'>
# Step 4
json.loads(ser_data2) # Deserializing _____no_output_____
</code>
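A quick round-trip check, assuming all values in the dictionary are JSON-compatible (strings, numbers, lists and nested dicts, as above): deserializing the serialized string should give back an equal dictionary._____no_output_____
<code>
# serialize and immediately deserialize, then compare with the original
restored = json.loads(json.dumps(test_d))
print(restored == test_d)  # True for JSON-compatible values
</code>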
### Serializing Objects _____no_output_____
<code>
class MyClass: # defining class
# member variables
name = None # class-level default; the instance value is set in __init__
age = None # class-level default; the instance value is set in __init__
# member functions
def __init__(self,name, age): #__init__() = Constructor
self.name = name #'self' is like 'this' in java
self.age = age
def get_info(self): # returns a dictionary
return {'name' : self.name , 'age' : self.age}
obj1 = MyClass('abc',20) # creates an object
obj1.get_info() # invoke a method on the object_____no_output_____json.dumps(obj1) # a custom object is not JSON-serializable (raises TypeError) _____no_output_____import pickle as pkl # the pickle library is used to serialize objects
bs_data = pkl.dumps(obj1) # serialization + encoding
print(bs_data)
print(type(bs_data))
obj2 = pkl.loads(bs_data) # Decoding + Deserialization
obj2.get_info()b'\x80\x03c__main__\nMyClass\nq\x00)\x81q\x01}q\x02(X\x04\x00\x00\x00nameq\x03X\x03\x00\x00\x00abcq\x04X\x03\x00\x00\x00ageq\x05K\x14ub.'
<class 'bytes'>
</code>
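If a JSON string is needed instead of a pickle byte stream, one option is to serialize the plain dictionary that the object already exposes; a minimal sketch reusing `get_info()`: _____no_output_____
<code>
# the dict view of the object is JSON-serializable even though the object itself is not
json.dumps(obj1.get_info())
</code>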
# Interfacing with the Operating System
In this section we will discuss the various methods a Python script may use to interface with an operating system. We'll first look at local interfacing, i.e. the script running on top of the OS. Later, we'll see how it communicates with a remote computer using networking protocols such as Telnet and SSH. _____no_output_____## Local interfacing _____no_output_____
<code>
import os
cmd = 'dir *.exe' # command to be executed
for i in os.popen(cmd).readlines():
print(i) Volume in drive C has no label.
Volume Serial Number is 720E-DBD8
Directory of C:\Users\sapta\Documents
21/02/2020 20:13 9,916,256 FileZilla_3.46.3_win64_sponsored-setup.exe
05/06/2019 22:32 63,046,477 kodi-18.2-Leia-x64.exe
05/06/2019 01:31 23,130,408 XTUSetup.exe
3 File(s) 96,093,141 bytes
0 Dir(s) 116,005,609,472 bytes free
</code>
To run a batch of commands without capturing their output, use `os.system()`:_____no_output_____
<code>
import os
# write a batch of commands
cmds = ['md test_dir' ,
'cd test_dir' ,
'fsutil file createnew test1.txt 0',
'fsutil file createnew test2.txt 0',
'fsutil file createnew test3.txt 0',
'cd..'
]
# call commands from the batch
for c in cmds:
os.system(c)
# verify
for i in os.popen('dir test*.txt').readlines():
print(i) Volume in drive C has no label.
Volume Serial Number is 720E-DBD8
Directory of C:\Users\sapta\Documents
06/12/2020 15:32 0 test1.txt
06/12/2020 15:32 0 test2.txt
06/12/2020 15:32 0 test3.txt
3 File(s) 0 bytes
0 Dir(s) 116,013,948,928 bytes free
</code>
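Note that each `os.system()` call above runs in its own shell, so the `cd` commands do not carry over between calls (which is why the files appear in the current directory). On Python 3.7+ the `subprocess` module is the usual alternative to `os.popen`/`os.system` when you also want the exit code; a minimal sketch (the command shown is just an example):_____no_output_____
<code>
import subprocess

# run one shell command and capture its output as text
result = subprocess.run('dir test*.txt', shell=True, capture_output=True, text=True)
print(result.returncode)
print(result.stdout)
</code>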
## Remote Interfacing_____no_output_____* Install the Telnet daemon on the Linux host: `sudo apt -y install telnetd`
* Verify the installation using: `nmap localhost`_____no_output_____
<code>
import telnetlib as tn
import getpass
host = '192.168.1.84'
user = input("Enter your remote account: ")
password = getpass.getpass()
tn_session = tn.Telnet(host)
tn_session.read_until(b"login: ")
tn_session.write(user.encode('ascii') + b"\n")
if password:
tn_session.read_until(b"Password: ")
tn_session.write(password.encode('ascii') + b"\n")
tn_session.write(b"ls\n")
print(tn_session.read_all().decode('ascii'))
Enter your remote account: rishi
········
</code>
_____no_output_____Remote config with SSH (Secure Communication)_____no_output_____
<code>
import paramiko
import getpass
host = input('Enter host IP')
port = 22
username = input("Enter your remote account: ")
password = getpass.getpass()
command = "ls"
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin, stdout, stderr = ssh.exec_command(command)
for l in stdout.readlines():
print(l)Enter host IP192.168.1.84
Enter your remote account: rishi
········
distribution-karaf-0.4.4-Beryllium-SR4.tar.gz
download
odl
ShellMon_sock
test
test1.txt
test2.txt
test34.txt
test3.txt
test5.txt
test_dir
testfile2.txt
testfile.txt
test.sh
</code>
_____no_output_____# Home Tasks_____no_output_____1. Write a python API that runs shell scripts on demand. The shell scripts must be present on the system. The API must take the name of the script as input and display output from the script. Create at least 3 shell scripts of your choice to demonstrate.
2. Write a python API that automatically issues a DHCP request for dynamic IP allocation on a given interface, if it doesn't have an IP address.
3. Write a python API that organises files.
* The API first takes a directory as input on which it will run the organization
* Thereafter, it asks for a list of pairs (filetype, destination_folder).
* For example, [('mp3','music'),('png','images'),('jpg','images'),('mov','videos')] means all '.mp3' files will be moved to the 'music' directory, and likewise for images and videos. If the directories do not exist, the API must create them.
4. Write a python API that remotely monitors number of processes running on a system over a given period. _____no_output_____# Course Suggestion
https://www.linkedin.com/learning/python-essential-training-2/_____no_output_____
|
{
"repository": "rishiCSE17/py_Maths",
"path": "System Automation API using Python .ipynb",
"matched_keywords": [
"biology"
],
"stars": 1,
"size": 36851,
"hexsha": "cb836722715b026e1c8970aad4e9eb6a4a8fc48a",
"max_line_length": 1514,
"avg_line_length": 25.4847856155,
"alphanum_fraction": 0.5092941847
}
|
# Notebook from pritchardlabatpsu/cga
Path: notebooks/run04-L200only_reg_rf_boruta.ipynb
<code>
from ceres_infer.session import workflow
from ceres_infer.models import model_infer_ens_custom/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tqdm/std.py:668: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
from pandas import Panel
Using TensorFlow backend.
/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
import logging
logging.basicConfig(level=logging.INFO)_____no_output_____params = {
# directories
'outdir_run': '../out/20.0909 Lx/L200only_reg_rf_boruta/', # output dir for the run
'outdir_modtmp': '../out/20.0909 Lx/L200only_reg_rf_boruta/model_perf/', # intermediate files for each model
'indir_dmdata_Q3': '../out/20.0817 proc_data/gene_effect/dm_data.pkl', # pickled preprocessed DepMap Q3 data
'indir_dmdata_external': '../out/20.0817 proc_data/gene_effect/dm_data_Q4.pkl', # pickled preprocessed external DepMap (Q4) data
'indir_genesets': '../data/gene_sets/',
'indir_landmarks': '../out/19.1013 tight cluster/landmarks_n200_k200.csv', # csv file of landmarks [default: None]
# notes
'session_notes': 'L200 landmarks only; regression with random forest-boruta lite iteration',
# data
'external_data_name': 'p19q4', # name of external validation dataset
'opt_scale_data': False, # scale input data True/False
'opt_scale_data_types': '\[(?:RNA-seq|CN)\]', # data source types to scale; in regexp
'model_data_source': ['CERES_Lx'],
'anlyz_set_topN': 10, # for analysis set how many of the top features to look at
'perm_null': 1000, # number of samples used to build the null distribution, for corr
'useGene_dependency': False, # whether to use CERES gene dependency (true) or gene effect (false)
'scope': 'differential', # scope for which target genes to run on; list of gene names, or 'all', 'differential'
# model
'model_name': 'rf',
'model_params': {'n_estimators':1000,'max_depth':15,'min_samples_leaf':5,'max_features':'log2'},
'model_paramsgrid': {},
'model_pipeline': model_infer_ens_custom,
'pipeline_params': {'sf_iterThresholds': [], 'sf_topK': None},
# pipeline
'parallelize': False, # parallelize workflow
'processes': 1, # number of cpu processes to use
# analysis
'metric_eval': 'score_test', # metric in model_results to evaluate, e.g. score_test, score_oob
'thresholds': {'score_rd10': 0.1, # score of reduced model - threshold for filtering
'recall_rd10': 0.95}, # recall of reduced model - threshold for filtering
'min_gs_size': 4 # minimum gene set size, to be derived
}_____no_output_____wf = workflow(params)
pipeline = ['load_processed_data', 'infer']
wf.create_pipe(pipeline)
wf.run_pipe()INFO:root:Loading preprocessed data...
INFO:root:Adding landmarks...
INFO:root:Running model building and inference...
100%|██████████| 521/521 [12:32:26<00:00, 86.65s/it]
wf = workflow(params)
pipeline = ['load_processed_data', 'load_model_results', 'analyze', 'analyze_filtered', 'derive_genesets']
wf.create_pipe(pipeline)
wf.run_pipe()INFO:root:Loading preprocessed data...
INFO:root:Adding landmarks...
INFO:root:Loading model results...
INFO:root:Analyzing model results...
/Users/boyangzhao/Dropbox/Industry/Quantalarity/client Penn/proj_ceres/github/cnp_dev/src/ceres_infer/analyses.py:303: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
feat_summary = varExp_noNeg.groupby('target')['target', 'score_rd', 'score_full'].first()
/Users/boyangzhao/Dropbox/Industry/Quantalarity/client Penn/proj_ceres/github/cnp_dev/src/ceres_infer/analyses.py:36: MatplotlibDeprecationWarning: normalize=None does not normalize if the sum is less than 1 but this behavior is deprecated since 3.3 until two minor releases later. After the deprecation period the default value will be normalize=True. To prevent normalization pass normalize=False
plt.pie(df_counts.values, labels=labels, autopct=autopct, colors=colors)
INFO:root:Analyzing filtered results...
/Users/boyangzhao/Dropbox/Industry/Quantalarity/client Penn/proj_ceres/github/cnp_dev/src/ceres_infer/analyses.py:303: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
feat_summary = varExp_noNeg.groupby('target')['target', 'score_rd', 'score_full'].first()
/Users/boyangzhao/Dropbox/Industry/Quantalarity/client Penn/proj_ceres/github/cnp_dev/src/ceres_infer/analyses.py:36: MatplotlibDeprecationWarning: normalize=None does not normalize if the sum is less than 1 but this behavior is deprecated since 3.3 until two minor releases later. After the deprecation period the default value will be normalize=True. To prevent normalization pass normalize=False
plt.pie(df_counts.values, labels=labels, autopct=autopct, colors=colors)
INFO:root:Deriving gene sets...
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
WARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
</code>
|
{
"repository": "pritchardlabatpsu/cga",
"path": "notebooks/run04-L200only_reg_rf_boruta.ipynb",
"matched_keywords": [
"RNA-seq"
],
"stars": null,
"size": 14075,
"hexsha": "cb840f6d1492ed45b0d7faf60ffe5dd3ff8d8411",
"max_line_length": 410,
"avg_line_length": 67.3444976077,
"alphanum_fraction": 0.6789342806
}
|
# Notebook from jswelling/CMU-MS-DAS-Vis-S22
Path: notebooks/movie_frame_generator.ipynb
<code>
import matplotlib.pyplot as plt
import numpy as np
from os import mkdir
from os.path import join_____no_output_____bov_counter = 0
def writeBOV(g):
"""g is presumed to be a numpy 2D array of doubles"""
global bov_counter
bovNm = 'file_%03d.bov' % bov_counter
dataNm = 'file_%03d.doubles' % bov_counter
bov_counter += 1
try:
mkdir('frames')
except FileExistsError:
pass
with open(join('frames', bovNm), 'w') as f:
f.write('TIME: %g\n' % float(bov_counter))
f.write('DATA_FILE: %s\n' % dataNm)
f.write('DATA_SIZE: %d %d 1\n' % g.shape)
f.write('DATA_FORMAT: DOUBLE\n')
f.write('VARIABLE: U\n')
f.write('DATA_ENDIAN: LITTLE\n')
f.write('CENTERING: ZONAL\n')
f.write('BRICK_ORIGIN: 0. 0. 0.\n')
f.write('BRICK_SIZE: 1.0 1.0 1.0\n')
with open(join('frames', dataNm), 'w') as f:
g.T.tofile(f) # BOV format expects Fortran order
_____no_output_____#
# Scaling constants
#
# You'll have to pick a value for dt which produces stable evolution
# for your stencil!
XDIM = 101
YDIM = 101
tMax = 5.0
dx = 0.1
dy = 0.1
dt = 0.025 # FIX ME!
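# Assuming the standard CFL condition for this explicit central-difference scheme,
# stability requires vel * dt * sqrt(1/dx**2 + 1/dy**2) <= 1,
# i.e. dt <= ~0.0707 for dx = dy = 0.1 and vel = 1.0, so dt = 0.025 satisfies it.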
vel = 1.0
xMin = -(XDIM//2)*dx
yMin = -(YDIM//2)*dy_____no_output_____def initialize():
"""Create the grid and apply the initial condition"""
U = np.zeros([YDIM, XDIM]) # We just use this for shape
ctrX= 0.0
ctrY= 0.0
sigma= 0.25
maxU= 5.0
grid = np.indices(U.shape)
x = (grid[1] * dx) + xMin # a full grid of X coordinates
y = (grid[0] * dy) + yMin # a full grid of Y coordinates
distSqr = np.square(x - ctrX) + np.square(y - ctrY)
U = maxU * np.exp(-distSqr/(sigma*sigma))
return U_____no_output_____# test writeBOV
bov_counter = 0
writeBOV(initialize())_____no_output_____def doTimeStep(U, UOld):
"""
Step your solution forward in time. You need to calculate
UNew in the grid area [1:-1, 1:-1]. The 'patch the boundaries'
bit below will take care of the edges at i=0, i=XDIM-1, j=0,
and j=YDIM-1. Note that the array indices are ordered like U[j][i]!
"""
xRatioSqr= (dt*dt*vel*vel)/(dx*dx)
yRatioSqr= (dt*dt*vel*vel)/(dy*dy)
UNew = np.empty_like(U)
dxxterm = xRatioSqr * (U[1:-1, 2:] + U[1:-1, 0:-2] - 2*U[1:-1, 1:-1])
dyyterm = yRatioSqr * (U[2:, 1:-1] + U[0:-2, 1:-1] - 2*U[1:-1, 1:-1])
UNew[1:-1, 1:-1] = 2*U[1:-1,1:-1] + (dxxterm + dyyterm) - UOld[1:-1, 1:-1]
# Patch the boundaries by copying the adjacent interior values (zero-gradient edges).
UNew[:, 0] = UNew[:, 1]
UNew[:, -1] = UNew[:, -2]
UNew[0, :] = UNew[1, :]
UNew[-1, :] = UNew[-2, :]
return UNew_____no_output_____def timeToOutput(t, count):
"""A little test to tell how often to dump output"""
return (count % 4 == 0)_____no_output_____
U = initialize()
UOld = np.copy(U)
t = 0.0
count = 0
while t < tMax:
if timeToOutput(t, count):
writeBOV(U)
print ('Output at t = %s: min = %f, max = %f'
% (t, np.amin(U), np.amax(U)))
UNew = doTimeStep(U, UOld)
UOld = U
U = UNew
t += dt
count += 1_____no_output_____
</code>
|
{
"repository": "jswelling/CMU-MS-DAS-Vis-S22",
"path": "notebooks/movie_frame_generator.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 5648,
"hexsha": "cb849f33bd5b262e6e9710a9085d140a4f753f2a",
"max_line_length": 87,
"avg_line_length": 26.641509434,
"alphanum_fraction": 0.4654745042
}
|
# Notebook from Skylion007/jupytext
Path: tests/notebooks/ipynb_coconut/coconut_homepage_demo.ipynb
Taken from [coconut-lang.org](coconut-lang.org)_____no_output_____pipeline-style programming_____no_output_____
<code>
"hello, world!" |> printhello, world!
</code>
prettier lambdas_____no_output_____
<code>
x -> x ** 2_____no_output_____
</code>
partial application_____no_output_____
<code>
range(10) |> map$(pow$(?, 2)) |> list_____no_output_____
</code>
pattern-matching_____no_output_____
<code>
match [head] + tail in [0, 1, 2, 3]:
print(head, tail)0 [1, 2, 3]
</code>
destructuring assignment_____no_output_____
<code>
{"list": [0] + rest} = {"list": [0, 1, 2, 3]}_____no_output_____
</code>
infix notation_____no_output_____
<code>
# 5 `mod` 3 == 2_____no_output_____
</code>
operator functions_____no_output_____
<code>
product = reduce$(*)_____no_output_____
</code>
function composition_____no_output_____
<code>
# (f..g..h)(x, y, z)_____no_output_____
</code>
lazy lists_____no_output_____
<code>
# (| first_elem() |) :: rest_elems()_____no_output_____
</code>
parallel programming_____no_output_____
<code>
range(100) |> parallel_map$(pow$(2)) |> list_____no_output_____
</code>
tail call optimization_____no_output_____
<code>
def factorial(n, acc=1):
case n:
match 0:
return acc
match _ is int if n > 0:
return factorial(n-1, acc*n)_____no_output_____
</code>
algebraic data types_____no_output_____
<code>
data Empty()
data Leaf(n)
data Node(l, r)
def size(Empty()) = 0
addpattern def size(Leaf(n)) = 1
addpattern def size(Node(l, r)) = size(l) + size(r)_____no_output_____
</code>
and much more!
Like what you see? Don't forget to star Coconut on GitHub!_____no_output_____
|
{
"repository": "Skylion007/jupytext",
"path": "tests/notebooks/ipynb_coconut/coconut_homepage_demo.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 8104,
"hexsha": "cb854a80c5881fc9b31778b746ad12a59894de95",
"max_line_length": 64,
"avg_line_length": 20.1592039801,
"alphanum_fraction": 0.4664363277
}
|
# Notebook from ManchesterBioinference/GrandPrix
Path: notebooks/McDavid.ipynb
# Applying GrandPrix on the cell cycle single cell nCounter data of PC3 human prostate cancer
_Sumon Ahmed_, 2017, 2018
This notebook describes how GrandPrix, with an informative prior over the latent space, can be used to infer cell cycle stages from the single-cell nCounter data of the PC3 human prostate cancer cell line._____no_output_____
<code>
import pandas as pd
import numpy as np
from GrandPrix import GrandPrix_____no_output_____
</code>
# Data description
<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4102402/" target="_blank">McDavid et al. (2014)</a> assayed the expression profiles of the PC3 human prostate cancer cell line. They identified the cells in the G0/G1, S and G2/M cell cycle stages. The cells identified as G0/G1, S and G2/M have been mapped to capture times of 1, 2 and 3, respectively. Due to the additional challenge of optimizing pseudotime parameters for periodic data, the random pseudotimes with the largest log likelihood for estimating cell cycle peak time points have been used to initialize the prior.
The __McDavidtrainingData.csv__ file contains the expression profiles of the top __56__ differentially expressed genes in __361__ cells from the PC3 human prostate cancer cell line which have been used in the inference.
The __McDavidCellMeta.csv__ file contains the additional information of the data such as capture time of each cells, different initializations of pseudotimes, etc._____no_output_____
<code>
Y = pd.read_csv('../data/McDavid/McDavidtrainingData.csv', index_col=[0]).T
mData = pd.read_csv('../data/McDavid/McDavidCellMeta.csv', index_col=[0])_____no_output_____N, D = Y.shape
print('Time Points: %s, Genes: %s'%(N, D))Time Points: 361, Genes: 56
mData.head()_____no_output_____
</code>
## Model with Informative prior
Capture time points have been used as the informative prior information over pseudotime. The following arguments are passed to initialize the model.
<!--
- __data__: _array-like, shape N x D_. Observed data, where N is the number of time points and D is the number of genes.
- __latent_prior_mean__: _array-like, shape N_ x 1, _optional (default:_ __0__). > Mean of the prior distribution over pseudotime.
- __latent_prior_var__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Variance of the prior distribution over pseudotime.
- __latent_mean__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Initial mean values of the approximate posterior distribution over pseudotime.
- __latent_var__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Initial variance of the approximate posterior distribution over pseudotime.
- __kernel:__ _optional (default: RBF kernel with lengthscale and variance set to 1.0)_. Covariance function to define the mapping from the latent space to the data space in Gaussian process prior.
-->
- __data__: _array-like, shape N x D_. Observed data, where N is the number of time points and D is the number of genes.
- __latent_prior_mean__: _array-like, shape N_ x 1. Mean of the prior distribution over pseudotime.
- __latent_prior_var__: _array-like, shape N_ x 1. Variance of the prior distribution over pseudotime.
- __latent_mean__: _array-like, shape N_ x 1. Initial mean values of the approximate posterior distribution over pseudotime.
<!--
- __latent_var__: _array-like, shape N_ x 1. Initial variance of the approximate posterior distribution over pseudotime.
-->
- __kernel__: Covariance function to define the mapping from the latent space to the data space in the Gaussian process prior. Here we have used the standard periodic covariance function <a href="http://www.ics.uci.edu/~welling/teaching/KernelsICS273B/gpB.pdf" target="_blank">(MacKay, 1998)</a>, to restrict the Gaussian Process (GP) prior to periodic functions only (its usual form is written out after this list).
- __predict__: _int_. The number of new points. The mean of the expression level and associated variance of these new data points will be predicted. _____no_output_____
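For reference, the periodic covariance in the MacKay parameterisation assumed here (which presumably corresponds to the `ls` and `var` settings passed to the kernel below) can be written as
$$k(t, t') = \sigma^2 \exp\!\left(-\frac{2\,\sin^2\big(\pi (t - t')/p\big)}{\ell^2}\right),$$
where $\sigma^2$ is the variance, $\ell$ the lengthscale and $p$ the period._____no_output_____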
<code>
np.random.seed(10)
sigma_t = .5
prior_mean = mData['prior'].values[:, None]
init_mean = mData['capture.orig'].values[:, None]
X_mean = [init_mean[i, 0] + sigma_t * np.random.randn(1) for i in range(0, N)] # initialisation of latent_mean _____no_output_____mp = GrandPrix.fit_model(data=Y.values, n_inducing_points = 20, latent_prior_mean=prior_mean, latent_prior_var=np.square(sigma_t),
latent_mean=np.asarray(X_mean), kernel={'name':'Periodic', 'ls':5.0, 'var':1.0}, predict=100)/Users/mqbpwsae/newInstall/GPflow_1_1_0/gpflow/expectations_quadrature.py:65: UserWarning: Quadrature is used to calculate the expectation. This means that an analytical implementations is not available for the given combination.
warnings.warn("Quadrature is used to calculate the expectation. This means that "
pseudotimes = mp[0]
posterior_var = mp[1]
mean = mp[2] # mean of predictive distribution
var = mp[3] # variance of predictive distribution_____no_output_____Xnew = np.linspace(min(pseudotimes), max(pseudotimes), 100)[:, None]_____no_output_____
</code>
# Visualize the results
The expression profiles of some interesting genes are plotted against the estimated pseudotime. Each point corresponds to the expression of a particular gene in a cell.
The points are coloured based on cell cycle stages according to <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4102402/" target="_blank" style="text-decoration:none;">McDavid et al. (2014)</a>. The circular horizontal axis (where both the first and last labels are G2/M) represents the periodicity realized by the method in pseudotime inference.
The solid black line is the posterior predicted mean of expression profiles while the grey ribbon depicts the 95% confidence interval.
The vertical dotted lines are the CycleBase peak times for the selected genes.
To see the expression profiles of a different set of genes, a list containing the gene names should be passed to the function `plot_genes`._____no_output_____
<code>
selectedGenes = ['CDC6', 'MKI67', 'NUF2', 'PRR11', 'PTTG1', 'TPX2']_____no_output_____geneProfiles = pd.DataFrame({selectedGenes[i]: Y[selectedGenes[i]] for i in range(len(selectedGenes))})_____no_output_____
</code>
## Binding gene names to the predictive means and variances_____no_output_____
<code>
geneNames = Y.columns.values
name = [_ for _ in geneNames]
posterior_mean = pd.DataFrame(mean, columns=name)
posterior_var = pd.DataFrame(var, columns=name)_____no_output_____
</code>
## geneData description
The __"McDavidgene.csv"__ file contains gene specific information such as peak time, etc. for the top 56 differentially expressed genes. _____no_output_____
<code>
geneData = pd.read_csv('../data/McDavid/McDavid_gene.csv', index_col=0).T
_____no_output_____geneData.head()_____no_output_____%matplotlib inline
from utils import plot_genes
cpt = mData['capture.orig'].values
plot_genes(pseudotimes, geneProfiles, geneData, cpt, prediction=(Xnew, posterior_mean, posterior_var))_____no_output_____
</code>
|
{
"repository": "ManchesterBioinference/GrandPrix",
"path": "notebooks/McDavid.ipynb",
"matched_keywords": [
"gene expression"
],
"stars": 14,
"size": 261096,
"hexsha": "cb89c612a3afdaae0695e6105c52972f8d82550e",
"max_line_length": 239578,
"avg_line_length": 405.4285714286,
"alphanum_fraction": 0.9136754297
}
|
# Notebook from klavinslab/coral
Path: docs/tutorial/sequences.ipynb
# Sequences_____no_output_____## `sequence.DNA`
`coral.DNA` is the core data structure of `coral`. If you are already familiar with core python data structures, it mostly acts like a container similar to lists or strings, but also provides further object-oriented methods for DNA-specific tasks, like reverse complementation. Most design functions in `coral` return a `coral.DNA` object or something that contains a `coral.DNA` object (like `coral.Primer`). In addition, there are related `coral.RNA` and `coral.Peptide` objects for representing RNA and peptide sequences and methods for converting between them.
To get started with `coral.DNA`, import `coral`:_____no_output_____
<code>
import coral as cor_____no_output_____
</code>
### Your first sequence
Let's jump right into things. Let's make a sequence that's the first 30 bases of gfp from *A. victoria*. To initialize a sequence, you feed it a string of DNA characters._____no_output_____
<code>
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
display(example_dna)_____no_output_____
</code>
A few things just happened behind the scenes. First, the input was checked to make sure it's DNA (A, T, G, and C). For now, it supports only unambiguous letters - no N, Y, R, etc. Second, the internal representation is converted to an uppercase string - this way, DNA is displayed uniformly and functional elements (like annealing and overhang regions of primers) can be delineated using case. If you input a non-DNA sequence, a `ValueError` is raised._____no_output_____For the most part, a `sequence.DNA` instance acts like a python container and many string-like operations work._____no_output_____
<code>
# Extract the first three bases
display(example_dna[0:3])_____no_output_____# Extract the last seven bases
display(example_dna[-7:])_____no_output_____# Reverse a sequence
display(example_dna[::-1])_____no_output_____# Grab every other base starting at index 0
display(example_dna[::2])_____no_output_____# Is the sequence 'AT' in our sequence? How about 'AC'?
print "'AT' is in our sequence: {}.".format("AT" in example_dna)
print "'ATT' is in our sequence: {}.".format("ATT" in example_dna)'AT' is in our sequence: True.
'ATT' is in our sequence: False.
</code>
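Going back to the input validation mentioned earlier, here is a quick sketch, assuming `cor.DNA` raises `ValueError` for non-DNA characters exactly as described above._____no_output_____
<code>
# non-DNA characters are rejected at construction time
try:
    cor.DNA('atxq')
except ValueError as err:
    print('ValueError raised: {}'.format(err))
</code>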
Several other common special methods and operators are defined for sequences - you can concatenate DNA (so long as it isn't circular) using `+`, repeat linear sequences using `*` with an integer, check for equality with `==` and `!=` (note: features, not just sequences, must be identical), check the length with `len(dna_object)`, etc. (a short sketch of these operators follows the next code cell)._____no_output_____### Simple sequences - methods
In addition to slicing, `sequence.DNA` provides methods for common molecular manipulations. For example, reverse complementing a sequence is a single call:_____no_output_____
<code>
example_dna.reverse_complement()_____no_output_____
</code>
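A short sketch of the operators mentioned earlier, assuming they behave as described (concatenation and repetition on linear sequences, equality, and length)._____no_output_____
<code>
# concatenation, repetition, equality and length on a linear sequence
print(example_dna + example_dna)
print(example_dna * 2)
print(example_dna == example_dna.copy())
print(len(example_dna))
</code>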
An extremely important method is the `.copy()` method. It may seem redundant to have an entire function for copying a sequence - why not just assign a `sequence.DNA` object to a new variable? As in most high-level languages, python does not actually copy entire objects in memory when assignment happens - it just adds another reference to the same data. The short of it is that the very common operation of generating many new variants of a sequence, or copying a sequence, requires the use of the `.copy()` method. For example, if you want to generate a new list of variants where an 'A' is substituted one at a time at each position of the sequence, using `.copy()` returns the correct result (the second loop below), while directly editing example_dna has horrible consequences (the first loop: the edits build up, as they all modify the same piece of data sequentially):_____no_output_____
<code>
example_dna.copy()_____no_output_____# Incorrect way (editing shared + mutable sequence):
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
variant_list = []
for i, base in enumerate(example_dna):
variant = example_dna
variant.top[i] = 'A'
variant.bottom[i] = 'T'
variant_list.append(variant)
print [str(x) for x in variant_list]
print
# Correct way (copy mutable sequence, then edit):
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
variant_list = []
for i, base in enumerate(example_dna):
variant = example_dna.copy()
variant.top[i] = 'A'
variant.bottom[i] = 'T'
variant_list.append(variant)
print [str(x) for x in variant_list]['AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA']
['ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'AAGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATAAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAATAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGAAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAAGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGAAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAAAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAAAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAAATTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACATTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTATTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTATCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTACACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTAACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA', 'ATGAGTAAAGGAGAAGAACTTTTCAATGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACAGGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTAGA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGAA', 'ATGAGTAAAGGAGAAGAACTTTTCACTGGA']
</code>
An important fact about `sequence.DNA` methods and slicing is that none of the operations modify the object directly (they don't mutate their parent) - if we look at example_dna, it has not been reverse-complemented itself. Running `example_dna.reverse_complement()` outputs a new sequence, so if you want to keep the change you need to assign the result to a variable:_____no_output_____
<code>
revcomp_dna = example_dna.reverse_complement()
display(example_dna)
display(revcomp_dna)_____no_output_____
</code>
You also have direct access to important attributes of a `sequence.DNA` object. The following are examples of how to get important sequences or information about a sequence._____no_output_____
<code>
# The top strand - a simple python string in the 5' -> 3' orientation.
example_dna.top_____no_output_____# The bottom strand - another python string, also in the 5' -> 3' orientation.
example_dna.bottom_____no_output_____# Sequences are double stranded, or 'ds' by default.
# This is a directly accessible attribute, not a method, so () is not required.
example_dna.ds_____no_output_____# DNA can be linear or circular - check the boolean `circular` attribute.
example_dna.circular_____no_output_____# You can switch between topologies using the .circularize and .linearize methods.
# Circular DNA has different properties:
# 1) it can't be concatenated to
# 2) sequence searches using .locate will search over the current origin (e.g. from -10 to +10 for a 20-base sequence).
circular_dna = example_dna.circularize()
circular_dna.circular_____no_output_____# Linearization is more complex - you can choose the index at which to linearize a circular sequence.
# This simulates a precise double stranded break at the index of your choosing.
# The following example shows the difference between linearizing at index 0 (default) versus index 2
# (python 0-indexes, so index 2 = 3rd base, i.e. 'g' in 'atg')
print circular_dna.linearize()
print
print circular_dna.linearize(2)ATGAGTAAAGGAGAAGAACTTTTCACTGGA
GAGTAAAGGAGAAGAACTTTTCACTGGAAT
# Sometimes you just want to rotate the sequence around - i.e. switch the top and bottom strands.
# For this, use the .flip() method
example_dna.flip()_____no_output_____
</code>
|
{
"repository": "klavinslab/coral",
"path": "docs/tutorial/sequences.ipynb",
"matched_keywords": [
"RNA"
],
"stars": 34,
"size": 15368,
"hexsha": "cb89f7ef8d85c103e72e0e7c51e0f1c429b11261",
"max_line_length": 1031,
"avg_line_length": 30.1925343811,
"alphanum_fraction": 0.6150442478
}
|
# Notebook from mirokuru/ml_toolkit
Path: nlp/bag-of-words/my_natural_language_processing_svm.ipynb
# Natural Language Processing_____no_output_____## Importing the libraries_____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd_____no_output_____
</code>
## Importing the dataset_____no_output_____
<code>
dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter = '\t', quoting = 3)_____no_output_____
</code>
## Cleaning the texts_____no_output_____
<code>
import regex
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
corpus = []
for i in range(0, len(dataset)):
review = regex.sub('[^\p{L}]', ' ', dataset['Review'][i])
review = review.lower()
review = review.split()
ps = PorterStemmer()
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')
review = [ps.stem(word) for word in review if not word in set(all_stopwords)]
review = ' '.join(review)
corpus.append(review)[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Admin\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
print(corpus)['wow love place', 'crust not good', 'not tasti textur nasti', 'stop late may bank holiday rick steve recommend love', 'select menu great price', 'get angri want damn pho', 'honeslti tast fresh', 'potato like rubber could tell made ahead time kept warmer', 'fri great', 'great touch', 'servic prompt', 'would not go back', 'cashier care ever say still end wayyy overpr', 'tri cape cod ravoli chicken cranberri mmmm', 'disgust pretti sure human hair', 'shock sign indic cash', 'highli recommend', 'waitress littl slow servic', 'place not worth time let alon vega', 'not like', 'burritto blah', 'food amaz', 'servic also cute', 'could care less interior beauti', 'perform', 'right red velvet cake ohhh stuff good', 'never brought salad ask', 'hole wall great mexican street taco friendli staff', 'took hour get food tabl restaur food luke warm sever run around like total overwhelm', 'worst salmon sashimi', 'also combo like burger fri beer decent deal', 'like final blow', 'found place accid could not happier', 'seem like good quick place grab bite familiar pub food favor look elsewher', 'overal like place lot', 'redeem qualiti restaur inexpens', 'ampl portion good price', 'poor servic waiter made feel like stupid everi time came tabl', 'first visit hiro delight', 'servic suck', 'shrimp tender moist', 'not deal good enough would drag establish', 'hard judg whether side good gross melt styrofoam want eat fear get sick', 'posit note server attent provid great servic', 'frozen puck disgust worst peopl behind regist', 'thing like prime rib dessert section', 'bad food damn gener', 'burger good beef cook right', 'want sandwich go firehous', 'side greek salad greek dress tasti pita hummu refresh', 'order duck rare pink tender insid nice char outsid', 'came run us realiz husband left sunglass tabl', 'chow mein good', 'horribl attitud toward custom talk one custom enjoy food', 'portion huge', 'love friendli server great food wonder imagin menu', 'heart attack grill downtown vega absolut flat line excus restaur', 'not much seafood like string pasta bottom', 'salad right amount sauc not power scallop perfectli cook', 'rip banana not rip petrifi tasteless', 'least think refil water struggl wave minut', 'place receiv star appet', 'cocktail handmad delici', 'definit go back', 'glad found place', 'great food servic huge portion give militari discount', 'alway great time do gringo', 'updat went back second time still amaz', 'got food appar never heard salt batter fish chewi', 'great way finish great', 'deal includ tast drink jeff went beyond expect', 'realli realli good rice time', 'servic meh', 'took min get milkshak noth chocol milk', 'guess known place would suck insid excalibur use common sens', 'scallop dish quit appal valu well', 'time bad custom servic', 'sweet potato fri good season well', 'today second time lunch buffet pretti good', 'much good food vega feel cheat wast eat opportun go rice compani', 'come like experienc underwhelm relationship parti wait person ask break', 'walk place smell like old greas trap other eat', 'turkey roast beef bland', 'place', 'pan cake everyon rave tast like sugari disast tailor palat six year old', 'love pho spring roll oh yummi tri', 'poor batter meat ratio made chicken tender unsatisfi', 'say food amaz', 'omelet die', 'everyth fresh delici', 'summari larg disappoint dine experi', 'like realli sexi parti mouth outrag flirt hottest person parti', 'never hard rock casino never ever step forward', 'best breakfast buffet', 'say bye bye tip ladi', 'never go', 'back', 
'food arriv quickli', 'not good', 'side cafe serv realli good food', 'server fantast found wife love roast garlic bone marrow ad extra meal anoth marrow go', 'good thing waiter help kept bloddi mari come', 'best buffet town price cannot beat', 'love mussel cook wine reduct duck tender potato dish delici', 'one better buffet', 'went tigerlilli fantast afternoon', 'food delici bartend attent person got great deal', 'ambienc wonder music play', 'go back next trip', 'sooooo good', 'real sushi lover let honest yama not good', 'least min pass us order food arriv busi', 'realli fantast thai restaur definit worth visit', 'nice spici tender', 'good price', 'check', 'pretti gross', 'better atmospher', 'kind hard mess steak', 'although much like look sound place actual experi bit disappoint', 'know place manag serv blandest food ever eaten prepar indian cuisin', 'worst servic boot least worri', 'servic fine waitress friendli', 'guy steak steak love son steak best worst place said best steak ever eaten', 'thought ventur away get good sushi place realli hit spot night', 'host staff lack better word bitch', 'bland not like place number reason want wast time bad review leav', 'phenomen food servic ambianc', 'return', 'definit worth ventur strip pork belli return next time vega', 'place way overpr mediocr food', 'penn vodka excel', 'good select food includ massiv meatloaf sandwich crispi chicken wrap delish tuna melt tasti burger', 'manag rude', 'delici nyc bagel good select cream chees real lox caper even', 'great subway fact good come everi subway not meet expect', 'serious solid breakfast', 'one best bar food vega', 'extrem rude realli mani restaur would love dine weekend vega', 'drink never empti made realli great menu suggest', '', 'waiter help friendli rare check us', 'husband ate lunch disappoint food servic', 'red curri much bamboo shoot tasti', 'nice blanket moz top feel like done cover subpar food', 'bathroom clean place well decor', 'menu alway chang food qualiti go servic extrem slow', 'servic littl slow consid serv peopl server food come slow pace', 'give thumb', 'watch waiter pay lot attent tabl ignor us', 'fiancé came middl day greet seat right away', 'great restaur mandalay bay', 'wait forti five minut vain', 'crostini came salad stale', 'highlight great qualiti nigiri', 'staff friendli joint alway clean', 'differ cut piec day still wonder tender well well flavor', 'order voodoo pasta first time realli excel pasta sinc go gluten free sever year ago', 'place good', 'unfortun must hit bakeri leftov day everyth order stale', 'came back today sinc reloc still not impress', 'seat immedi', 'menu divers reason price', 'avoid cost', 'restaur alway full never wait', 'delici', 'place hand one best place eat phoenix metro area', 'go look good food', 'never treat bad', 'bacon hella salti', 'also order spinach avocado salad ingredi sad dress liter zero tast', 'realli vega fine dine use right menu hand ladi price list', 'waitress friendli', 'lordi khao soi dish not miss curri lover', 'everyth menu terrif also thrill made amaz accommod vegetarian daughter', 'perhap caught night judg review not inspir go back', 'servic leav lot desir', 'atmospher modern hip maintain touch cozi', 'not weekli haunt definit place come back everi', 'liter sat minut one ask take order', 'burger absolut flavor meat total bland burger overcook charcoal flavor', 'also decid not send back waitress look like verg heart attack', 'dress treat rude', 'probabl dirt', 'love place hit spot want someth healthi not lack quantiti flavor', 
'order lemon raspberri ice cocktail also incred', 'food suck expect suck could imagin', 'interest decor', 'realli like crepe station', 'also serv hot bread butter home made potato chip bacon bit top origin good', 'watch prepar delici food', 'egg roll fantast', 'order arriv one gyro miss', 'salad wing ice cream dessert left feel quit satisfi', 'not realli sure joey vote best hot dog valley reader phoenix magazin', 'best place go tasti bowl pho', 'live music friday total blow', 'never insult felt disrespect', 'friendli staff', 'worth drive', 'heard good thing place exceed everi hope could dream', 'food great serivc', 'warm beer help', 'great brunch spot', 'servic friendli invit', 'good lunch spot', 'live sinc first last time step foot place', 'worst experi ever', 'must night place', 'side delish mix mushroom yukon gold pure white corn beateou', 'bug never show would given sure side wall bug climb kitchen', 'minut wait salad realiz come time soon', 'friend love salmon tartar', 'go back', 'extrem tasti', 'waitress good though', 'soggi not good', 'jamaican mojito delici', 'small not worth price', 'food rich order accordingli', 'shower area outsid rins not take full shower unless mind nude everyon see', 'servic bit lack', 'lobster bisqu bussel sprout risotto filet need salt pepper cours none tabl', 'hope bode go busi someon cook come', 'either cold not enough flavor bad', 'love bacon wrap date', 'unbeliev bargain', 'folk otto alway make us feel welcom special', 'main also uninspir', 'place first pho amaz', 'wonder experi made place must stop whenev town', 'food bad enough enjoy deal world worst annoy drunk peopl', 'fun chef', 'order doubl cheeseburg got singl patti fall apart pictur upload yeah still suck', 'great place coupl drink watch sport event wall cover tv', 'possibl give zero star', 'descript said yum yum sauc anoth said eel sauc yet anoth said spici mayo well none roll sauc', 'say would hardest decis honestli dish tast suppos tast amaz', 'not roll eye may stay not sure go back tri', 'everyon attent provid excel custom servic', 'horribl wast time money', 'dish quit flavour', 'time side restaur almost empti excus', 'busi either also build freez cold', 'like review said pay eat place', 'drink took close minut come one point', 'serious flavor delight folk', 'much better ayc sushi place went vega', 'light dark enough set mood', 'base sub par servic receiv effort show gratitud busi go back', 'owner realli great peopl', 'noth privileg work eat', 'greek dress creami flavor', 'overal think would take parent place made similar complaint silent felt', 'pizza good peanut sauc tasti', 'tabl servic pretti fast', 'fantast servic', 'well would given godfath zero star possibl', 'know make', 'tough short flavor', 'hope place stick around', 'bar vega not ever recal charg tap water', 'restaur atmospher exquisit', 'good servic clean inexpens boot', 'seafood fresh gener portion', 'plu buck', 'servic not par either', 'thu far visit twice food absolut delici time', 'good year ago', 'self proclaim coffe cafe wildli disappoint', 'veggitarian platter world', 'cant go wrong food', 'beat', 'stop place madison ironman friendli kind staff', 'chef friendli good job', 'better not dedic boba tea spot even jenni pho', 'like patio servic outstand', 'goat taco skimp meat wow flavor', 'think not', 'mac salad pretti bland not get', 'went bachi burger friend recommend not disappoint', 'servic stink', 'wait wait', 'place not qualiti sushi not qualiti restaur', 'would definit recommend wing well pizza', 'great pizza salad', 
'thing went wrong burn saganaki', 'wait hour breakfast could done time better home', 'place amaz', 'hate disagre fellow yelper husband disappoint place', 'wait hour never got either pizza mani around us came later', 'know slow', 'staff great food delish incred beer select', 'live neighborhood disappoint back conveni locat', 'know pull pork could soooo delici', 'get incred fresh fish prepar care', 'go gave star rate pleas know third time eat bachi burger write review', 'love fact everyth menu worth', 'never dine place', 'food excel servic good', 'good beer drink select good food select', 'pleas stay away shrimp stir fri noodl', 'potato chip order sad could probabl count mani chip box probabl around', 'food realli bore', 'good servic check', 'greedi corpor never see anoth dime', 'never ever go back', 'much like go back get pass atroci servic never return', 'summer dine charm outdoor patio delight', 'not expect good', 'fantast food', 'order toast english muffin came untoast', 'food good', 'never go back', 'great food price high qualiti hous made', 'bu boy hand rude', 'point friend basic figur place joke mind make publicli loudli known', 'back good bbq lighter fare reason price tell public back old way', 'consid two us left full happi go wrong', 'bread made hous', 'downsid servic', 'also fri without doubt worst fri ever', 'servic except food good review', 'coupl month later return amaz meal', 'favorit place town shawarrrrrrma', 'black eye pea sweet potato unreal', 'disappoint', 'could serv vinaigrett may make better overal dish still good', 'go far mani place never seen restaur serv egg breakfast especi', 'mom got home immedi got sick bite salad', 'server not pleasant deal alway honor pizza hut coupon', 'truli unbeliev good glad went back', 'fantast servic pleas atmospher', 'everyth gross', 'love place', 'great servic food', 'first bathroom locat dirti seat cover not replenish plain yucki', 'burger got gold standard burger kind disappoint', 'omg food delicioso', 'noth authent place', 'spaghetti noth special whatsoev', 'dish salmon best great', 'veget fresh sauc feel like authent thai', 'worth drive tucson', 'select probabl worst seen vega none', 'pretti good beer select', 'place like chipotl better', 'classi warm atmospher fun fresh appet succul steak basebal steak', 'star brick oven bread app', 'eaten multipl time time food delici', 'sat anoth ten minut final gave left', 'terribl', 'everyon treat equal special', 'take min pancak egg', 'delici', 'good side staff genuin pleasant enthusiast real treat', 'sadli gordon ramsey steak place shall sharpli avoid next trip vega', 'alway even wonder food delici', 'best fish ever life', 'bathroom next door nice', 'buffet small food offer bland', 'outstand littl restaur best food ever tast', 'pretti cool would say', 'definit turn doubt back unless someon els buy', 'server great job handl larg rowdi tabl', 'find wast food despic food', 'wife lobster bisqu soup lukewarm', 'would come back sushi crave vega', 'staff great ambianc great', 'deserv star', 'left stomach ach felt sick rest day', 'drop ball', 'dine space tini elegantli decor comfort', 'custom order way like usual eggplant green bean stir fri love', 'bean rice mediocr best', 'best taco town far', 'took back money got outta', 'interest part town place amaz', 'rude inconsider manag', 'staff not friendli wait time serv horribl one even say hi first minut', 'back', 'great dinner', 'servic outshin definit recommend halibut', 'food terribl', 'never ever go back told mani peopl happen', 'recommend unless 
car break front starv', 'come back everi time vega', 'place deserv one star food', 'disgrac', 'def come back bowl next time', 'want healthi authent ethic food tri place', 'continu come ladi night andddd date night highli recommend place anyon area', 'sever time past experi alway great', 'walk away stuf happi first vega buffet experi', 'servic excel price pretti reason consid vega locat insid crystal shop mall aria', 'summar food incred nay transcend noth bring joy quit like memori pneumat condiment dispens', 'probabl one peopl ever go ian not like', 'kid pizza alway hit lot great side dish option kiddo', 'servic perfect famili atmospher nice see', 'cook perfect servic impecc', 'one simpli disappoint', 'overal disappoint qualiti food bouchon', 'account know get screw', 'great place eat remind littl mom pop shop san francisco bay area', 'today first tast buldogi gourmet hot dog tell ever thought possibl', 'left frustrat', 'definit soon', 'food realli good got full petti fast', 'servic fantast', 'total wast time', 'know kind best ice tea', 'come hungri leav happi stuf', 'servic give star', 'assur disappoint', 'take littl bad servic food suck', 'gave tri eat crust teeth still sore', 'complet gross', 'realli enjoy eat', 'first time go think quickli becom regular', 'server nice even though look littl overwhelm need stay profession friendli end', 'dinner companion told everyth fresh nice textur tast', 'ground right next tabl larg smear step track everywher pile green bird poop', 'furthermor even find hour oper websit', 'tri like place time think done', 'mistak', 'complaint', 'serious good pizza expert connisseur topic', 'waiter jerk', 'strike want rush', 'nicest restaur owner ever come across', 'never come', 'love biscuit', 'servic quick friendli', 'order appet took minut pizza anoth minut', 'absolutley fantast', 'huge awkward lb piec cow th gristl fat', 'definit come back', 'like steiner dark feel like bar', 'wow spici delici', 'not familiar check', 'take busi dinner dollar elsewher', 'love go back', 'anyway fs restaur wonder breakfast lunch', 'noth special', 'day week differ deal delici', 'not mention combin pear almond bacon big winner', 'not back', 'sauc tasteless', 'food delici spici enough sure ask spicier prefer way', 'ribey steak cook perfectli great mesquit flavor', 'think go back anytim soon', 'food gooodd', 'far sushi connoisseur definit tell differ good food bad food certainli bad food', 'insult', 'last time lunch bad', 'chicken wing contain driest chicken meat ever eaten', 'food good enjoy everi mouth enjoy relax venu coupl small famili group etc', 'nargil think great', 'best tater tot southwest', 'love place', 'definit not worth paid', 'vanilla ice cream creami smooth profiterol choux pastri fresh enough', 'im az time new spot', 'manag worst', 'insid realli quit nice clean', 'food outstand price reason', 'think run back carli anytim soon food', 'due fact took minut acknowledg anoth minut get food kept forget thing', 'love margarita', 'first vega buffet not disappoint', 'good though', 'one note ventil could use upgrad', 'great pork sandwich', 'wast time', 'total letdown would much rather go camelback flower shop cartel coffe', 'third chees friend burger cold', 'enjoy pizza brunch', 'steak well trim also perfectli cook', 'group claim would handl us beauti', 'love', 'ask bill leav without eat bring either', 'place jewel la vega exactli hope find nearli ten year live', 'seafood limit boil shrimp crab leg crab leg definit not tast fresh', 'select food not best', 'delici absolut back', 
'small famili restaur fine dine establish', 'toro tartar cavier extraordinari like thinli slice wagyu white truffl', 'dont think back long time', 'attach ga station rare good sign', 'awesom', 'back mani time soon', 'menu much good stuff could not decid', 'wors humili worker right front bunch horribl name call', 'conclus fill meal', 'daili special alway hit group', 'tragedi struck', 'pancak also realli good pretti larg', 'first crawfish experi delici', 'monster chicken fri steak egg time favorit', 'waitress sweet funni', 'also tast mom multi grain pumpkin pancak pecan butter amaz fluffi delici', 'rather eat airlin food serious', 'cant say enough good thing place', 'ambianc incred', 'waitress manag friendli', 'would not recommend place', 'overal impress noca', 'gyro basic lettuc', 'terribl servic', 'thoroughli disappoint', 'much pasta love homemad hand made pasta thin pizza', 'give tri happi', 'far best cheesecurd ever', 'reason price also', 'everyth perfect night', 'food good typic bar food', 'drive get', 'first glanc love bakeri cafe nice ambianc clean friendli staff', 'anyway not think go back', 'point finger item menu order disappoint', 'oh thing beauti restaur', 'gone go', 'greasi unhealthi meal', 'first time might last', 'burger amaz', 'similarli deliveri man not say word apolog food minut late', 'way expens', 'sure order dessert even need pack go tiramisu cannoli die', 'first time wait next', 'bartend also nice', 'everyth good tasti', 'place two thumb way', 'best place vega breakfast check sat sun', 'love authent mexican food want whole bunch interest yet delici meat choos need tri place', 'terribl manag', 'excel new restaur experienc frenchman', 'zero star would give zero star', 'great steak great side great wine amaz dessert', 'worst martini ever', 'steak shrimp opinion best entre gc', 'opportun today sampl amaz pizza', 'wait thirti minut seat although vacant tabl folk wait', 'yellowtail carpaccio melt mouth fresh', 'tri go back even empti', 'go eat potato found stranger hair', 'spici enough perfect actual', 'last night second time dine happi decid go back', 'not even hello right', 'dessert bit strang', 'boyfriend came first time recent trip vega could not pleas qualiti food servic', 'realli recommend place go wrong donut place', 'nice ambianc', 'would recommend save room', 'guess mayb went night disgrac', 'howev recent experi particular locat not good', 'know not like restaur someth', 'avoid establish', 'think restaur suffer not tri hard enough', 'tapa dish delici', 'heart place', 'salad bland vinegrett babi green heart palm', 'two felt disgust', 'good time', 'believ place great stop huge belli hanker sushi', 'gener portion great tast', 'never go back place never ever recommend place anyon', 'server went back forth sever time not even much help', 'food delici', 'hour serious', 'consid theft', 'eew locat need complet overhaul', 'recent wit poor qualiti manag toward guest well', 'wait wait wait', 'also came back check us regularli excel servic', 'server super nice check us mani time', 'pizza tast old super chewi not good way', 'swung give tri deepli disappoint', 'servic good compani better', 'staff also friendli effici', 'servic fan quick serv nice folk', 'boy sucker dri', 'rate', 'look authent thai food go els', 'steak recommend', 'pull car wait anoth minut acknowledg', 'great food great servic clean friendli set', 'assur back', 'hate thing much cheap qualiti black oliv', 'breakfast perpar great beauti present giant slice toast lightli dust powder sugar', 'kid play area nasti', 
'great place fo take eat', 'waitress friendli happi accomod vegan veggi option', 'omg felt like never eaten thai food dish', 'extrem crumbi pretti tasteless', 'pale color instead nice char flavor', 'crouton also tast homemad extra plu', 'got home see driest damn wing ever', 'regular stop trip phoenix', 'realli enjoy crema café expand even told friend best breakfast', 'not good money', 'miss wish one philadelphia', 'got sit fairli fast end wait minut place order anoth minut food arriv', 'also best chees crisp town', 'good valu great food great servic', 'ask satisfi meal', 'food good', 'awesom', 'want leav', 'made drive way north scottsdal not one bit disappoint', 'not eat', 'owner realli realli need quit soooooo cheap let wrap freak sandwich two paper not one', 'check place coupl year ago not impress', 'chicken got definit reheat ok wedg cold soggi', 'sorri not get food anytim soon', 'absolut must visit', 'cow tongu cheek taco amaz', 'friend not like bloodi mari', 'despit hard rate busi actual rare give star', 'realli want make experi good one', 'not return', 'chicken pho tast bland', 'disappoint', 'grill chicken tender yellow saffron season', 'drive thru mean not want wait around half hour food somehow end go make us wait wait', 'pretti awesom place', 'ambienc perfect', 'best luck rude non custom servic focus new manag', 'grandmoth make roast chicken better one', 'ask multipl time wine list time ignor went hostess got one', 'staff alway super friendli help especi cool bring two small boy babi', 'four star food guy blue shirt great vibe still let us eat', 'roast beef sandwich tast realli good', 'even drastic sick', 'high qualiti chicken chicken caesar salad', 'order burger rare came done', 'promptli greet seat', 'tri go lunch madhous', 'proven dead wrong sushi bar not qualiti great servic fast food impecc', 'wait hour seat not greatest mood', 'good joint', 'macaron insan good', 'not eat', 'waiter attent friendli inform', 'mayb cold would somewhat edibl', 'place lot promis fail deliv', 'bad experi', 'mistak', 'food averag best', 'great food', 'go back anytim soon', 'disappoint order big bay plater', 'great place relax awesom burger beer', 'perfect sit famili meal get togeth friend', 'not much flavor poorli construct', 'patio seat comfort', 'fri rice dri well', 'hand favorit italian restaur', 'scream legit book somethat also pretti rare vega', 'not fun experi', 'atmospher great love duo violinist play song request', 'person love hummu pita baklava falafel baba ganoush amaz eggplant', 'conveni sinc stay mgm', 'owner super friendli staff courteou', 'great', 'eclect select', 'sweet potato tot good onion ring perfect close', 'staff attent', 'chef gener time even came around twice take pictur', 'owner use work nobu place realli similar half price', 'googl mediocr imagin smashburg pop', 'dont go', 'promis disappoint', 'sushi lover avoid place mean', 'great doubl cheeseburg', 'awesom servic food', 'fantast neighborhood gem', 'wait go back', 'plantain worst ever tast', 'great place highli recommend', 'servic slow not attent', 'gave star give star', 'staff spend time talk', 'dessert panna cotta amaz', 'good food great atmospher', 'damn good steak', 'total brunch fail', 'price reason flavor spot sauc home made slaw not drench mayo', 'decor nice piano music soundtrack pleasant', 'steak amaz rge fillet relleno best seafood plate ever', 'good food good servic', 'absolut amaz', 'probabl back honest', 'definit back', 'sergeant pepper beef sandwich auju sauc excel sandwich well', 'hawaiian breez mango magic 
pineappl delight smoothi tri far good', 'went lunch servic slow', 'much say place walk expect amaz quickli disappoint', 'mortifi', 'needless say never back', 'anyway food definit not fill price pay expect', 'chip came drip greas mostli not edibl', 'realli impress strip steak', 'go sinc everi meal awesom', 'server nice attent serv staff', 'cashier friendli even brought food', 'work hospit industri paradis valley refrain recommend cibo longer', 'atmospher fun', 'would not recommend other', 'servic quick even go order like like', 'mean realli get famou fish chip terribl', 'said mouth belli still quit pleas', 'not thing', 'thumb', 'read pleas go', 'love grill pizza remind legit italian pizza', 'pro larg seat area nice bar area great simpl drink menu best brick oven pizza homemad dough', 'realli nice atmospher', 'tonight elk filet special suck', 'one bite hook', 'order old classic new dish go time sore disappoint everyth', 'cute quaint simpl honest', 'chicken delici season perfect fri outsid moist chicken insid', 'food great alway compliment chef', 'special thank dylan recommend order yummi tummi', 'awesom select beer', 'great food awesom servic', 'one nice thing ad gratuiti bill sinc parti larger expect tip', 'fli appl juic fli', 'han nan chicken also tasti', 'servic thought good', 'food bare lukewarm must sit wait server bring us', 'ryan bar definit one edinburgh establish revisit', 'nicest chines restaur', 'overal like food servic', 'also serv indian naan bread hummu spici pine nut sauc world', 'probabl never come back recommend', 'friend pasta also bad bare touch', 'tri airport experi tasti food speedi friendli servic', 'love decor chines calligraphi wall paper', 'never anyth complain', 'restaur clean famili restaur feel', 'way fri', 'not sure long stood long enough begin feel awkwardli place', 'open sandwich impress not good way', 'not back', 'warm feel servic felt like guest special treat', 'extens menu provid lot option breakfast', 'alway order vegetarian menu dinner wide array option choos', 'watch price inflat portion get smaller manag attitud grow rapidli', 'wonder lil tapa ambienc made feel warm fuzzi insid', 'got enjoy seafood salad fabul vinegrett', 'wonton thin not thick chewi almost melt mouth', 'level spici perfect spice whelm soup', 'sat right time server get go fantast', 'main thing enjoy crowd older crowd around mid', 'side town definit spot hit', 'wait minut get drink longer get arepa', 'great place eat', 'jalapeno bacon soooo good', 'servic poor that nice', 'food good servic good price good', 'place not clean food oh stale', 'chicken dish ok beef like shoe leather', 'servic beyond bad', 'happi', 'tast like dirt', 'one place phoenix would defin go back', 'block amaz', 'close hous low key non fanci afford price good food', 'hot sour egg flower soup absolut star', 'sashimi poor qualiti soggi tasteless', 'great time famili dinner sunday night', 'food not tasti not say real tradit hunan style', 'bother slow servic', 'flair bartend absolut amaz', 'frozen margarita way sugari tast', 'good order twice', 'nutshel restaraunt smell like combin dirti fish market sewer', 'girlfriend veal bad', 'unfortun not good', 'pretti satifi experi', 'join club get awesom offer via email', 'perfect someon like beer ice cold case even colder', 'bland flavorless good way describ bare tepid meat', 'chain fan beat place easili', 'nacho must', 'not come back', 'mani word say place everyth pretti well', 'staff super nice quick even crazi crowd downtown juri lawyer court staff', 'great atmospher friendli 
fast servic', 'receiv pita huge lot meat thumb', 'food arriv meh', 'pay hot dog fri look like came kid meal wienerschnitzel not idea good meal', 'classic main lobster roll fantast', 'brother law work mall ate day guess sick night', 'good go review place twice herea tribut place tribut event held last night', 'chip salsa realli good salsa fresh', 'place great', 'mediocr food', 'get insid impress place', 'super pissd', 'servic super friendli', 'sad littl veget overcook', 'place nice surpris', 'golden crispi delici', 'high hope place sinc burger cook charcoal grill unfortun tast fell flat way flat', 'could eat bruschetta day devin', 'not singl employe came see ok even need water refil final serv us food', 'lastli mozzarella stick best thing order', 'first time ever came amaz experi still tell peopl awesom duck', 'server neglig need made us feel unwelcom would not suggest place', 'servic terribl though', 'place overpr not consist boba realli overpr', 'pack', 'love place', 'say dessert yummi', 'food terribl', 'season fruit fresh white peach pure', 'kept get wors wors offici done', 'place honestli blown', 'definit would not eat', 'not wast money', 'love put food nice plastic contain oppos cram littl paper takeout box', 'crêpe delic thin moist', 'aw servic', 'ever go', 'food qualiti horribl', 'price think place would much rather gone', 'servic fair best', 'love sushi found kabuki price hip servic', 'favor stay away dish', 'poor servic', 'one tabl thought food averag worth wait', 'best servic food ever maria server good friendli made day', 'excel', 'paid bill not tip felt server terribl job', 'lunch great experi', 'never bland food surpris consid articl read focus much spice flavor', 'food way overpr portion fuck small', 'recent tri caballero back everi week sinc', 'buck head realli expect better food', 'food came good pace', 'ate twice last visit especi enjoy salmon salad', 'back', 'could not believ dirti oyster', 'place deserv star', 'would not recommend place', 'fact go round star awesom', 'disbelief dish qualifi worst version food ever tast', 'bad day not low toler rude custom servic peopl job nice polit wash dish otherwis', 'potato great biscuit', 'probabl would not go', 'flavor perfect amount heat', 'price reason servic great', 'wife hate meal coconut shrimp friend realli not enjoy meal either', 'fella got huevo ranchero look appeal', 'went happi hour great list wine', 'may say buffet pricey think get pay place get quit lot', 'probabl come back', 'worst food servic', 'place pretti good nice littl vibe restaur', 'talk great custom servic cours back', 'hot dish not hot cold dish close room temp watch staff prepar food bare hand glove everyth deep fri oil', 'love fri bean', 'alway pleasur deal', 'plethora salad sandwich everyth tri get seal approv', 'place awesom want someth light healthi summer', 'sushi strip place go', 'servic great even manag came help tabl', 'feel dine room colleg cook cours high class dine servic slow best', 'start review two star edit give one', 'worst sushi ever eat besid costco', 'excel restaur highlight great servic uniqu menu beauti set', 'boyfriend sat bar complet delight experi', 'weird vibe owner', 'hardli meat', 'better bagel groceri store', 'go place gyro', 'love owner chef one authent japanes cool dude', 'burger good pizza use amaz doughi flavorless', 'found six inch long piec wire salsa', 'servic terribl food mediocr', 'defin enjoy', 'order albondiga soup warm tast like tomato soup frozen meatbal', 'three differ occas ask well done medium well three time got 
bloodiest piec meat plate', 'two bite refus eat anymor', 'servic extrem slow', 'minut wait got tabl', 'serious killer hot chai latt', 'allergi warn menu waitress absolut clue meal not contain peanut', 'boyfriend tri mediterranean chicken salad fell love', 'rotat beer tap also highlight place', 'price bit concern mellow mushroom', 'worst thai ever', 'stay vega must get breakfast least', 'want first say server great perfect servic', 'pizza select good', 'strawberri tea good', 'highli unprofession rude loyal patron', 'overal great experi', 'spend money elsewher', 'regular toast bread equal satisfi occasion pat butter mmmm', 'buffet bellagio far anticip', 'drink weak peopl', 'order not correct', 'also feel like chip bought not made hous', 'disappoint dinner went elsewher dessert', 'chip sal amaz', 'return', 'new fav vega buffet spot', 'serious cannot believ owner mani unexperienc employe run around like chicken head cut', 'sad', 'felt insult disrespect could talk judg anoth human like', 'call steakhous properli cook steak understand', 'not impress concept food', 'thing crazi guacamol like puré', 'realli noth postino hope experi better', 'got food poison buffet', 'brought fresh batch fri think yay someth warm', 'hilari yummi christma eve dinner rememb biggest fail entir trip us', 'needless say go back anytim soon', 'place disgust', 'everi time eat see care teamwork profession degre', 'ri style calamari joke', 'howev much garlic fondu bare edibl', 'could bare stomach meal complain busi lunch', 'bad lost heart finish', 'also took forev bring us check ask', 'one make scene restaur get definit lost love one', 'disappoint experi', 'food par denni say not good', 'want wait mediocr food downright terribl servic place', 'waaaaaayyyyyyyyyi rate say', 'go back', 'place fairli clean food simpli worth', 'place lack style', 'sangria half glass wine full ridicul', 'bother come', 'meat pretti dri slice brisket pull pork', 'build seem pretti neat bathroom pretti trippi eat', 'equal aw', 'probabl not hurri go back', 'slow seat even reserv', 'not good stretch imagin', 'cashew cream sauc bland veget undercook', 'chipolt ranch dip saus tasteless seem thin water heat', 'bit sweet not realli spici enough lack flavor', 'disappoint', 'place horribl way overpr', 'mayb vegetarian fare twice thought averag best', 'busi know', 'tabl outsid also dirti lot time worker not alway friendli help menu', 'ambianc not feel like buffet set douchey indoor garden tea biscuit', 'con spotti servic', 'fri not hot neither burger', 'came back cold', 'food came disappoint ensu', 'real disappoint waiter', 'husband said rude not even apolog bad food anyth', 'reason eat would fill night bing drink get carb stomach', 'insult profound deuchebaggeri go outsid smoke break serv solidifi', 'someon order two taco think may part custom servic ask combo ala cart', 'quit disappoint although blame need place door', 'rave review wait eat disappoint', 'del taco pretti nasti avoid possibl', 'not hard make decent hamburg', 'like', 'hell go back', 'gotten much better servic pizza place next door servic receiv restaur', 'know big deal place back ya', 'immedi said want talk manag not want talk guy shot firebal behind bar', 'ambianc much better', 'unfortun set us disapppoint entre', 'food good', 'server suck wait correct server heimer suck', 'happen next pretti put', 'bad caus know famili own realli want like place', 'overpr get', 'vomit bathroom mid lunch', 'kept look time soon becom minut yet still food', 'place eat circumst would ever return top list', 
'start tuna sashimi brownish color obvious fresh', 'food averag', 'sure beat nacho movi would expect littl bit come restaur', 'ha long bay bit flop', 'problem charg sandwich bigger subway sub offer better amount veget', 'shrimp unwrap live mile brushfir liter ice cold', 'lack flavor seem undercook dri', 'realli impress place close', 'would avoid place stay mirag', 'refri bean came meal dri crusti food bland', 'spend money time place els', 'ladi tabl next us found live green caterpillar salad', 'present food aw', 'tell disappoint', 'think food flavor textur lack', 'appetit instantli gone', 'overal not impress would not go back', 'whole experi underwhelm think go ninja sushi next time', 'wast enough life pour salt wound draw time took bring check']
</code>
## Creating the Bag of Words model_____no_output_____
<code>
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features = 1500) # keep only the 1500 most frequent words (chosen after checking the size of the full vocabulary)
X = cv.fit_transform(corpus).toarray()
y = dataset.iloc[:, -1].values_____no_output_____len(X[0])_____no_output_____
</code>
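As a quick sanity check (an extra cell, not part of the original notebook), we can inspect the vocabulary that the fitted `CountVectorizer` will count. The sketch below assumes scikit-learn 1.0 or newer, where `get_feature_names_out` is available._____no_output_____
<code>
# Size of the learned vocabulary (capped at max_features = 1500)
print(len(cv.vocabulary_))

# A small sample of the tokens that become columns of X
print(cv.get_feature_names_out()[:10])_____no_output_____
</code>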
## Splitting the dataset into the Training set and Test set_____no_output_____
<code>
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)_____no_output_____
</code>
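Before training, it is worth confirming the shapes of the splits (a small optional check added here); with `test_size = 0.2`, one fifth of the reviews end up in the test set._____no_output_____
<code>
# Quick sanity check of the train/test split sizes
print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)_____no_output_____
</code>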
## Training the Linear Support Vector Machine model on the Training set_____no_output_____
<code>
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_train, y_train)_____no_output_____
</code>
## Predicting the Test set results_____no_output_____
<code>
y_pred = classifier.predict(X_test)
# evaluate performance by comparing the predicted labels (left column) with the ground truth (right column)
print(np.concatenate(
(
y_pred.reshape(len(y_pred), 1),
y_test.reshape(len(y_test), 1)
),
axis=1))[[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[1 1]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[1 0]
[1 1]
[1 1]
[1 0]
[1 0]
[1 1]
[0 1]
[1 1]
[1 1]
[0 0]
[1 1]
[0 1]
[0 1]
[0 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[1 1]
[1 0]
[1 1]
[0 0]
[0 0]
[0 0]
[1 0]
[1 0]
[1 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[0 0]
[0 0]
[0 1]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[1 0]
[1 1]
[0 0]
[1 1]
[0 1]
[0 1]
[0 0]
[1 1]
[1 1]
[0 1]
[1 1]
[0 0]
[1 0]
[1 1]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[1 1]
[1 1]
[1 0]
[0 0]
[1 1]
[1 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[0 1]
[0 1]
[1 0]
[0 1]
[1 1]
[1 1]
[1 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 1]
[0 1]
[1 1]
[0 0]
[1 0]
[0 1]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[0 0]
[0 1]
[1 1]
[0 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[0 1]
[1 1]
[0 1]
[1 1]
[1 1]
[1 1]
[1 0]
[0 0]
[1 1]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[1 1]
[1 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[0 0]
[0 1]
[1 1]
[1 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[1 1]]
</code>
## Making the Confusion Matrix_____no_output_____
<code>
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
print('Accuracy: {0:.2g}'.format(accuracy_score(y_test, y_pred)))
print('Precision: {0:.2g}'.format(precision_score(y_test, y_pred)))
print('Recall: {0:.2g}'.format(recall_score(y_test, y_pred)))
print('F1 Score: {0:.2g}'.format(f1_score(y_test, y_pred)))[[79 18]
[25 78]]
Accuracy: 0.79
Precision: 0.81
Recall: 0.76
F1 Score: 0.78
</code>
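As a final illustration (an added cell, not part of the original notebook), a brand-new review can be classified by pushing it through the same kind of cleaning used to build `corpus` earlier — regex filtering, lowercasing, stopword removal (keeping "not", as the training corpus clearly does) and Porter stemming are assumed here — and then through the already-fitted `cv` and `classifier`:_____no_output_____
<code>
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

nltk.download('stopwords')

def predict_sentiment(new_review):
    # Approximate the preprocessing used for the training corpus
    words = re.sub('[^a-zA-Z]', ' ', new_review).lower().split()
    all_stopwords = set(stopwords.words('english'))
    all_stopwords.discard('not')  # keep negations, as the training corpus does
    ps = PorterStemmer()
    cleaned = ' '.join(ps.stem(w) for w in words if w not in all_stopwords)
    # Vectorize with the already-fitted CountVectorizer and classify with the trained SVM
    return classifier.predict(cv.transform([cleaned]).toarray())[0]

print(predict_sentiment('The food was absolutely wonderful'))   # expected: 1 (positive)
print(predict_sentiment('Slow service and the food was awful')) # expected: 0 (negative)_____no_output_____
</code>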
|
{
"repository": "mirokuru/ml_toolkit",
"path": "nlp/bag-of-words/my_natural_language_processing_svm.ipynb",
"matched_keywords": [
"STAR",
"Salmon"
],
"stars": 3,
"size": 46262,
"hexsha": "cb8bf1211be467aeece41cfce42273104e61a4c9",
"max_line_length": 36271,
"avg_line_length": 92.1553784861,
"alphanum_fraction": 0.6830660153
}
|
# Notebook from finsberg/IN1910_H21
Path: book/docs/lectures/stochastic_processes/random_walks_and_markov_processes.ipynb
# Random Walks
This week we will discuss a new topic, *random walks*. Random walks are an example of a markov process, and we will also learn what this means, and how we can analyze the behavior of the random walker using a markov chain.
The exercises this week are slightly more extensive than other weeks, and are more project-based than earlier exercise sets as well. This is because the plan is to cover some of these exercises in L20, i.e., the lecture on Friday November 8th. It is therefore recommended that you work on the exercises before Thursday. If you cannot attend the lecture on Friday, it is strongly recommended to take a good look at the example solutions, which I will upload during Friday's lecture._____no_output_____## Random Walks
A random walk is a process where we follow some object taking *random steps*. The path the object walks then defines a random path, or trajectory. Random walks are powerful mathematical objects with a large number of use cases, but we will return to this point later, for now let us look at some actual random walks._____no_output_____### The 1D Random Walker
A random walk can refer to many different processes, but let us start off with perhaps the simplest of them all, a 1D random walk on a regular grid. Assume some walker starts off at $x=0$. Now it takes steps to the left or right at random, with equal probability.
<img src="fig/1D_walk.png" width=600>
We denote the position of the walker after $N$ steps by $X_N$. Because the walker is taking random steps, $X_N$ is what we call a *random* or *stochastic variable*, it won't have a specific value in general, but be different for each specific random walk, depending on what steps are actually taken.
For each step the walker takes, we move either 1 step to the left, or 1 step to the right. Thus
$$X_{N+1} = X_{N} + K_N,$$
where $K_N$ is the $N$'th step taken. We assume that all steps are independent of all others, and that each step has an equal chance of being to the left or to the right, so
$$K_N = \begin{cases}
1 & \mbox{with 50} \% \mbox{ chance} \\
-1 & \mbox{with 50}\% \mbox{ chance}
\end{cases}$$
Let us look at how a random walk looks. To draw the step $K_N$ using numpy, we use `np.random.randint(2)`, but this gives us 0 or 1, so we instead use `2*np.random.randint(2) - 1`, which will then give us -1 or 1 with equal probability._____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(12345)
nr_steps = 10
X = np.zeros(nr_steps+1)
X[0] = 0
for N in range(nr_steps):
X[N+1] = X[N] + 2*np.random.randint(2) - 1
plt.plot(range(nr_steps+1), X)
plt.xlabel('Nr of steps taken')
plt.ylabel(r'$X_N$')
plt.show()
_____no_output_____
</code>
Simply using `plt.plot` here can be a bit misleading, so we can alternatively change the plotstyle, or we can change to use the `plt.step` function instead:_____no_output_____
<code>
plt.step(range(nr_steps+1), X, where='mid')
plt.xlabel('Nr of steps taken')
plt.ylabel(r'$X_N$')
plt.show()_____no_output_____nr_steps = 1000
X = np.zeros(nr_steps+1)
X[0] = 0
for N in range(nr_steps):
X[N+1] = X[N] + 2*np.random.randint(2) - 1
plt.step(range(nr_steps+1), X, where='mid')
plt.xlabel('Nr of steps taken')
plt.ylabel('Displacement')
plt.show()_____no_output_____
</code>
### Vectorized Random Walk
As we saw last week, if we want to repeatedly draw and use random numbers, `np.random` can be used in a vectorized way to be more efficient. Let us see how we can do this for a random walk.
Drawing the steps themselves is straightforward:_____no_output_____
<code>
nr_steps = 1000
steps = 2*np.random.randint(2, size=nr_steps) - 1_____no_output_____
</code>
But now we need to combine these steps into the variable $X_N$. Now, if we only want to know the final displacement after all the steps, then we could simply do the sum
$$X_{1000} = \sum_{i=1}^{1000} K_i.$$
However, if we want to plot out the full trajectory of the walk, then we need to compute all the partial sums as well, i.e., find $X_N$ for $N=1, 2, 3, \ldots 1000.$
We can do this with the function `np.cumsum`, which stands for *cumulative sum*. Taking the cumulative sum of a sequence gives a new sequence where element $n$ of the new sequence is the sum of the first $n$ elements of the input. Thus, the cumulative sum of $K_N$ will give $X_N$._____no_output_____
<code>
X = np.zeros(nr_steps + 1)
X[0] = 0
X[1:] = X[0] + np.cumsum(steps)_____no_output_____
</code>
Note that we could have simply said `X = np.cumsum(steps)`, but in that case, $X_0$ wouldn't be 0, it would be -1 or 1. That's not a big deal, but we take the extra step of defining $X_0 = 0$, and then finding the rest of $X_N$ for $N > 0$._____no_output_____
<code>
plt.step(range(nr_steps+1), X, where='mid')
plt.xlabel('Nr of steps taken')
plt.ylabel('Displacement')
plt.show()_____no_output_____
</code>
### Many Walkers
Because the walker is completely random, understanding how it behaves from looking at a single walker isn't that useful. Instead, we can look at a large *ensemble* of walkers, and then perhaps we can glean some insight into how they behave.
We can also use the vectorization of `np.random` to draw the walks of many different walkers in a vectorized manner:_____no_output_____
<code>
nr_steps = 100
walkers = 5
X = np.zeros((nr_steps+1, walkers))
X[0, :] = 0
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
X[1:, :] = np.cumsum(steps, axis=0)
plt.plot(X)
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.show()_____no_output_____
</code>
Or with many more steps:_____no_output_____
<code>
nr_steps = 10000
walkers = 5
X = np.zeros((nr_steps+1, walkers))
X[0, :] = 0
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
X[1:, :] = np.cumsum(steps, axis=0)
plt.plot(X, linewidth=0.5)
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.show()_____no_output_____
</code>
### Very Many Walkers
We have now seen how we can plot 5 walkers. But if we really want to understand the average behavior, we might want to plot a lot more walkers. With our code, this works just fine, but the output won't tell us too much, because it will become too chaotic:_____no_output_____
<code>
nr_steps = 1000
walkers = 1000
X = np.zeros((nr_steps+1, walkers))
X[0, :] = 0
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
X[1:, :] = np.cumsum(steps, axis=0)
plt.plot(X)
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.show()_____no_output_____
</code>
This plot shows a thousand random walks overlaying each other, but we cannot really see what is going on, because the different lines simply overlap and hide each other.
To fix this, instead of plotting all the walks over each other, we plot the *density* of walkers. We can accomplish this by using the `alpha` keyword to `plt.plot`. This keyword is used to make a line semi-transparent . Here, `alpha=1` is the default, non-transparent line, `alpha=0` is a completely transparent, and thus invisible, line. If we then set for example `alpha=0.1`, we get 10% transparent lines.
With semi-transparent lines, anywhere many lines overlap will give a strong color, if there are fewer lines, we get a weaker color. To emphasise this, let us also only plot black lines, and ignore colors._____no_output_____
<code>
nr_steps = 1000
walkers = 1000
X = np.zeros((nr_steps+1, walkers))
X[0, :] = 0
steps = 2*np.random.randint(2, size=(nr_steps, walkers)) - 1
X[1:, :] = np.cumsum(steps, axis=0)
plt.plot(X, alpha=0.01, color='k')
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.axis((0, 1000, -100, 100))
plt.show()_____no_output_____
</code>
At the beginning, all the walkers are close to the origin, as they simply have not had time to get further away. As time progresses towards the right, the walkers spread out. The highest density of walkers is still found in the middle however, as the net sum of steps will tend towards a mean of 0._____no_output_____### Analyzing the average behavior of a walker
Because the random walk is a random process, predicting how a single walker will move is impossible. However, if we instead look at a lot of walkers, we can analyze their *average* activity. Because of the law of large numbers, we know that the average behavior for a large number of walkers will converge to a specific behavior.
Compare for example the first plot of a single walker. If you rerun this code, you will get a dramatically different behavior, because one specific walk looks very different from one specific different walk. For the last Figure we made however, rerunning the code won't change much, because the average behavior of 1000 walkers will tend to be the same.
One way to explore the average behavior is of course to do simulation, and then simply taking the sample average. For more complex random walk behaviors, this is the only option. Our random walk however, is quite simple, and so we can also analyze it mathematically. Let us do this.
#### Average Displacement
First, what is the average displacement of a large number of walkers? For a single walker, the position of the walker after $N$ steps was given by
$$X_N = X_0 + \sum_{i=1}^N K_i.$$
or alternatively:
$$X_{N+1} = X_{N} + K_N.$$
Now, we want to compute the *average* of this variable, which we will denote $\langle X_N \rangle$, another word for this value is the *expected value* or *expectation*. If you have not heard these terms before, simply think of the value as the average of a large number of walkers.
Taking the average of the $X_N$ gives:
$$\langle X_{N+1} \rangle = \langle X_N + K_N \rangle.$$
However, taking an average is a linear operation, and so we can split the right hand side into
$$\langle X_{N+1} \rangle = \langle X_N \rangle + \langle K_N \rangle.$$
Now, we don't know $\langle X_{N} \rangle,$ because this is what we are actually trying to find. However, $\langle K_N \rangle$, we know, because it is simply the average of the two outcomes:
$$\langle K_N \rangle = \frac{1}{2}\cdot1 + \frac{1}{2}\cdot (-1) = \frac{1}{2} - \frac{1}{2} = 0.$$
Because there is an equal chance of taking a step to the left and the right, the *average* displacement for a single step will be 0. Inserting this gives
$$\langle X_{N+1} \rangle = \langle X_{N} \rangle.$$
If the walkers start at $X_0 = 0$, then $\langle X_0 \rangle = 0$, which in turn implies $\langle X_1 \rangle = 0$, then $\langle X_2 \rangle = 0$, and so on. This gives
$$\langle X_{N}\rangle = 0.$$
This expression tells us that the average displacement of a large number of walkers will be 0, no matter how many steps they take. Is this not surprising? We have seen that the more steps the walkers take, the longer away from the origin they will tend to move, so why is the average 0?
The average is 0 because we are looking at a completely *uniform* and symmetric walker. The walkers have an equal chance of moving left, or right, from the origin, and the average will therefore tend to be 0, even if the walkers move away from the origin._____no_output_____#### Averaged Square Displacement
The average displacement became 0 because the problem is completely symmetric. However, if we now instead look at the squared displacement $X_N^2$, we get a better feel for how far away from the origin things move, because the square is positive regardless of whether the walker moves away in the positive or negative direction.
We can write out an expression for $X_{N+1}^2$ as
$$X_{N+1}^2 = (X_{N} + K_N)^2 = X_{N}^2 + 2X_N \cdot K_N + K_N^2.$$
Again we care about the average, so we take the average of this expression:
$$\langle X_{N+1}^2 \rangle = \langle X_{N}^2 \rangle + 2\langle X_N \cdot K_N \rangle + \langle K_N^2 \rangle.$$
Now, the term $\langle X_N \cdot K_N \rangle$ will again be zero, because $K_N$ is independent of $X_N$ and has an equal chance of being positive and negative. So we get
$$\langle X_{N+1}^2 \rangle = \langle X_N^2 \rangle + \langle K_N^2 \rangle.$$
Let us compute $\langle K_N^2 \rangle$:
$$\langle K_N^2 \rangle = \frac{1}{2}(1)^2 + \frac{1}{2}(-1)^2 = \frac{1}{2} + \frac{1}{2} = 1.$$
Thus we get
$$\langle X_{N+1}^2 \rangle = \langle X_N^2 \rangle + 1.$$
If we say that $X_0 = 0$, we then get $\langle X_1^2 \rangle = 1$, $\langle X_2^2 \rangle = 2$, and so on:
$$\langle X_N^2 \rangle = N.$$
So we see that while the average displacement does not change over time: $\langle X_N \rangle = 0$, the average squared displacement does! In fact, the squared displacement grows linearly with the number of steps $N$. The longer a random walk carries on for, the further away from the origin the walker will tend to move.
This expression also gives us the *variance* of the walkers, because the variance of a random variable can always be written as
$$\text{Var}(X_N) = \langle X_N^2 \rangle - \langle X_N \rangle^2,$$
and so in this case
$$\text{Var}(X_N) = N - 0^2 = N.$$
So the variance of $X_N$ is also $N$._____no_output_____#### Root Mean Square Displacement
While it is clear from the expression
$$\langle X_N^2 \rangle = N,$$
that the walkers will tend to move further away from the origin, this is the *squared* displacement. A more intuitive quantity would perhaps be the average absolute *displacement*, i.e., $\langle |X_N| \rangle$. This would be a useful quantity, but it turns out to be a bit tricky to compute.
As an easier solution, we just take the root of the mean squared displacement:
$$\text{RMS} = \sqrt{\langle X_N^2 \rangle} = \sqrt{N}.$$
This quantity is known as the *root mean square* displacement (RMS). It won't be exactly the same as $\langle |X_N| \rangle$, but it will be close to it.
Because the root mean square displacement grows as $\sqrt{N}$, we see that a 1D random walker will tend to be about $\sqrt{N}$ away from the origin after taking $N$ steps.
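Before plotting anything, we can quickly check both results numerically (a small added sanity check): averaging over many simulated walkers, the sample mean of $X_N$ should be close to 0, while the sample mean of $X_N^2$ should be close to $N$._____no_output_____
<code>
import numpy as np

N = 1000         # number of steps
walkers = 10000  # number of independent walkers

steps = 2*np.random.randint(2, size=(N, walkers)) - 1
X_N = steps.sum(axis=0)   # final displacement of each walker

print(np.mean(X_N))       # should be close to 0
print(np.mean(X_N**2))    # should be close to N = 1000_____no_output_____
</code>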
_____no_output_____### Plotting the RMS
Let us verify our statement. We repeat our density plot with 1000 walkers, but now we also plot in our expression for the RMS: $\sqrt{N}$:_____no_output_____
<code>
N = 1000
walkers = 1000
k = 2*np.random.randint(2, size=(N, walkers)) - 1
X = np.cumsum(k, axis=0)
plt.plot(X, alpha=0.01, color='k')
plt.plot(range(N), np.sqrt(np.arange(N)), color='C1')   # analytic RMS, sqrt(N)
plt.plot(range(N), -np.sqrt(np.arange(N)), color='C1')
plt.xlabel('Nr of Steps')
plt.ylabel('Displacement')
plt.axis((0, 1000, -100, 100))
plt.show()_____no_output_____
</code>
We see that the density of walkers inside the RMS curves is higher than outside it. This makes sense, because the root-mean-square will tend to give outliers more weight. The RMS curves still seem very reasonable, as they clearly indicate the rough region where most walkers will be found. We also see that the scaling seems reasonable.
Instead of plotting it, we can also compute the actual root-mean-square of our 1000 walkers, which is then a *sample mean*, and compare it to our analytical expression._____no_output_____
<code>
N = 1000
walkers = 1000
k = 2*np.random.randint(2, size=(N, walkers)) - 1
X = np.cumsum(k, axis=0)
RMS = np.sqrt(np.mean(X**2, axis=1))
plt.plot(np.arange(N), np.sqrt(np.arange(N)), '--', label="Analytic Mean")
plt.plot(np.arange(N), RMS, label="Sample mean")
plt.legend()
plt.xlabel('Number of steps')
plt.ylabel('Root Mean Square Displacement')
plt.show()_____no_output_____
</code>
So we see that our analytic expression looks very reasonable._____no_output_____### Flipping Coins and the Law of Large Numbers
So far we have only looked at the random walk as a completely theoretical exercise. As an example, let us now couple it to a more concrete situation.
Our 1D random walk is the sum of discrete random variables, each with 2 equally likely outcomes. An example of this is flipping a coin. Thus, our random walk models the process of flipping a coin many times and keeping track of the total number of heads and tails we get.
We looked at this example last week as well:_____no_output_____
<code>
def flip_coins(N):
flips = np.random.randint(2, size=N)
heads = np.sum(flips == 0)
tails = N - heads
return heads, tails
print("Flipping 1000 coins:")
heads, tails = flip_coins(1000)
print("Heads:", heads)
print("Tail:", tails)Flipping 1000 coins:
Heads: 477
Tail: 523
</code>
When we flip $N$ coins, we expect close to an equal number of heads and tails, i.e., about $N/2$ of each. But should we expect exactly $N/2$ heads? The answer is *no*. The probability of getting a perfectly even distribution goes *down* with the number of throws $N$. Let us look at some numbers:_____no_output_____
<code>
print(f"{'N':>10} {'Heads':>10}|{'Tails':<6} {'Deviation':>12} {'Ratio':>10}")
print("="*60)
for N in 10, 1000, 10**4, 10**5, 10**6:
for i in range(3):
heads, tails = flip_coins(N)
print(f"{N:>10} {heads:>10}|{tails:<6} {abs(N/2-heads):10} {heads/N:>10.1%}|{tails/N:<6.1%}")
print()
print("="*60) N Heads|Tails Deviation Ratio
============================================================
10 7|3 2.0 70.0%|30.0%
10 4|6 1.0 40.0%|60.0%
10 6|4 1.0 60.0%|40.0%
1000 500|500 0.0 50.0%|50.0%
1000 500|500 0.0 50.0%|50.0%
1000 489|511 11.0 48.9%|51.1%
10000 5020|4980 20.0 50.2%|49.8%
10000 4988|5012 12.0 49.9%|50.1%
10000 5017|4983 17.0 50.2%|49.8%
100000 49994|50006 6.0 50.0%|50.0%
100000 49856|50144 144.0 49.9%|50.1%
100000 50159|49841 159.0 50.2%|49.8%
1000000 499345|500655 655.0 49.9%|50.1%
1000000 499588|500412 412.0 50.0%|50.0%
1000000 499322|500678 678.0 49.9%|50.1%
============================================================
</code>
Here, we explore how the *deviation*, which is the number of flips we are away from a perfectly even split, grows with $N$. What we call the deviation here is equivalent to the displacement of one of our random walkers, and as we have seen, the root mean square displacement grows as $\sqrt{N}$. The more coins we flip $N$, the bigger deviation from the baseline we expect.
Now, isn't this contradicting the law of large numbers? No, it isn't, but it actually highlights an important point about the law of large numbers. The law of large numbers only guarantees that the *average* of many trials will approach the expected value for a large number of trials. Thus the law of large numbers states that the *ratio* of heads and tails will become 50%/50% in the long run; it gives no guarantee that we will have the same number of outcomes.
In fact, we see that this is indeed the case for our results too: while the deviation grows with $N$, we can see for the exact same random sample that the *ratio* of heads and tails approaches 50/50! This is because the ratio is computed from
$$P(\text{heads}) \approx \frac{\text{number of heads}}{N},$$
but we know that the deviation in the number of heads grows as $\sqrt{N}$, but that means the deviation in the *ratio* grows as
$$\frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}}.$$
And so our results don't contradict the law of large numbers, they actually illustrate it.
The law of large numbers only talks about averages, never about single events. However, it is a very common fallacy to think that the number of heads and tails has to *even out* in the long run. This is known as the *Gambler's Fallacy*._____no_output_____### 2D Random Walk
So far we have only looked at a random walk in one dimension. Let us add another dimension, so we are looking at a random walker moving around in a 2D plane. We will still be looking at a random walk on a regular grid or lattice.
For every step, there are then 4 choices for our walker. If we envision our grid as the streets of a city seen from above, these directions would be *north*, *south*, *west*, and *east*. We now denote the displacement of the walker as
$$\vec{R}_N = (X_N, Y_N).$$
Let us jump right into simulating a random walk._____no_output_____
<code>
possible_steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
N = 100
R = np.zeros((N+1, 2))
R[0] = (0, 0)
for i in range(N):
step = possible_steps[np.random.randint(4)]
R[i+1] = R[i] + step
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y)
plt.axis('equal')
plt.show()_____no_output_____
</code>
Here we specify the possible steps, and then draw one of these at random for every step. Performing this in a vectorized manner is slightly tricky. To make things a lot simpler, we simply change the possible steps so that the walker takes a step in both dimensions for each step, so instead of
$$(1, 0) \quad (-1, 0) \quad (0, 1) \quad (0, -1),$$
as our possibilities, we have
$$(1, 1) \quad (1, -1) \quad (-1, 1) \quad (-1, -1).$$
This makes things a lot easier, because the steps in the $X$ and $Y$ directions are now decoupled._____no_output_____
<code>
N = 1000
steps = 2*np.random.randint(2, size=(N, 2)) - 1
R = np.cumsum(steps, axis=0)
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y)
plt.axis('equal')
plt.show()_____no_output_____
</code>
The only difference with our change to the steps is that our walker now walks a distance $\sqrt{2}$ every step, instead of 1. The plot also looks like the diagonal version of the previous plot._____no_output_____Let us try to plot many more steps:_____no_output_____
<code>
N = 25000
steps = 2*np.random.randint(2, size=(N, 2)) - 1
R = np.cumsum(steps, axis=0)
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y)
plt.axis('equal')
plt.show()_____no_output_____
</code>
Because our walker is now using 2 spatial dimensions, we cannot plot this walk over time, we can only plot out the total trajectory over time. This has some drawbacks, as it is hard to understand how the walk builds up over time, and how much the walk doubles back over itself._____no_output_____A fix to this is to create an animation of the walk over time. We won't take the time to do this here. But you can click the links under to see such animations:
1. [Animated random walk in 2D with 2500 steps](https://upload.wikimedia.org/wikipedia/commons/f/f3/Random_walk_2500_animated.svg)
2. [Animated random walk in 2D with 25000 steps](https://upload.wikimedia.org/wikipedia/commons/c/cb/Random_walk_25000.svg)_____no_output_____
_____no_output_____### Plotting several walkers
Again we can plot several walks over each other_____no_output_____
<code>
nr_steps = 500
nr_walkers = 5
for walker in range(nr_walkers):
steps = 2*np.random.randint(2, size=(nr_steps, 2)) - 1
R = np.cumsum(steps, axis=0)
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y, alpha=0.5)
plt.axis('equal')
plt.scatter(0, 0, marker='o', color='black', s=100)
plt.show()_____no_output_____
</code>
Here, we plot 5 random walks over each other, and mark the origin with a black circle._____no_output_____### Analyzing the Mean Displacement
We can now return to analyze the average behavior of the 2D random walker, just like we did for the 1D case. However, it turns out we don't need to reinvent the wheel. We know that
$$\vec{R}_N = (X_N, Y_N).$$
So to find the mean displacement, we find
$$\langle \vec{R}_N \rangle = (\langle X_N \rangle, \langle Y_N \rangle).$$
However, both $X_N$ and $Y_N$ behave exactly like a 1D-walker in their dimension, as they increase by -1 or 1 every step. So we have
$$\langle \vec{R}_N \rangle = (0, 0).$$
We could almost have guessed this, because the 2D problem is, just like the 1D problem, completely symmetric. The average will therefore tend to be exactly at the origin.
But what about the mean square displacement? In this case, taking the square of the vector means taking the dot product with itself, it is thus the square of the distance to the origin we are computing:
$$\langle |\vec{R_N}|^2 \rangle = \langle X_N^2 \rangle + \langle Y_N^2 \rangle.$$
So again we can simply insert the values we found earlier for the 1D walker:
$$\langle |\vec{R_N}|^2 \rangle = 2N.$$
Thus, the root mean square distance of a 2D random walker to the origin is given by
$$\text{RMS} = \sqrt{\langle |\vec{R_N}|^2 \rangle} = \sqrt{2N}.$$
We can draw this into our 2D plot, to see if this seems reasonable._____no_output_____
<code>
nr_steps = 500
nr_walkers = 5
# Plot random walks
for walker in range(nr_walkers):
steps = 2*np.random.randint(2, size=(nr_steps, 2)) - 1
R = np.cumsum(steps, axis=0)
X = R[:, 0]
Y = R[:, 1]
plt.plot(X, Y, alpha=0.5)
plt.axis('equal')
# Plot origin
plt.scatter(0, 0, marker='o', color='black', s=100)
# Plot analytic RMS
rms = np.sqrt(2*nr_steps)
theta = np.linspace(0, 2*np.pi, 1001)
plt.plot(rms*np.cos(theta), rms*np.sin(theta), 'k--')
# Plot
plt.show()_____no_output_____
</code>
## Why Random Walkers are so interesting
The random walker is an example of a process that is built up of simple, random steps, but whose net behavior can be complex. These kinds of processes are found throughout the natural sciences and in mathematics. The list of applications of random walks is therefore very long and varied.
Some examples of processes that can be modelled with random walks are:
* The price of stocks in economics
* Modeling of population dynamics in biology
* The modeling of genetic drift
* The study of polymers in material science uses a special type of self-avoiding random walk
* In image processing, images can be segmented by using an algorithm that randomly walks over the image
* Twitter uses a random walk approach to make suggestions of who to follow
These are just *some* examples, and the list goes on and on. If you want more examples, there is a more extensive list [here](https://en.wikipedia.org/wiki/Random_walk#Applications)._____no_output_____## Moving from a discrete to a continuous model
As a final example, let us show how we can move from a discrete random walk model to a continuous one. As we have already seen some examples of, when we move towards a large number of steps $N$, the movement of the random walker doesn't necessarily look so jagged anymore, but *seems* more like a continuous process. And this is the whole trick to moving to a continuous model: letting $N\to\infty$. We obviously cannot do this on a computer, but we can analyze the problem mathematically._____no_output_____To keep things as simple as possible, we can consider the uniform 1D random walker. Instead of talking about the displacement $X_N$, we now define a function $P(x, t)$ that denotes the probability of finding the walker at position $x$ at time $t$.
Because we have a discrete model, we say that the walker moves a length $\Delta x$ each step, so that the walker will be at a position
$$x_i = i\cdot \Delta x,$$
In addition, we assume the walker takes one step every $\Delta t$ timestep, so we can denote a given time as
$$t_j = j\cdot \Delta t.$$
Thus, we are talking about the probability of finding the walker at position $x_i$ at time $t_j$, which is described by the function $P(x_i, t_j)$, or simply $P_{i, j}$ for short.
_____no_output_____Now, our goal isn't necessarily to find an expression for $P$, to find an expression for how it develops over time. Or put more formally, we are trying to find an expression for the time-derivative
$$\frac{\partial P(x, t)}{\partial t},$$
i.e., we are trying to find a differential equation. To find a time derivative, we want to find an expression on the form:
$$\frac{P(x_i, t_{j+1}) - P(x_i, t_j)}{\Delta t}.$$
Because then we can take the limit $\Delta t \to 0$ to get a derivative._____no_output_____As we are trying to find the time-derivative of $P(x, t)$, let us write out what we know about stepping forward in time with our model. The probability of finding the walker in position $x_i$ at the *next* time step, must be given by the chance of finding it at the two neighboring grid points at the current time step, so:
$$P(x_i, t_{j+1}) = \frac{1}{2}P(x_{i-1}, t_j) + \frac{1}{2}P(x_{i+1}, t_j).$$
The reasons the two terms have a factor 1/2, is because there is only a 50% chance of a walker in those grid points moving the right direction._____no_output_____Now, to find an expression for the time derivative, we need to subtract $P(x_i, t_j)$ from both sides.
$$P(x_i, t_{j+1}) - P(x_i, t_j) = \frac{1}{2}\big(P(x_{i-1}, t_j) - 2P(x_i, t_j) + P(x_{i+1}, t_j)\big).$$
The next step is then to divide by $\Delta t$ on both sides
$$\frac{P(x_i, t_{j+1}) - P(x_i, t_j)}{\Delta t}=\frac{1}{2\Delta t}\big(P(x_{i-1}, t_j) - 2P(x_i, t_j) + P(x_{i+1}, t_j)\big).$$
Now we are getting very close! We cannot take the limit $\Delta t \to 0$ just yet, because then the expression on the right would blow up. However, we can fix this by multiplying and dividing the right-hand side by $\Delta x^2$:
$$\frac{P(x_i, t_{j+1}) - P(x_i, t_j)}{\Delta t}=\frac{\Delta x^2}{2\Delta t}\frac{P(x_{i-1}, t_j) - 2P(x_i, t_j) + P(x_{i+1}, t_j)}{\Delta x^2}.$$_____no_output_____This helps, because we can now take the limits $\Delta t \to 0$ and $\Delta x \to 0$ at the *same* time, and we can do so under the constraint that
$$\frac{\Delta x^2}{2\Delta t} = \text{constant}.$$
Because this expression will be a constant, we name it $D$. We then have
$$\lim_{\substack{\Delta t \to 0 \\ \Delta x \to 0 \\ D={\rm const.}}} \bigg[\frac{P(x_i, t_{j+1}) - P(x_i, t_j)}{\Delta t}= D \frac{P(x_{i-1}, t_j) - 2P(x_i, t_j) + P(x_{i+1}, t_j)}{\Delta x^2}\bigg].$$_____no_output_____Now, the term on the left was equal to the time derivative of $P$. But the expression on the right is also a derivative: it is the second derivative with respect to $x$! So we get
$$\frac{\partial P}{\partial t} = D\frac{\partial^2 P}{\partial x^2}.$$_____no_output_____Let us summarize what we have done: we said our random walker takes steps of length $\Delta x$ in time $\Delta t$, and then took the limit where both of these go to 0. Effectively, we are saying the walker takes infinitesimally small steps, infinitely fast. This is the same as letting the number of steps taken go to infinity ($N \to \infty$), but doing so in a manner in which the total displacement of the walker stays bounded._____no_output_____Taking the limit of a simple 1D walker has given us a partial differential equation known as the *Diffusion Equation*, or alternatively the *Heat Equation*. This is one of the most fundamental and important equations in the natural sciences, so it is quite astonishing that it can be derived from a simple random walker!
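As a quick numerical sanity check (our own sketch, not part of the original derivation; it assumes `numpy` and `matplotlib` are available), we can compare the displacement distribution of many independent discrete walkers with the Gaussian solution of this equation, $P(x, t) = e^{-x^2/4Dt}/\sqrt{4\pi D t}$:

```python
import numpy as np
import matplotlib.pyplot as plt

N = 1000           # steps per walker
walkers = 100_000  # number of independent walkers
dx = dt = 1.0      # step length and time step, so D = dx**2 / (2*dt)
D = dx**2 / (2 * dt)
t = N * dt

# The sum of N steps of +-1 equals 2*Binomial(N, 1/2) - N,
# which avoids storing every individual step
X = (2 * np.random.binomial(N, 0.5, size=walkers) - N) * dx

x = np.linspace(-4 * np.sqrt(2 * D * t), 4 * np.sqrt(2 * D * t), 400)
P = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

plt.hist(X, bins=50, density=True, alpha=0.5, label="simulated walkers")
plt.plot(x, P, "k", label="solution of the diffusion equation")
plt.xlabel("$x$")
plt.ylabel("$P(x, t)$")
plt.legend()
plt.show()
```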
For more information and more detailed derivations, see for example:
- [Mark Kac's classical paper from 1947](http://www.math.hawaii.edu/~xander/Fa06/Kac--Brownian_Motion.pdf)
In practice, one does not use a 1D diffusion equation, but a 3D one:
$$\frac{\partial u}{\partial t} = D\nabla^2 u.$$
But this PDE can be found by taking the limit of a 3D random walker in exactly the same manner._____no_output_____### Solving the Diffusion Equation
What is very interesting about what we have just done is that we have gone from a discrete, numerically solvable problem to a continuous partial differential equation. This is the opposite of the process we are used to when working with numerics!
If we want to solve the diffusion equation numerically, we have to discretize the equation again, and move back to the effective 1D walker. If you want to read how that can be done, take a look at this supplemental notebook: [*Solving the 1D Diffusion Equation*](S19_solving_the_1D_diffusion_equation.ipynb)._____no_output_____
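To give a taste of what that notebook covers, here is a minimal sketch (ours, not taken from S19) of the explicit finite-difference scheme for the 1D diffusion equation; note that with $D\,\Delta t/\Delta x^2 = 1/2$ the update is exactly the random-walk rule we started from:

```python
import numpy as np

D = 0.5
dx = 0.1
dt = 0.5 * dx**2 / D        # chosen so that alpha = D*dt/dx**2 = 1/2 (stable for alpha <= 1/2)
alpha = D * dt / dx**2

x = np.arange(-10, 10 + dx, dx)
P = np.zeros_like(x)
P[len(x) // 2] = 1.0 / dx   # approximate delta function: the walker starts at the origin

for _ in range(1000):       # evolve up to t = 1000*dt
    # explicit update; with alpha = 1/2 this is P[i] <- (P[i-1] + P[i+1]) / 2
    P[1:-1] = P[1:-1] + alpha * (P[:-2] - 2 * P[1:-1] + P[2:])

print("total probability:", np.trapz(P, x))  # should stay close to 1
```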
|
{
"repository": "finsberg/IN1910_H21",
"path": "book/docs/lectures/stochastic_processes/random_walks_and_markov_processes.ipynb",
"matched_keywords": [
"biology"
],
"stars": null,
"size": 695214,
"hexsha": "cb8c8816ab7cedae523907242c44e63d510bbb69",
"max_line_length": 149816,
"avg_line_length": 590.6661002549,
"alphanum_fraction": 0.9412195382
}
|
# Notebook from shiftorg/skills
Path: sandbox/data_science/skill_lda.ipynb
# Transform JD text files into an LDA model and pyLDAvis visualization
### Steps:
1. Use spaCy phrase matching to identify skills
2. Parse the job descriptions. A full, readable job description gets turned into a bunch of newline-delimited skills.
3. Create a Gensim corpus and dictionary from the parsed skills
4. Train an LDA model using the corpus and dictionary
5. Visualize the LDA model
6. Compare user input to the LDA model; get out a list of relevant skills_____no_output_____
<code>
# Modeling and visualization
import gensim
from gensim.corpora import Dictionary, MmCorpus
from gensim.models.ldamodel import LdaModel
import pyLDAvis
import pyLDAvis.gensim
# Utilities
import codecs
import pickle
import os
import warnings
# Black magic
import spacy
from spacy.matcher import Matcher
from spacy.attrs import *
nlp = spacy.load('en') _____no_output_____
</code>
### 1. Use spaCy phrase matching to ID skills in job descriptions
**First, we read in a pickled dictionary that contains the word patterns we'll use to extract skills from JDs. Here's what the first few patterns look like:**
``` Python
{
0 : [{"lower": "after"}, {"lower": "effects"}],
1 : [{"lower": "amazon"}, {"lower": "web"}, {"lower": "services"}],
2 : [{"lower": "angular"}, {"lower": "js"}],
3 : [{"lower": "ansible"}],
4 : [{"lower": "bash"}, {"lower": "shell"}],
5 : [{"lower": "business"}, {"lower": "intelligence"}]
}
```
**We generated the pickled dictionary through some (rather heavy) preprocessing steps:**
1. Train a word2vec model on all of the job descriptions. Cluster the word embeddings, identify clusters associated with hard skills, and annotate all of the words in those clusters. Save those words as a "skill repository" (a text document that we'll use as the canonical list of hard tech skills).
2. Clean the skill repository. Inevitably, terms that are not hard skills made it into the word2vec "skill" clusters. Remove them. In this case, we defined a "skill" as "a tool, platform, or language that would make sense as a skill to learn or improve."
3. Use the skill repository to train a Named Entity Recognition model (in our case, using Prodigy). Use the training process to identify hard skills that we previously did not have in our repository. Add the new skills to the repository.
4. Create a Python dictionary of the skills. Format the dictionary so that the values can be ingested as spaCy language patterns.
See spaCy's [matcher documentation](https://spacy.io/api/matcher#init) for more details.
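As an illustration of step 4 (a hedged sketch, not the actual preprocessing code; the skill strings below are invented), a plain list of skills can be turned into the `{label: pattern}` dictionary shown above like this:

```python
import pickle

skills = ["after effects", "amazon web services", "ansible", "bash shell"]  # toy examples

# One {"lower": token} entry per word, which is the case-insensitive
# pattern format expected by spaCy's Matcher
skill_dict = {label: [{"lower": token} for token in skill.split()]
              for label, skill in enumerate(skills)}

with open('skill_dict.pkl', 'wb') as f:
    pickle.dump(skill_dict, f)
```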
_____no_output_____
<code>
# read pickled dict() object
with open('skill_dict.pkl', 'rb') as f:
skill_dict = pickle.load(f)_____no_output_____%%time
# Read JDs into memory
import os
directory = os.fsencode('../local_data/')
jds = []
for file in os.listdir(directory):
filename = os.fsdecode(file)
path = '../local_data/' + filename
with open(path, 'r') as infile:
jds.append(infile.read())
print(len(jds), "JDs")
import sys
print(sys.getsizeof(jds)/1000000, "Megabytes")_____no_output_____
</code>
### 2. Parse job descriptions
From each JD, generate a list of skills._____no_output_____
<code>
%%time
# Write skill-parsed JDs to file.
# This took about three hours for 106k jobs.
for idx, jd in enumerate(jds):
out_path = '../skill_parsed/'+ str(idx+1) + '.txt'
with open(out_path, 'w') as outfile:
# Creating a matcher object
doc = nlp(jd)
matcher = Matcher(nlp.vocab)
for label, pattern in skill_dict.items():
matcher.add(label, None, pattern)
matches = matcher(doc)
for match in matches:
# match object returns a tuple with (id, startpos, endpos)
output = str(doc[match[1]:match[2]]).replace(' ', '_').lower()
outfile.write(output)
outfile.write('\n')_____no_output_____
</code>
### 3. Generate a Gensim corpus and dictionary from the parsed skill documents_____no_output_____
<code>
%%time
# Load parsed items back into memory
directory = os.fsencode('skill_parsed//')
parsed_jds = []
for file in os.listdir(directory):
filename = os.fsdecode(file)
path = 'skill_parsed/' + filename
# Ran into an encoding issue; changing to latin-1 fixed it
with codecs.open(path, 'r', encoding='latin-1') as infile:
parsed_jds.append(infile.read())CPU times: user 6.09 s, sys: 8.46 s, total: 14.5 s
Wall time: 42.3 s
%%time
'''
Gensim needs documents to be formatted as a list-of-lists, where the inner
lists are simply lists including the tokens (skills) from a given document.
It's important to note that any bigram or trigram skills are already tokenized
with underscores instead of spaces to preserve them as tokens.
'''
nested_dict_corpus = [text.split() for text in parsed_jds]
print(nested_dict_corpus[222:226])[['artificial_intelligence', 'newly', 'artificial_intelligence', 'ai', 'computer_science'], ['excel', 'word'], ['sql', 'word', 'excel', 'power_point', 'statistics', 'computer_science'], ['aws', 'computer_science', 'java', 'amazon_web_services', 'web_services', 'aws', 'azure', 'unix', 'linux', 'agile']]
CPU times: user 264 ms, sys: 286 ms, total: 550 ms
Wall time: 728 ms
from gensim.corpora import Dictionary, MmCorpus
gensim_skills_dict = Dictionary(nested_dict_corpus)
# save the dict
gensim_skills_dict.save('gensim_skills.dict')_____no_output_____corpus = [gensim_skills_dict.doc2bow(text) for text in nested_dict_corpus]_____no_output_____# Save the corpus
gensim.corpora.MmCorpus.serialize('skill_bow_corpus.mm', corpus, id2word=gensim_skills_dict)_____no_output_____# Load up the dictionary
gensim_skills_dict = Dictionary.load('gensim_skills.dict')
# Load the corpus
bow_corpus = MmCorpus('skill_bow_corpus.mm')_____no_output_____
</code>
### 4. Create the LDA model using Gensim_____no_output_____
<code>
%%time
with warnings.catch_warnings():
warnings.simplefilter('ignore')
lda_alpha_auto = LdaModel(bow_corpus,
id2word=gensim_skills_dict,
num_topics=20)
lda_alpha_auto.save('lda/skills_lda')CPU times: user 19.4 s, sys: 366 ms, total: 19.8 s
Wall time: 20 s
# load the finished LDA model from disk
lda = LdaModel.load('lda/skills_lda')_____no_output_____
</code>
### 5. Visualize using pyLDAvis_____no_output_____
<code>
LDAvis_data_filepath = 'lda/ldavis/ldavis'_____no_output_____%%time
LDAvis_prepared = pyLDAvis.gensim.prepare(lda, bow_corpus,
gensim_skills_dict)
with open(LDAvis_data_filepath, 'wb') as f:
pickle.dump(LDAvis_prepared, f)/usr/local/lib/python3.6/site-packages/pyLDAvis/_prepare.py:387: DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
See the documentation here:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated
topic_term_dists = topic_term_dists.ix[topic_order]
# load the pre-prepared pyLDAvis data from disk
with open(LDAvis_data_filepath, 'rb') as f:
LDAvis_prepared = pickle.load(f)_____no_output_____pyLDAvis.display(LDAvis_prepared)_____no_output_____# Save the file as HTML
pyLDAvis.save_html(LDAvis_prepared, 'lda/html/lda.html')_____no_output_____
</code>
### 6. Compare user input to the LDA model
Output the skills a user has and does not have from various topics._____no_output_____
<code>
# Look at the topics
def explore_topic(topic_number, topn=20):
"""
accept a topic number and print out a
formatted list of the top terms
"""
print(u'{:20} {}'.format(u'term', u'frequency') + u'')
for term, frequency in lda.show_topic(topic_number, topn=40):
print(u'{:20} {:.3f}'.format(term, round(frequency, 3)))
for i in range(20): # Same number as the types of jobs we scraped initially
print("\n\nTopic %s" % i)
explore_topic(topic_number=i)
Topic 0
term frequency
aws 0.193
big_data 0.061
web_services 0.043
java 0.041
kafka 0.036
amazon_web_services 0.032
nosql 0.031
python 0.030
apache 0.028
cassandra 0.027
computer_science 0.026
ec2 0.024
s3 0.024
elasticsearch 0.017
scala 0.017
linux 0.017
mysql 0.016
mongodb 0.014
elastic 0.014
rds 0.013
hive 0.012
lambda 0.011
postgres 0.011
mapreduce 0.011
redis 0.010
etl 0.009
dynamodb 0.009
data_pipeline 0.009
agile 0.008
tuning 0.006
pig 0.006
nginx 0.006
ubuntu 0.006
zookeeper 0.005
ruby 0.005
postgresql 0.005
elastic_search 0.005
sqs 0.005
elb 0.005
private_cloud 0.005
Topic 1
term frequency
c 0.215
.net 0.135
sql 0.107
asp.net 0.058
computer_science 0.046
sql_server 0.045
mvc 0.035
web_services 0.019
tfs 0.018
javascript 0.018
c++ 0.018
api 0.016
custom 0.015
unit_testing 0.014
wcf 0.013
wpf 0.013
iis 0.012
asp 0.011
object_oriented 0.010
agile 0.009
vb 0.009
testng 0.008
jquery 0.007
information_systems 0.007
orm 0.006
smb 0.006
wireshark 0.006
dbs 0.006
uft 0.005
programming_languages 0.005
angular 0.005
restful 0.005
relational_databases 0.005
soa 0.004
neuroscience 0.004
html 0.004
winforms 0.004
version_control 0.004
relational_database 0.004
oo 0.004
Topic 2
term frequency
ruby 0.087
python 0.076
mysql 0.051
rails 0.044
java 0.042
nosql 0.040
mongodb 0.039
javascript 0.039
postgresql 0.038
scala 0.029
computer_science 0.026
django 0.026
sql 0.024
node.js 0.024
relational_databases 0.023
redis 0.021
api 0.020
customize 0.018
php 0.016
d3 0.015
neo4j 0.014
restful 0.013
react_native 0.013
blockchain 0.013
cassandra 0.012
programming_languages 0.011
react.js 0.011
elasticsearch 0.011
caching 0.010
es6 0.010
looker 0.009
single_page 0.009
wms 0.008
custom 0.007
relational_database 0.007
d3.js 0.007
stats 0.007
programming_language 0.006
ember 0.005
rabbitmq 0.005
Topic 3
term frequency
linux 0.190
unix 0.091
java 0.072
python 0.064
computer_science 0.061
c++ 0.048
perl 0.045
shell 0.043
c 0.032
operating_system 0.030
j2ee 0.028
bash 0.019
scripting_language 0.017
apache 0.014
sql 0.013
mysql 0.011
custom 0.011
jboss 0.010
information_systems 0.009
php 0.009
scm 0.009
websphere 0.008
version_control 0.008
tuning 0.008
git 0.008
baseline 0.007
ruby 0.007
customization 0.007
command_line 0.007
weblogic 0.007
eclipse 0.007
redhat 0.005
ecs 0.005
dbms 0.005
subversion 0.004
jdbc 0.004
jms 0.004
lucene 0.004
vbscript 0.004
esb 0.004
Topic 4
term frequency
sql 0.383
sql_server 0.137
azure 0.097
tuning 0.071
vmware 0.041
relational_database 0.022
db2 0.022
couchbase 0.020
computer_science 0.020
mysql 0.017
powershell 0.016
information_systems 0.011
rdbms 0.011
relational_databases 0.010
mainframe 0.009
olap 0.008
iis 0.006
netezza 0.005
postgresql 0.005
vsphere 0.004
nosql 0.004
aurora 0.004
etl 0.004
database_servers 0.004
pentaho 0.004
sitecore 0.004
backbone.js 0.004
testrail 0.003
memcache 0.003
shell 0.003
polymer 0.003
airwatch 0.003
cobol 0.003
custom 0.003
vcenter 0.003
postgres 0.002
loadrunner 0.002
jcl 0.002
operating_system 0.002
programming_languages 0.002
Topic 5
term frequency
agile 0.487
scrum 0.211
custom 0.060
jira 0.059
computer_science 0.047
product_owner 0.036
confluence 0.022
csm 0.016
atlassian 0.014
rally 0.011
continuous_integration 0.007
certified_scrum_master 0.004
trello 0.003
versionone 0.003
prism 0.002
workbench 0.002
acp 0.002
razor 0.002
glue 0.001
aurelia 0.001
information_systems 0.001
bitbucket 0.001
vm 0.001
etcd 0.001
gremlin 0.001
sparql 0.001
certified_scrummaster 0.000
project_management 0.000
atdd 0.000
regression 0.000
baseline 0.000
visio 0.000
dotnetnuke 0.000
drupal 0.000
pivotal 0.000
sharepoint 0.000
product_management 0.000
borland 0.000
java 0.000
c 0.000
Topic 6
term frequency
project_management 0.882
citrix 0.022
sugarcrm 0.012
computer_science 0.012
information_systems 0.011
baseline 0.010
r2 0.006
sharepoint 0.005
visio 0.004
oltp 0.004
trigger 0.003
database_schema 0.003
e2e 0.003
espresso 0.003
r12 0.002
datastage 0.002
sccm 0.002
customization 0.002
mailchimp 0.002
rxjava 0.001
xunit 0.001
visualisation 0.001
elm 0.001
iseries 0.001
openldap 0.001
redgate 0.000
persistence_layer 0.000
openwrt 0.000
sql 0.000
dbms 0.000
operating_system 0.000
custom 0.000
foxpro 0.000
proper 0.000
sql_server 0.000
customized 0.000
excel 0.000
agile 0.000
vm 0.000
customize 0.000
Topic 7
term frequency
product_management 0.498
proper 0.157
information_systems 0.124
computer_science 0.080
metric 0.022
gtm 0.020
business_knowledge 0.013
netapp 0.011
checklist 0.010
spa 0.009
vetting 0.008
blueprint 0.006
pearl 0.006
canvas 0.005
haskell 0.004
standard_operating_procedure 0.004
rpc 0.004
filemaker 0.004
esx 0.003
ucs 0.003
datamart 0.002
ocr 0.002
freebsd 0.002
ntp 0.001
netty 0.001
big_data 0.001
agile 0.000
product_owner 0.000
bde 0.000
bdb 0.000
web_services 0.000
sgi 0.000
virtual_machine 0.000
pivotal 0.000
tuning 0.000
mssql 0.000
linux 0.000
relational_database 0.000
unix 0.000
c 0.000
Topic 8
term frequency
excel 0.393
crm 0.157
microsoft_excel 0.090
word 0.075
gaap 0.074
ms_excel 0.055
spreadsheet 0.038
power_point 0.032
proper 0.031
charles 0.030
scp 0.008
powerpoint 0.005
atf 0.002
control_group 0.002
webpage 0.002
visio 0.001
tcpip 0.001
bagging 0.001
wix 0.001
cmdb 0.001
checklist 0.000
freescale 0.000
zoho 0.000
google_doc 0.000
project_management 0.000
information_systems 0.000
custom 0.000
workflow 0.000
blockchain 0.000
statistics 0.000
r 0.000
google_sheet 0.000
pivotal 0.000
c 0.000
customization 0.000
program_management 0.000
newly 0.000
sql 0.000
autocad 0.000
product_management 0.000
Topic 9
term frequency
program_management 0.436
workflow 0.272
newly 0.078
panda 0.031
alm 0.027
ssa 0.022
lamp 0.020
angular_js 0.019
sybase 0.018
karma 0.015
template 0.013
omniture 0.010
computer_science 0.006
after_effects 0.006
plsql 0.006
zend 0.005
sh 0.005
user_experience_research 0.005
jama 0.002
shard 0.001
smarty 0.001
project_management 0.000
information_systems 0.000
bdb 0.000
jasmine 0.000
haxe 0.000
custom 0.000
metric 0.000
ramda.js 0.000
lodash.js 0.000
timeline.js 0.000
js 0.000
oscommerce 0.000
agile 0.000
pivotal 0.000
sql 0.000
c 0.000
php 0.000
web_services 0.000
javascript 0.000
Topic 10
term frequency
excel 0.281
word 0.271
powerpoint 0.214
sharepoint 0.071
visio 0.045
customized 0.044
autocad 0.023
webdriver 0.013
jmeter 0.008
soapui 0.007
baseline 0.004
greenplum 0.003
mainframe 0.003
ember.js 0.002
eis 0.002
stata 0.002
foss 0.002
udeploy 0.001
nsx 0.001
testcomplete 0.001
project_management 0.000
messaging_protocol 0.000
sphinx 0.000
custom 0.000
customize 0.000
bdb 0.000
wireframing 0.000
spss 0.000
metric 0.000
computer_science 0.000
sql 0.000
xna 0.000
ms_excel 0.000
vetting 0.000
statistics 0.000
tkinter 0.000
vb 0.000
rpc 0.000
python 0.000
linux 0.000
Topic 11
term frequency
docker 0.154
aws 0.061
jenkins 0.056
ci 0.051
python 0.049
puppet 0.048
chef 0.048
ansible 0.045
linux 0.041
continuous_integration 0.038
kubernetes 0.037
azure 0.024
git 0.023
ruby 0.021
bash 0.019
agile 0.013
shell 0.010
maven 0.010
openstack 0.010
github 0.009
vmware 0.008
nagios 0.008
nexus 0.008
powershell 0.008
gcp 0.008
java 0.008
xamarin 0.008
computer_science 0.007
vb.net 0.006
flask 0.006
perl 0.006
mesos 0.006
golang 0.006
gitlab 0.005
svn 0.005
openshift 0.004
teamcity 0.004
gradle 0.003
containerized 0.003
groovy 0.003
Topic 12
term frequency
javascript 0.198
css 0.117
html 0.115
angular 0.048
jquery 0.047
html5 0.043
angularjs 0.033
js 0.031
computer_science 0.028
ajax 0.023
php 0.022
node.js 0.019
css3 0.018
bootstrap 0.018
agile 0.018
mvc 0.016
java 0.015
reactjs 0.013
typescript 0.013
web_services 0.012
sass 0.010
redux 0.009
mysql 0.009
smoke 0.008
nodejs 0.007
sql 0.007
custom 0.006
npm 0.006
wireframing 0.006
object_oriented 0.006
balsamiq 0.005
node 0.005
angular.js 0.005
git 0.004
unit_testing 0.004
clearcase 0.004
drupal 0.004
mocha 0.004
creative_cloud 0.003
oop 0.003
Topic 13
term frequency
visualization 0.139
sql 0.101
excel 0.068
dashboard 0.068
ssis 0.059
ssrs 0.059
hyperion 0.053
vba 0.037
spss 0.031
jmp 0.028
sop 0.025
tableau 0.025
macros 0.024
computer_science 0.020
custom 0.019
qlik 0.019
arcgis 0.017
obiee 0.017
visual_basic 0.015
wireframe 0.014
information_systems 0.013
statistics 0.013
structured_query_language 0.012
powerbi 0.012
sql_server 0.011
relational_database 0.009
python 0.008
adhoc 0.008
webgl 0.008
natural_language_understanding 0.006
bpo 0.006
lua 0.006
sqlite 0.006
mathematica 0.005
minitab 0.005
electron 0.005
bigtable 0.004
relational_databases 0.004
ms_excel 0.004
video_editing 0.003
Topic 14
term frequency
sql 0.109
bi 0.108
business_intelligence 0.092
tableau 0.079
etl 0.073
sas 0.070
statistics 0.070
r 0.063
data_warehouse 0.056
computer_science 0.025
informatica 0.025
python 0.022
visualization 0.021
big_data 0.020
teradata 0.017
cognos 0.014
microstrategy 0.012
power_bi 0.012
information_systems 0.010
qlikview 0.010
relational_databases 0.009
hive 0.007
talend 0.007
regression 0.006
hana 0.006
spotfire 0.005
programming_languages 0.004
custom 0.004
predictive_modeling 0.004
domo 0.003
data_science 0.003
erwin 0.003
paxata 0.002
excel 0.002
sap_hana 0.002
rdbms 0.002
mssql 0.002
toad 0.002
programming_language 0.002
ddl 0.001
Topic 15
term frequency
git 0.104
api 0.104
version_control 0.043
agile 0.039
jenkins 0.038
jira 0.036
continuous_integration 0.036
github 0.033
restful 0.032
node 0.028
javascript 0.027
svn 0.027
pivotal 0.021
angular 0.019
subversion 0.019
ci 0.016
maven 0.016
gradle 0.016
java 0.015
js 0.015
nodejs 0.015
confluence 0.015
gulp 0.012
bitbucket 0.012
unit_testing 0.012
webpack 0.011
grunt 0.011
vue 0.011
computer_science 0.010
atlassian 0.009
jasmine 0.009
branching 0.008
cloud_foundry 0.008
cordova 0.007
stash 0.007
ember 0.007
oauth 0.006
clojure 0.006
intellij 0.005
python 0.005
Topic 16
term frequency
math 0.242
computer_science 0.184
google_cloud 0.171
iaas 0.092
paas 0.056
public_cloud 0.048
labview 0.025
macro 0.024
mfc 0.014
tuning 0.012
scada 0.012
oracle_11 0.011
sdl 0.010
oracle_12c 0.010
rman 0.008
database_server 0.008
snowflake 0.007
vdi 0.007
tile 0.007
c++ 0.006
toad 0.005
prolog 0.005
columnar 0.005
sharding 0.004
ocp 0.004
python 0.003
win32 0.003
cloud_compute 0.002
pascal 0.002
model_view_controller 0.002
big_data 0.002
extract_transform_load 0.002
dataguard 0.001
c 0.001
statistics 0.001
pdb 0.001
linux 0.000
java 0.000
lvm 0.000
concurrency 0.000
Topic 17
term frequency
java 0.181
computer_science 0.100
c 0.085
c++ 0.078
agile 0.049
web_services 0.038
selenium 0.035
python 0.031
object_oriented 0.029
programming_languages 0.024
regression 0.024
sql 0.020
swift 0.018
javascript 0.016
api 0.015
programming_language 0.015
junit 0.013
unit_testing 0.013
continuous_integration 0.012
restful 0.012
soa 0.010
cucumber 0.009
oo 0.008
linux 0.007
jenkins 0.007
eclipse 0.007
oop 0.006
objective_c 0.006
nosql 0.006
code_review 0.005
bdd 0.005
xcode 0.005
maven 0.005
concurrency 0.004
ruby 0.004
git 0.004
relational_databases 0.004
qtp 0.004
revision_control 0.003
ood 0.003
Topic 18
term frequency
machine_learning 0.173
data_science 0.091
python 0.081
big_data 0.075
computer_science 0.059
ai 0.045
statistics 0.043
r 0.036
artificial_intelligence 0.030
deep_learning 0.026
java 0.025
c++ 0.022
ml 0.021
sql 0.018
visualization 0.017
computer_vision 0.016
scala 0.016
c 0.014
regression 0.014
natural_language_processing 0.014
hive 0.013
nlp 0.012
programming_languages 0.012
predictive_modeling 0.009
pig 0.008
nosql 0.008
mapreduce 0.007
numpy 0.005
linux 0.005
bayesian 0.005
programming_language 0.004
hpc 0.004
anomaly_detection 0.004
toolchain 0.003
custom 0.003
stata 0.003
apache 0.003
hypothesis_testing 0.003
cmake 0.003
crucible 0.002
Topic 19
term frequency
sketch 0.132
photoshop 0.132
illustrator 0.110
invision 0.092
axure 0.067
detailed_description 0.065
indesign 0.047
creative_suite 0.042
com 0.039
wordpress 0.039
omnigraffle 0.036
agile 0.026
svm 0.021
vms 0.019
html 0.015
elixir 0.015
xd 0.013
uxpin 0.010
datameer 0.009
dml 0.009
msbuild 0.008
java8 0.007
dreamweaver 0.007
supervised_learning 0.007
flinto 0.005
ffmpeg 0.005
rdf 0.004
vb6 0.003
rdb 0.002
proto.io 0.002
ingres 0.002
fsm 0.002
openroad 0.001
css 0.001
openvms 0.001
computer_science 0.000
web_services 0.000
amazon_web_services 0.000
css3 0.000
bootstrap 0.000
# A stab at naming the topics
topic_names = {1: u'Data Engineering (Big Data Focus)',
2: u'Microsoft OOP Engineering (C, C++, .NET)',
3: u'Web Application Development (Ruby, Rails, JS, Databases)',
4: u'Linux/Unix, Software Engineering, and Scripting',
5: u'Database Administration',
6: u'Project Management (Agile Focus)',
7: u'Project Management (General Software)',
8: u'Product Management',
9: u'General Management & Productivity (Microsoft Office Focus)',
10: u'Software Program Management',
11: u'Project and Program Management',
12: u'DevOps and Cloud Computing/Infrastructure',
13: u'Frontend Software Engineering and Design',
14: u'Business Intelligence',
15: u'Analytics',
16: u'Quality Engineering, Version Control, & Build',
17: u'Big Data Analytics; Hardware & Scientific Computing',
18: u'Software Engineering',
19: u'Data Science, Machine Learning, and AI',
20: u'Design'}_____no_output_____
</code>
#### Ingest user input & transform into list of skills_____no_output_____
<code>
matcher = Matcher(nlp.vocab)
user_input = '''
My skills are Postgresql, and Python.
Experience with Chef Puppet and Docker required.
I also happen to know Blastoise and Charzard. Also NeuRal neTwOrk.
I use Git, Github, svn, Subversion, but not git, github or subversion.
Additionally, I can program using Perl, Java, and Haskell. But not perl, java, or haskell.'''
# Construct matcher object
doc = nlp(user_input)
for label, pattern in skill_dict.items():
matcher.add(label, None, pattern)
# Compare input to pre-defined skill patterns
user_skills = []
matches = matcher(doc)
for match in matches:
if match is not None:
# match object returns a tuple with (id, startpos, endpos)
output = str(doc[match[1]:match[2]]).lower()
user_skills.append(output)
print("*** User skills: *** ")
for skill in user_skills:
print(skill)*** User skills: ***
postgresql
python
chef
puppet
docker
neural network
git
github
svn
subversion
git
github
subversion
perl
java
haskell
perl
java
haskell
</code>
#### Compare user skills to the LDA model_____no_output_____
<code>
def top_match_items(input_doc, lda_model, input_dictionary, num_terms=20):
"""
    (1) create a bag-of-words representation of the input skills,
    (2) create an LDA representation, and (3) return the best-matching
    topic name, its probability, and that topic's top terms
"""
    doc_bow = input_dictionary.doc2bow(input_doc)  # use the dictionary passed in, not the module-level one
# create an LDA representation
document_lda = lda_model[doc_bow]
# Sort in descending order
sorted_doc_lda = sorted(document_lda, key=lambda review_lda: -review_lda[1])
topic_number, freq = sorted_doc_lda[0][0], sorted_doc_lda[0][1]
highest_probability_topic = topic_names[topic_number+1]
top_topic_skills = []
    for term, term_freq in lda_model.show_topic(topic_number, topn=num_terms):
top_topic_skills.append(term)
return highest_probability_topic, round(freq, 3), top_topic_skills
matched_topic, matched_freq, top_topic_skills = top_match_items(user_skills, lda, gensim_skills_dict)_____no_output_____def common_skills(top_topic_skills, user_skills):
return [item for item in top_topic_skills if item in user_skills]
def non_common_skills(top_topic_skills, user_skills):
return [item for item in top_topic_skills if item not in user_skills]_____no_output_____print("**** User's matched topic and percent match:")
print(matched_topic, matched_freq)
print("\n**** Skills user has in common with topic:")
for skill in common_skills(top_topic_skills, user_skills):
print(skill)
print("\n**** Skills user does NOT have in common with topic:")
for skill in non_common_skills(top_topic_skills, user_skills):
print(skill)**** User's matched topic and percent match:
Quality Engineering, Version Control, & Build 0.35
**** Skills user has in common with topic:
git
github
svn
subversion
java
**** Skills user does NOT have in common with topic:
api
version_control
agile
jenkins
jira
continuous_integration
restful
node
javascript
pivotal
angular
ci
maven
gradle
js
</code>
|
{
"repository": "shiftorg/skills",
"path": "sandbox/data_science/skill_lda.ipynb",
"matched_keywords": [
"neuroscience"
],
"stars": 2,
"size": 329049,
"hexsha": "cb903efeefe0c0841b4fea4ffd0f65f019d0f500",
"max_line_length": 275318,
"avg_line_length": 209.1856325493,
"alphanum_fraction": 0.6764433261
}
|
# Notebook from HiteshDhola/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Path: neurophysics-neuroscience/python/setup.ipynb
This notebook is used to set up important files for running the notebooks. It will create a "data" folder in the root of the repository, and download approximately 60MB of data._____no_output_____
<code>
import sys
sys.path.append('./src/')
import opencourse as oc_____no_output_____# Download all data
oc.download_all_files()Help on package opencourse:
NAME
opencourse
PACKAGE CONTENTS
bassett_funcs
io
konrad_funcs
FUNCTIONS
open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)
Open file and return a stream. Raise IOError upon failure.
file is either a text or byte string giving the name (and the path
if the file isn't in the current working directory) of the file to
be opened or an integer file descriptor of the file to be
wrapped. (If a file descriptor is given, it is closed when the
returned I/O object is closed, unless closefd is set to False.)
mode is an optional string that specifies the mode in which the file
is opened. It defaults to 'r' which means open for reading in text
mode. Other common values are 'w' for writing (truncating the file if
it already exists), 'x' for creating and writing to a new file, and
'a' for appending (which on some Unix systems, means that all writes
append to the end of the file regardless of the current seek position).
In text mode, if encoding is not specified the encoding used is platform
dependent: locale.getpreferredencoding(False) is called to get the
current locale encoding. (For reading and writing raw bytes use binary
mode and leave encoding unspecified.) The available modes are:
========= ===============================================================
Character Meaning
--------- ---------------------------------------------------------------
'r' open for reading (default)
'w' open for writing, truncating the file first
'x' create a new file and open it for writing
'a' open for writing, appending to the end of the file if it exists
'b' binary mode
't' text mode (default)
'+' open a disk file for updating (reading and writing)
'U' universal newline mode (deprecated)
========= ===============================================================
The default mode is 'rt' (open for reading text). For binary random
access, the mode 'w+b' opens and truncates the file to 0 bytes, while
'r+b' opens the file without truncation. The 'x' mode implies 'w' and
raises an `FileExistsError` if the file already exists.
Python distinguishes between files opened in binary and text modes,
even when the underlying operating system doesn't. Files opened in
binary mode (appending 'b' to the mode argument) return contents as
bytes objects without any decoding. In text mode (the default, or when
't' is appended to the mode argument), the contents of the file are
returned as strings, the bytes having been first decoded using a
platform-dependent encoding or using the specified encoding if given.
'U' mode is deprecated and will raise an exception in future versions
of Python. It has no effect in Python 3. Use newline to control
universal newlines mode.
buffering is an optional integer used to set the buffering policy.
Pass 0 to switch buffering off (only allowed in binary mode), 1 to select
line buffering (only usable in text mode), and an integer > 1 to indicate
the size of a fixed-size chunk buffer. When no buffering argument is
given, the default buffering policy works as follows:
* Binary files are buffered in fixed-size chunks; the size of the buffer
is chosen using a heuristic trying to determine the underlying device's
"block size" and falling back on `io.DEFAULT_BUFFER_SIZE`.
On many systems, the buffer will typically be 4096 or 8192 bytes long.
* "Interactive" text files (files for which isatty() returns True)
use line buffering. Other text files use the policy described above
for binary files.
encoding is the name of the encoding used to decode or encode the
file. This should only be used in text mode. The default encoding is
platform dependent, but any encoding supported by Python can be
passed. See the codecs module for the list of supported encodings.
errors is an optional string that specifies how encoding errors are to
be handled---this argument should not be used in binary mode. Pass
'strict' to raise a ValueError exception if there is an encoding error
(the default of None has the same effect), or pass 'ignore' to ignore
errors. (Note that ignoring encoding errors can lead to data loss.)
See the documentation for codecs.register or run 'help(codecs.Codec)'
for a list of the permitted encoding error strings.
newline controls how universal newlines works (it only applies to text
mode). It can be None, '', '\n', '\r', and '\r\n'. It works as
follows:
* On input, if newline is None, universal newlines mode is
enabled. Lines in the input can end in '\n', '\r', or '\r\n', and
these are translated into '\n' before being returned to the
caller. If it is '', universal newline mode is enabled, but line
endings are returned to the caller untranslated. If it has any of
the other legal values, input lines are only terminated by the given
string, and the line ending is returned to the caller untranslated.
* On output, if newline is None, any '\n' characters written are
translated to the system default line separator, os.linesep. If
newline is '' or '\n', no translation takes place. If newline is any
of the other legal values, any '\n' characters written are translated
to the given string.
If closefd is False, the underlying file descriptor will be kept open
when the file is closed. This does not work when a file name is given
and must be True in that case.
A custom opener can be used by passing a callable as *opener*. The
underlying file descriptor for the file object is then obtained by
calling *opener* with (*file*, *flags*). *opener* must return an open
file descriptor (passing os.open as *opener* results in functionality
similar to passing None).
open() returns a file object whose type depends on the mode, and
through which the standard file operations such as reading and writing
are performed. When open() is used to open a file in a text mode ('w',
'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When used to open
a file in a binary mode, the returned class varies: in read binary
mode, it returns a BufferedReader; in write binary and append binary
modes, it returns a BufferedWriter, and in read/write mode, it returns
a BufferedRandom.
It is also possible to use a string or bytearray as a file for both
reading and writing. For strings StringIO can be used like a file
opened in a text mode, and for bytes a BytesIO can be used like a file
opened in a binary mode.
DATA
SEEK_CUR = 1
SEEK_END = 2
SEEK_SET = 0
FILE
/Users/tarrysingh/Downloads/data-science-ipython-notebooks-master/neurophysics-neuroscience/python/src/opencourse/__init__.py
</code>
# Ensure that you have the right dependencies
All of the below packages should import:_____no_output_____
<code>
import mne # <-- Package for electrophysiology analysis
import pandas # <-- Package for representing data as a DataFrame
import bct # <-- Brain Connectivity Toolbox_____no_output_____
</code>
|
{
"repository": "HiteshDhola/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials",
"path": "neurophysics-neuroscience/python/setup.ipynb",
"matched_keywords": [
"neuroscience"
],
"stars": 3266,
"size": 10993,
"hexsha": "cb3ae57d2c30bd23822f1f6a914de9d8035882e5",
"max_line_length": 185,
"avg_line_length": 47.3836206897,
"alphanum_fraction": 0.5489857182
}
|
# Notebook from simonsfoundation/Th17_TRN_Networks
Path: TRN_Notebooks/ChIP_Atac17_KO_AtacTh_bias25_TFmRNA_TFmRNA.ipynb
<code>
# Visualization of the KO+ChIP Gold Standard from:
# Miraldi et al. (2018) "Leveraging chromatin accessibility for transcriptional regulatory network inference in Th17 Cells"
# TO START: In the menu above, choose "Cell" --> "Run All", and network + heatmap will load
# NOTE: Default limits networks to TF-TF edges in top 1 TF / gene model (.93 quantile), to see the full
# network hit "restore" (in the drop-down menu in cell below) and set threshold to 0 and hit "threshold"
# You can search for gene names in the search box below the network (hit "Match"), and find regulators ("targeted by")
# Change "canvas" to "SVG" (drop-down menu in cell below) to enable drag interactions with nodes & labels
# Change "SVG" to "canvas" to speed up layout operations
# More info about jp_gene_viz and user interface instructions are available on Github:
# https://github.com/simonsfoundation/jp_gene_viz/blob/master/doc/dNetwork%20widget%20overview.ipynb
# directory containing gene expression data and network folder
directory = "."
# folder containing networks
netPath = 'Networks'
# network file name
networkFile = 'ChIP_A17_KOall_ATh_bias25_TFmRNA_sp.tsv'
# title for network figure
netTitle = 'ChIP/ATAC(Th17)+KO+ATAC(Th), bias = 25_TFmRNA, TFA = TF mRNA'
# name of gene expression file
expressionFile = 'Th0_Th17_48hTh.txt'
# column of gene expression file to color network nodes
rnaSampleOfInt = 'Th17(48h)'
# edge cutoff -- for Inferelator TRNs, corresponds to signed quantile (rank of edges in 15 TFs / gene models),
# increase from 0 --> 1 to get more significant edges (e.g., .33 would correspond to edges only in 10 TFs / gene
# models)
edgeCutoff = .93_____no_output_____import sys
if ".." not in sys.path:
sys.path.append("..")
from jp_gene_viz import dNetwork
dNetwork.load_javascript_support()
# from jp_gene_viz import multiple_network
from jp_gene_viz import LExpression
LExpression.load_javascript_support()_____no_output_____# Load network linked to gene expression data
L = LExpression.LinkedExpressionNetwork()
L.show() _____no_output_____# Load Network and Heatmap
L.load_network(directory + '/' + netPath + '/' + networkFile)
L.load_heatmap(directory + '/' + expressionFile)
N = L.network
N.set_title(netTitle)
N.threshhold_slider.value = edgeCutoff
N.apply_click(None)
N.draw()
# Add labels to nodes
N.labels_button.value=True
# Limit to TFs only, remove unconnected TFs, choose and set network layout
N.restore_click()
N.tf_only_click()
N.connected_only_click()
N.layout_dropdown.value = 'fruchterman_reingold'
N.layout_click()
# Interact with Heatmap
# Limit genes in heatmap to network genes
L.gene_click(None)
# Z-score heatmap values
L.expression.transform_dropdown.value = 'Z score'
L.expression.apply_transform()
# Choose a column in the heatmap (e.g., 48h Th17) to color nodes
L.expression.col = rnaSampleOfInt
L.condition_click(None)
# Switch SVG layout to get line colors, then switch back to faster canvas mode
N.force_svg(None)('Reading network', './Networks/ChIP_A17_KOall_ATh_bias25_TFmRNA_sp.tsv')
('Loading saved layout', './Networks/ChIP_A17_KOall_ATh_bias25_TFmRNA_sp.tsv.layout.json')
Omitting edges, using canvas, and fast layout default because the network is large
</code>
|
{
"repository": "simonsfoundation/Th17_TRN_Networks",
"path": "TRN_Notebooks/ChIP_Atac17_KO_AtacTh_bias25_TFmRNA_TFmRNA.ipynb",
"matched_keywords": [
"gene expression"
],
"stars": 1,
"size": 30117,
"hexsha": "cb3b04435703fa122b8a691f055034e9ccde0d23",
"max_line_length": 137,
"avg_line_length": 41.6556016598,
"alphanum_fraction": 0.4147159412
}
|
# Notebook from stuarteberg/holoviews
Path: doc/Tutorials/Bokeh_Elements.ipynb
<div class="alert alert-info" role="alert">
This tutorial contains a lot of bokeh plots, which may take a little while to load and render.
</div>
``Element``s are the basic building blocks for any HoloViews visualization. These are the objects that can be composed together using the various [Container](Containers.ipynb) types.
Here in this overview, we show an example of how to build each of these ``Element``s directly out of Python or Numpy data structures. An even more powerful way to use them is by collecting similar ``Element``s into a HoloMap, as described in [Exploring Data](Exploring_Data.ipynb), so that you can explore, select, slice, and animate them flexibly, but here we focus on having small, self-contained examples. Complete reference material for each type can be accessed using our [documentation system](Introduction.ipynb#ParamDoc). This tutorial uses the bokeh plotting backend; see the [Elements](Elements.ipynb) tutorial for the corresponding matplotlib plots.
## Element types
This class hierarchy shows each of the ``Element`` types.
Each type is named for the default or expected way that the underlying data can be visualized. E.g., if your data is wrapped into a ``Surface`` object, it will display as a 3D surface by default, whereas the same data embedded in an ``Image`` object will display as a 2D raster image. But please note that the specification and implementation for each ``Element`` type does not actually include *any* such visualization -- the name merely serves as a semantic indication that you ordinarily think of the data as being laid out visually in that way. The actual plotting is done by a separate plotting subsystem, while the objects themselves focus on storing your data and the metadata needed to describe and use it.
This separation of data and visualization is described in detail in the [Options tutorial](Options.ipynb), which describes all about how to find out the options available for each ``Element`` type and change them if necessary, from either Python or IPython Notebook. When using this tutorial interactively in an IPython/Jupyter notebook session, we suggest adding ``%output info=True`` after the call to ``notebook_extension`` below, which will pop up a detailed list and explanation of the available options for visualizing each ``Element`` type, after that notebook cell is executed. Then, to find out all the options for any of these ``Element`` types, just press ``<Shift-Enter>`` on the corresponding cell in the live notebook.
The types available:
<dl class="dl-horizontal">
<dt><a href="#Element"><code>Element</code></a></dt><dd>The base class of all <code>Elements</code>.</dd>
</dl>
### <a id='ChartIndex'></a> <a href="#Chart Elements"><code>Charts:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Curve"><code>Curve</code></a></dt><dd>A continuous relation between a dependent and an independent variable. <font color='green'>✓</font></dd>
<dt><a href="#ErrorBars"><code>ErrorBars</code></a></dt><dd>A collection of x-/y-coordinates with associated error magnitudes. <font color='green'>✓</font></dd>
<dt><a href="#Spread"><code>Spread</code></a></dt><dd>Continuous version of ErrorBars. <font color='green'>✓</font></dd>
<dt><a href="#Area"><code>Area</code></a></dt><dd>Area under the curve or between curves. <font color='green'>✓</font></dd>
<dt><a href="#Bars"><code>Bars</code></a></dt><dd>Data collected and binned into categories. <font color='green'>✓</font></dd>
<dt><a href="#Histogram"><code>Histogram</code></a></dt><dd>Data collected and binned in a continuous space using specified bin edges. <font color='green'>✓</font></dd>
<dt><a href="#BoxWhisker"><code>BoxWhisker</code></a></dt><dd>Distributions of data varying by 0-N key dimensions.<font color='green'>✓</font></dd>
<dt><a href="#Scatter"><code>Scatter</code></a></dt><dd>Discontinuous collection of points indexed over a single dimension. <font color='green'>✓</font></dd>
<dt><a href="#Points"><code>Points</code></a></dt><dd>Discontinuous collection of points indexed over two dimensions. <font color='green'>✓</font></dd>
<dt><a href="#VectorField"><code>VectorField</code></a></dt><dd>Cyclic variable (and optional auxiliary data) distributed over two-dimensional space. <font color='green'>✓</font></dd>
<dt><a href="#Spikes"><code>Spikes</code></a></dt><dd>A collection of horizontal or vertical lines at various locations with fixed height (1D) or variable height (2D). <font color='green'>✓</font></dd>
<dt><a href="#SideHistogram"><code>SideHistogram</code></a></dt><dd>Histogram binning data contained by some other <code>Element</code>. <font color='green'>✓</font></dd>
</dl>
### <a id='Chart3DIndex'></a> <a href="#Chart3D Elements"><code>Chart3D Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Surface"><code>Surface</code></a></dt><dd>Continuous collection of points in a three-dimensional space. <font color='red'>✗</font></dd>
<dt><a href="#Scatter3D"><code>Scatter3D</code></a></dt><dd>Discontinuous collection of points in a three-dimensional space. <font color='red'>✗</font></dd>
<dt><a href="#TriSurface"><code>TriSurface</code></a></dt><dd>Continuous but irregular collection of points interpolated into a Surface using Delaunay triangulation. <font color='red'>✗</font></dd>
</dl>
### <a id='RasterIndex'></a> <a href="#Raster Elements"><code>Raster Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Raster"><code>Raster</code></a></dt><dd>The base class of all rasters containing two-dimensional arrays. <font color='green'>✓</font></dd>
<dt><a href="#QuadMesh"><code>QuadMesh</code></a></dt><dd>Raster type specifying 2D bins with two-dimensional array of values. <font color='green'>✓</font></dd>
<dt><a href="#HeatMap"><code>HeatMap</code></a></dt><dd>Raster displaying sparse, discontinuous data collected in a two-dimensional space. <font color='green'>✓</font></dd>
<dt><a href="#Image"><code>Image</code></a></dt><dd>Raster containing a two-dimensional array covering a continuous space (sliceable). <font color='green'>✓</font></dd>
<dt><a href="#RGB"><code>RGB</code></a></dt><dd>Image with 3 (R,G,B) or 4 (R,G,B,Alpha) color channels. <font color='green'>✓</font></dd>
<dt><a href="#HSV"><code>HSV</code></a></dt><dd>Image with 3 (Hue, Saturation, Value) or 4 channels. <font color='green'>✓</font></dd>
</dl>
### <a id='TabularIndex'></a> <a href="#Tabular Elements"><code>Tabular Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#ItemTable"><code>ItemTable</code></a></dt><dd>Ordered collection of key-value pairs (ordered dictionary). <font color='green'>✓</font></dd>
<dt><a href="#Table"><code>Table</code></a></dt><dd>Collection of arbitrary data with arbitrary key and value dimensions. <font color='green'>✓</font></dd>
</dl>
### <a id='AnnotationIndex'></a> <a href="#Annotation Elements"><code>Annotations:</code></a>
<dl class="dl-horizontal">
<dt><a href="#VLine"><code>VLine</code></a></dt><dd>Vertical line annotation. <font color='green'>✓</font></dd>
<dt><a href="#HLine"><code>HLine</code></a></dt><dd>Horizontal line annotation. <font color='green'>✓</font></dd>
<dt><a href="#Spline"><code>Spline</code></a></dt><dd>Bezier spline (arbitrary curves). <font color='green'>✓</font></dd>
<dt><a href="#Text"><code>Text</code></a></dt><dd>Text annotation on an <code>Element</code>. <font color='green'>✓</font></dd>
<dt><a href="#Arrow"><code>Arrow</code></a></dt><dd>Arrow on an <code>Element</code> with optional text label. <font color='red'>✗</font></dd>
</dl>
### <a id='PathIndex'></a> <a href="#Path Elements"><code>Paths:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Path"><code>Path</code></a></dt><dd>Collection of paths. <font color='green'>✓</font></dd>
<dt><a href="#Contours"><code>Contours</code></a></dt><dd>Collection of paths, each with an associated value. <font color='green'>✓</font></dd>
<dt><a href="#Polygons"><code>Polygons</code></a></dt><dd>Collection of filled, closed paths with an associated value. <font color='green'>✓</font></dd>
<dt><a href="#Bounds"><code>Bounds</code></a></dt><dd>Box specified by corner positions. <font color='green'>✓</font></dd>
<dt><a href="#Box"><code>Box</code></a></dt><dd>Box specified by center position, radius, and aspect ratio. <font color='green'>✓</font></dd>
<dt><a href="#Ellipse"><code>Ellipse</code></a></dt><dd>Ellipse specified by center position, radius, and aspect ratio. <font color='green'>✓</font></dd>
</dl>_____no_output_____## ``Element`` <a id='Element'></a>_____no_output_____**The basic or fundamental types of data that can be visualized.**
``Element`` is the base class for all the other HoloViews objects shown in this section.
All ``Element`` objects accept ``data`` as the first argument to define the contents of that element. In addition to its implicit type, each element object has a ``group`` string defining its category, and a ``label`` naming this particular item, as described in the [Introduction](Introduction.ipynb#value).
When rich display is off, or if no visualization has been defined for that type of ``Element``, the ``Element`` is presented with a default textual representation:_____no_output_____
<code>
import holoviews as hv
hv.notebook_extension(bokeh=True)
hv.Element(None, group='Value', label='Label')_____no_output_____
</code>
In addition, ``Element`` has key dimensions (``kdims``), value dimensions (``vdims``), and constant dimensions (``cdims``) to describe the semantics of indexing within the ``Element``, the semantics of the underlying data contained by the ``Element``, and any constant parameters associated with the object, respectively.
Dimensions are described in the [Introduction](Introduction.ipynb).
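For instance, a minimal sketch (with made-up dimension names) of declaring key and value dimensions explicitly when constructing an ``Element``:

```python
import numpy as np
import holoviews as hv

xs = np.linspace(0, 10, 50)
curve = hv.Curve((xs, np.sin(xs)),
                 kdims=['time'],          # key dimension: where the samples were taken
                 vdims=['displacement'])  # value dimension: what was measured there
print(curve.kdims, curve.vdims)
```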
The remaining ``Element`` types each have a rich, graphical display as shown below._____no_output_____## ``Chart`` Elements <a id='Chart Elements'></a>_____no_output_____**Visualization of a dependent variable against an independent variable**
The first large class of ``Elements`` is the ``Chart`` elements. These objects have at least one fully indexable, sliceable key dimension (typically the *x* axis in a plot), and usually have one or more value dimension(s) (often the *y* axis) that may or may not be indexable depending on the implementation. The key dimensions are normally the parameter settings for which things are measured, and the value dimensions are the data points recorded at those settings.
As described in the [Columnar Data tutorial](Columnar_Data.ipynb), the data can be stored in several different internal formats, such as a NumPy array of shape (N, D), where N is the number of samples and D the number of dimensions. A somewhat larger list of formats can be accepted, including any of the supported internal formats, or
1. As a list of length N containing tuples of length D.
2. As a tuple of length D containing iterables of length N._____no_output_____### ``Curve`` <a id='Curve'></a>_____no_output_____
<code>
import numpy as np
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
hv.Curve(points)_____no_output_____
</code>
A ``Curve`` is a set of values provided for some set of keys from a [continuously indexable 1D coordinate system](Continuous_Coordinates.ipynb), where the plotted values will be connected up because they are assumed to be samples from a continuous relation._____no_output_____### ``ErrorBars`` <a id='ErrorBars'></a>_____no_output_____
<code>
np.random.seed(7)
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
errors = [(0.1*i, np.sin(0.1*i), np.random.rand()/2) for i in np.linspace(0, 100, 11)]
hv.Curve(points) * hv.ErrorBars(errors)_____no_output_____
</code>
``ErrorBars`` is a set of x-/y-coordinates with associated error values. Error values may be either symmetric or asymmetric, and thus can be supplied as an Nx3 or Nx4 array (or any of the alternative constructors Chart Elements allow)._____no_output_____
<code>
%%opts ErrorBars
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
errors = [(0.1*i, np.sin(0.1*i), np.random.rand()/2, np.random.rand()/4) for i in np.linspace(0, 100, 11)]
hv.Curve(points) * hv.ErrorBars(errors, vdims=['y', 'yerrneg', 'yerrpos'])_____no_output_____
</code>
### ``Area`` <a id='Area'></a>_____no_output_____** *Area under the curve* **
By default the Area Element draws just the area under the curve, i.e. the region between the curve and the origin._____no_output_____
<code>
xs = np.linspace(0, np.pi*4, 40)
hv.Area((xs, np.sin(xs)))_____no_output_____
</code>
** * Area between curves * **
When supplied a second value dimension the area is defined as the area between two curves._____no_output_____
<code>
X = np.linspace(0,3,200)
Y = X**2 + 3
Y2 = np.exp(X) + 2
Y3 = np.cos(X)
hv.Area((X, Y, Y2), vdims=['y', 'y2']) * hv.Area((X, Y, Y3), vdims=['y', 'y3'])_____no_output_____
</code>
#### Stacked areas
Areas are also useful to visualize multiple variables changing over time, but in order to be able to compare them the areas need to be stacked. Therefore the ``operation`` module provides the ``stack_area`` operation which makes it trivial to stack multiple Area in an (Nd)Overlay.
In this example we will generate a set of 5 arrays representing percentages and create an Overlay of them. Then we simply call the ``stack_area`` operation on the Overlay to get a stacked area chart._____no_output_____
<code>
values = np.random.rand(5, 20)
percentages = (values/values.sum(axis=0)).T*100
overlay = hv.Overlay([hv.Area(percentages[:, i], vdims=[hv.Dimension('value', unit='%')]) for i in range(5)])
overlay + hv.Area.stack(overlay)_____no_output_____
</code>
### ``Spread`` <a id='Spread'></a>_____no_output_____``Spread`` elements have the same data format as the ``ErrorBars`` element, namely x- and y-values with associated symmetric or asymmetric errors, but are interpreted as samples from a continuous distribution (just as ``Curve`` is the continuous version of ``Scatter``). These are often paired with an overlaid ``Curve`` to show both the mean (as a curve) and the spread of values; see the [Columnar Data tutorial](Columnar_Data.ipynb) for examples. _____no_output_____##### Symmetric_____no_output_____
<code>
np.random.seed(42)
xs = np.linspace(0, np.pi*2, 20)
err = 0.2+np.random.rand(len(xs))
hv.Spread((xs, np.sin(xs), err))_____no_output_____
</code>
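As mentioned above, a ``Spread`` is often overlaid with a ``Curve`` showing the mean; a minimal sketch reusing the arrays from the cell above:

```python
# Overlay the mean curve on the symmetric spread defined in the previous cell
hv.Curve((xs, np.sin(xs))) * hv.Spread((xs, np.sin(xs), err))
```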
##### Asymmetric_____no_output_____
<code>
%%opts Spread (fill_color='indianred' fill_alpha=1)
xs = np.linspace(0, np.pi*2, 20)
hv.Spread((xs, np.sin(xs), 0.1+np.random.rand(len(xs)), 0.1+np.random.rand(len(xs))),
vdims=['y', 'yerrneg', 'yerrpos'])_____no_output_____
</code>
### ``Bars`` <a id='Bars'></a>_____no_output_____
<code>
data = [('one',8),('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)]
bars = hv.Bars(data, kdims=[hv.Dimension('Car occupants', values='initial')], vdims=['Count'])
bars + bars[['one', 'two', 'three']]_____no_output_____
</code>
``Bars`` is an ``NdElement`` type, so by default it is sorted. To preserve the initial ordering specify the ``Dimension`` with values set to 'initial', or you can supply an explicit list of valid dimension keys.
``Bars`` support up to two key dimensions which can be laid by ``'group'`` and ``'stack'`` dimensions. By default the key dimensions are mapped onto the first, second ``Dimension`` of the ``Bars`` object, but this behavior can be overridden via the ``group_index`` and ``stack_index`` options._____no_output_____
<code>
%%opts Bars [group_index=0 stack_index=1]
from itertools import product
np.random.seed(3)
groups, stacks = ['A', 'B'], ['a', 'b']
keys = product(groups, stacks)
hv.Bars([k+(np.random.rand()*100.,) for k in keys],
kdims=['Group', 'Stack'], vdims=['Count'])_____no_output_____
</code>
### ``BoxWhisker`` <a id='BoxWhisker'></a>_____no_output_____The ``BoxWhisker`` Element allows representing distributions of data varying by 0-N key dimensions. To represent the distribution of a single variable, we can create a BoxWhisker Element with no key dimensions and a single value dimension:_____no_output_____
<code>
hv.BoxWhisker(np.random.randn(200), kdims=[], vdims=['Value'])_____no_output_____
</code>
BoxWhisker Elements support any number of dimensions and may also be rotated. To style the boxes and whiskers, supply ``boxprops``, ``whiskerprops``, and ``flierprops``._____no_output_____
<code>
%%opts BoxWhisker [invert_axes=True width=600]
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
kdims=['Group', 'Category'], vdims=['Value']).sort()_____no_output_____
</code>
### ``Histogram`` <a id='Histogram'></a>_____no_output_____
<code>
np.random.seed(1)
data = [np.random.normal() for i in range(10000)]
frequencies, edges = np.histogram(data, 20)
hv.Histogram(frequencies, edges)_____no_output_____
</code>
``Histogram``s partition the `x` axis into discrete (but not necessarily regular) bins, showing counts in each as a bar.
Almost all Element types, including ``Histogram``, may be projected onto a polar axis by supplying ``projection='polar'`` as a plot option._____no_output_____
<code>
%%opts Histogram [projection='polar' show_grid=True]
data = [np.random.rand()*np.pi*2 for i in range(100)]
frequencies, edges = np.histogram(data, 20)
hv.Histogram(frequencies, edges, kdims=['Angle'])_____no_output_____
</code>
### ``Scatter`` <a id='Scatter'></a>_____no_output_____
<code>
%%opts Scatter (color='k', marker='s', s=10)
np.random.seed(42)
points = [(i, np.random.random()) for i in range(20)]
hv.Scatter(points) + hv.Scatter(points)[12:20]_____no_output_____
</code>
Scatter is the discrete equivalent of Curve, showing *y* values for discrete *x* values selected. See [``Points``](#Points) for more information.
The marker shape specified above can be any supported by [matplotlib](http://matplotlib.org/api/markers_api.html), e.g. ``s``, ``d``, or ``o``; the other options select the color and size of the marker. For convenience with the [bokeh backend](Bokeh_Backend), the matplotlib marker options are supported using a compatibility function in HoloViews._____no_output_____### ``Points`` <a id='Points'></a>_____no_output_____
<code>
np.random.seed(12)
points = np.random.rand(50,2)
hv.Points(points) + hv.Points(points)[0.6:0.8,0.2:0.5]_____no_output_____
</code>
As you can see, ``Points`` is very similar to ``Scatter``, and can produce some plots that look identical. However, the two ``Element``s are very different semantically. For ``Scatter``, the dots each show a dependent variable *y* for some *x*, such as in the ``Scatter`` example above where we selected regularly spaced values of *x* and then created a random number as the corresponding *y*. I.e., for ``Scatter``, the *y* values are the data; the *x*s are just where the data values are located. For ``Points``, both *x* and *y* are independent variables, known as ``key_dimensions`` in HoloViews:_____no_output_____
<code>
for o in [hv.Points(points,name="Points "), hv.Scatter(points,name="Scatter")]:
for d in ['key','value']:
print("%s %s_dimensions: %s " % (o.name, d, o.dimensions(d,label=True)))_____no_output_____
</code>
The ``Scatter`` object expresses a dependent relationship between *x* and *y*, making it useful for combining with other similar ``Chart`` types, while the ``Points`` object expresses the relationship of two independent keys *x* and *y* with optional ``vdims`` (zero in this case), which makes ``Points`` objects meaningful to combine with the ``Raster`` types below.
Of course, the ``vdims`` need not be empty for ``Points``; here is an example with two additional quantities for each point, as ``value_dimension``s *z* and α visualized as the color and size of the dots, respectively:_____no_output_____
<code>
%%opts Points [color_index=2 size_index=3 scaling_factor=50]
np.random.seed(10)
data = np.random.rand(100,4)
points = hv.Points(data, vdims=['z', 'alpha'])
points + points[0.3:0.7, 0.3:0.7].hist()_____no_output_____
</code>
Such a plot wouldn't be meaningful for ``Scatter``, but is a valid use for ``Points``, where the *x* and *y* locations are independent variables representing coordinates, and the "data" is conveyed by the size and color of the dots.
### ``Spikes`` <a id='Spikes'></a>_____no_output_____Spikes represent any number of horizontal or vertical line segments with fixed or variable heights. There are a number of disparate uses for this type. First of all, they may be used as a rugplot to give an overview of a one-dimensional distribution. They may also be useful in more domain-specific cases, such as visualizing spike trains for neurophysiology or spectrograms in physics and chemistry applications.
In the simplest case, a Spikes object represents coordinates in a 1D distribution:_____no_output_____
<code>
%%opts Spikes (line_alpha=0.4) [spike_length=0.1]
xs = np.random.rand(50)
ys = np.random.rand(50)
hv.Points((xs, ys)) * hv.Spikes(xs)_____no_output_____
</code>
When supplying two dimensions to the Spikes object, the second dimension will be mapped onto the line height. Optionally, you may also supply a cmap and color_index to map color onto one of the dimensions. This way we can, for example, plot a mass spectrogram:_____no_output_____
<code>
%%opts Spikes (cmap='Reds')
hv.Spikes(np.random.rand(20, 2), kdims=['Mass'], vdims=['Intensity'])_____no_output_____
</code>
Another possibility is to draw a number of spike trains as you would encounter in neuroscience. Here we generate 10 separate random spike trains and distribute them evenly across the space by setting their ``position``. By also declaring some ``yticks``, each spike train can be labeled individually:_____no_output_____
<code>
%%opts Spikes [spike_length=0.1] NdOverlay [show_legend=False]
hv.NdOverlay({i: hv.Spikes(np.random.randint(0, 100, 10), kdims=['Time']).opts(plot=dict(position=0.1*i))
for i in range(10)}).opts(plot=dict(yticks=[((i+1)*0.1-0.05, i) for i in range(10)]))_____no_output_____
</code>
Finally, we may use ``Spikes`` to visualize marginal distributions as adjoined plots using the ``<<`` adjoin operator:_____no_output_____
<code>
%%opts Spikes (line_alpha=0.2)
points = hv.Points(np.random.randn(500, 2))
points << hv.Spikes(points['y']) << hv.Spikes(points['x'])_____no_output_____
</code>
### ``VectorField`` <a id='VectorField'></a>_____no_output_____
<code>
%%opts VectorField [size_index=3]
x,y = np.mgrid[-10:10,-10:10] * 0.25
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
vector_data = (x,y,sine_rings, exp_falloff)
hv.VectorField(vector_data)_____no_output_____
</code>
As you can see above, here the *x* and *y* positions are chosen to make a regular grid. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each *x,y* position).
Using the IPython ``%%opts`` cell-magic (described in the [Options tutorial](Options), along with the Python equivalent), we can also use color as a redundant indicator of the direction or magnitude:_____no_output_____
<code>
%%opts VectorField [size_index=3] VectorField.A [color_index=2] VectorField.M [color_index=3]
hv.VectorField(vector_data, group='A') + hv.VectorField(vector_data, group='M')_____no_output_____
</code>
### ``SideHistogram`` <a id='SideHistogram'></a>_____no_output_____The ``.hist`` method conveniently adjoins a histogram to the side of any ``Chart``, ``Surface``, or ``Raster`` component, as well as many of the container types (though it would be reporting data from one of these underlying ``Element`` types). For a ``Raster`` using color or grayscale to show values (see ``Raster`` section below), the side histogram doubles as a color bar or key._____no_output_____
<code>
import numpy as np
np.random.seed(42)
points = [(i, np.random.normal()) for i in range(800)]
hv.Scatter(points).hist()_____no_output_____
</code>
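To see the colorbar-like behaviour described above for ``Raster`` types, a minimal sketch (not in the original cell) adjoining a histogram to an ``Image``:
<code>
# Illustration: the adjoined histogram doubles as a key for the color map.
x, y = np.mgrid[-50:51, -50:51] * 0.1
hv.Image(np.sin(x**2 + y**2)).hist()
</code>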
## ``Chart3D`` Elements <a id='Chart3D Elements'></a>_____no_output_____### ``Surface`` <a id='Surface'></a>_____no_output_____
<code>
%%opts Surface (cmap='jet' rstride=20, cstride=2)
hv.Surface(np.sin(np.linspace(0,100*np.pi*2,10000)).reshape(100,100))_____no_output_____
</code>
Surface is used for a set of gridded points whose associated value dimension represents samples from a continuous surface; it is the equivalent of a ``Curve`` but with two key dimensions instead of just one._____no_output_____### ``Scatter3D`` <a id='Scatter3D'></a>_____no_output_____
<code>
%%opts Scatter3D [azimuth=40 elevation=20]
x,y = np.mgrid[-5:5, -5:5] * 0.1
heights = np.sin(x**2+y**2)
hv.Scatter3D(zip(x.flat,y.flat,heights.flat))_____no_output_____
</code>
``Scatter3D`` is the equivalent of ``Scatter`` but for two key dimensions, rather than just one.
### ``TriSurface`` <a id='TriSurface'></a>_____no_output_____The ``TriSurface`` Element renders any collection of 3D points as a Surface by applying Delaunay triangulation. It thus supports arbitrary, non-gridded data, but it does not support indexing to find data values, since finding the closest ones would require a search._____no_output_____
<code>
%%opts TriSurface [fig_size=200] (cmap='hot_r')
hv.TriSurface((x.flat,y.flat,heights.flat))_____no_output_____
</code>
## ``Raster`` Elements <a id='Raster Elements'></a>_____no_output_____**A collection of raster image types**
The second large class of ``Elements`` is the raster elements. Like ``Points`` and unlike the other ``Chart`` elements, ``Raster Elements`` live in a 2D key-dimensions space. For the ``Image``, ``RGB``, and ``HSV`` elements, the coordinates of this two-dimensional key space are defined in a [continuously indexable coordinate system](Continuous_Coordinates.ipynb)._____no_output_____### ``Raster`` <a id='Raster'></a>_____no_output_____A ``Raster`` is the base class for image-like ``Elements``, but may be used directly to visualize 2D arrays using a color map. The coordinate system of a ``Raster`` is the raw indexes of the underlying array, with integer values always starting from (0,0) in the top left, with default extents corresponding to the shape of the array. The ``Image`` subclass visualizes similarly, but using a continuous Cartesian coordinate system suitable for an array that represents some underlying continuous region._____no_output_____
<code>
x,y = np.mgrid[-50:51, -50:51] * 0.1
hv.Raster(np.sin(x**2+y**2))_____no_output_____
</code>
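To make the contrast with ``Image`` concrete, here is a small sketch (not in the original notebook) displaying the same array in both coordinate systems, raw array indexes on the left and continuous coordinates on the right:
<code>
arr = np.sin(x**2 + y**2)                      # x, y defined in the cell above
hv.Raster(arr) + hv.Image(arr, bounds=(-1, -1, 1, 1))
</code>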
### ``QuadMesh`` <a id='QuadMesh'></a>_____no_output_____The basic ``QuadMesh`` is a 2D grid of bins, specified by x- and y-values that define either a regular sampling or arbitrary bin edges, together with an associated 2D array containing the bin values. The coordinate system of a ``QuadMesh`` is defined by the bin edges, so any index falling into a binned region will return the appropriate value. Unlike ``Image`` objects, slices must be inclusive of the bin edges._____no_output_____
<code>
n = 21
xs = np.logspace(1, 3, n)
ys = np.linspace(1, 10, n)
hv.QuadMesh((xs, ys, np.random.rand(n-1, n-1)))_____no_output_____
</code>
QuadMesh may also be used to represent an arbitrary mesh of quadrilaterals by supplying three separate 2D arrays representing the coordinates of each quadrilateral in a 2D space. Note that when using ``QuadMesh`` in this mode, slicing and indexing semantics and most operations will currently not work._____no_output_____
<code>
coords = np.linspace(-1.5,1.5,n)
X,Y = np.meshgrid(coords, coords);
Qx = np.cos(Y) - np.cos(X)
Qz = np.sin(Y) + np.sin(X)
Z = np.sqrt(X**2 + Y**2)
hv.QuadMesh((Qx, Qz, Z))_____no_output_____
</code>
### ``HeatMap`` <a id='HeatMap'></a>_____no_output_____A ``HeatMap`` displays like a typical raster image, but the input is a dictionary indexed with two-dimensional keys, not a Numpy array or Pandas dataframe. As many rows and columns as required will be created to display the values in an appropriate grid format. Values unspecified are left blank, and the keys can be any Python datatype (not necessarily numeric). One typical usage is to show values from a set of experiments, such as a parameter space exploration, and many other such visualizations are shown in the [Containers](Containers.ipynb) and [Exploring Data](Exploring_Data.ipynb) tutorials. Each value in a ``HeatMap`` is labeled explicitly by default, and so this component is not meant for very large numbers of samples. With the default color map, high values (in the upper half of the range present) are colored orange and red, while low values (in the lower half of the range present) are colored shades of blue._____no_output_____
<code>
data = {(chr(65+i),chr(97+j)): i*j for i in range(5) for j in range(5) if i!=j}
hv.HeatMap(data).sort()_____no_output_____
</code>
### ``Image`` <a id='Image'></a>_____no_output_____Like ``Raster``, a HoloViews ``Image`` allows you to view 2D arrays using an arbitrary color map. Unlike ``Raster``, an ``Image`` is associated with a [2D coordinate system in continuous space](Continuous_Coordinates.ipynb), which is appropriate for values sampled from some underlying continuous distribution (as in a photograph or other measurements from locations in real space). Slicing, sampling, etc. on an ``Image`` all use this continuous space, whereas the corresponding operations on a ``Raster`` work on the raw array coordinates._____no_output_____
<code>
x,y = np.mgrid[-50:51, -50:51] * 0.1
bounds=(-1,-1,1,1)   # Coordinate system: (left, bottom, right, top)
(hv.Image(np.sin(x**2+y**2), bounds=bounds)
+ hv.Image(np.sin(x**2+y**2), bounds=bounds)[-0.5:0.5, -0.5:0.5])_____no_output_____
</code>
Notice how, because our declared coordinate system is continuous, we can slice with any floating-point value we choose. The appropriate range of the samples in the input numpy array will always be displayed, whether or not there are samples at those specific floating-point values.
It is also worth noting that the name ``Image`` can clash with other common libraries, which is one reason to avoid unqualified imports like ``from holoviews import *``. For instance, the Python Imaging Library provides an ``Image`` module, and IPython itself supplies an ``Image`` class in ``IPython.display``. Python namespaces allow you to avoid such problems, e.g. using ``from PIL import Image as PILImage`` or using ``import holoviews as hv`` and then ``hv.Image()``, as we do in these tutorials._____no_output_____### ``RGB`` <a id='RGB'></a>_____no_output_____The ``RGB`` element is an ``Image`` that supports red, green, blue channels:_____no_output_____
<code>
x,y = np.mgrid[-50:51, -50:51] * 0.1
r = 0.5*np.sin(np.pi +3*x**2+y**2)+0.5
g = 0.5*np.sin(x**2+2*y**2)+0.5
b = 0.5*np.sin(np.pi/2+x**2+y**2)+0.5
hv.RGB(np.dstack([r,g,b]))_____no_output_____
</code>
You can see how the RGB object is created from the original channels:_____no_output_____
<code>
%%opts Image (cmap='gray')
hv.Image(r,label="R") + hv.Image(g,label="G") + hv.Image(b,label="B")_____no_output_____
</code>
``RGB`` also supports an optional alpha channel, which will be used as a mask revealing or hiding any ``Element``s it is overlaid on top of:_____no_output_____
<code>
%%opts Image (cmap='gray')
mask = 0.5*np.sin(0.2*(x**2+y**2))+0.5
rgba = hv.RGB(np.dstack([r,g,b,mask]))
bg = hv.Image(0.5*np.cos(x*3)+0.5, label="Background") * hv.VLine(x=0,label="Background")
overlay = bg*rgba
overlay.label="RGBA Overlay"
bg + hv.Image(mask,label="Mask") + overlay_____no_output_____
</code>
### ``HSV`` <a id='HSV'></a>_____no_output_____HoloViews makes it trivial to work in any color space that can be converted to ``RGB`` by making a simple subclass of ``RGB`` as appropriate. For instance, we also provide the HSV (hue, saturation, value) color space, which is useful for plotting cyclic data (as the Hue) along with two additional dimensions (controlling the saturation and value of the color, respectively):_____no_output_____
<code>
x,y = np.mgrid[-50:51, -50:51] * 0.1
h = 0.5 + np.sin(0.2*(x**2+y**2)) / 2.0
s = 0.5*np.cos(y*3)+0.5
v = 0.5*np.cos(x*3)+0.5
hsv = hv.HSV(np.dstack([h, s, v]))
hsv_____no_output_____
</code>
You can see how this is created from the original channels:_____no_output_____
<code>
%%opts Image (cmap='gray')
hv.Image(h, label="H") + hv.Image(s, label="S") + hv.Image(v, label="V")_____no_output_____
</code>
# ``Tabular`` Elements <a id='Tabular Elements'></a>_____no_output_____**General data structures for holding arbitrary information**_____no_output_____## ``ItemTable`` <a id='ItemTable'></a>_____no_output_____An ``ItemTable`` is an ordered collection of key, value pairs. It can be used to directly visualize items in a tabular format where the items may be supplied as an ``OrderedDict`` or a list of (key,value) pairs. A standard Python dictionary can be easily visualized using a call to the ``.items()`` method, though the entries in such a dictionary are not kept in any particular order, and so you may wish to sort them before display. One typical usage for an ``ItemTable`` is to list parameter values or measurements associated with an adjacent ``Element``._____no_output_____
<code>
hv.ItemTable([('Age', 10), ('Weight',15), ('Height','0.8 meters')])_____no_output_____
</code>
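A minimal sketch of the dictionary usage described above (the dictionary itself is made up for illustration):
<code>
params = {'Weight': 15, 'Height': '0.8 meters', 'Age': 10}
hv.ItemTable(sorted(params.items()))   # sort for a stable display order
</code>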
## ``Table`` <a id='Table'></a>_____no_output_____A table is more general than an ``ItemTable``, as it allows multi-dimensional keys and multidimensional values._____no_output_____
<code>
keys = [('M',10), ('M',16), ('F',12)]
values = [(15, 0.8), (18, 0.6), (10, 0.8)]
table = hv.Table(zip(keys,values),
kdims = ['Gender', 'Age'],
vdims=['Weight', 'Height'])
table_____no_output_____
</code>
Note that you can use ``select`` on tables, and once you select using a full, multidimensional key, you get an ``ItemTable`` (shown on the right):_____no_output_____
<code>
table.select(Gender='M') + table.select(Gender='M', Age=10)_____no_output_____
</code>
The ``Table`` is used as a common data structure that may be converted to any other HoloViews data structure using the ``TableConversion`` class.
The functionality of the ``TableConversion`` class may be conveniently accessed using the ``.to`` property. For more extended usage of table conversion see the [Columnar Data](Columnnar_Data.ipynb) and [Pandas Conversion](Pandas_Conversion.ipynb) Tutorials._____no_output_____
<code>
table.select(Gender='M').to.curve(kdims=["Age"], vdims=["Weight"])_____no_output_____
</code>
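The same selection can be converted to other Element types in one line; for instance, a sketch assuming the ``.to`` conversion interface exposes a ``scatter`` method analogous to ``curve`` above:
<code>
table.select(Gender='M').to.scatter(kdims=["Age"], vdims=["Weight"])
</code>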
# ``Annotation`` Elements <a id='Annotation Elements'></a>_____no_output_____**Useful information that can be overlaid onto other components**_____no_output_____Annotations are components designed to be overlaid on top of other ``Element`` objects. To demonstrate annotation and paths, we will be drawing many of our elements on top of an RGB Image:_____no_output_____
<code>
scene = hv.RGB.load_image('../assets/penguins.png')_____no_output_____
</code>
### ``VLine`` and ``HLine`` <a id='VLine'></a><a id='HLine'></a>_____no_output_____
<code>
scene * hv.VLine(-0.05) + scene * hv.HLine(-0.05)_____no_output_____
</code>
### ``Spline`` <a id='Spline'></a>_____no_output_____The ``Spline`` annotation is used to draw Bezier splines using the same semantics as [matplotlib splines](http://matplotlib.org/api/path_api.html). In the overlay below, the spline is in dark blue and the control points are in light blue._____no_output_____
<code>
points = [(-0.3, -0.3), (0,0), (0.25, -0.25), (0.3, 0.3)]
codes = [1,4,4,4]
scene * hv.Spline((points,codes)) * hv.Curve(points)_____no_output_____
</code>
### Text and Arrow <a id='Text'></a><a id='Arrow'></a>_____no_output_____
<code>
scene * hv.Text(0, 0.2, 'Adult\npenguins') + scene * hv.Arrow(0,-0.1, 'Baby penguin', 'v')_____no_output_____
</code>
# Paths <a id='Path Elements'></a>_____no_output_____**Line-based components that can be overlaid onto other components**
Paths are a subclass of annotations that involve drawing line-based components on top of other elements. Internally, Path Element types hold a list of Nx2 arrays, specifying the x/y-coordinates along each path. The data may be supplied in a number of ways, including:
1. A list of Nx2 numpy arrays.
2. A list of lists containing x/y coordinate tuples.
3. A tuple containing an array of length N with the x-values and a second array of shape NxP, where P is the number of paths.
4. A list of tuples each containing separate x and y values._____no_output_____## ``Path`` <a id='Path'></a>_____no_output_____A ``Path`` object is actually a collection of paths which can be arbitrarily specified. Although there may be multiple unconnected paths in a single ``Path`` object, they will all share the same style. Only by overlaying multiple ``Path`` objects do you iterate through the defined color cycle (or any other style options that have been defined)._____no_output_____
<code>
angle = np.linspace(0, 2*np.pi, 100)
baby = list(zip(0.15*np.sin(angle), 0.2*np.cos(angle)-0.2))
adultR = [(0.25, 0.45), (0.35,0.35), (0.25, 0.25), (0.15, 0.35), (0.25, 0.45)]
adultL = [(-0.3, 0.4), (-0.3, 0.3), (-0.2, 0.3), (-0.2, 0.4),(-0.3, 0.4)]
scene * hv.Path([adultL, adultR, baby]) * hv.Path([baby])_____no_output_____
</code>
## ``Contours`` <a id='Contours'></a>_____no_output_____A ``Contours`` object is similar to ``Path`` object except each of the path elements is associated with a numeric value, called the ``level``. Sadly, our penguins are too complicated to give a simple example so instead we will simply mark the first couple of rings of our earlier ring pattern:_____no_output_____
<code>
x,y = np.mgrid[-50:51, -50:51] * 0.1
def circle(radius, x=0, y=0):
angles = np.linspace(0, 2*np.pi, 100)
return np.array( list(zip(x+radius*np.sin(angles), y+radius*np.cos(angles))))
hv.Image(np.sin(x**2+y**2)) * hv.Contours([circle(0.22)], level=0) * hv.Contours([circle(0.33)], level=1)_____no_output_____
</code>
## ``Polygons`` <a id='Polygons'></a>_____no_output_____A ``Polygons`` object is similar to a ``Contours`` object except that each supplied path is closed and filled. Just like ``Contours``, optionally a ``level`` may be supplied; the Polygons will then be colored according to the supplied ``cmap``. Non-finite values such as ``np.NaN`` or ``np.inf`` will default to the supplied ``facecolor``.
Polygons with values can be used to build heatmaps with arbitrary shapes._____no_output_____
<code>
%%opts Polygons (cmap='hot' line_color='black' line_width=2)
np.random.seed(35)
hv.Polygons([np.random.rand(4,2)], level=0.5) *\
hv.Polygons([np.random.rand(4,2)], level=1.0) *\
hv.Polygons([np.random.rand(4,2)], level=1.5) *\
hv.Polygons([np.random.rand(4,2)], level=2.0)_____no_output_____
</code>
Polygons without a value are useful as annotation, but also allow us to draw arbitrary shapes._____no_output_____
<code>
def rectangle(x=0, y=0, width=1, height=1):
return np.array([(x,y), (x+width, y), (x+width, y+height), (x, y+height)])
(hv.Polygons([rectangle(width=2), rectangle(x=6, width=2)]).opts(style={'fill_color': '#a50d0d'})
* hv.Polygons([rectangle(x=2, height=2), rectangle(x=5, height=2)]).opts(style={'fill_color': '#ffcc00'})
* hv.Polygons([rectangle(x=3, height=2, width=2)]).opts(style={'fill_color': 'cyan'}))_____no_output_____
</code>
## ``Bounds`` <a id='Bounds'></a>_____no_output_____A bounds is a rectangular area specified as a tuple in ``(left, bottom, right, top)`` format. It is useful for denoting a region of interest defined by some bounds, whereas ``Box`` (below) is useful for drawing a box at a specific location._____no_output_____
<code>
scene * hv.Bounds(0.2) * hv.Bounds((0.2, 0.2, 0.45, 0.45,))_____no_output_____
</code>
## ``Box`` <a id='Box'></a> and ``Ellipse`` <a id='Ellipse'></a>_____no_output_____A ``Box`` is similar to a ``Bounds`` except you specify the box position, width, and aspect ratio instead of the coordinates of the box corners. An ``Ellipse`` is specified just as for ``Box``, but has a rounded shape._____no_output_____
<code>
scene * hv.Box( -0.25, 0.3, 0.3, aspect=0.5) * hv.Box( 0, -0.2, 0.1) + \
scene * hv.Ellipse(-0.25, 0.3, 0.3, aspect=0.5) * hv.Ellipse(0, -0.2, 0.1)_____no_output_____
</code>
|
{
"repository": "stuarteberg/holoviews",
"path": "doc/Tutorials/Bokeh_Elements.ipynb",
"matched_keywords": [
"neuroscience"
],
"stars": 1,
"size": 54084,
"hexsha": "cb3b30d2fe623aff0c08c952c605d8b5e0421f7a",
"max_line_length": 938,
"avg_line_length": 38.0070274069,
"alphanum_fraction": 0.5991420753
}
|
# Notebook from thoughtworks/antiviral-peptide-predictions-using-gan
Path: notebooks/AVP_viz.ipynb
<code>
import pandas as pd
# import altair as alt
import Bio
from Bio import SeqIO
sequence = []
_____no_output_____!pip install BioCollecting Bio
Downloading https://files.pythonhosted.org/packages/58/69/c18c38b14c93664207eafc06199a0a9d396fe32b25d21b4f0cb7fb1f0542/bio-0.0.1-py3-none-any.whl
Requirement already satisfied: intervaltree in /usr/local/lib/python3.6/dist-packages (from Bio) (2.1.0)
Collecting biopython
[?25l Downloading https://files.pythonhosted.org/packages/76/02/8b606c4aa92ff61b5eda71d23b499ab1de57d5e818be33f77b01a6f435a8/biopython-1.78-cp36-cp36m-manylinux1_x86_64.whl (2.3MB)
[K |████████████████████████████████| 2.3MB 6.6MB/s
[?25hRequirement already satisfied: plac in /usr/local/lib/python3.6/dist-packages (from Bio) (1.1.3)
Requirement already satisfied: attrs in /usr/local/lib/python3.6/dist-packages (from Bio) (20.1.0)
Requirement already satisfied: sortedcontainers in /usr/local/lib/python3.6/dist-packages (from intervaltree->Bio) (2.2.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from biopython->Bio) (1.18.5)
Installing collected packages: biopython, Bio
Successfully installed Bio-0.0.1 biopython-1.78
sequences = pd.read_csv('avp_sequences.csv')
_____no_output_____sequences_____no_output_____from Bio.SeqUtils.ProtParam import ProteinAnalysis
_____no_output_____aa_freq = pd.DataFrame(columns=['A','C','D','E','F','G','H','I','K','L','M','N','P','Q','R','S','T','V','W','Y'])
for seq in sequences.Sequence:
# print(seq)
X = ProteinAnalysis(seq)
# print(X.count_amino_acids())
# print(list(X.count_amino_acids().items()))
counts = pd.DataFrame(X.count_amino_acids(), index=[0]).loc[0]
aa_freq = aa_freq.append(counts)
_____no_output_____aa_freq = aa_freq.append(pd.DataFrame(X.count_amino_acids(), index=[0]).loc[0])_____no_output_____aa_freq_____no_output__________no_output_____import seaborn as sns
import matplotlib.pyplot as plt  # needed for plt.xlim below; not imported earlier in this notebook
sns.distplot(aa_freq.A, hist=False, label="A")
sns.distplot(aa_freq.C, hist=False, label="C")
sns.distplot(aa_freq.D, hist=False, label="D")
sns.distplot(aa_freq.E, hist=False, label="E")
sns.distplot(aa_freq.F, hist=False, label="F")
sns.distplot(aa_freq.G, hist=False, label="G")
sns.distplot(aa_freq.H, hist=False, label="H")
sns.distplot(aa_freq.I, hist=False, label="I")
sns.distplot(aa_freq.K, hist=False, label="K")
sns.distplot(aa_freq.L, hist=False, label="L")
sns.distplot(aa_freq.M, hist=False, label="M")
sns.distplot(aa_freq.N, hist=False, label="N")
sns.distplot(aa_freq.P, hist=False, label="P")
sns.distplot(aa_freq.Q, hist=False, label="Q")
sns.distplot(aa_freq.R, hist=False, label="R")
sns.distplot(aa_freq.S, hist=False, label="S")
sns.distplot(aa_freq.T, hist=False, label="T")
sns.distplot(aa_freq.V, hist=False, label="V")
sns.distplot(aa_freq.W, hist=False, label="W")
sns.distplot(aa_freq.Y, hist=False, label="Y")
plt.xlim(0,20)
_____no_output_____
</code>
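The twenty near-identical ``distplot`` calls above could also be written as a loop over the DataFrame columns; a minimal sketch with the same behaviour (assuming the same ``aa_freq`` DataFrame):
<code>
import matplotlib.pyplot as plt
import seaborn as sns

for aa in aa_freq.columns:                      # one density curve per amino acid
    sns.distplot(aa_freq[aa], hist=False, label=aa)
plt.xlim(0, 20)
plt.show()
</code>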
|
{
"repository": "thoughtworks/antiviral-peptide-predictions-using-gan",
"path": "notebooks/AVP_viz.ipynb",
"matched_keywords": [
"BioPython"
],
"stars": 2,
"size": 51293,
"hexsha": "cb3e3ac0aa40c7388ea9160d09f87da21b1704ed",
"max_line_length": 25822,
"avg_line_length": 63.1687192118,
"alphanum_fraction": 0.6315091728
}
|
# Notebook from tienyuliu/IDS-703-Final-Project
Path: Label Clustering.ipynb
#### TFIDF_____no_output_____
<code>
# loading libraries
import pandas as pd
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer("english")
import nltk
import re
from sklearn.cluster import SpectralClustering
from sklearn.metrics import silhouette_score
from sklearn.cluster import KMeans
from sklearn.model_selection import GridSearchCV
from collections import Counter
import numpy as np  # used below (np.array / argmax); missing from the original imports
import ast_____no_output_____# importing data
ted_main = pd.read_csv('ted_main.csv')
ted_main['tags'] = ted_main['tags'].apply(lambda x: ast.literal_eval(x))
transcripts = pd.read_csv('transcripts.csv')
ted_merged = pd.merge(left=transcripts,
right=ted_main,
left_on='url',
right_on='url')
transcript = ted_merged.transcript_____no_output_____def tokenize(text):
tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
filtered_tokens = []
for token in tokens:
if re.search('[a-zA-Z]', token):
filtered_tokens.append(token)
# stems = [stemmer.stem(t) for t in filtered_tokens]
return filtered_tokens_____no_output_____doc = transcript.tolist()
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000,
min_df=0.2, stop_words='english',
use_idf=True, tokenizer=tokenize, ngram_range=(1,3))
%time tfidf_matrix = tfidf_vectorizer.fit_transform(doc) #fit the vectorizer to synopses
print(tfidf_matrix.shape)Wall time: 1min 29s
(2467, 364)
</code>
#### Spectral Clustering_____no_output_____
<code>
n_cluster = range(2,11)
best_param = []
list_score = []
for n in n_cluster:
model = SpectralClustering(n_clusters=n)
model.fit(tfidf_matrix)
label = model.labels_
list_score.append(silhouette_score(tfidf_matrix, label))
list_score = np.array(list_score)
best_param.append(n_cluster[list_score.argmax()])
print(best_param)[8]
model = SpectralClustering(n_clusters=8)
model.fit(tfidf_matrix)
label = model.labels_
clusters = label.tolist()
Counter(clusters)_____no_output_____
</code>
#### KMeans Clustering_____no_output_____
<code>
n_cluster = list(range(2,11))
param_grid = {'n_clusters': n_cluster}
kmeans = KMeans()
kmeans_cv = GridSearchCV(kmeans, param_grid)
kmeans_cv.fit(tfidf_matrix)
print("Tuned Kmeans Parameter: {}".format(kmeans_cv.best_params_))Tuned Kmeans Parameter: {'n_clusters': 10}
km_model = KMeans(n_clusters=8)
km_model.fit(tfidf_matrix)
km_label = km_model.labels_
km_clusters = km_label.tolist()
Counter(km_clusters)_____no_output_____import warnings
warnings.filterwarnings("ignore")
ted_merged['cluster'] = clusters
ted_w_cluster = ted_merged[['title','transcript','tags','cluster']]
ted_w_cluster[ted_w_cluster['cluster']==7][:50]_____no_output_____ted_w_cluster_____no_output_____c0_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 0]['tags'].tolist() for item in sub_list]
c1_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 1]['tags'].tolist() for item in sub_list]
c2_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 2]['tags'].tolist() for item in sub_list]
c3_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 3]['tags'].tolist() for item in sub_list]
c4_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 4]['tags'].tolist() for item in sub_list]
c5_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 5]['tags'].tolist() for item in sub_list]
c6_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 6]['tags'].tolist() for item in sub_list]
c7_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 7]['tags'].tolist() for item in sub_list]
# c8_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 8]['tags'].tolist() for item in sub_list]
# c9_tag = [item for sub_list in ted_w_cluster[ted_w_cluster.cluster == 9]['tags'].tolist() for item in sub_list]_____no_output_____c0_tag_stat = pd.Series(Counter(c0_tag))
c1_tag_stat = pd.Series(Counter(c1_tag))
c2_tag_stat = pd.Series(Counter(c2_tag))
c3_tag_stat = pd.Series(Counter(c3_tag))
c4_tag_stat = pd.Series(Counter(c4_tag))
c5_tag_stat = pd.Series(Counter(c5_tag))
c6_tag_stat = pd.Series(Counter(c6_tag))
c7_tag_stat = pd.Series(Counter(c7_tag))
# c8_tag_stat = pd.Series(Counter(c8_tag))
# c9_tag_stat = pd.Series(Counter(c9_tag))_____no_output_____print(c0_tag_stat.nlargest(10))
print ("")
print (c1_tag_stat.nlargest(10))
print ("")
print (c2_tag_stat.nlargest(10))
print ("")
print (c3_tag_stat.nlargest(10))
print ("")
print (c4_tag_stat.nlargest(10))
print ("")
print (c5_tag_stat.nlargest(10))
print ("")
print(c6_tag_stat.nlargest(10))
print ("")
print(c7_tag_stat.nlargest(10))
print ("")
# print(c8_tag_stat.nlargest(10))
# print ("")
# print(c9_tag_stat.nlargest(10))
# print ("")entertainment 87
culture 75
humor 64
technology 51
TEDx 49
science 39
performance 39
music 36
comedy 36
design 36
dtype: int64
global issues 198
business 115
economics 91
technology 75
politics 59
TEDx 54
culture 52
social change 52
health 49
society 49
dtype: int64
design 106
cities 65
architecture 55
technology 42
culture 32
art 32
collaboration 24
creativity 23
business 22
innovation 22
dtype: int64
technology 53
data 38
science 25
health 17
TEDx 16
communication 13
business 12
computers 12
global issues 12
medicine 12
dtype: int64
science 86
technology 57
environment 50
exploration 36
nature 30
TEDx 30
design 28
water 27
global issues 26
biology 25
dtype: int64
technology 355
science 277
design 163
culture 133
TEDx 133
biology 112
innovation 99
business 93
brain 88
global issues 87
dtype: int64
culture 149
global issues 108
TEDx 100
social change 88
entertainment 85
children 77
technology 75
society 70
humanity 69
storytelling 66
dtype: int64
women 64
global issues 24
feminism 21
culture 20
Gender equality 20
activism 18
TEDx 18
inequality 17
social change 16
society 15
dtype: int64
</code>
|
{
"repository": "tienyuliu/IDS-703-Final-Project",
"path": "Label Clustering.ipynb",
"matched_keywords": [
"biology"
],
"stars": null,
"size": 74694,
"hexsha": "cb3ec35a933301e41bf7ad42f4f9c1bf8a9a0104",
"max_line_length": 122,
"avg_line_length": 45.3515482696,
"alphanum_fraction": 0.4629153613
}
|
# Notebook from exowanderer/exoplanet
Path: paper/figures/texp.ipynb
<code>
%matplotlib inline_____no_output_____%run notebook_setup_____no_output_____import numpy as np
import matplotlib.pyplot as plt
import exoplanet as xo
# The light curve calculation requires an orbit
orbit = xo.orbits.KeplerianOrbit(period=1)
# Compute a limb-darkened light curve using starry
texp = 0.02
t = np.linspace(0.0, 0.06, 1000)
u = [0.3, 0.2]
star = xo.StarryLightCurve(u)
light_curve_instant = star.get_light_curve(
orbit=orbit, r=0.1, t=t).eval()
light_curve_exact = star.get_light_curve(
orbit=orbit, r=0.1, t=t, texp=texp, oversample=1000).eval()
fig, axes = plt.subplots(4, 1, figsize=(5, 10), sharex=True)
ax = axes[0]
ax.plot(t, light_curve_instant * 1e3, ":k")
ax.plot(t, light_curve_exact * 1e3, "k")
ax.set_ylabel("relative flux [ppt]")
for n in [3, 7, 15, 51][::-1]:
for order in range(3):
ax = axes[order+1]
light_curve = star.get_light_curve(order=order,
orbit=orbit, r=0.1, t=t, texp=texp, oversample=n).eval()
ax.plot(t, np.log10(np.abs(light_curve - light_curve_exact)),
label="{0}".format(n))
# integrated = xo.light_curves.LimbDarkLightCurve(u)
# ax = axes[-1]
# for tol in [-5, -4, -3, -2]:
# light_curve, num_eval = theano.function([], integrated.get_light_curve(
# orbit=orbit, r=0.1, t=t, texp=texp, tol=10**tol, return_num_eval=True))()
# print(tol, num_eval / len(t))
# ax.plot(t, np.log10(np.abs(light_curve - light_curve_exact)),
# label="$10^{{{0}}},\,{1:.0f}$".format(tol, num_eval/len(t)), zorder=-tol)
for i, ax in enumerate(axes[1:]):
if i <= 2:
ax.annotate("order = {0}".format(i), (0, 1), xycoords="axes fraction",
ha="left", va="top",
xytext=(5, -10), textcoords="offset points",
fontsize=10)
for loc, name in [(-3, "ppt"), (-6, "ppm"), (-9, "ppb")]:
ax.axhline(loc, color="k", alpha=0.3, lw=1)
ax.annotate(name, (t.max(), loc), va="top", ha="right",
xytext=(-3, -2), textcoords="offset points",
fontsize=10, alpha=0.3)
ax.set_ylim(-10.2, -2.7)
ax.set_ylabel("log$_{10}$(flux error)")
ax.legend(fontsize=9, ncol=4, loc=3)
ax.set_xlabel("time [days]")
ax.set_xlim(t.min(), t.max())
fig.subplots_adjust(hspace=0.0);_____no_output_____
</code>
|
{
"repository": "exowanderer/exoplanet",
"path": "paper/figures/texp.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 2,
"size": 3782,
"hexsha": "cb3ee7431bf42b98049e669efe8ee6cd1191cd70",
"max_line_length": 99,
"avg_line_length": 31.781512605,
"alphanum_fraction": 0.4978847171
}
|
# Notebook from zealseeker/deepchem
Path: examples/tutorials/11_Learning_Unsupervised_Embeddings_for_Molecules.ipynb
# Tutorial Part 11: Learning Unsupervised Embeddings for Molecules
In this example, we will use a `SeqToSeq` model to generate fingerprints for classifying molecules. This is based on the following paper, although some of the implementation details are different: Xu et al., "Seq2seq Fingerprint: An Unsupervised Deep Molecular Embedding for Drug Discovery" (https://doi.org/10.1145/3107411.3107424).
Many types of models require their inputs to have a fixed shape. Since molecules can vary widely in the numbers of atoms and bonds they contain, this makes it hard to apply those models to them. We need a way of generating a fixed length "fingerprint" for each molecule. Various ways of doing this have been designed, such as Extended-Connectivity Fingerprints (ECFPs). But in this example, instead of designing a fingerprint by hand, we will let a `SeqToSeq` model learn its own method of creating fingerprints.
A `SeqToSeq` model performs sequence to sequence translation. For example, they are often used to translate text from one language to another. It consists of two parts called the "encoder" and "decoder". The encoder is a stack of recurrent layers. The input sequence is fed into it, one token at a time, and it generates a fixed length vector called the "embedding vector". The decoder is another stack of recurrent layers that performs the inverse operation: it takes the embedding vector as input, and generates the output sequence. By training it on appropriately chosen input/output pairs, you can create a model that performs many sorts of transformations.
In this case, we will use SMILES strings describing molecules as the input sequences. We will train the model as an autoencoder, so it tries to make the output sequences identical to the input sequences. For that to work, the encoder must create embedding vectors that contain all information from the original sequence. That's exactly what we want in a fingerprint, so perhaps those embedding vectors will then be useful as a way to represent molecules in other models!
## Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/11_Learning_Unsupervised_Embeddings_for_Molecules.ipynb)
## Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. This notebook will take a few hours to run on a GPU machine, so we encourage you to run it on Google colab unless you have a good GPU machine available._____no_output_____
<code>
!wget -c https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
!chmod +x Anaconda3-2019.10-Linux-x86_64.sh
!bash ./Anaconda3-2019.10-Linux-x86_64.sh -b -f -p /usr/local
!conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
import deepchem as dc_____no_output_____
</code>
Let's start by loading the data. We will use the MUV dataset. It includes 74,501 molecules in the training set, and 9313 molecules in the validation set, so it gives us plenty of SMILES strings to work with._____no_output_____
<code>
import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_muv()
train_dataset, valid_dataset, test_dataset = datasets
train_smiles = train_dataset.ids
valid_smiles = valid_dataset.ids/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
warnings.warn(msg, category=FutureWarning)
RDKit WARNING: [15:40:18] Enabling RDKit 2019.09.3 jupyter extensions
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
</code>
We need to define the "alphabet" for our `SeqToSeq` model, the list of all tokens that can appear in sequences. (It's also possible for input and output sequences to have different alphabets, but since we're training it as an autoencoder, they're identical in this case.) Make a list of every character that appears in any training sequence._____no_output_____
<code>
tokens = set()
for s in train_smiles:
tokens = tokens.union(set(c for c in s))
tokens = sorted(list(tokens))_____no_output_____
</code>
Create the model and define the optimization method to use. In this case, learning works much better if we gradually decrease the learning rate. We use an `ExponentialDecay` to multiply the learning rate by 0.9 after each epoch._____no_output_____
<code>
from deepchem.models.optimizers import Adam, ExponentialDecay
max_length = max(len(s) for s in train_smiles)
batch_size = 100
batches_per_epoch = len(train_smiles)/batch_size
model = dc.models.SeqToSeq(tokens,
tokens,
max_length,
encoder_layers=2,
decoder_layers=2,
embedding_dimension=256,
model_dir='fingerprint',
batch_size=batch_size,
learning_rate=ExponentialDecay(0.004, 0.9, batches_per_epoch))WARNING:tensorflow:From /Users/bharath/opt/anaconda3/envs/deepchem/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:Entity <bound method Stack.call of <deepchem.models.layers.Stack object at 0x1a3cc7b0f0>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Stack.call of <deepchem.models.layers.Stack object at 0x1a3cc7b0f0>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING: Entity <bound method Stack.call of <deepchem.models.layers.Stack object at 0x1a3cc7b0f0>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Stack.call of <deepchem.models.layers.Stack object at 0x1a3cc7b0f0>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Stack.call of <deepchem.models.layers.Stack object at 0x1a3cc7b0f0>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Stack.call of <deepchem.models.layers.Stack object at 0x1a3cc7b0f0>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING: Entity <bound method Stack.call of <deepchem.models.layers.Stack object at 0x1a3cc7b0f0>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Stack.call of <deepchem.models.layers.Stack object at 0x1a3cc7b0f0>>: AssertionError: Bad argument number for Name: 3, expecting 4
</code>
Let's train it! The input to `fit_sequences()` is a generator that produces input/output pairs. On a good GPU, this should take a few hours or less._____no_output_____
<code>
def generate_sequences(epochs):
for i in range(epochs):
for s in train_smiles:
yield (s, s)
model.fit_sequences(generate_sequences(40))Ending global_step 999: Average loss 72.0029
Ending global_step 1999: Average loss 40.7221
Ending global_step 2999: Average loss 31.5364
Ending global_step 3999: Average loss 26.4576
Ending global_step 4999: Average loss 22.814
Ending global_step 5999: Average loss 19.5248
Ending global_step 6999: Average loss 16.4594
Ending global_step 7999: Average loss 18.8898
Ending global_step 8999: Average loss 13.476
Ending global_step 9999: Average loss 11.5528
Ending global_step 10999: Average loss 10.1594
Ending global_step 11999: Average loss 10.6434
Ending global_step 12999: Average loss 6.57057
Ending global_step 13999: Average loss 6.46177
Ending global_step 14999: Average loss 7.53559
Ending global_step 15999: Average loss 4.95809
Ending global_step 16999: Average loss 4.35039
Ending global_step 17999: Average loss 3.39137
Ending global_step 18999: Average loss 3.5216
Ending global_step 19999: Average loss 3.08579
Ending global_step 20999: Average loss 2.80738
Ending global_step 21999: Average loss 2.92217
Ending global_step 22999: Average loss 2.51032
Ending global_step 23999: Average loss 1.86265
Ending global_step 24999: Average loss 1.67088
Ending global_step 25999: Average loss 1.87016
Ending global_step 26999: Average loss 1.61166
Ending global_step 27999: Average loss 1.40708
Ending global_step 28999: Average loss 1.4488
Ending global_step 29801: Average loss 1.33917
TIMING: model fitting took 5619.924 s
</code>
Let's see how well it works as an autoencoder. We'll run the first 500 molecules from the validation set through it, and see how many of them are exactly reproduced._____no_output_____
<code>
predicted = model.predict_from_sequences(valid_smiles[:500])
count = 0
for s,p in zip(valid_smiles[:500], predicted):
if ''.join(p) == s:
count += 1
print('reproduced', count, 'of 500 validation SMILES strings')reproduced 363 of 500 validation SMILES strings
</code>
Now we'll trying using the encoder as a way to generate molecular fingerprints. We compute the embedding vectors for all molecules in the training and validation datasets, and create new datasets that have those as their feature vectors. The amount of data is small enough that we can just store everything in memory._____no_output_____
<code>
train_embeddings = model.predict_embeddings(train_smiles)
train_embeddings_dataset = dc.data.NumpyDataset(train_embeddings,
train_dataset.y,
train_dataset.w,
train_dataset.ids)
valid_embeddings = model.predict_embeddings(valid_smiles)
valid_embeddings_dataset = dc.data.NumpyDataset(valid_embeddings,
valid_dataset.y,
valid_dataset.w,
valid_dataset.ids)_____no_output_____
</code>
For classification, we'll use a simple fully connected network with one hidden layer._____no_output_____
<code>
classifier = dc.models.MultitaskClassifier(n_tasks=len(tasks),
n_features=256,
layer_sizes=[512])
classifier.fit(train_embeddings_dataset, nb_epoch=10)Ending global_step 999: Average loss 829.805
Ending global_step 1999: Average loss 450.42
Ending global_step 2999: Average loss 326.079
Ending global_step 3999: Average loss 265.199
Ending global_step 4999: Average loss 246.724
Ending global_step 5999: Average loss 224.64
Ending global_step 6999: Average loss 202.624
Ending global_step 7460: Average loss 213.885
TIMING: model fitting took 19.780 s
</code>
Find out how well it worked. Compute the ROC AUC for the training and validation datasets._____no_output_____
<code>
import numpy as np
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean, mode="classification")
train_score = classifier.evaluate(train_embeddings_dataset, [metric], transformers)
valid_score = classifier.evaluate(valid_embeddings_dataset, [metric], transformers)
print('Training set ROC AUC:', train_score)
print('Validation set ROC AUC:', valid_score)computed_metrics: [0.97828427249789751, 0.98705973960125326, 0.966007068438685, 0.9874401066031584, 0.97794394675150698, 0.98021719680962449, 0.95318452689781941, 0.97185747562764213, 0.96389538770053473, 0.96798988621997473, 0.9690779239145807, 0.98544402211472004, 0.97762497271338133, 0.96843239633294886, 0.97753648081489997, 0.96504683675485614, 0.93547151958366914]
computed_metrics: [0.90790686952512678, 0.79891461649782913, 0.61900937081659968, 0.75241212956581671, 0.58678903240426017, 0.72765072765072758, 0.34929006085192693, 0.83986814712005553, 0.82379943502824859, 0.61844636844636847, 0.863620199146515, 0.68106930272108857, 0.98020477815699669, 0.85073580939032944, 0.781015678254942, 0.75399733510992673, nan]
Training set ROC AUC: {'mean-roc_auc_score': 0.97132433878689139}
Validation set ROC AUC: {'mean-roc_auc_score': 0.74592061629292239}
</code>
# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!_____no_output_____
|
{
"repository": "zealseeker/deepchem",
"path": "examples/tutorials/11_Learning_Unsupervised_Embeddings_for_Molecules.ipynb",
"matched_keywords": [
"STAR",
"drug discovery"
],
"stars": 1,
"size": 27464,
"hexsha": "cb3f4b0c771988923ddaba3287d0b76d1940f55a",
"max_line_length": 682,
"avg_line_length": 52.4122137405,
"alphanum_fraction": 0.6575881154
}
|
# Notebook from mzager/dv-pipelines
Path: single-cell/covid-19-atlases/Covid-19-Atlas-E2E-2.ipynb
<code>
import argparse
import logging
from operator import mul
import time
import os
import pubweb.singlecell # import AnnDataSparse
from pubweb.hdf5 import Hdf5
from pubweb.commands.convert.singlecell.anndata import ImportAnndata
from pubweb.commands.convert.singlecell.cellranger import ImportCellRanger
from pubweb.commands.validate.dimensions import ValidateDimensions
from pubweb.commands.annotate.geneid import AnnotateGeneId
from pubweb.commands.annotate.geneset import AnnotateGeneset
from pubweb.commands.export.lists import ExportLists
from pubweb.commands.export.attributes import ExportAttributes
from pubweb.commands.export.tables import ExportTables
from pubweb.commands.export.projections import ExportProjections
from pubweb.commands.export.spatial import ExportSpatial
from pubweb.commands.export.matrix_sparse import ExportMatrixSparse
from pubweb.commands.export.matrix_dense import ExportMatrixDense
from pubweb.commands.summarize.genes import SummarizeGenes
from pubweb.commands.summarize.genemap import SummarizeGeneMap
from pubweb.commands.summarize.colors import SummarizeColors
from pubweb.commands.summarize.manifest import SummerizeManifest
_____no_output_____import importlib
importlib.reload(pubweb.singlecell)
importlib.reload(pubweb.hdf5)
importlib.reload(pubweb.commands.convert.singlecell.anndata)
importlib.reload(pubweb.commands.convert.singlecell.cellranger)
importlib.reload(pubweb.commands.validate.dimensions)
importlib.reload(pubweb.commands.annotate.geneid)
importlib.reload(pubweb.commands.annotate.geneset)
importlib.reload(pubweb.commands.export)
importlib.reload(pubweb.commands.export.lists)
importlib.reload(pubweb.commands.export.attributes)
importlib.reload(pubweb.commands.export.tables)
importlib.reload(pubweb.commands.export.projections)
importlib.reload(pubweb.commands.export.spatial)
importlib.reload(pubweb.commands.export.matrix_sparse)
importlib.reload(pubweb.commands.export.matrix_dense)
importlib.reload(pubweb.commands.summarize.genes)
importlib.reload(pubweb.commands.summarize.genemap)
importlib.reload(pubweb.commands.summarize.colors)
importlib.reload(pubweb.commands.summarize.manifest)
_____no_output_____logging.basicConfig(level='DEBUG')_____no_output_____datasetName='lung-upper-airway-h1299'
inputFile = '/data/notebooks/input/convert.hdf5'
outputFolder = '/data/notebooks/pubweb'
species = 'human'
overwriteHdf5 = True
python_wd = '/opt/pubweb'
_____no_output_____#dir(pubweb.singlecell)_____no_output_____
</code>
<code>
# anndatasparse
outputFile = f'{outputFolder}/pubweb.hdf5'
if os.path.exists(outputFile) and overwriteHdf5:
os.remove(outputFile)
hdf5 = Hdf5.load(outputFile, "a")_____no_output_____hdf5.uri_____no_output_____%time hdf5 | ImportAnndata(inputFile, datasetName)
# 345CPU times: user 464 ms, sys: 6.55 s, total: 7.01 s
Wall time: 6.97 s
hdf5.getDatasets()_____no_output_____hdf5.h5py['pubweb/lung-upper-airway-h1299/matrix']_____no_output_____%time hdf5 | AnnotateGeneId(species=species)
# 1min28sINFO:root:AnnotateGeneId: pubweb/lung-upper-airway-h1299/features/gene
# save hdf5_geneid
print(type(hdf5))<class 'pubweb.hdf5.LocalHdf5'>
hdf5.getDatasetsWithPath('pubweb/lung-upper-airway-h1299')_____no_output_____hdf5.getDatasets()_____no_output_____%time hdf5 | ExportMatrixDense(outputFolder)
# 14.1sExport Matrix
Writing cols 0 to 100
Writing cols 100 to 200
[... 185 similar progress lines ("Writing cols 200 to 300" through "Writing cols 18600 to 18700") elided ...]
Writing cols 18700 to 18800
Writing cols 18800 to 18900
Writing cols 18900 to 19000
Writing cols 19000 to 19100
Writing cols 19100 to 19200
Writing cols 19200 to 19300
Writing cols 19300 to 19400
Writing cols 19400 to 19500
Writing cols 19500 to 19600
Writing cols 19600 to 19700
Writing cols 19700 to 19800
Writing cols 19800 to 19900
Writing cols 19900 to 20000
Writing cols 20000 to 20100
Writing cols 20100 to 20200
Writing cols 20200 to 20300
Writing cols 20300 to 20400
Writing cols 20400 to 20500
Writing cols 20500 to 20600
Writing cols 20600 to 20700
Writing cols 20700 to 20800
Writing cols 20800 to 20900
Writing cols 20900 to 21000
Writing cols 21000 to 21100
Writing cols 21100 to 21200
Writing cols 21200 to 21300
Writing cols 21300 to 21400
Writing cols 21400 to 21500
Writing cols 21500 to 21600
Writing cols 21600 to 21700
Writing cols 21700 to 21800
Writing cols 21800 to 21900
Writing cols 21900 to 22000
Writing cols 22000 to 22100
Writing cols 22100 to 22200
Writing cols 22200 to 22300
Writing cols 22300 to 22400
Writing cols 22400 to 22500
Writing cols 22500 to 22600
Writing cols 22600 to 22700
Writing cols 22700 to 22800
Writing cols 22800 to 22900
Writing cols 22900 to 23000
Writing cols 23000 to 23100
Writing cols 23100 to 23200
Writing cols 23200 to 23300
Writing cols 23300 to 23400
Writing cols 23400 to 23500
Writing cols 23500 to 23600
Writing cols 23600 to 23700
Writing cols 23700 to 23800
Writing cols 23800 to 23900
Writing cols 23900 to 24000
Writing cols 24000 to 24100
Writing cols 24100 to 24200
Writing cols 24200 to 24300
Writing cols 24300 to 24400
Writing cols 24400 to 24500
Writing cols 24500 to 24600
Writing cols 24600 to 24700
Writing cols 24700 to 24800
Writing cols 24800 to 24900
Writing cols 24900 to 25000
Writing cols 25000 to 25100
Writing cols 25100 to 25200
Writing cols 25200 to 25300
Writing cols 25300 to 25400
Writing cols 25400 to 25500
Writing cols 25500 to 25600
Writing cols 25600 to 25700
Writing cols 25700 to 25800
Writing cols 25800 to 25900
Writing cols 25900 to 26000
Writing cols 26000 to 26100
Writing cols 26100 to 26200
Writing cols 26200 to 26300
Writing cols 26300 to 26400
Writing cols 26400 to 26500
Writing cols 26500 to 26600
Writing cols 26600 to 26700
Writing cols 26700 to 26800
Writing cols 26800 to 26900
Writing cols 26900 to 27000
Writing cols 27000 to 27072
CPU times: user 837 ms, sys: 4.18 s, total: 5.02 s
Wall time: 14.1 s
%time hdf5 | ExportProjections(outputFolder)
# 3min3sExport Dataset Projections
CPU times: user 164 µs, sys: 282 µs, total: 446 µs
Wall time: 429 µs
%time hdf5 | ExportTables(outputFolder)
# 426usExport Dataset Tables
CPU times: user 424 µs, sys: 0 ns, total: 424 µs
Wall time: 408 µs
%time hdf5 | ExportLists(outputFolder)
#480usExport Dataset Lists
CPU times: user 154 µs, sys: 264 µs, total: 418 µs
Wall time: 401 µs
%time hdf5 | ExportAttributes(outputFolder)
# 2min 7 sDEBUG:root:data has shape (81736,)
DEBUG:root:data has shape (81736,)
%time hdf5 | SummarizeColors(outputFolder)
# 59.4msINFO:root:Reading from /data/notebooks/pubweb/features/pw_symbol/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/pw_ensembl/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/gene/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/vst_variance_expected/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/vst_mean/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/pw_hcid/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/vst_variable/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/Selected/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/vst_variance/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/vst_variance_standardized/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/features/pw_entrez/metadata.json for /data/notebooks/pubweb/summary/color/features
INFO:root:Reading from /data/notebooks/pubweb/observations/infect/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/id/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/nFeature_RNA/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/sample_name/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/method/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/nCount_Unspliced/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/nCount_RNA/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/strain/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/orig_ident/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/nFeature_Unspliced/metadata.json for /data/notebooks/pubweb/summary/color/observations
INFO:root:Reading from /data/notebooks/pubweb/observations/sample_id/metadata.json for /data/notebooks/pubweb/summary/color/observations
%time hdf5 | SummerizeManifest(outputFolder)
# 4.2msmatrix: /data/notebooks/pubweb/matrix
placeholder
CPU times: user 79 µs, sys: 2.94 ms, total: 3.02 ms
Wall time: 2.37 ms
</code>
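A note on the `hdf5 | Command(...)` calls used throughout this notebook: each processing step is piped onto the `Hdf5` object with the `|` operator rather than called as a method. The snippet below is a minimal, hypothetical sketch of how such chaining can be wired up in plain Python via `__or__`; it is not the actual `pubweb` implementation, and all class and method names here are illustrative only.
<code>
class Pipeable:
    """Illustrative mixin: lets an object be piped into command objects with `obj | Command(...)`."""
    def __or__(self, command):
        command.apply(self)   # run the command against this object
        return self           # return self so further commands can be chained


class PrintName:
    """Toy command object (hypothetical, for illustration only)."""
    def apply(self, target):
        print(f"processing {target.name}")


class DemoHdf5(Pipeable):
    def __init__(self, name):
        self.name = name


# Commands can now be chained left to right, as in the cells above
DemoHdf5("pubweb.hdf5") | PrintName() | PrintName()
</code>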
|
{
"repository": "mzager/dv-pipelines",
"path": "single-cell/covid-19-atlases/Covid-19-Atlas-E2E-2.ipynb",
"matched_keywords": [
"CellRanger"
],
"stars": 3,
"size": 67658,
"hexsha": "cb3fb07790d001105671a93e389fb736a7a40878",
"max_line_length": 20497,
"avg_line_length": 37.6295884316,
"alphanum_fraction": 0.6541133347
}
|
# Notebook from JeffreyNederend/CFDPython
Path: lessons/02_Step_2.ipynb
[@LorenaABarba](https://twitter.com/LorenaABarba)_____no_output_____12 steps to Navier–Stokes
======
***_____no_output_____This Jupyter notebook continues the presentation of the **12 steps to Navier–Stokes**, the practical module taught in the interactive CFD class of [Prof. Lorena Barba](http://lorenabarba.com). You should have completed [Step 1](./01_Step_1.ipynb) before continuing, having written your own Python script or notebook and having experimented with varying the parameters of the discretization and observing what happens.
_____no_output_____Step 2: Nonlinear Convection
-----
***_____no_output_____Now we're going to implement nonlinear convection using the same methods as in step 1. The 1D convection equation is:
$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0$$
Instead of a constant factor $c$ multiplying the second term, now we have the solution $u$ multiplying it. Thus, the second term of the equation is now *nonlinear*. We're going to use the same discretization as in Step 1 — forward difference in time and backward difference in space. Here is the discretized equation.
$$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n-u_{i-1}^n}{\Delta x} = 0$$
Solving for the only unknown term, $u_i^{n+1}$, yields:
$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n)$$_____no_output_____As before, the Python code starts by loading the necessary libraries. Then, we declare some variables that determine the discretization in space and time (you should experiment by changing these parameters to see what happens). Then, we create the initial condition $u_0$ by initializing the array for the solution using $u = 2\ @\ 0.5 \leq x \leq 1$ and $u = 1$ everywhere else in $(0,2)$ (i.e., a hat function)._____no_output_____
<code>
import numpy # we're importing numpy
from matplotlib import pyplot # and our 2D plotting library
%matplotlib inline
nx = 41
dx = 2 / (nx - 1)
nt = 20 #nt is the number of timesteps we want to calculate
dt = .025 #dt is the amount of time each timestep covers (delta t)
u = numpy.ones(nx) #as before, we initialize u with every value equal to 1.
u[int(.5 / dx) : int(1 / dx + 1)] = 2 #then set u = 2 between 0.5 and 1 as per our I.C.s
un = numpy.ones(nx) #initialize our placeholder array un, to hold the time-stepped solution_____no_output_____
</code>
The code snippet below is *unfinished*. We have copied over the line from [Step 1](./01_Step_1.ipynb) that executes the time-stepping update. Can you edit this code to execute the nonlinear convection instead?_____no_output_____
<code>
for n in range(nt): #iterate through time
un = u.copy() ##copy the existing values of u into un
for i in range(1, nx): ##now we'll iterate through the u array
u[i] = un[i]*(1 - (dt/dx)*(un[i]-un[i-1]))
        ###The commented-out line below is the update from Step 1, copied exactly.
        ###Replacing the constant c with un[i] gives the nonlinear update used above for Step 2.
        ###u[i] = un[i] - c * dt / dx * (un[i] - un[i-1])
pyplot.plot(numpy.linspace(0, 2, nx), u) ##Plot the results_____no_output_____
</code>
What do you observe about the evolution of the hat function under the nonlinear convection equation? What happens when you change the numerical parameters and run again?_____no_output_____## Learn More_____no_output_____For a careful walk-through of the discretization of the convection equation with finite differences (and all steps from 1 to 4), watch **Video Lesson 4** by Prof. Barba on YouTube._____no_output_____
<code>
from IPython.display import YouTubeVideo
YouTubeVideo('y2WaK7_iMRI')_____no_output_____from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()_____no_output_____
</code>
> (The cell above executes the style for this notebook.)_____no_output_____
|
{
"repository": "JeffreyNederend/CFDPython",
"path": "lessons/02_Step_2.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 29769,
"hexsha": "cb40988e511ccf2be4e49d4d5196217528b2849d",
"max_line_length": 9852,
"avg_line_length": 89.9365558912,
"alphanum_fraction": 0.7991198898
}
|
# Notebook from cdrakesmith/CGATPipelines
Path: CGATPipelines/pipeline_docs/pipeline_bamstats/Jupyter_report/CGAT_idx_stats_report.ipynb
# <font color='firebrick'><center>Idx Stats Report</center></font>
### This report provides information from the output of the samtools idxstats tool, which reports the number of mapped reads per chromosome/contig.
<br>
_____no_output_____
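For reference, raw `samtools idxstats` output is a headerless, tab-separated table with one row per contig: reference name, sequence length, number of mapped read-segments, and number of unmapped read-segments. The sketch below shows one way to load such a file directly with pandas; the file name is hypothetical, and this notebook instead reads pre-aggregated values from the pipeline's sqlite database further down.
<code>
import pandas as pd

# Hypothetical idxstats file, e.g. produced by: samtools idxstats sample.bam > sample.idxstats.tsv
idx = pd.read_csv("sample.idxstats.tsv", sep="\t", header=None,
                  names=["contig", "length", "mapped", "unmapped"])

# Mapped reads per chromosome/contig, largest first
print(idx.sort_values("mapped", ascending=False).head())
</code>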
<code>
from IPython.display import display, Markdown
from IPython.display import HTML
import IPython.core.display as di
import csv
import numpy as np
import zlib
import CGAT.IOTools as IOTools
import itertools as ITL
import os
import string
import pandas as pd
import sqlite3
import matplotlib as mpl
from matplotlib.backends.backend_pdf import PdfPages # noqa: E402
#mpl.use('Agg') # noqa: E402
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import matplotlib.font_manager as font_manager
import matplotlib.lines as mlines
from matplotlib.colors import ListedColormap
from matplotlib import cm
from matplotlib import rc, font_manager
import CGAT.Experiment as E
import math
from random import shuffle
import matplotlib as mpl
import datetime
import seaborn as sns
import nbformat
%matplotlib inline
##################################################
#Plot customization
#plt.ioff()
plt.style.use('seaborn-white')
#plt.style.use('ggplot')
title_font = {'size':'20','color':'darkblue', 'weight':'bold', 'verticalalignment':'bottom'} # Bottom vertical alignment for more space
axis_font = {'size':'18', 'weight':'bold'}
#For summary page pdf
'''To add description page
plt.figure()
plt.axis('off')
plt.text(0.5,0.5,"my title",ha='center',va='center')
pdf.savefig()
'''
#Panda data frame cutomization
pd.options.display.width = 80
pd.set_option('display.max_colwidth', -1)
chr_feature=['total_reads','total_mapped_reads',
'chr1','chr2','chr3','chr4',
'chr5','chr6','chr7','chr8',
'chr9','chr10','chr11','chr12',
'chr13','chr14','chr15','chr16',
'chr17','chr18','chr19','chrX',
'chrY','chrM']
chr_index=['Total reads','Total mapped reads',
'chr1','chr2','chr3','chr4',
'chr5','chr6','chr7','chr8',
'chr9','chr10','chr11','chr12',
'chr13','chr14','chr15','chr16',
'chr17','chr18','chr19','chrX',
'chrY','chrM']
colors_category = ['red','green','darkorange','yellowgreen', 'pink', 'gold', 'lightskyblue',
'orchid','darkgoldenrod','skyblue','b', 'red',
'darkorange','grey','violet','magenta','cyan',
'hotpink','mediumslateblue']
threshold = 5
def hover(hover_color="#ffff99"):
return dict(selector="tr:hover",
props=[("background-color", "%s" % hover_color)])
def y_fmt(y, pos):
decades = [1e9, 1e6, 1e3, 1e0, 1e-3, 1e-6, 1e-9 ]
suffix = ["G", "M", "k", "" , "m" , "u", "n" ]
if y == 0:
return str(0)
for i, d in enumerate(decades):
if np.abs(y) >=d:
val = y/float(d)
signf = len(str(val).split(".")[1])
if signf == 0:
return '{val:d} {suffix}'.format(val=int(val), suffix=suffix[i])
else:
if signf == 1:
#print(val, signf)
if str(val).split(".")[1] == "0":
return '{val:d} {suffix}'.format(val=int(round(val)), suffix=suffix[i])
tx = "{"+"val:.{signf}f".format(signf = signf) +"} {suffix}"
return tx.format(val=val, suffix=suffix[i])
#return y
return y
def getTables(dbname):
'''
Retrieves the names of all tables in the database.
Groups tables into dictionaries by annotation
'''
dbh = sqlite3.connect(dbname)
c = dbh.cursor()
statement = "SELECT name FROM sqlite_master WHERE type='table'"
c.execute(statement)
tables = c.fetchall()
print(tables)
c.close()
dbh.close()
return
def readDBTable(dbname, tablename):
'''
Reads the specified table from the specified database.
Returns a list of tuples representing each row
'''
dbh = sqlite3.connect(dbname)
c = dbh.cursor()
statement = "SELECT * FROM %s" % tablename
c.execute(statement)
allresults = c.fetchall()
c.close()
dbh.close()
return allresults
def getDBColumnNames(dbname, tablename):
dbh = sqlite3.connect(dbname)
res = pd.read_sql('SELECT * FROM %s' % tablename, dbh)
dbh.close()
return res.columns
def plotBar(df,samplename):
fig, ax = plt.subplots()
ax.set_frame_on(True)
ax.xaxis.set_major_formatter(FuncFormatter(y_fmt))
colors=['yellowgreen','darkorange']
for ii in range(0,df.shape[0]):
plt.barh(ii,df['chrX'][ii],color=colors[0], align="center",height=0.6,edgecolor=colors[0])
plt.barh(ii,df['chrY'][ii],color=colors[1], align="center",height=0.6,edgecolor=colors[0])
fig = plt.gcf()
fig.set_size_inches(20,14)
plt.yticks(fontsize =20,weight='bold')
plt.yticks(range(df.shape[0]),df['track'])
plt.xticks(fontsize =20,weight='bold')
ax.grid(which='major', linestyle='-', linewidth='0.3')
plt.ylabel("Sample",labelpad=65,fontsize =25,weight='bold')
plt.xlabel("\nMapped reads",fontsize =25,weight='bold')
plt.title("Reads mapped to X and Y chromosome\n",fontsize =30,weight='bold',color='darkblue')
plt.gca().invert_yaxis()
legend_properties = {'weight':'bold','size':'20'}
leg = plt.legend(chr_feature[21:23],title="Contigs",prop=legend_properties,bbox_to_anchor=(1.14,0.65),frameon=True)
leg.get_frame().set_edgecolor('k')
leg.get_frame().set_linewidth(2)
leg.get_title().set_fontsize(25)
leg.get_title().set_fontweight('bold')
plt.tight_layout()
#plt.savefig(''.join([samplename,'.png']),bbox_inches='tight',pad_inches=0.6)
plt.show()
return fig
def displayTable(plotdf,name):
# Display table
styles = [
hover(),
dict(selector="th", props=[("font-size", "130%"),
("text-align", "center"),
]),
dict(selector="td", props=[("font-size", "120%"),
("text-align", "center"),
]),
dict(selector="caption", props=[("caption-side", "top"),
("text-align", "center"),
("font-size", "100%")])
]
df1 = (plotdf.style.set_table_styles(styles).set_caption(name))
display(df1)
print("\n\n")
def plot_idxstats(newdf,df,samplename):
fig,ax = plt.subplots()
ax.grid(which='major', linestyle='-', linewidth='0.25')
ax.yaxis.set_major_formatter(FuncFormatter(y_fmt))
index=list(range(newdf.shape[1]))
colors = plt.cm.plasma(np.linspace(0,1,newdf.shape[0]))
for ii in range(0,newdf.shape[0]):
plt.plot(index,newdf.iloc[ii],linewidth=2,color=colors[ii],linestyle="-",marker='o',fillstyle='full',markersize=8)
fig = plt.gcf()
fig.set_size_inches(11,8)
plt.xticks(index,chr_feature[2:24],fontsize = 14,weight='bold')
plt.yticks(fontsize = 14,weight='bold')
labels = ax.get_xticklabels()
plt.setp(labels, rotation=40)
legend_properties = {'weight':'bold','size':'14'}
leg = plt.legend(df['track'],title="Sample",prop=legend_properties,bbox_to_anchor=(1.42,1.01),frameon=True)
leg.get_frame().set_edgecolor('k')
leg.get_frame().set_linewidth(2)
leg.get_title().set_fontsize(16)
leg.get_title().set_fontweight('bold')
plt.xlabel('\nContigs',**axis_font)
plt.ylabel('Mapped Reads',**axis_font,labelpad=40)
plt.title("Mapped reads per contig", **title_font)
plt.tight_layout()
#plt.savefig(''.join([samplename,'.png']),bbox_inches='tight',pad_inches=0.6)
print("\n\n")
plt.show()
return fig
def idxStatsReport(dbname, tablename):
trans = pd.DataFrame(readDBTable(dbname,tablename))
trans.columns = getDBColumnNames(dbname,tablename)
df=trans
#print(df)
#newdf = df[df.columns[0:25]]
newdf = df[chr_feature[2:24]]
#print(newdf)
plotdf = df[chr_feature]
plotdf.columns = chr_index
plotdf.index = [df['track']]
#del plotdf.index.name
#pdf=PdfPages("idx_stats_summary.pdf")
displayTable(plotdf,"Idx Full Stats")
fig = plot_idxstats(newdf,df,"idx_full_stats")
#pdf.savefig(fig,bbox_inches='tight',pad_inches=0.6)
print("\n\n\n")
fig = plotBar(df,"idxStats_X_Y_mapped_reads")
#pdf.savefig(fig,bbox_inches='tight',pad_inches=0.6)
#pdf.close()
#getTables("csvdb")
idxStatsReport("../csvdb","idxstats_reads_per_chromosome")
_____no_output_____
</code>
|
{
"repository": "cdrakesmith/CGATPipelines",
"path": "CGATPipelines/pipeline_docs/pipeline_bamstats/Jupyter_report/CGAT_idx_stats_report.ipynb",
"matched_keywords": [
"SAMtools"
],
"stars": 49,
"size": 186564,
"hexsha": "cb41193364ee68ccc9fd1f8d2943f54b6e424653",
"max_line_length": 87960,
"avg_line_length": 294.7298578199,
"alphanum_fraction": 0.8854816578
}
|
# Notebook from monocongo/datascience_portfolio
Path: dataquest/notebooks/project_star_wars_analysis/Star_Wars_Analysis.ipynb
<code>
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt_____no_output_____star_wars = pd.read_csv("star_wars.csv", encoding="ISO-8859-1")
star_wars.head(3)_____no_output_____
</code>
Remove all rows where the `RespondentID` column is null (NaN)._____no_output_____
<code>
star_wars = star_wars[pd.notnull(star_wars["RespondentID"])]
star_wars.head()_____no_output_____
</code>
Convert column string values from "Yes"/"No" to corresponding booleans by mapping a dictionary to each value of the Series:_____no_output_____
<code>
yes_no = {
"Yes": True,
"No": False
}
star_wars["Have you seen any of the 6 films in the Star Wars franchise?"] = \
star_wars["Have you seen any of the 6 films in the Star Wars franchise?"].map(yes_no)
star_wars["Do you consider yourself to be a fan of the Star Wars film franchise?"] = \
star_wars["Do you consider yourself to be a fan of the Star Wars film franchise?"].map(yes_no)_____no_output_____star_wars.head()_____no_output_____star_wars.info()<class 'pandas.core.frame.DataFrame'>
Int64Index: 1186 entries, 1 to 1186
Data columns (total 38 columns):
RespondentID 1186 non-null float64
Have you seen any of the 6 films in the Star Wars franchise? 1186 non-null bool
Do you consider yourself to be a fan of the Star Wars film franchise? 836 non-null object
Which of the following Star Wars films have you seen? Please select all that apply. 673 non-null object
Unnamed: 4 571 non-null object
Unnamed: 5 550 non-null object
Unnamed: 6 607 non-null object
Unnamed: 7 758 non-null object
Unnamed: 8 738 non-null object
Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film. 835 non-null object
Unnamed: 10 836 non-null object
Unnamed: 11 835 non-null object
Unnamed: 12 836 non-null object
Unnamed: 13 836 non-null object
Unnamed: 14 836 non-null object
Please state whether you view the following characters favorably, unfavorably, or are unfamiliar with him/her. 829 non-null object
Unnamed: 16 831 non-null object
Unnamed: 17 831 non-null object
Unnamed: 18 823 non-null object
Unnamed: 19 825 non-null object
Unnamed: 20 814 non-null object
Unnamed: 21 826 non-null object
Unnamed: 22 820 non-null object
Unnamed: 23 812 non-null object
Unnamed: 24 827 non-null object
Unnamed: 25 830 non-null object
Unnamed: 26 821 non-null object
Unnamed: 27 814 non-null object
Unnamed: 28 826 non-null object
Which character shot first? 828 non-null object
Are you familiar with the Expanded Universe? 828 non-null object
Do you consider yourself to be a fan of the Expanded Universe?Âæ 213 non-null object
Do you consider yourself to be a fan of the Star Trek franchise? 1068 non-null object
Gender 1046 non-null object
Age 1046 non-null object
Household Income 858 non-null object
Education 1036 non-null object
Location (Census Region) 1043 non-null object
dtypes: bool(1), float64(1), object(36)
memory usage: 353.3+ KB
</code>
Convert the "seen" column values from the name of the movie to True and from NaN to False. We'll use a small mapping function for this (if the cell value is the movie's name, the respondent has seen it, so it corresponds to True; otherwise it's NaN and corresponds to False):_____no_output_____
<code>
print("BEFORE MAPPING")
star_wars[star_wars.columns[3:9]]BEFORE MAPPING
def t_or_f(value):
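    # 'Seen' cells hold the movie title when the respondent has seen the film and NaN otherwise;
    # the identity check against np.NaN works for this data (pd.isnull(value) is the more general test)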
if value is np.NaN:
return False
else:
return True
for col in star_wars.columns[3:9]:
# mapper = {col : True, np.NaN: False}
star_wars[col] = star_wars[col].map(t_or_f)_____no_output_____star_wars[star_wars.columns[3:9]]_____no_output_____star_wars = star_wars.rename(columns={
"Which of the following Star Wars films have you seen? Please select all that apply.": "seen1",
"Unnamed: 4": "seen2",
"Unnamed: 5": "seen3",
"Unnamed: 6": "seen4",
"Unnamed: 7": "seen5",
"Unnamed: 8": "seen6",
"Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film.": "favorite1",
"Unnamed: 10": "favorite2",
"Unnamed: 11": "favorite3",
"Unnamed: 12": "favorite4",
"Unnamed: 13": "favorite5",
"Unnamed: 14": "favorite6"
})_____no_output_____star_wars.head(3)_____no_output_____star_wars[star_wars.columns[9:15]] = star_wars[star_wars.columns[9:15]].astype(float)_____no_output_____means = star_wars[star_wars.columns[9:15]].mean(axis=0)
%matplotlib inline
import seaborn as sns
sns.set_style("whitegrid")
ax = sns.barplot(x=star_wars.columns[9:15],
y=means)_____no_output_____seens = star_wars[star_wars.columns[3:9]].sum()_____no_output_____ax = sns.barplot(x=star_wars.columns[3:9],
y=seens)_____no_output_____males = star_wars[star_wars["Gender"] == "Male"]
females = star_wars[star_wars["Gender"] == "Female"]
means_female = females[females.columns[9:15]].mean(axis=0)
means_male = males[males.columns[9:15]].mean(axis=0)
seens_female = females[females.columns[3:9]].sum()
seens_male = males[males.columns[3:9]].sum()
fig = plt.figure(figsize=(12, 9))
ax1 = fig.add_subplot(221)
ax1.set_title("Male Ranking")
ax2 = fig.add_subplot(223)
ax2.set_title("Male Totals")
ax3 = fig.add_subplot(222)
ax3.set_title("Female Ranking")
ax4 = fig.add_subplot(224)
ax4.set_title("Female Totals")
sns.barplot(x=star_wars.columns[9:15],
y=means_male,
ax=ax1)
sns.barplot(x=star_wars.columns[9:15],
y=means_female,
ax=ax3)
sns.barplot(x=star_wars.columns[3:9],
y=seens_male,
ax=ax2)
sns.barplot(x=star_wars.columns[3:9],
y=seens_female,
ax=ax4)_____no_output_____
</code>
|
{
"repository": "monocongo/datascience_portfolio",
"path": "dataquest/notebooks/project_star_wars_analysis/Star_Wars_Analysis.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 190389,
"hexsha": "cb4148bf89fcd4ee60f6ca64dc11c3cdd578c9c8",
"max_line_length": 30550,
"avg_line_length": 64.0609017497,
"alphanum_fraction": 0.5085272784
}
|
# Notebook from williecostello/BetterReads
Path: notebooks/04_optimizing_goodreads.ipynb
# BetterReads: Optimizing GoodReads review data
This notebook explores how to achieve the best results with the BetterReads algorithm when using review data scraped from GoodReads. It is a short follow-up to the exploration performed in the `03_optimizing_reviews.ipynb` notebook.
We have two options when scraping review data from GoodReads: For any given book, we can either scrape 1,500 reviews, with 300 reviews for each star rating (1 to 5), or we can scrape just the top 300 reviews, of any rating. (This is due to some quirks in the way that reviews are displayed on the GoodReads website; for more information, see my [GoodReadsReviewsScraper script](https://github.com/williecostello/GoodReadsReviewsScraper).)
There are advantages and disadvantages to both options. If we scrape 1,500 reviews, we obviously have more review data to work with; however, the data is artificially class-balanced, such that, for example, we'll still see a good number of negative reviews even if the vast majority of the book's reviews are positive. If we scrape just the top 300 reviews, we will have a more representative dataset, but much less data to work with.
We saw in the `03_optimizing_reviews.ipynb` notebook that the BetterReads algorithm can achieve meaningful and representative results from a dataset with less than 100 reviews. So we should not dismiss the 300 review option simply because it involves less data. We should only dismiss it if its smaller dataset leads to worse results. So let's try these two options out on a particular book and see how the algorithm performs._____no_output_____
<code>
import numpy as np
import pandas as pd
import random
from sklearn.cluster import KMeans
import tensorflow_hub as hub_____no_output_____# Loads Universal Sentence Encoder locally, from downloaded module
embed = hub.load('../../Universal Sentence Encoder/module/')
# Loads Universal Sentence Encoder remotely, from Tensorflow Hub
# embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")_____no_output_____
</code>
## Which set of reviews should we use?
For this notebook we'll work with a new example: Sally Rooney's *Conversations with Friends*.
<img src='https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1500031338l/32187419._SY475_.jpg' width=250 align=center>
We have prepared two datasets, one of 1,500 reviews and another of 300 reviews, as described above. Both datasets were scraped from GoodReads at the same time, so there is some overlap between them. (Note that the total number of reviews in both datasets is less than advertised, since non-English and very short reviews are dropped during data cleaning.)_____no_output_____
<code>
# Set path for processed file
file_path_1500 = 'data/32187419_conversations_with_friends.csv'
file_path_300 = 'data/32187419_conversations_with_friends_top_300.csv'
# Read in processed file as dataframe
df_1500 = pd.read_csv(file_path_1500)
df_300 = pd.read_csv(file_path_300)
print(f'The first dataset consists of {df_1500.shape[0]} sentences from {df_1500["review_index"].nunique()} reviews')
print(f'The second dataset consists of {df_300.shape[0]} sentences from {df_300["review_index"].nunique()} reviews')The first dataset consists of 8604 sentences from 1190 reviews
The second dataset consists of 2874 sentences from 293 reviews
</code>
As we can see above, in comparison to the smaller dataset, the bigger dataset contains approximately three times the number of sentences from four times the number of reviews. And as we can see below, the bigger dataset contains approximately the same number of reviews for each star rating, while the smaller dataset is much more heavily skewed toward 5 star and 4 star reviews._____no_output_____
<code>
df_1500.groupby('review_index')['rating'].mean().value_counts().sort_index()_____no_output_____df_300.groupby('review_index')['rating'].mean().value_counts().sort_index()_____no_output_____
</code>
On [the book's actual GoodReads page](https://www.goodreads.com/book/show/32187419-conversations-with-friends), its average review rating is listed as 3.82 stars. This is nearly the same as the average review rating of our smaller dataset. The bigger dataset's average review rating, in contrast, is just less than 3. This confirms our earlier suspicion that the smaller dataset presents a more representative sample of the book's full set of reviews._____no_output_____
<code>
df_300.groupby('review_index')['rating'].mean().mean()_____no_output_____df_1500.groupby('review_index')['rating'].mean().mean()_____no_output_____
</code>
Let's see how these high-level differences affect the output of our algorithm._____no_output_____
<code>
def load_sentences(file_path):
'''
Function to load and embed a book's sentences
'''
# Read in processed file as dataframe
df = pd.read_csv(file_path)
# Copy sentence column to new variable
sentences = df['sentence'].copy()
# Vectorize sentences
sentence_vectors = embed(sentences)
return sentences, sentence_vectors_____no_output_____def get_clusters(sentences, sentence_vectors, k, n):
'''
Function to extract the n most representative sentences from k clusters, with density scores
'''
# Instantiate the model
kmeans_model = KMeans(n_clusters=k, random_state=24)
# Fit the model
kmeans_model.fit(sentence_vectors);
# Set the number of cluster centre points to look at when calculating density score
centre_points = int(len(sentences) * 0.02)
# Initialize list to store mean inner product value for each cluster
cluster_density_scores = []
# Initialize dataframe to store cluster centre sentences
df = pd.DataFrame()
# Loop through number of clusters
for i in range(k):
# Define cluster centre
centre = kmeans_model.cluster_centers_[i]
# Calculate inner product of cluster centre and sentence vectors
ips = np.inner(centre, sentence_vectors)
# Find the sentences with the highest inner products
top_indices = pd.Series(ips).nlargest(n).index
top_sentences = list(sentences[top_indices])
centre_ips = pd.Series(ips).nlargest(centre_points)
density_score = round(np.mean(centre_ips), 5)
# Append the cluster density score to master list
cluster_density_scores.append(density_score)
# Create new row with cluster's top 10 sentences and density score
new_row = pd.Series([top_sentences, density_score])
# Append new row to master dataframe
df = df.append(new_row, ignore_index=True)
# Rename dataframe columns
df.columns = ['sentences', 'density']
# Sort dataframe by density score, from highest to lowest
df = df.sort_values(by='density', ascending=False).reset_index(drop=True)
# Loop through number of clusters selected
for i in range(k):
# Save density / similarity score & sentence list to variables
sim_score = round(df.loc[i]["density"], 3)
sents = df.loc[i]['sentences'].copy()
print(f'Cluster #{i+1} sentences (density score: {sim_score}):\n')
print(*sents, sep='\n')
print('\n')
model_density_score = round(np.mean(cluster_density_scores), 5)
print(f'Model density score: {model_density_score}')_____no_output_____# Load and embed sentences
sentences_1500, sentence_vectors_1500 = load_sentences(file_path_1500)
sentences_300, sentence_vectors_300 = load_sentences(file_path_300)_____no_output_____# Get cluster sentences for bigger dataset
get_clusters(sentences_1500, sentence_vectors_1500, k=6, n=8)Cluster #1 sentences (density score: 0.437):
Sally Rooney has a really interesting way of writing, which I deeply appreciate.
i just cannot get over how well Sally Rooney writes.
I think that Sally Rooney is a fantastic writer.
I'm very happy I read Rooney's Normal People first and loved it so deeply, bc I feel certain I would actively avoid Sally Rooney if this book was the first piece of writing I read by her.
Sally Rooney is a brilliant writer, and I was really looking forward to this from reading her short fiction.
I can only write that I love it even more than "Normal people" and I can't wait for more book by Sally Rooney.
I love how Sally Rooney writes - naturally and simply.
Well-written because it’s Sally Rooney and so even her debut is brilliant.
Cluster #2 sentences (density score: 0.392):
I really just couldn't get with this book.
I enjoyed this book way more than I thought I would at the beginning.
Don’t get me wrong I did enjoy this book, but I think I expected more from it?
Reading this book is delightful, I didn’t want it to end.
Not sure I'm a fan of the writing style of this book, but it was an easy read.
I have never read a book that as I was reading it was so forgettable.
I really don’t know how I feel about this book but the writing is undeniably good.
That being said, I actually did enjoy reading this book and devoured it quickly!
Cluster #3 sentences (density score: 0.38):
I think the merits of this book lie in the writing and the characters (although I also thought the characters were somewhat insufferable and pretentious).
Unbelievably even more than I disliked the characters, I did not enjoy the writing style of this book.
I even felt in the beginning that the book felt not very special, with the odd writing style and the slightly unlikeable characters.
I understand that a book having unlikable characters does not make the book unlikeable but they have to be compelling for the reader to want to follow them through the story and the protagonist and side characters were very much lacking in this regard.
The author's writing is pretty good, I just didn't really like the characters and never really seemed to connect with them, which then makes me not as engaged with the plot line.
Such deeply unlikeable characters, and whilst that doesn't normally stop me from enjoying a book in this instance it did and I found their conversations to be so self indulgent and dull, finishing it was a struggle.
I found the book boring and the characters so self absorbed and overly dramatic.
I will say that the writing is good, but the characters are weird and pretty horrible “people”.
Cluster #4 sentences (density score: 0.353):
I think my main problem with the novel is that it seems like it should have been about the two young women, Frances and Bobbi, but it was actually about Frances and her totally predictable affair with Nick, so handsome!
There were times I was curious to see how it would play out with Frances & Nick, as well as Frances' relationship with Bobbi.
Frances and Bobbi are great characters, but Frances spends so much of the book just involved with Nick, and unlike Connell, he's simply too blank, too opaque, too ideal of a guy in many ways, to be interesting in any way.
It is just Bobbi and Frances being horrible, Frances sleeping with Nick, that is about it.
And I genuinely was invested in the plot between Bobbi and Frances and Frances and her parents but woo, did not care for Nick or Melissa.
The relationship between Bobbi and Frances is enjoyable to read, they are forging a different path and creating their own definition of relationship, but Frances and Nick is a snoozefest.
When Frances writes a story in which she and her friend Bobbi are easily identified, Bobbi is truly shocked at how Frances sees their relationship in the story.
Frances publishes a story about Bobbi, and Bobbi feels betrayed because Frances could never say those things to her.
Cluster #5 sentences (density score: 0.225):
I didn’t really find this to be an enjoyable read.
I didn’t quite enjoy this as much as normal people but I still thought it was a entertaining read
but actually I enjoyed it more than I thought I would.
I didn’t get into it at all, it was just blah blah blah to me.
Maybe I didn't like it because I couldn't relate to it?
I wasn't expecting to like it - full of moany twenty-somethings - I'd heard.
I didn't think I was going to like it but I liked it quite a lot.
I won’t even begin to try to intellectualize why I liked it.
Cluster #6 sentences (density score: 0.215):
This feels both true and difficult, as I’ve never read a writer who so intimately seems to understand modern, young relationships, feelings and fears as she does.
Whereas Normal People spoke a bit more to the gravitational pull of a romantic relationship, Conversations With Friends captured the main character’s dysfunction and yearning to just be seen and valued by those around her.
She writes about relationships with so much care and detail that it becomes hard to separate yourself from the characters.
This is how Frances feels and thinks and talks, all in one, and though there are a lot of things about her that are not objectively relatable to me, she has become one of the most relatable characters I've ever read.
Conversations with Friends is a tiresome story of an emotionally unavailable and slightly manipulative young woman and her romantic entanglements.
Her characters can be so swooningly affectionate with one another--and so ferociously cutting and so perfectly empathetic--that even at their most toxic moments (and there are lots), watching their relationships unfold feels like a privilege.
The plot offers nothing new either, there's been plenty of books on naive young adults pursuing unhealthy relationships before, as well as characters who make drama out of nothing and try to drag others in to their narcissism.
The way she writes relationships and conversations between the characters, making them normal and not artefact at all but at the same time not being trivial, it's exquisite.
Model density score: 0.33358
# Get cluster sentences for smaller dataset
get_clusters(sentences_300, sentence_vectors_300, k=6, n=8)Cluster #1 sentences (density score: 0.44):
i just cannot get over how well Sally Rooney writes.
I finished CONVERSATIONS WITH FRIENDS by Sally Rooney this morning and once again I am in awe of Rooney's writing.
Rooney really seems to understand the lives of her chracters.
I'm looking forward to reading anything else that Sally Rooney writes.
Sally Rooney has become one of my favorite writers.
Rooney is an excellent writer; I desperately hope she is just getting started.
Sally Rooney makes me feel like I could do anything in life as long as I wrote about it well.
I can’t wait to read whatever Sally Rooney comes out with next!
Cluster #2 sentences (density score: 0.365):
There were times I was curious to see how it would play out with Frances & Nick, as well as Frances' relationship with Bobbi.
The book follows Frances and her best friend Bobbi, who become entangled with a married couple, Nick and Melissa.
Frances and Nick end up in a relationship and the conversations between them are low-key and unemotional on the surface; however, Frances is concealing her thoughts from herself.
Bobbie is interested in Melissa, while Frances falls in love with Nick and they start having an affair.
This book revolves around two college students in Dublin named Frances and Bobbi and their relationship with Melissa & Nick who are a married couple they meet early in the story.
In short, this novel focuses on friends Frances and Bobbi and the interesting relationship that they share with a married couple.
Their lives become entwined, but we mostly follow the relationships between Frances and Bobbi, and Frances and Nick, after they start having an affair.
Another problem is that the main protagonists, Frances and her older, married paramour, Nick are just not very interesting.
Cluster #3 sentences (density score: 0.361):
Turns out that I absolutely loved this book.
Ok, this book is interesting and I am not disappointed to have read it.
I am delightfully surprised by how much I loved this book.
Anyway, I loved this book so much!
I’ve had this book on my TBR list for a while, so I was really excited when I found out that it would be the next book group read.
I really didn't find this book too interesting.
I loved everything about this book.
I heard very good reviews of this book before reading it so I’m not sure if I’m overly influenced by those .
Cluster #4 sentences (density score: 0.339):
I think the merits of this book lie in the writing and the characters (although I also thought the characters were somewhat insufferable and pretentious).
I even felt in the beginning that the book felt not very special, with the odd writing style and the slightly unlikeable characters.
The author's writing is pretty good, I just didn't really like the characters and never really seemed to connect with them, which then makes me not as engaged with the plot line.
It’s really interesting because some of her characters are unlikeable at times, but they feel realistic and they always develop as the story goes on and it’s really quite entertaining to read about.
the characters aren’t particularly likeable, and the situation they are in isn’t particularly common, but it is interesting and I can tell it’s very well written.
I liked the way it facilitated the story, but unlike in some other novels, the writing style isn't a notable part of the experience of Conversations with Friends.
I really liked the ending and the way the protagonist was portrayed by the author (I saw a lot of myself in her, or rather I saw the worst side of myself) but at the same time I was frustrated with her character arc.
I enjoy reading about self-centered, unlikeable characters but they have to be interesting which was not the case for me.
Cluster #5 sentences (density score: 0.22):
I'm glad I perserverd though and then it really drew me in.
but actually I enjoyed it more than I thought I would.
I won’t even begin to try to intellectualize why I liked it.
I wasn't expecting to like it - full of moany twenty-somethings - I'd heard.
I felt like I was supposed to be predisposed to like this.
Sooooo, it took me a while to gather my thoughts on this one and I still have mixed feelings about it.
I can see why people would hate this, but I loved it.
The weird thing, though, is that I did really enjoy it.
Cluster #6 sentences (density score: 0.205):
Whereas Normal People spoke a bit more to the gravitational pull of a romantic relationship, Conversations With Friends captured the main character’s dysfunction and yearning to just be seen and valued by those around her.
Her characters can be so swooningly affectionate with one another--and so ferociously cutting and so perfectly empathetic--that even at their most toxic moments (and there are lots), watching their relationships unfold feels like a privilege.
She's insightful about the emotions involved in falling in love when one is both young and doing one's best to not admit to any sort of emotional entanglement.
The way she writes relationships and conversations between the characters, making them normal and not artefact at all but at the same time not being trivial, it's exquisite.
Altogether Conversations With Friends is an intelligent character study on falling in love, cultivating a relationship, and all of the simplicities and complexities that come with it.
The plot offers nothing new either, there's been plenty of books on naive young adults pursuing unhealthy relationships before, as well as characters who make drama out of nothing and try to drag others in to their narcissism.
It’s a beautiful and subtle novel with emotionally charged characters and nuances that felt so natural, mirroring the everyday aspects and constants in somebody’s lives: as simple as having a conversation with a friend.
Sally Rooney’s novel, Conversations With Friends, reveals the complexities of relationships in the onset of love, platonically and otherwise, with a direct honesty and realism that made it difficult not to relate to.
Model density score: 0.32165
</code>
Let's summarize our results. The bigger dataset's sentence clusters can be summed up as follows:
1. Fantastic writing
1. Reading experience (?)
1. Unlikeable characters
1. Plot synopsis
1. Not enjoyable
1. Thematic elements: relationships & emotions
The smaller dataset's clusters can be summed up like this:
1. Fantastic writing
1. Plot synopsis
1. Loved it
1. Unlikeable characters
1. Reading experience
1. Thematic elements: Relationships & emotions
As we can see, the two sets of results are broadly similar; there are no radical differences between the two sets of clusters. The only major difference is that the bigger dataset includes a cluster of sentences expressing dislike of the book, whereas the smaller dataset includes a cluster of sentences expressing love of the book. But this was to be expected, given the relative proportions of positive and negative reviews between the two datasets.
Given these results, we feel that the smaller dataset is preferable. Its clusters seem slightly more internally coherent and to better capture the general sentiment toward the book._____no_output_____
|
{
"repository": "williecostello/BetterReads",
"path": "notebooks/04_optimizing_goodreads.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 2,
"size": 27291,
"hexsha": "cb41b71beb58d9a72819d12a7fa7141700f9f539",
"max_line_length": 460,
"avg_line_length": 50.7267657993,
"alphanum_fraction": 0.6596680224
}
|
# Notebook from wconnell/metrx
Path: notebook/.ipynb_checkpoints/2020.03.30_feat_sel_shuff_dynamic-checkpoint.ipynb
<code>
%load_ext autoreload
%autoreload 2_____no_output_____import os
import sys
sys.path.append("..")
import datetime
import pathlib
from collections import OrderedDict
import numpy as np
import pandas as pd_____no_output_____# Pytorch
import torch
from torch.optim import lr_scheduler
import torch.optim as optim
from torch.autograd import Variable
# Custom
from dutils import Experiment
from trainer import fit
import visualization as vis
from tcga_datasets import SiameseDataset
# Models
from tcga_networks import EmbeddingNet, SiameseNet
from losses import ContrastiveLoss
# Metrics
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score as ANMI_____no_output_____def getTCGA(disease):
path = "/srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP/TCGA/TCGA_{}_counts.tsv.gz"
files = [path.format(d) for d in disease]
return files
def readGCP(files, biotype='protein_coding', mean=True):
"""
Paths to count matrices.
"""
data_dict = {}
for f in files:
key = os.path.basename(f).split("_")[1]
data = pd.read_csv(f, sep='\t', index_col=0)
# transcript metadata
meta = pd.DataFrame([row[:-1] for row in data.index.str.split("|")],
columns=['ENST', 'ENSG', 'OTTHUMG', 'OTTHUMT', 'GENE-NUM', 'GENE', 'BP', 'BIOTYPE'])
meta = pd.MultiIndex.from_frame(meta)
data.index = meta
# subset transcripts
data = data.xs(key=biotype, level='BIOTYPE')
data = data.droplevel(['ENST', 'ENSG', 'OTTHUMG', 'OTTHUMT', 'GENE-NUM', 'BP'])
# average gene expression of splice variants
data = data.T
if mean:
data = data.groupby(by=data.columns, axis=1).mean()
data_dict[key] = data
return data_dict
def uq_norm(df, q=0.75):
"""
Upper quartile normalization of GEX for samples.
"""
quantiles = df.quantile(q=q, axis=1)
norm = df.divide(quantiles, axis=0)
return norm
def process_TCGA(disease=['BRCA', 'LUAD', 'KIRC', 'THCA', 'PRAD', 'SKCM']):
base="/srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP"
# get files
tcga_files = getTCGA(disease)
# read meta/data
tcga_meta = pd.read_csv(os.path.join(base, "TCGA/TCGA_GDC_ID_MAP.tsv"), sep="\t")
tcga_raw = readGCP(tcga_files, mean=True)
# combine samples
tcga_raw = pd.concat(tcga_raw.values())
# Upper quartile normalization
tcga_raw = uq_norm(tcga_raw)
# log norm
tcga = tcga_raw.transform(np.log1p)
return tcga, tcga_meta_____no_output_____def generate_fsets(data, n_features, steps=5):
r = np.linspace(start=1, stop=n_features, num=steps, dtype='int')
idx = [np.random.choice(data.shape[1], size=i, replace=False) for i in r]
return idx_____no_output_____def feature_training(train_data, train_labels, test_data, test_labels, feature_idx, embedding, exp_dir, cuda=True):
# Meta data
meta_data = {"n_features":[],
"model":[],
"ANMI":[]}
# Params
batch_size = 8
kwargs = {'num_workers': 10, 'pin_memory': True} if cuda else {'num_workers': 10}
# Feature Index
for batch, feat in enumerate(feature_idx):
print("Number features: {}\n".format(len(feat)))
exp_data = {'feature_idx':feat}
# Define data
siamese_train_dataset = SiameseDataset(data=train_data.iloc[:,feat],
labels=train_labels,
train=True)
siamese_test_dataset = SiameseDataset(data=test_data.iloc[:,feat],
labels=test_labels,
train=False)
# Loaders
siamese_train_loader = torch.utils.data.DataLoader(siamese_train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
siamese_test_loader = torch.utils.data.DataLoader(siamese_test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# Instantiate model
n_samples, n_features = siamese_train_dataset.train_data.shape
for i in range(3):
nmodel = 'model_{}'.format(i)
print("\t{}".format(nmodel))
embedding_net = EmbeddingNet(n_features, embedding)
model = SiameseNet(embedding_net)
if cuda:
model.cuda()
# Parameters
margin = 1.
loss_fn = ContrastiveLoss(margin)
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 10
log_interval = round(len(siamese_train_dataset)/1/batch_size)
# Train
train_loss, val_loss = fit(siamese_train_loader, siamese_test_loader, model, loss_fn, optimizer, scheduler,
n_epochs, cuda, log_interval)
# Test Embeddings
val_embeddings_baseline, val_labels_baseline = vis.extract_embeddings(siamese_test_dataset.test_data, siamese_test_dataset.labels, model)
# Evaluation
n_clusters = len(np.unique(test_labels))
kmeans = KMeans(n_clusters=n_clusters)
siamese_clusters = kmeans.fit_predict(val_embeddings_baseline)
anmi = ANMI(siamese_clusters, val_labels_baseline)
# Store
meta_data['n_features'].append(len(feat))
meta_data['model'].append(nmodel)
meta_data['ANMI'].append(anmi)
exp_data[nmodel] = {'data': (val_embeddings_baseline, val_labels_baseline),
'loss': (train_loss, val_loss),
'ANMI': anmi}
pd.to_pickle(exp_data, os.path.join(exp_dir, "model_{}.pkl".format(len(feat))))
pd.to_pickle(meta_data, os.path.join(exp_dir, "model_meta_data.pkl"))_____no_output_____def main(disease, sample_type, **kwargs):
# GPUs
os.environ["CUDA_VISIBLE_DEVICES"] = kwargs['device']
cuda = torch.cuda.is_available()
print("Cuda is available: {}".format(cuda))
# Read / write / process
tcga, tcga_meta = process_TCGA(disease)
# Feature design
feature_idx = generate_fsets(tcga, n_features=kwargs['n_features'], steps=kwargs['steps'])
# Experiment design
hierarchy = OrderedDict({'Disease':disease,
'Sample Type':sample_type})
# Define experiment
exp = Experiment(meta_data=tcga_meta,
hierarchy=hierarchy,
index='CGHubAnalysisID',
cases='Case ID',
min_samples=20)
# Train / Test split
exp.train_test_split(cases='Case ID')
# Return data
train_data, train_labels = exp.get_data(tcga, subset="train", dtype=np.float32)
test_data, test_labels = exp.get_data(tcga, subset="test", dtype=np.float32)
# randomize labels
np.random.shuffle(train_labels)
# Path *fix*
dtime = datetime.datetime.today().strftime("%Y.%m.%d_%H:%M")
exp_dir = "/srv/nas/mk2/projects/pan-cancer/experiments/feature_sel/{}_{}_{}_{}_{}-{}".format(dtime,
kwargs['note'],
len(exp.labels_dict),
kwargs['embedding'],
kwargs['n_features'],
kwargs['steps'])
pathlib.Path(exp_dir).mkdir(parents=True, exist_ok=False)
print('Saving to: \n{}'.format(exp_dir))
# Meta data
experiments = {'experiment': exp,
'train':(train_data, train_labels),
'test': (test_data, test_labels)}
pd.to_pickle(experiments, os.path.join(exp_dir, "experiment_meta_data.pkl"))
# Training
feature_training(train_data, train_labels, test_data, test_labels, feature_idx, kwargs['embedding'], exp_dir)_____no_output_____
</code>
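As a quick sanity check of the `uq_norm` helper defined above: each sample (row) is divided by its own upper-quartile (75th percentile) expression value, so samples with very different sequencing depths end up on a comparable scale. The values below are made up purely for illustration.
<code>
# Toy illustration of uq_norm (hypothetical values, not TCGA data)
toy = pd.DataFrame([[1.0, 2.0, 3.0, 4.0],
                    [10.0, 20.0, 30.0, 40.0]],
                   index=['sample_A', 'sample_B'],
                   columns=['gene_1', 'gene_2', 'gene_3', 'gene_4'])

# Each row is divided by its own 75th percentile, so both samples
# come out on the same relative scale
uq_norm(toy)
</code>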
### Setup_____no_output_____
<code>
base="/srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP"
# read meta/data
tcga_meta = pd.read_csv(os.path.join(base, "TCGA/TCGA_GDC_ID_MAP.tsv"), sep="\t")
# select disease
disease = tcga_meta[tcga_meta['Sample Type']=='Solid Tissue Normal']['Disease'].value_counts()
disease = list(disease[disease>=20].index)
disease_____no_output_____disease = ['BRCA', 'LUAD', 'KIRC', 'THCA', 'PRAD', 'SKCM']
sample_type = ['Primary Tumor', 'Solid Tissue Normal']
params = {"device":"4",
"note":"shuffle",
"n_features":50,
"steps":50,
"embedding":2}_____no_output_____main(disease=disease, sample_type=sample_type, **params)Cuda is available: True
Saving to:
/srv/nas/mk2/projects/pan-cancer/experiments/feature_sel/2020.03.29_22:18_shuffle_11_2_50-50
Number features: 1
model_0
Train: [0/2931 (0%)] Loss: 0.374927
Train: [1098/2931 (100%)] Loss: 0.193504
Epoch: 1/10. Train set: Average loss: 0.1940
Epoch: 1/10. Validation set: Average loss: 0.1557
Train: [0/2931 (0%)] Loss: 0.222659
Train: [1098/2931 (100%)] Loss: 0.181222
Epoch: 2/10. Train set: Average loss: 0.1813
Epoch: 2/10. Validation set: Average loss: 0.1508
Train: [0/2931 (0%)] Loss: 0.212997
Train: [1098/2931 (100%)] Loss: 0.183003
Epoch: 3/10. Train set: Average loss: 0.1831
Epoch: 3/10. Validation set: Average loss: 0.1537
Train: [0/2931 (0%)] Loss: 0.189994
Train: [1098/2931 (100%)] Loss: 0.180737
Epoch: 4/10. Train set: Average loss: 0.1808
Epoch: 4/10. Validation set: Average loss: 0.1913
Train: [0/2931 (0%)] Loss: 0.186037
Train: [1098/2931 (100%)] Loss: 0.179655
Epoch: 5/10. Train set: Average loss: 0.1797
Epoch: 5/10. Validation set: Average loss: 0.1603
Train: [0/2931 (0%)] Loss: 0.170767
Train: [1098/2931 (100%)] Loss: 0.179679
Epoch: 6/10. Train set: Average loss: 0.1797
Epoch: 6/10. Validation set: Average loss: 0.1545
Train: [0/2931 (0%)] Loss: 0.134427
Train: [1098/2931 (100%)] Loss: 0.183294
Epoch: 7/10. Train set: Average loss: 0.1832
Epoch: 7/10. Validation set: Average loss: 0.1731
Train: [0/2931 (0%)] Loss: 0.252801
Train: [1098/2931 (100%)] Loss: 0.178871
Epoch: 8/10. Train set: Average loss: 0.1791
Epoch: 8/10. Validation set: Average loss: 0.1582
Train: [0/2931 (0%)] Loss: 0.182920
Train: [1098/2931 (100%)] Loss: 0.178184
Epoch: 9/10. Train set: Average loss: 0.1782
Epoch: 9/10. Validation set: Average loss: 0.1501
Train: [0/2931 (0%)] Loss: 0.190192
Train: [1098/2931 (100%)] Loss: 0.173972
Epoch: 10/10. Train set: Average loss: 0.1740
Epoch: 10/10. Validation set: Average loss: 0.1503
model_1
Train: [0/2931 (0%)] Loss: 0.187336
Train: [1098/2931 (100%)] Loss: 0.183202
Epoch: 1/10. Train set: Average loss: 0.1832
Epoch: 1/10. Validation set: Average loss: 0.1597
Train: [0/2931 (0%)] Loss: 0.213644
Train: [1098/2931 (100%)] Loss: 0.190787
Epoch: 2/10. Train set: Average loss: 0.1908
Epoch: 2/10. Validation set: Average loss: 0.2095
Train: [0/2931 (0%)] Loss: 0.173457
Train: [1098/2931 (100%)] Loss: 0.187406
Epoch: 3/10. Train set: Average loss: 0.1874
Epoch: 3/10. Validation set: Average loss: 0.1529
Train: [0/2931 (0%)] Loss: 0.164775
Train: [1098/2931 (100%)] Loss: 0.181925
Epoch: 4/10. Train set: Average loss: 0.1819
Epoch: 4/10. Validation set: Average loss: 0.1545
Train: [0/2931 (0%)] Loss: 0.220412
Train: [1098/2931 (100%)] Loss: 0.181454
Epoch: 5/10. Train set: Average loss: 0.1816
Epoch: 5/10. Validation set: Average loss: 0.1558
Train: [0/2931 (0%)] Loss: 0.161692
Train: [1098/2931 (100%)] Loss: 0.178918
Epoch: 6/10. Train set: Average loss: 0.1789
Epoch: 6/10. Validation set: Average loss: 0.1530
Train: [0/2931 (0%)] Loss: 0.227426
Train: [1098/2931 (100%)] Loss: 0.183900
Epoch: 7/10. Train set: Average loss: 0.1840
Epoch: 7/10. Validation set: Average loss: 0.1539
Train: [0/2931 (0%)] Loss: 0.142838
Train: [1098/2931 (100%)] Loss: 0.182443
Epoch: 8/10. Train set: Average loss: 0.1823
Epoch: 8/10. Validation set: Average loss: 0.1921
Train: [0/2931 (0%)] Loss: 0.252340
Train: [1098/2931 (100%)] Loss: 0.183801
Epoch: 9/10. Train set: Average loss: 0.1840
Epoch: 9/10. Validation set: Average loss: 0.1522
Train: [0/2931 (0%)] Loss: 0.138726
Train: [1098/2931 (100%)] Loss: 0.180158
Epoch: 10/10. Train set: Average loss: 0.1800
Epoch: 10/10. Validation set: Average loss: 0.1521
model_2
Train: [0/2931 (0%)] Loss: 0.187464
Train: [1098/2931 (100%)] Loss: 0.195713
Epoch: 1/10. Train set: Average loss: 0.1957
Epoch: 1/10. Validation set: Average loss: 0.1539
Train: [0/2931 (0%)] Loss: 0.337188
Train: [1098/2931 (100%)] Loss: 0.188781
Epoch: 2/10. Train set: Average loss: 0.1892
Epoch: 2/10. Validation set: Average loss: 0.1695
Train: [0/2931 (0%)] Loss: 0.127360
Train: [1098/2931 (100%)] Loss: 0.188006
Epoch: 3/10. Train set: Average loss: 0.1878
Epoch: 3/10. Validation set: Average loss: 0.1525
Train: [0/2931 (0%)] Loss: 0.096825
Train: [1098/2931 (100%)] Loss: 0.182501
Epoch: 4/10. Train set: Average loss: 0.1823
Epoch: 4/10. Validation set: Average loss: 0.1553
Train: [0/2931 (0%)] Loss: 0.172454
Train: [1098/2931 (100%)] Loss: 0.183237
Epoch: 5/10. Train set: Average loss: 0.1832
Epoch: 5/10. Validation set: Average loss: 0.1562
Train: [0/2931 (0%)] Loss: 0.180744
Train: [1098/2931 (100%)] Loss: 0.183042
Epoch: 6/10. Train set: Average loss: 0.1830
Epoch: 6/10. Validation set: Average loss: 0.1502
Train: [0/2931 (0%)] Loss: 0.208963
Train: [1098/2931 (100%)] Loss: 0.180374
Epoch: 7/10. Train set: Average loss: 0.1805
Epoch: 7/10. Validation set: Average loss: 0.1564
Train: [0/2931 (0%)] Loss: 0.175874
Train: [1098/2931 (100%)] Loss: 0.181164
Epoch: 8/10. Train set: Average loss: 0.1812
Epoch: 8/10. Validation set: Average loss: 0.1509
Train: [0/2931 (0%)] Loss: 0.128747
Train: [1098/2931 (100%)] Loss: 0.175506
Epoch: 9/10. Train set: Average loss: 0.1754
Epoch: 9/10. Validation set: Average loss: 0.1516
Train: [0/2931 (0%)] Loss: 0.203135
Train: [1098/2931 (100%)] Loss: 0.168607
Epoch: 10/10. Train set: Average loss: 0.1687
Epoch: 10/10. Validation set: Average loss: 0.1515
Number features: 2
model_0
Train: [0/2931 (0%)] Loss: 0.249222
Train: [1098/2931 (100%)] Loss: 0.171710
Epoch: 1/10. Train set: Average loss: 0.1719
Epoch: 1/10. Validation set: Average loss: 0.1652
Train: [0/2931 (0%)] Loss: 0.220624
Train: [1098/2931 (100%)] Loss: 0.180322
Epoch: 2/10. Train set: Average loss: 0.1804
Epoch: 2/10. Validation set: Average loss: 0.1642
Train: [0/2931 (0%)] Loss: 0.153036
Train: [1098/2931 (100%)] Loss: 0.172054
Epoch: 3/10. Train set: Average loss: 0.1720
Epoch: 3/10. Validation set: Average loss: 0.1671
Train: [0/2931 (0%)] Loss: 0.211337
Train: [1098/2931 (100%)] Loss: 0.165912
Epoch: 4/10. Train set: Average loss: 0.1660
Epoch: 4/10. Validation set: Average loss: 0.1638
Train: [0/2931 (0%)] Loss: 0.202172
Train: [1098/2931 (100%)] Loss: 0.171318
Epoch: 5/10. Train set: Average loss: 0.1714
Epoch: 5/10. Validation set: Average loss: 0.1678
Train: [0/2931 (0%)] Loss: 0.197127
Train: [1098/2931 (100%)] Loss: 0.170522
Epoch: 6/10. Train set: Average loss: 0.1706
Epoch: 6/10. Validation set: Average loss: 0.1660
Train: [0/2931 (0%)] Loss: 0.214880
Train: [1098/2931 (100%)] Loss: 0.163700
Epoch: 7/10. Train set: Average loss: 0.1638
Epoch: 7/10. Validation set: Average loss: 0.1730
Train: [0/2931 (0%)] Loss: 0.217082
Train: [1098/2931 (100%)] Loss: 0.169587
Epoch: 8/10. Train set: Average loss: 0.1697
Epoch: 8/10. Validation set: Average loss: 0.1879
Train: [0/2931 (0%)] Loss: 0.227795
Train: [1098/2931 (100%)] Loss: 0.165456
Epoch: 9/10. Train set: Average loss: 0.1656
Epoch: 9/10. Validation set: Average loss: 0.1596
Train: [0/2931 (0%)] Loss: 0.122143
Train: [1098/2931 (100%)] Loss: 0.157381
Epoch: 10/10. Train set: Average loss: 0.1573
Epoch: 10/10. Validation set: Average loss: 0.1586
model_1
Train: [0/2931 (0%)] Loss: 0.248103
Train: [1098/2931 (100%)] Loss: 0.170453
Epoch: 1/10. Train set: Average loss: 0.1707
Epoch: 1/10. Validation set: Average loss: 0.1772
Train: [0/2931 (0%)] Loss: 0.192732
Train: [1098/2931 (100%)] Loss: 0.181071
Epoch: 2/10. Train set: Average loss: 0.1811
Epoch: 2/10. Validation set: Average loss: 0.1623
Train: [0/2931 (0%)] Loss: 0.138900
Train: [1098/2931 (100%)] Loss: 0.171251
Epoch: 3/10. Train set: Average loss: 0.1712
Epoch: 3/10. Validation set: Average loss: 0.1644
Train: [0/2931 (0%)] Loss: 0.227918
Train: [1098/2931 (100%)] Loss: 0.183757
Epoch: 4/10. Train set: Average loss: 0.1839
Epoch: 4/10. Validation set: Average loss: 0.1731
Train: [0/2931 (0%)] Loss: 0.157927
Train: [1098/2931 (100%)] Loss: 0.171614
Epoch: 5/10. Train set: Average loss: 0.1716
Epoch: 5/10. Validation set: Average loss: 0.1657
Train: [0/2931 (0%)] Loss: 0.159822
Train: [1098/2931 (100%)] Loss: 0.170320
Epoch: 6/10. Train set: Average loss: 0.1703
Epoch: 6/10. Validation set: Average loss: 0.1642
Train: [0/2931 (0%)] Loss: 0.153967
Train: [1098/2931 (100%)] Loss: 0.166519
Epoch: 7/10. Train set: Average loss: 0.1665
Epoch: 7/10. Validation set: Average loss: 0.1547
Train: [0/2931 (0%)] Loss: 0.189017
Train: [1098/2931 (100%)] Loss: 0.157176
Epoch: 8/10. Train set: Average loss: 0.1573
Epoch: 8/10. Validation set: Average loss: 0.1593
Train: [0/2931 (0%)] Loss: 0.111575
Train: [1098/2931 (100%)] Loss: 0.156069
Epoch: 9/10. Train set: Average loss: 0.1559
Epoch: 9/10. Validation set: Average loss: 0.1425
Train: [0/2931 (0%)] Loss: 0.192282
Train: [1098/2931 (100%)] Loss: 0.150287
Epoch: 10/10. Train set: Average loss: 0.1504
Epoch: 10/10. Validation set: Average loss: 0.1433
model_2
Train: [0/2931 (0%)] Loss: 0.187099
Train: [1098/2931 (100%)] Loss: 0.174603
Epoch: 1/10. Train set: Average loss: 0.1746
Epoch: 1/10. Validation set: Average loss: 0.1575
Train: [0/2931 (0%)] Loss: 0.098627
Train: [1098/2931 (100%)] Loss: 0.171403
Epoch: 2/10. Train set: Average loss: 0.1712
Epoch: 2/10. Validation set: Average loss: 0.1566
Train: [0/2931 (0%)] Loss: 0.118006
Train: [1098/2931 (100%)] Loss: 0.175257
Epoch: 3/10. Train set: Average loss: 0.1751
Epoch: 3/10. Validation set: Average loss: 0.1669
Train: [0/2931 (0%)] Loss: 0.131965
Train: [1098/2931 (100%)] Loss: 0.169449
Epoch: 4/10. Train set: Average loss: 0.1693
Epoch: 4/10. Validation set: Average loss: 0.1640
Train: [0/2931 (0%)] Loss: 0.156112
Train: [1098/2931 (100%)] Loss: 0.169533
Epoch: 5/10. Train set: Average loss: 0.1695
Epoch: 5/10. Validation set: Average loss: 0.1598
Train: [0/2931 (0%)] Loss: 0.179457
Train: [1098/2931 (100%)] Loss: 0.169698
Epoch: 6/10. Train set: Average loss: 0.1697
Epoch: 6/10. Validation set: Average loss: 0.1607
Train: [0/2931 (0%)] Loss: 0.190843
Train: [1098/2931 (100%)] Loss: 0.171622
Epoch: 7/10. Train set: Average loss: 0.1717
Epoch: 7/10. Validation set: Average loss: 0.1584
Train: [0/2931 (0%)] Loss: 0.254858
Train: [1098/2931 (100%)] Loss: 0.165034
Epoch: 8/10. Train set: Average loss: 0.1653
Epoch: 8/10. Validation set: Average loss: 0.1587
Train: [0/2931 (0%)] Loss: 0.119103
Train: [1098/2931 (100%)] Loss: 0.161403
Epoch: 9/10. Train set: Average loss: 0.1613
Epoch: 9/10. Validation set: Average loss: 0.1611
Train: [0/2931 (0%)] Loss: 0.129035
Train: [1098/2931 (100%)] Loss: 0.155339
Epoch: 10/10. Train set: Average loss: 0.1553
Epoch: 10/10. Validation set: Average loss: 0.1566
Number features: 3
model_0
Train: [0/2931 (0%)] Loss: 0.249297
Train: [1098/2931 (100%)] Loss: 0.183006
Epoch: 1/10. Train set: Average loss: 0.1832
Epoch: 1/10. Validation set: Average loss: 0.2024
Train: [0/2931 (0%)] Loss: 0.162215
Train: [1098/2931 (100%)] Loss: 0.182924
Epoch: 2/10. Train set: Average loss: 0.1829
Epoch: 2/10. Validation set: Average loss: 0.1635
Train: [0/2931 (0%)] Loss: 0.156759
Train: [1098/2931 (100%)] Loss: 0.167828
Epoch: 3/10. Train set: Average loss: 0.1678
Epoch: 3/10. Validation set: Average loss: 0.1406
Train: [0/2931 (0%)] Loss: 0.201045
Train: [1098/2931 (100%)] Loss: 0.160625
Epoch: 4/10. Train set: Average loss: 0.1607
Epoch: 4/10. Validation set: Average loss: 0.1347
Train: [0/2931 (0%)] Loss: 0.205500
Train: [1098/2931 (100%)] Loss: 0.158861
Epoch: 5/10. Train set: Average loss: 0.1590
Epoch: 5/10. Validation set: Average loss: 0.1433
Train: [0/2931 (0%)] Loss: 0.134409
Train: [1098/2931 (100%)] Loss: 0.154523
Epoch: 6/10. Train set: Average loss: 0.1545
Epoch: 6/10. Validation set: Average loss: 0.1395
Train: [0/2931 (0%)] Loss: 0.138358
Train: [1098/2931 (100%)] Loss: 0.159633
Epoch: 7/10. Train set: Average loss: 0.1596
Epoch: 7/10. Validation set: Average loss: 0.1459
Train: [0/2931 (0%)] Loss: 0.142148
Train: [1098/2931 (100%)] Loss: 0.158758
Epoch: 8/10. Train set: Average loss: 0.1587
Epoch: 8/10. Validation set: Average loss: 0.1478
Train: [0/2931 (0%)] Loss: 0.101143
Train: [1098/2931 (100%)] Loss: 0.148371
Epoch: 9/10. Train set: Average loss: 0.1482
Epoch: 9/10. Validation set: Average loss: 0.1347
Train: [0/2931 (0%)] Loss: 0.159657
Train: [1098/2931 (100%)] Loss: 0.147856
Epoch: 10/10. Train set: Average loss: 0.1479
Epoch: 10/10. Validation set: Average loss: 0.1359
model_1
Train: [0/2931 (0%)] Loss: 0.312120
Train: [1098/2931 (100%)] Loss: 0.172076
Epoch: 1/10. Train set: Average loss: 0.1725
Epoch: 1/10. Validation set: Average loss: 0.1350
Train: [0/2931 (0%)] Loss: 0.193717
Train: [1098/2931 (100%)] Loss: 0.166789
Epoch: 2/10. Train set: Average loss: 0.1669
Epoch: 2/10. Validation set: Average loss: 0.1505
Train: [0/2931 (0%)] Loss: 0.164718
Train: [1098/2931 (100%)] Loss: 0.163732
Epoch: 3/10. Train set: Average loss: 0.1637
Epoch: 3/10. Validation set: Average loss: 0.1486
Train: [0/2931 (0%)] Loss: 0.232325
Train: [1098/2931 (100%)] Loss: 0.158148
Epoch: 4/10. Train set: Average loss: 0.1584
Epoch: 4/10. Validation set: Average loss: 0.1431
Train: [0/2931 (0%)] Loss: 0.185917
Train: [1098/2931 (100%)] Loss: 0.155473
Epoch: 5/10. Train set: Average loss: 0.1556
Epoch: 5/10. Validation set: Average loss: 0.1405
Train: [0/2931 (0%)] Loss: 0.128152
Train: [1098/2931 (100%)] Loss: 0.149164
Epoch: 6/10. Train set: Average loss: 0.1491
Epoch: 6/10. Validation set: Average loss: 0.1385
Train: [0/2931 (0%)] Loss: 0.110048
Train: [1098/2931 (100%)] Loss: 0.151142
Epoch: 7/10. Train set: Average loss: 0.1510
Epoch: 7/10. Validation set: Average loss: 0.1524
Train: [0/2931 (0%)] Loss: 0.195928
Train: [1098/2931 (100%)] Loss: 0.180620
Epoch: 8/10. Train set: Average loss: 0.1807
Epoch: 8/10. Validation set: Average loss: 0.1696
Train: [0/2931 (0%)] Loss: 0.253078
Train: [1098/2931 (100%)] Loss: 0.162789
Epoch: 9/10. Train set: Average loss: 0.1630
Epoch: 9/10. Validation set: Average loss: 0.1520
Train: [0/2931 (0%)] Loss: 0.106072
Train: [1098/2931 (100%)] Loss: 0.155799
Epoch: 10/10. Train set: Average loss: 0.1557
Epoch: 10/10. Validation set: Average loss: 0.1358
model_2
Train: [0/2931 (0%)] Loss: 0.373910
Train: [1098/2931 (100%)] Loss: 0.178899
Epoch: 1/10. Train set: Average loss: 0.1794
Epoch: 1/10. Validation set: Average loss: 0.1843
Train: [0/2931 (0%)] Loss: 0.262310
Train: [1098/2931 (100%)] Loss: 0.177270
Epoch: 2/10. Train set: Average loss: 0.1775
Epoch: 2/10. Validation set: Average loss: 0.1765
Train: [0/2931 (0%)] Loss: 0.253810
Train: [1098/2931 (100%)] Loss: 0.173688
Epoch: 3/10. Train set: Average loss: 0.1739
Epoch: 3/10. Validation set: Average loss: 0.1634
Train: [0/2931 (0%)] Loss: 0.174857
Train: [1098/2931 (100%)] Loss: 0.170011
Epoch: 4/10. Train set: Average loss: 0.1700
Epoch: 4/10. Validation set: Average loss: 0.1913
Train: [0/2931 (0%)] Loss: 0.204993
Train: [1098/2931 (100%)] Loss: 0.160092
Epoch: 5/10. Train set: Average loss: 0.1602
Epoch: 5/10. Validation set: Average loss: 0.1548
Train: [0/2931 (0%)] Loss: 0.173894
Train: [1098/2931 (100%)] Loss: 0.164165
Epoch: 6/10. Train set: Average loss: 0.1642
Epoch: 6/10. Validation set: Average loss: 0.1529
Train: [0/2931 (0%)] Loss: 0.128250
Train: [1098/2931 (100%)] Loss: 0.171647
Epoch: 7/10. Train set: Average loss: 0.1715
Epoch: 7/10. Validation set: Average loss: 0.1677
Train: [0/2931 (0%)] Loss: 0.234524
Train: [1098/2931 (100%)] Loss: 0.156328
Epoch: 8/10. Train set: Average loss: 0.1565
Epoch: 8/10. Validation set: Average loss: 0.1412
Train: [0/2931 (0%)] Loss: 0.210525
Train: [1098/2931 (100%)] Loss: 0.148034
Epoch: 9/10. Train set: Average loss: 0.1482
Epoch: 9/10. Validation set: Average loss: 0.1359
Train: [0/2931 (0%)] Loss: 0.140319
Train: [1098/2931 (100%)] Loss: 0.151046
Epoch: 10/10. Train set: Average loss: 0.1510
Epoch: 10/10. Validation set: Average loss: 0.1382
Number features: 4
model_0
Train: [0/2931 (0%)] Loss: 0.374823
Train: [1098/2931 (100%)] Loss: 0.172730
Epoch: 1/10. Train set: Average loss: 0.1733
Epoch: 1/10. Validation set: Average loss: 0.1673
Train: [0/2931 (0%)] Loss: 0.275353
Train: [1098/2931 (100%)] Loss: 0.165270
Epoch: 2/10. Train set: Average loss: 0.1656
Epoch: 2/10. Validation set: Average loss: 0.1845
Train: [0/2931 (0%)] Loss: 0.287510
Train: [1098/2931 (100%)] Loss: 0.167999
Epoch: 3/10. Train set: Average loss: 0.1683
Epoch: 3/10. Validation set: Average loss: 0.1944
Train: [0/2931 (0%)] Loss: 0.281919
Train: [1098/2931 (100%)] Loss: 0.160950
Epoch: 4/10. Train set: Average loss: 0.1613
Epoch: 4/10. Validation set: Average loss: 0.1649
Train: [0/2931 (0%)] Loss: 0.154450
Train: [1098/2931 (100%)] Loss: 0.164805
Epoch: 5/10. Train set: Average loss: 0.1648
Epoch: 5/10. Validation set: Average loss: 0.1614
Train: [0/2931 (0%)] Loss: 0.141266
Train: [1098/2931 (100%)] Loss: 0.153363
Epoch: 6/10. Train set: Average loss: 0.1533
Epoch: 6/10. Validation set: Average loss: 0.1774
Train: [0/2931 (0%)] Loss: 0.280593
Train: [1098/2931 (100%)] Loss: 0.161699
Epoch: 7/10. Train set: Average loss: 0.1620
Epoch: 7/10. Validation set: Average loss: 0.1586
Train: [0/2931 (0%)] Loss: 0.172083
Train: [1098/2931 (100%)] Loss: 0.157448
Epoch: 8/10. Train set: Average loss: 0.1575
Epoch: 8/10. Validation set: Average loss: 0.1627
Train: [0/2931 (0%)] Loss: 0.220455
Train: [1098/2931 (100%)] Loss: 0.148148
Epoch: 9/10. Train set: Average loss: 0.1483
Epoch: 9/10. Validation set: Average loss: 0.1490
Train: [0/2931 (0%)] Loss: 0.253321
Train: [1098/2931 (100%)] Loss: 0.144990
Epoch: 10/10. Train set: Average loss: 0.1453
Epoch: 10/10. Validation set: Average loss: 0.1488
model_1
Train: [0/2931 (0%)] Loss: 0.311977
Train: [1098/2931 (100%)] Loss: 0.162350
Epoch: 1/10. Train set: Average loss: 0.1628
Epoch: 1/10. Validation set: Average loss: 0.1734
Train: [0/2931 (0%)] Loss: 0.267583
Train: [1098/2931 (100%)] Loss: 0.161174
Epoch: 2/10. Train set: Average loss: 0.1615
Epoch: 2/10. Validation set: Average loss: 0.1850
Train: [0/2931 (0%)] Loss: 0.215827
Train: [1098/2931 (100%)] Loss: 0.163120
Epoch: 3/10. Train set: Average loss: 0.1633
Epoch: 3/10. Validation set: Average loss: 0.1573
Train: [0/2931 (0%)] Loss: 0.168145
Train: [1098/2931 (100%)] Loss: 0.158328
Epoch: 4/10. Train set: Average loss: 0.1584
Epoch: 4/10. Validation set: Average loss: 0.1646
Train: [0/2931 (0%)] Loss: 0.149752
Train: [1098/2931 (100%)] Loss: 0.148420
Epoch: 5/10. Train set: Average loss: 0.1484
Epoch: 5/10. Validation set: Average loss: 0.1716
Train: [0/2931 (0%)] Loss: 0.221934
Train: [1098/2931 (100%)] Loss: 0.154812
Epoch: 6/10. Train set: Average loss: 0.1550
Epoch: 6/10. Validation set: Average loss: 0.1587
Train: [0/2931 (0%)] Loss: 0.195982
Train: [1098/2931 (100%)] Loss: 0.158045
Epoch: 7/10. Train set: Average loss: 0.1581
Epoch: 7/10. Validation set: Average loss: 0.1797
Train: [0/2931 (0%)] Loss: 0.178622
Train: [1098/2931 (100%)] Loss: 0.152797
Epoch: 8/10. Train set: Average loss: 0.1529
Epoch: 8/10. Validation set: Average loss: 0.1673
Train: [0/2931 (0%)] Loss: 0.185816
Train: [1098/2931 (100%)] Loss: 0.148903
Epoch: 9/10. Train set: Average loss: 0.1490
Epoch: 9/10. Validation set: Average loss: 0.1436
Train: [0/2931 (0%)] Loss: 0.170490
Train: [1098/2931 (100%)] Loss: 0.147605
Epoch: 10/10. Train set: Average loss: 0.1477
Epoch: 10/10. Validation set: Average loss: 0.1446
model_2
Train: [0/2931 (0%)] Loss: 0.374606
Train: [1098/2931 (100%)] Loss: 0.166498
Epoch: 1/10. Train set: Average loss: 0.1671
Epoch: 1/10. Validation set: Average loss: 0.1650
Train: [0/2931 (0%)] Loss: 0.223835
Train: [1098/2931 (100%)] Loss: 0.160892
Epoch: 2/10. Train set: Average loss: 0.1611
Epoch: 2/10. Validation set: Average loss: 0.2064
Train: [0/2931 (0%)] Loss: 0.297244
Train: [1098/2931 (100%)] Loss: 0.159561
Epoch: 3/10. Train set: Average loss: 0.1599
Epoch: 3/10. Validation set: Average loss: 0.1775
Train: [0/2931 (0%)] Loss: 0.284444
Train: [1098/2931 (100%)] Loss: 0.157880
Epoch: 4/10. Train set: Average loss: 0.1582
Epoch: 4/10. Validation set: Average loss: 0.1828
Train: [0/2931 (0%)] Loss: 0.296700
Train: [1098/2931 (100%)] Loss: 0.153783
Epoch: 5/10. Train set: Average loss: 0.1542
Epoch: 5/10. Validation set: Average loss: 0.1666
Train: [0/2931 (0%)] Loss: 0.232787
Train: [1098/2931 (100%)] Loss: 0.150341
Epoch: 6/10. Train set: Average loss: 0.1506
Epoch: 6/10. Validation set: Average loss: 0.1697
Train: [0/2931 (0%)] Loss: 0.239347
Train: [1098/2931 (100%)] Loss: 0.147370
Epoch: 7/10. Train set: Average loss: 0.1476
Epoch: 7/10. Validation set: Average loss: 0.1673
Train: [0/2931 (0%)] Loss: 0.196420
Train: [1098/2931 (100%)] Loss: 0.152661
Epoch: 8/10. Train set: Average loss: 0.1528
Epoch: 8/10. Validation set: Average loss: 0.1729
Train: [0/2931 (0%)] Loss: 0.253451
Train: [1098/2931 (100%)] Loss: 0.151929
Epoch: 9/10. Train set: Average loss: 0.1522
Epoch: 9/10. Validation set: Average loss: 0.1505
Train: [0/2931 (0%)] Loss: 0.210607
Train: [1098/2931 (100%)] Loss: 0.151411
Epoch: 10/10. Train set: Average loss: 0.1516
Epoch: 10/10. Validation set: Average loss: 0.1498
Number features: 5
model_0
Train: [0/2931 (0%)] Loss: 0.124857
Train: [1098/2931 (100%)] Loss: 0.165936
Epoch: 1/10. Train set: Average loss: 0.1658
Epoch: 1/10. Validation set: Average loss: 0.1684
Train: [0/2931 (0%)] Loss: 0.085786
Train: [1098/2931 (100%)] Loss: 0.173113
Epoch: 2/10. Train set: Average loss: 0.1729
Epoch: 2/10. Validation set: Average loss: 0.1706
Train: [0/2931 (0%)] Loss: 0.167698
Train: [1098/2931 (100%)] Loss: 0.174692
Epoch: 3/10. Train set: Average loss: 0.1747
Epoch: 3/10. Validation set: Average loss: 0.1761
Train: [0/2931 (0%)] Loss: 0.088598
Train: [1098/2931 (100%)] Loss: 0.169374
Epoch: 4/10. Train set: Average loss: 0.1692
Epoch: 4/10. Validation set: Average loss: 0.1657
Train: [0/2931 (0%)] Loss: 0.056715
Train: [1098/2931 (100%)] Loss: 0.169159
Epoch: 5/10. Train set: Average loss: 0.1689
Epoch: 5/10. Validation set: Average loss: 0.1810
Train: [0/2931 (0%)] Loss: 0.145081
Train: [1098/2931 (100%)] Loss: 0.159125
Epoch: 6/10. Train set: Average loss: 0.1591
Epoch: 6/10. Validation set: Average loss: 0.1542
Train: [0/2931 (0%)] Loss: 0.061239
Train: [1098/2931 (100%)] Loss: 0.150857
Epoch: 7/10. Train set: Average loss: 0.1506
Epoch: 7/10. Validation set: Average loss: 0.1519
Train: [0/2931 (0%)] Loss: 0.119074
Train: [1098/2931 (100%)] Loss: 0.153661
Epoch: 8/10. Train set: Average loss: 0.1536
Epoch: 8/10. Validation set: Average loss: 0.1543
Train: [0/2931 (0%)] Loss: 0.094694
Train: [1098/2931 (100%)] Loss: 0.148640
Epoch: 9/10. Train set: Average loss: 0.1485
Epoch: 9/10. Validation set: Average loss: 0.1380
Train: [0/2931 (0%)] Loss: 0.082544
Train: [1098/2931 (100%)] Loss: 0.147101
Epoch: 10/10. Train set: Average loss: 0.1469
Epoch: 10/10. Validation set: Average loss: 0.1357
model_1
Train: [0/2931 (0%)] Loss: 0.187147
Train: [1098/2931 (100%)] Loss: 0.166602
Epoch: 1/10. Train set: Average loss: 0.1667
Epoch: 1/10. Validation set: Average loss: 0.1708
Train: [0/2931 (0%)] Loss: 0.129088
Train: [1098/2931 (100%)] Loss: 0.164543
Epoch: 2/10. Train set: Average loss: 0.1644
Epoch: 2/10. Validation set: Average loss: 0.1551
Train: [0/2931 (0%)] Loss: 0.181703
Train: [1098/2931 (100%)] Loss: 0.165528
Epoch: 3/10. Train set: Average loss: 0.1656
Epoch: 3/10. Validation set: Average loss: 0.1476
Train: [0/2931 (0%)] Loss: 0.179429
Train: [1098/2931 (100%)] Loss: 0.164059
Epoch: 4/10. Train set: Average loss: 0.1641
Epoch: 4/10. Validation set: Average loss: 0.1604
Train: [0/2931 (0%)] Loss: 0.150228
Train: [1098/2931 (100%)] Loss: 0.150497
Epoch: 5/10. Train set: Average loss: 0.1505
Epoch: 5/10. Validation set: Average loss: 0.1416
Train: [0/2931 (0%)] Loss: 0.109927
Train: [1098/2931 (100%)] Loss: 0.154266
Epoch: 6/10. Train set: Average loss: 0.1541
Epoch: 6/10. Validation set: Average loss: 0.1495
Train: [0/2931 (0%)] Loss: 0.149067
Train: [1098/2931 (100%)] Loss: 0.150350
Epoch: 7/10. Train set: Average loss: 0.1503
Epoch: 7/10. Validation set: Average loss: 0.1540
Train: [0/2931 (0%)] Loss: 0.093017
Train: [1098/2931 (100%)] Loss: 0.146262
Epoch: 8/10. Train set: Average loss: 0.1461
Epoch: 8/10. Validation set: Average loss: 0.1491
Train: [0/2931 (0%)] Loss: 0.129397
Train: [1098/2931 (100%)] Loss: 0.140383
Epoch: 9/10. Train set: Average loss: 0.1404
Epoch: 9/10. Validation set: Average loss: 0.1424
Train: [0/2931 (0%)] Loss: 0.212504
Train: [1098/2931 (100%)] Loss: 0.140477
Epoch: 10/10. Train set: Average loss: 0.1407
Epoch: 10/10. Validation set: Average loss: 0.1425
model_2
Train: [0/2931 (0%)] Loss: 0.249253
Train: [1098/2931 (100%)] Loss: 0.174190
Epoch: 1/10. Train set: Average loss: 0.1744
Epoch: 1/10. Validation set: Average loss: 0.1839
Train: [0/2931 (0%)] Loss: 0.212208
Train: [1098/2931 (100%)] Loss: 0.167214
Epoch: 2/10. Train set: Average loss: 0.1673
Epoch: 2/10. Validation set: Average loss: 0.1641
Train: [0/2931 (0%)] Loss: 0.213794
Train: [1098/2931 (100%)] Loss: 0.165356
Epoch: 3/10. Train set: Average loss: 0.1655
Epoch: 3/10. Validation set: Average loss: 0.1492
Train: [0/2931 (0%)] Loss: 0.195251
Train: [1098/2931 (100%)] Loss: 0.160546
Epoch: 4/10. Train set: Average loss: 0.1606
Epoch: 4/10. Validation set: Average loss: 0.1473
Train: [0/2931 (0%)] Loss: 0.127635
Train: [1098/2931 (100%)] Loss: 0.157487
Epoch: 5/10. Train set: Average loss: 0.1574
Epoch: 5/10. Validation set: Average loss: 0.1474
Train: [0/2931 (0%)] Loss: 0.170474
Train: [1098/2931 (100%)] Loss: 0.155770
Epoch: 6/10. Train set: Average loss: 0.1558
Epoch: 6/10. Validation set: Average loss: 0.1616
Train: [0/2931 (0%)] Loss: 0.190919
Train: [1098/2931 (100%)] Loss: 0.154362
Epoch: 7/10. Train set: Average loss: 0.1545
Epoch: 7/10. Validation set: Average loss: 0.1478
Train: [0/2931 (0%)] Loss: 0.168325
Train: [1098/2931 (100%)] Loss: 0.160862
Epoch: 8/10. Train set: Average loss: 0.1609
Epoch: 8/10. Validation set: Average loss: 0.1790
Train: [0/2931 (0%)] Loss: 0.170163
Train: [1098/2931 (100%)] Loss: 0.155603
Epoch: 9/10. Train set: Average loss: 0.1556
Epoch: 9/10. Validation set: Average loss: 0.1457
Train: [0/2931 (0%)] Loss: 0.208320
Train: [1098/2931 (100%)] Loss: 0.147081
Epoch: 10/10. Train set: Average loss: 0.1472
Epoch: 10/10. Validation set: Average loss: 0.1480
Number features: 6
model_0
Train: [0/2931 (0%)] Loss: 0.311413
Train: [1098/2931 (100%)] Loss: 0.176498
Epoch: 1/10. Train set: Average loss: 0.1769
Epoch: 1/10. Validation set: Average loss: 0.1208
Train: [0/2931 (0%)] Loss: 0.237609
Train: [1098/2931 (100%)] Loss: 0.164335
Epoch: 2/10. Train set: Average loss: 0.1645
Epoch: 2/10. Validation set: Average loss: 0.1155
Train: [0/2931 (0%)] Loss: 0.168682
Train: [1098/2931 (100%)] Loss: 0.167052
Epoch: 3/10. Train set: Average loss: 0.1671
Epoch: 3/10. Validation set: Average loss: 0.1178
Train: [0/2931 (0%)] Loss: 0.113196
Train: [1098/2931 (100%)] Loss: 0.158919
Epoch: 4/10. Train set: Average loss: 0.1588
Epoch: 4/10. Validation set: Average loss: 0.1152
Train: [0/2931 (0%)] Loss: 0.107863
Train: [1098/2931 (100%)] Loss: 0.162475
Epoch: 5/10. Train set: Average loss: 0.1623
Epoch: 5/10. Validation set: Average loss: 0.1271
Train: [0/2931 (0%)] Loss: 0.144208
Train: [1098/2931 (100%)] Loss: 0.166705
Epoch: 6/10. Train set: Average loss: 0.1666
Epoch: 6/10. Validation set: Average loss: 0.1061
Train: [0/2931 (0%)] Loss: 0.163886
Train: [1098/2931 (100%)] Loss: 0.163629
Epoch: 7/10. Train set: Average loss: 0.1636
Epoch: 7/10. Validation set: Average loss: 0.1128
Train: [0/2931 (0%)] Loss: 0.424440
Train: [1098/2931 (100%)] Loss: 0.163175
Epoch: 8/10. Train set: Average loss: 0.1639
Epoch: 8/10. Validation set: Average loss: 0.1308
Train: [0/2931 (0%)] Loss: 0.137951
Train: [1098/2931 (100%)] Loss: 0.158275
Epoch: 9/10. Train set: Average loss: 0.1582
Epoch: 9/10. Validation set: Average loss: 0.1139
Train: [0/2931 (0%)] Loss: 0.168296
Train: [1098/2931 (100%)] Loss: 0.154313
Epoch: 10/10. Train set: Average loss: 0.1544
Epoch: 10/10. Validation set: Average loss: 0.1151
model_1
Train: [0/2931 (0%)] Loss: 0.249438
Train: [1098/2931 (100%)] Loss: 0.179247
Epoch: 1/10. Train set: Average loss: 0.1794
Epoch: 1/10. Validation set: Average loss: 0.1434
Train: [0/2931 (0%)] Loss: 0.148359
Train: [1098/2931 (100%)] Loss: 0.172173
Epoch: 2/10. Train set: Average loss: 0.1721
Epoch: 2/10. Validation set: Average loss: 0.1401
Train: [0/2931 (0%)] Loss: 0.151753
Train: [1098/2931 (100%)] Loss: 0.164262
Epoch: 3/10. Train set: Average loss: 0.1642
Epoch: 3/10. Validation set: Average loss: 0.1431
Train: [0/2931 (0%)] Loss: 0.237706
Train: [1098/2931 (100%)] Loss: 0.166988
Epoch: 4/10. Train set: Average loss: 0.1672
Epoch: 4/10. Validation set: Average loss: 0.1367
Train: [0/2931 (0%)] Loss: 0.229354
Train: [1098/2931 (100%)] Loss: 0.161887
Epoch: 5/10. Train set: Average loss: 0.1621
Epoch: 5/10. Validation set: Average loss: 0.1220
Train: [0/2931 (0%)] Loss: 0.107949
Train: [1098/2931 (100%)] Loss: 0.163049
Epoch: 6/10. Train set: Average loss: 0.1629
Epoch: 6/10. Validation set: Average loss: 0.1523
Train: [0/2931 (0%)] Loss: 0.189181
Train: [1098/2931 (100%)] Loss: 0.168945
Epoch: 7/10. Train set: Average loss: 0.1690
Epoch: 7/10. Validation set: Average loss: 0.1516
Train: [0/2931 (0%)] Loss: 0.107110
Train: [1098/2931 (100%)] Loss: 0.162230
Epoch: 8/10. Train set: Average loss: 0.1621
Epoch: 8/10. Validation set: Average loss: 0.1280
Train: [0/2931 (0%)] Loss: 0.142682
Train: [1098/2931 (100%)] Loss: 0.153418
Epoch: 9/10. Train set: Average loss: 0.1534
Epoch: 9/10. Validation set: Average loss: 0.1294
Train: [0/2931 (0%)] Loss: 0.169053
Train: [1098/2931 (100%)] Loss: 0.152245
Epoch: 10/10. Train set: Average loss: 0.1523
Epoch: 10/10. Validation set: Average loss: 0.1256
model_2
Train: [0/2931 (0%)] Loss: 0.187170
Train: [1098/2931 (100%)] Loss: 0.169790
Epoch: 1/10. Train set: Average loss: 0.1698
Epoch: 1/10. Validation set: Average loss: 0.1460
Train: [0/2931 (0%)] Loss: 0.136048
Train: [1098/2931 (100%)] Loss: 0.169321
Epoch: 2/10. Train set: Average loss: 0.1692
Epoch: 2/10. Validation set: Average loss: 0.1324
Train: [0/2931 (0%)] Loss: 0.157580
Train: [1098/2931 (100%)] Loss: 0.177934
Epoch: 3/10. Train set: Average loss: 0.1779
Epoch: 3/10. Validation set: Average loss: 0.1337
Train: [0/2931 (0%)] Loss: 0.190568
Train: [1098/2931 (100%)] Loss: 0.164036
Epoch: 4/10. Train set: Average loss: 0.1641
Epoch: 4/10. Validation set: Average loss: 0.1650
Train: [0/2931 (0%)] Loss: 0.127062
Train: [1098/2931 (100%)] Loss: 0.171318
Epoch: 5/10. Train set: Average loss: 0.1712
Epoch: 5/10. Validation set: Average loss: 0.1426
Train: [0/2931 (0%)] Loss: 0.199422
Train: [1098/2931 (100%)] Loss: 0.159607
Epoch: 6/10. Train set: Average loss: 0.1597
Epoch: 6/10. Validation set: Average loss: 0.1378
Train: [0/2931 (0%)] Loss: 0.096196
Train: [1098/2931 (100%)] Loss: 0.162173
Epoch: 7/10. Train set: Average loss: 0.1620
Epoch: 7/10. Validation set: Average loss: 0.1514
Train: [0/2931 (0%)] Loss: 0.086531
Train: [1098/2931 (100%)] Loss: 0.173176
Epoch: 8/10. Train set: Average loss: 0.1729
Epoch: 8/10. Validation set: Average loss: 0.1661
Train: [0/2931 (0%)] Loss: 0.170200
Train: [1098/2931 (100%)] Loss: 0.174863
Epoch: 9/10. Train set: Average loss: 0.1749
Epoch: 9/10. Validation set: Average loss: 0.1570
Train: [0/2931 (0%)] Loss: 0.215146
Train: [1098/2931 (100%)] Loss: 0.173852
Epoch: 10/10. Train set: Average loss: 0.1740
Epoch: 10/10. Validation set: Average loss: 0.1516
Number features: 7
model_0
Train: [0/2931 (0%)] Loss: 0.249817
Train: [1098/2931 (100%)] Loss: 0.174665
Epoch: 1/10. Train set: Average loss: 0.1749
Epoch: 1/10. Validation set: Average loss: 0.1878
Train: [0/2931 (0%)] Loss: 0.224087
Train: [1098/2931 (100%)] Loss: 0.167836
Epoch: 2/10. Train set: Average loss: 0.1680
Epoch: 2/10. Validation set: Average loss: 0.1680
Train: [0/2931 (0%)] Loss: 0.162937
Train: [1098/2931 (100%)] Loss: 0.166308
Epoch: 3/10. Train set: Average loss: 0.1663
Epoch: 3/10. Validation set: Average loss: 0.1710
Train: [0/2931 (0%)] Loss: 0.204904
Train: [1098/2931 (100%)] Loss: 0.167750
Epoch: 4/10. Train set: Average loss: 0.1679
Epoch: 4/10. Validation set: Average loss: 0.1700
Train: [0/2931 (0%)] Loss: 0.169095
Train: [1098/2931 (100%)] Loss: 0.162316
Epoch: 5/10. Train set: Average loss: 0.1623
Epoch: 5/10. Validation set: Average loss: 0.1641
Train: [0/2931 (0%)] Loss: 0.147804
Train: [1098/2931 (100%)] Loss: 0.158624
Epoch: 6/10. Train set: Average loss: 0.1586
Epoch: 6/10. Validation set: Average loss: 0.1564
Train: [0/2931 (0%)] Loss: 0.145413
Train: [1098/2931 (100%)] Loss: 0.157419
Epoch: 7/10. Train set: Average loss: 0.1574
Epoch: 7/10. Validation set: Average loss: 0.1591
Train: [0/2931 (0%)] Loss: 0.203829
Train: [1098/2931 (100%)] Loss: 0.158378
Epoch: 8/10. Train set: Average loss: 0.1585
Epoch: 8/10. Validation set: Average loss: 0.1781
Train: [0/2931 (0%)] Loss: 0.169693
Train: [1098/2931 (100%)] Loss: 0.151708
Epoch: 9/10. Train set: Average loss: 0.1518
Epoch: 9/10. Validation set: Average loss: 0.1515
Train: [0/2931 (0%)] Loss: 0.210166
Train: [1098/2931 (100%)] Loss: 0.147197
Epoch: 10/10. Train set: Average loss: 0.1474
Epoch: 10/10. Validation set: Average loss: 0.1502
model_1
Train: [0/2931 (0%)] Loss: 0.187184
Train: [1098/2931 (100%)] Loss: 0.166506
Epoch: 1/10. Train set: Average loss: 0.1666
Epoch: 1/10. Validation set: Average loss: 0.1570
Train: [0/2931 (0%)] Loss: 0.223115
Train: [1098/2931 (100%)] Loss: 0.167209
Epoch: 2/10. Train set: Average loss: 0.1674
Epoch: 2/10. Validation set: Average loss: 0.1462
Train: [0/2931 (0%)] Loss: 0.125175
Train: [1098/2931 (100%)] Loss: 0.161000
Epoch: 3/10. Train set: Average loss: 0.1609
Epoch: 3/10. Validation set: Average loss: 0.1651
Train: [0/2931 (0%)] Loss: 0.161160
Train: [1098/2931 (100%)] Loss: 0.160025
Epoch: 4/10. Train set: Average loss: 0.1600
Epoch: 4/10. Validation set: Average loss: 0.1741
Train: [0/2931 (0%)] Loss: 0.109305
Train: [1098/2931 (100%)] Loss: 0.171091
Epoch: 5/10. Train set: Average loss: 0.1709
Epoch: 5/10. Validation set: Average loss: 0.1570
Train: [0/2931 (0%)] Loss: 0.147703
Train: [1098/2931 (100%)] Loss: 0.165525
Epoch: 6/10. Train set: Average loss: 0.1655
Epoch: 6/10. Validation set: Average loss: 0.1657
Train: [0/2931 (0%)] Loss: 0.120449
Train: [1098/2931 (100%)] Loss: 0.158056
Epoch: 7/10. Train set: Average loss: 0.1580
Epoch: 7/10. Validation set: Average loss: 0.1788
Train: [0/2931 (0%)] Loss: 0.157770
Train: [1098/2931 (100%)] Loss: 0.159510
Epoch: 8/10. Train set: Average loss: 0.1595
Epoch: 8/10. Validation set: Average loss: 0.1607
Train: [0/2931 (0%)] Loss: 0.150150
Train: [1098/2931 (100%)] Loss: 0.152876
Epoch: 9/10. Train set: Average loss: 0.1529
Epoch: 9/10. Validation set: Average loss: 0.1482
Train: [0/2931 (0%)] Loss: 0.208431
Train: [1098/2931 (100%)] Loss: 0.150062
Epoch: 10/10. Train set: Average loss: 0.1502
Epoch: 10/10. Validation set: Average loss: 0.1500
model_2
Train: [0/2931 (0%)] Loss: 0.124877
Train: [1098/2931 (100%)] Loss: 0.165653
Epoch: 1/10. Train set: Average loss: 0.1655
Epoch: 1/10. Validation set: Average loss: 0.1544
Train: [0/2931 (0%)] Loss: 0.166773
Train: [1098/2931 (100%)] Loss: 0.165337
Epoch: 2/10. Train set: Average loss: 0.1653
Epoch: 2/10. Validation set: Average loss: 0.1588
Train: [0/2931 (0%)] Loss: 0.350084
Train: [1098/2931 (100%)] Loss: 0.166624
Epoch: 3/10. Train set: Average loss: 0.1671
Epoch: 3/10. Validation set: Average loss: 0.1588
Train: [0/2931 (0%)] Loss: 0.131807
Train: [1098/2931 (100%)] Loss: 0.163511
Epoch: 4/10. Train set: Average loss: 0.1634
Epoch: 4/10. Validation set: Average loss: 0.1447
Train: [0/2931 (0%)] Loss: 0.073869
Train: [1098/2931 (100%)] Loss: 0.159991
Epoch: 5/10. Train set: Average loss: 0.1598
Epoch: 5/10. Validation set: Average loss: 0.1499
Train: [0/2931 (0%)] Loss: 0.093219
Train: [1098/2931 (100%)] Loss: 0.159269
Epoch: 6/10. Train set: Average loss: 0.1591
Epoch: 6/10. Validation set: Average loss: 0.1516
Train: [0/2931 (0%)] Loss: 0.143600
Train: [1098/2931 (100%)] Loss: 0.159832
Epoch: 7/10. Train set: Average loss: 0.1598
Epoch: 7/10. Validation set: Average loss: 0.1489
Train: [0/2931 (0%)] Loss: 0.112954
Train: [1098/2931 (100%)] Loss: 0.159864
Epoch: 8/10. Train set: Average loss: 0.1597
Epoch: 8/10. Validation set: Average loss: 0.1769
Train: [0/2931 (0%)] Loss: 0.223670
Train: [1098/2931 (100%)] Loss: 0.162095
Epoch: 9/10. Train set: Average loss: 0.1623
Epoch: 9/10. Validation set: Average loss: 0.1536
Train: [0/2931 (0%)] Loss: 0.126086
Train: [1098/2931 (100%)] Loss: 0.152392
Epoch: 10/10. Train set: Average loss: 0.1523
Epoch: 10/10. Validation set: Average loss: 0.1531
Number features: 8
model_0
Train: [0/2931 (0%)] Loss: 0.374174
Train: [1098/2931 (100%)] Loss: 0.172439
Epoch: 1/10. Train set: Average loss: 0.1730
Epoch: 1/10. Validation set: Average loss: 0.1569
Train: [0/2931 (0%)] Loss: 0.252058
Train: [1098/2931 (100%)] Loss: 0.157756
Epoch: 2/10. Train set: Average loss: 0.1580
Epoch: 2/10. Validation set: Average loss: 0.1528
Train: [0/2931 (0%)] Loss: 0.223080
Train: [1098/2931 (100%)] Loss: 0.158187
Epoch: 3/10. Train set: Average loss: 0.1584
Epoch: 3/10. Validation set: Average loss: 0.1535
Train: [0/2931 (0%)] Loss: 0.230918
Train: [1098/2931 (100%)] Loss: 0.154024
Epoch: 4/10. Train set: Average loss: 0.1542
Epoch: 4/10. Validation set: Average loss: 0.1599
Train: [0/2931 (0%)] Loss: 0.151587
Train: [1098/2931 (100%)] Loss: 0.147392
Epoch: 5/10. Train set: Average loss: 0.1474
Epoch: 5/10. Validation set: Average loss: 0.1442
Train: [0/2931 (0%)] Loss: 0.255017
Train: [1098/2931 (100%)] Loss: 0.153787
Epoch: 6/10. Train set: Average loss: 0.1541
Epoch: 6/10. Validation set: Average loss: 0.1476
Train: [0/2931 (0%)] Loss: 0.191706
Train: [1098/2931 (100%)] Loss: 0.150746
Epoch: 7/10. Train set: Average loss: 0.1509
Epoch: 7/10. Validation set: Average loss: 0.1563
Train: [0/2931 (0%)] Loss: 0.223330
Train: [1098/2931 (100%)] Loss: 0.150246
Epoch: 8/10. Train set: Average loss: 0.1504
Epoch: 8/10. Validation set: Average loss: 0.1472
Train: [0/2931 (0%)] Loss: 0.210071
Train: [1098/2931 (100%)] Loss: 0.149352
Epoch: 9/10. Train set: Average loss: 0.1495
Epoch: 9/10. Validation set: Average loss: 0.1338
Train: [0/2931 (0%)] Loss: 0.191847
Train: [1098/2931 (100%)] Loss: 0.145099
Epoch: 10/10. Train set: Average loss: 0.1452
Epoch: 10/10. Validation set: Average loss: 0.1339
model_1
Train: [0/2931 (0%)] Loss: 0.373834
Train: [1098/2931 (100%)] Loss: 0.171769
Epoch: 1/10. Train set: Average loss: 0.1723
Epoch: 1/10. Validation set: Average loss: 0.1797
Train: [0/2931 (0%)] Loss: 0.250009
Train: [1098/2931 (100%)] Loss: 0.173330
Epoch: 2/10. Train set: Average loss: 0.1735
Epoch: 2/10. Validation set: Average loss: 0.1618
Train: [0/2931 (0%)] Loss: 0.226483
Train: [1098/2931 (100%)] Loss: 0.170841
Epoch: 3/10. Train set: Average loss: 0.1710
Epoch: 3/10. Validation set: Average loss: 0.1629
Train: [0/2931 (0%)] Loss: 0.193298
Train: [1098/2931 (100%)] Loss: 0.160712
Epoch: 4/10. Train set: Average loss: 0.1608
Epoch: 4/10. Validation set: Average loss: 0.1656
Train: [0/2931 (0%)] Loss: 0.221440
Train: [1098/2931 (100%)] Loss: 0.158113
Epoch: 5/10. Train set: Average loss: 0.1583
Epoch: 5/10. Validation set: Average loss: 0.1744
Train: [0/2931 (0%)] Loss: 0.245799
Train: [1098/2931 (100%)] Loss: 0.156356
Epoch: 6/10. Train set: Average loss: 0.1566
Epoch: 6/10. Validation set: Average loss: 0.1667
Train: [0/2931 (0%)] Loss: 0.208689
Train: [1098/2931 (100%)] Loss: 0.154242
Epoch: 7/10. Train set: Average loss: 0.1544
Epoch: 7/10. Validation set: Average loss: 0.1615
Train: [0/2931 (0%)] Loss: 0.248228
Train: [1098/2931 (100%)] Loss: 0.154723
Epoch: 8/10. Train set: Average loss: 0.1550
Epoch: 8/10. Validation set: Average loss: 0.1619
Train: [0/2931 (0%)] Loss: 0.184548
Train: [1098/2931 (100%)] Loss: 0.152260
Epoch: 9/10. Train set: Average loss: 0.1523
Epoch: 9/10. Validation set: Average loss: 0.1469
Train: [0/2931 (0%)] Loss: 0.166806
Train: [1098/2931 (100%)] Loss: 0.143824
Epoch: 10/10. Train set: Average loss: 0.1439
Epoch: 10/10. Validation set: Average loss: 0.1440
model_2
Train: [0/2931 (0%)] Loss: 0.374010
Train: [1098/2931 (100%)] Loss: 0.174528
Epoch: 1/10. Train set: Average loss: 0.1751
Epoch: 1/10. Validation set: Average loss: 0.1664
Train: [0/2931 (0%)] Loss: 0.254224
Train: [1098/2931 (100%)] Loss: 0.155605
Epoch: 2/10. Train set: Average loss: 0.1559
Epoch: 2/10. Validation set: Average loss: 0.1558
Train: [0/2931 (0%)] Loss: 0.159115
Train: [1098/2931 (100%)] Loss: 0.160584
Epoch: 3/10. Train set: Average loss: 0.1606
Epoch: 3/10. Validation set: Average loss: 0.1581
Train: [0/2931 (0%)] Loss: 0.234069
Train: [1098/2931 (100%)] Loss: 0.152910
Epoch: 4/10. Train set: Average loss: 0.1531
Epoch: 4/10. Validation set: Average loss: 0.1574
Train: [0/2931 (0%)] Loss: 0.223141
Train: [1098/2931 (100%)] Loss: 0.153152
Epoch: 5/10. Train set: Average loss: 0.1533
Epoch: 5/10. Validation set: Average loss: 0.1451
Train: [0/2931 (0%)] Loss: 0.194084
Train: [1098/2931 (100%)] Loss: 0.158779
Epoch: 6/10. Train set: Average loss: 0.1589
Epoch: 6/10. Validation set: Average loss: 0.1586
Train: [0/2931 (0%)] Loss: 0.341197
Train: [1098/2931 (100%)] Loss: 0.160693
Epoch: 7/10. Train set: Average loss: 0.1612
Epoch: 7/10. Validation set: Average loss: 0.1684
Train: [0/2931 (0%)] Loss: 0.239985
Train: [1098/2931 (100%)] Loss: 0.151661
Epoch: 8/10. Train set: Average loss: 0.1519
Epoch: 8/10. Validation set: Average loss: 0.1611
Train: [0/2931 (0%)] Loss: 0.221484
Train: [1098/2931 (100%)] Loss: 0.151790
Epoch: 9/10. Train set: Average loss: 0.1520
Epoch: 9/10. Validation set: Average loss: 0.1420
Train: [0/2931 (0%)] Loss: 0.172030
Train: [1098/2931 (100%)] Loss: 0.143799
Epoch: 10/10. Train set: Average loss: 0.1439
Epoch: 10/10. Validation set: Average loss: 0.1424
Number features: 9
model_0
Train: [0/2931 (0%)] Loss: 0.311739
Train: [1098/2931 (100%)] Loss: 0.161763
Epoch: 1/10. Train set: Average loss: 0.1622
Epoch: 1/10. Validation set: Average loss: 0.1704
Train: [0/2931 (0%)] Loss: 0.228470
Train: [1098/2931 (100%)] Loss: 0.159768
Epoch: 2/10. Train set: Average loss: 0.1600
Epoch: 2/10. Validation set: Average loss: 0.1491
Train: [0/2931 (0%)] Loss: 0.157460
Train: [1098/2931 (100%)] Loss: 0.150000
Epoch: 3/10. Train set: Average loss: 0.1500
Epoch: 3/10. Validation set: Average loss: 0.1526
Train: [0/2931 (0%)] Loss: 0.160575
Train: [1098/2931 (100%)] Loss: 0.161516
Epoch: 4/10. Train set: Average loss: 0.1615
Epoch: 4/10. Validation set: Average loss: 0.1723
Train: [0/2931 (0%)] Loss: 0.223825
Train: [1098/2931 (100%)] Loss: 0.158067
Epoch: 5/10. Train set: Average loss: 0.1582
Epoch: 5/10. Validation set: Average loss: 0.1546
Train: [0/2931 (0%)] Loss: 0.155100
Train: [1098/2931 (100%)] Loss: 0.162263
Epoch: 6/10. Train set: Average loss: 0.1622
Epoch: 6/10. Validation set: Average loss: 0.1438
Train: [0/2931 (0%)] Loss: 0.127193
Train: [1098/2931 (100%)] Loss: 0.155447
Epoch: 7/10. Train set: Average loss: 0.1554
Epoch: 7/10. Validation set: Average loss: 0.1496
Train: [0/2931 (0%)] Loss: 0.101585
Train: [1098/2931 (100%)] Loss: 0.157011
Epoch: 8/10. Train set: Average loss: 0.1569
Epoch: 8/10. Validation set: Average loss: 0.1472
Train: [0/2931 (0%)] Loss: 0.177108
Train: [1098/2931 (100%)] Loss: 0.154770
Epoch: 9/10. Train set: Average loss: 0.1548
Epoch: 9/10. Validation set: Average loss: 0.1338
Train: [0/2931 (0%)] Loss: 0.131928
Train: [1098/2931 (100%)] Loss: 0.146285
Epoch: 10/10. Train set: Average loss: 0.1462
Epoch: 10/10. Validation set: Average loss: 0.1322
model_1
Train: [0/2931 (0%)] Loss: 0.186602
Train: [1098/2931 (100%)] Loss: 0.167459
Epoch: 1/10. Train set: Average loss: 0.1675
Epoch: 1/10. Validation set: Average loss: 0.2084
Train: [0/2931 (0%)] Loss: 0.163635
Train: [1098/2931 (100%)] Loss: 0.158512
Epoch: 2/10. Train set: Average loss: 0.1585
Epoch: 2/10. Validation set: Average loss: 0.2015
Train: [0/2931 (0%)] Loss: 0.095429
Train: [1098/2931 (100%)] Loss: 0.167992
Epoch: 3/10. Train set: Average loss: 0.1678
Epoch: 3/10. Validation set: Average loss: 0.1761
Train: [0/2931 (0%)] Loss: 0.093552
Train: [1098/2931 (100%)] Loss: 0.156883
Epoch: 4/10. Train set: Average loss: 0.1567
Epoch: 4/10. Validation set: Average loss: 0.1610
Train: [0/2931 (0%)] Loss: 0.202603
Train: [1098/2931 (100%)] Loss: 0.157509
Epoch: 5/10. Train set: Average loss: 0.1576
Epoch: 5/10. Validation set: Average loss: 0.1657
Train: [0/2931 (0%)] Loss: 0.165595
Train: [1098/2931 (100%)] Loss: 0.166301
Epoch: 6/10. Train set: Average loss: 0.1663
Epoch: 6/10. Validation set: Average loss: 0.1507
Train: [0/2931 (0%)] Loss: 0.144109
Train: [1098/2931 (100%)] Loss: 0.156015
Epoch: 7/10. Train set: Average loss: 0.1560
Epoch: 7/10. Validation set: Average loss: 0.1644
Train: [0/2931 (0%)] Loss: 0.137737
Train: [1098/2931 (100%)] Loss: 0.161587
Epoch: 8/10. Train set: Average loss: 0.1615
Epoch: 8/10. Validation set: Average loss: 0.1467
Train: [0/2931 (0%)] Loss: 0.141070
Train: [1098/2931 (100%)] Loss: 0.150438
Epoch: 9/10. Train set: Average loss: 0.1504
Epoch: 9/10. Validation set: Average loss: 0.1362
Train: [0/2931 (0%)] Loss: 0.211718
Train: [1098/2931 (100%)] Loss: 0.144329
Epoch: 10/10. Train set: Average loss: 0.1445
Epoch: 10/10. Validation set: Average loss: 0.1352
model_2
Train: [0/2931 (0%)] Loss: 0.249329
Train: [1098/2931 (100%)] Loss: 0.171320
Epoch: 1/10. Train set: Average loss: 0.1715
Epoch: 1/10. Validation set: Average loss: 0.1609
Train: [0/2931 (0%)] Loss: 0.123918
Train: [1098/2931 (100%)] Loss: 0.160403
Epoch: 2/10. Train set: Average loss: 0.1603
Epoch: 2/10. Validation set: Average loss: 0.1501
Train: [0/2931 (0%)] Loss: 0.088278
Train: [1098/2931 (100%)] Loss: 0.164773
Epoch: 3/10. Train set: Average loss: 0.1646
Epoch: 3/10. Validation set: Average loss: 0.1822
Train: [0/2931 (0%)] Loss: 0.204437
Train: [1098/2931 (100%)] Loss: 0.161415
Epoch: 4/10. Train set: Average loss: 0.1615
Epoch: 4/10. Validation set: Average loss: 0.1496
Train: [0/2931 (0%)] Loss: 0.218900
Train: [1098/2931 (100%)] Loss: 0.159666
Epoch: 5/10. Train set: Average loss: 0.1598
Epoch: 5/10. Validation set: Average loss: 0.1481
Train: [0/2931 (0%)] Loss: 0.152889
Train: [1098/2931 (100%)] Loss: 0.155767
Epoch: 6/10. Train set: Average loss: 0.1558
Epoch: 6/10. Validation set: Average loss: 0.1396
Train: [0/2931 (0%)] Loss: 0.093512
Train: [1098/2931 (100%)] Loss: 0.158164
Epoch: 7/10. Train set: Average loss: 0.1580
Epoch: 7/10. Validation set: Average loss: 0.1479
Train: [0/2931 (0%)] Loss: 0.197385
Train: [1098/2931 (100%)] Loss: 0.150361
Epoch: 8/10. Train set: Average loss: 0.1505
Epoch: 8/10. Validation set: Average loss: 0.1525
Train: [0/2931 (0%)] Loss: 0.102113
Train: [1098/2931 (100%)] Loss: 0.146019
Epoch: 9/10. Train set: Average loss: 0.1459
Epoch: 9/10. Validation set: Average loss: 0.1334
Train: [0/2931 (0%)] Loss: 0.099769
Train: [1098/2931 (100%)] Loss: 0.148620
Epoch: 10/10. Train set: Average loss: 0.1485
Epoch: 10/10. Validation set: Average loss: 0.1337
Number features: 10
model_0
Train: [0/2931 (0%)] Loss: 0.249412
Train: [1098/2931 (100%)] Loss: 0.187082
Epoch: 1/10. Train set: Average loss: 0.1873
Epoch: 1/10. Validation set: Average loss: 0.1655
Train: [0/2931 (0%)] Loss: 0.250895
Train: [1098/2931 (100%)] Loss: 0.182110
Epoch: 2/10. Train set: Average loss: 0.1823
Epoch: 2/10. Validation set: Average loss: 0.1622
Train: [0/2931 (0%)] Loss: 0.123092
Train: [1098/2931 (100%)] Loss: 0.161674
Epoch: 3/10. Train set: Average loss: 0.1616
Epoch: 3/10. Validation set: Average loss: 0.1458
Train: [0/2931 (0%)] Loss: 0.196010
Train: [1098/2931 (100%)] Loss: 0.176337
Epoch: 4/10. Train set: Average loss: 0.1764
Epoch: 4/10. Validation set: Average loss: 0.1779
Train: [0/2931 (0%)] Loss: 0.199691
Train: [1098/2931 (100%)] Loss: 0.169670
Epoch: 5/10. Train set: Average loss: 0.1698
Epoch: 5/10. Validation set: Average loss: 0.1404
Train: [0/2931 (0%)] Loss: 0.134894
Train: [1098/2931 (100%)] Loss: 0.163425
Epoch: 6/10. Train set: Average loss: 0.1633
Epoch: 6/10. Validation set: Average loss: 0.1333
Train: [0/2931 (0%)] Loss: 0.102510
Train: [1098/2931 (100%)] Loss: 0.157410
Epoch: 7/10. Train set: Average loss: 0.1573
Epoch: 7/10. Validation set: Average loss: 0.1401
Train: [0/2931 (0%)] Loss: 0.162592
Train: [1098/2931 (100%)] Loss: 0.158957
Epoch: 8/10. Train set: Average loss: 0.1590
Epoch: 8/10. Validation set: Average loss: 0.1330
Train: [0/2931 (0%)] Loss: 0.160712
Train: [1098/2931 (100%)] Loss: 0.154028
Epoch: 9/10. Train set: Average loss: 0.1540
Epoch: 9/10. Validation set: Average loss: 0.1264
Train: [0/2931 (0%)] Loss: 0.160319
Train: [1098/2931 (100%)] Loss: 0.152868
Epoch: 10/10. Train set: Average loss: 0.1529
Epoch: 10/10. Validation set: Average loss: 0.1271
model_1
Train: [0/2931 (0%)] Loss: 0.373996
Train: [1098/2931 (100%)] Loss: 0.172552
Epoch: 1/10. Train set: Average loss: 0.1731
Epoch: 1/10. Validation set: Average loss: 0.2431
Train: [0/2931 (0%)] Loss: 0.156250
Train: [1098/2931 (100%)] Loss: 0.174181
Epoch: 2/10. Train set: Average loss: 0.1741
Epoch: 2/10. Validation set: Average loss: 0.2098
Train: [0/2931 (0%)] Loss: 0.249445
Train: [1098/2931 (100%)] Loss: 0.173141
Epoch: 3/10. Train set: Average loss: 0.1733
Epoch: 3/10. Validation set: Average loss: 0.1726
Train: [0/2931 (0%)] Loss: 0.163062
Train: [1098/2931 (100%)] Loss: 0.167174
Epoch: 4/10. Train set: Average loss: 0.1672
Epoch: 4/10. Validation set: Average loss: 0.1908
Train: [0/2931 (0%)] Loss: 0.256956
Train: [1098/2931 (100%)] Loss: 0.161539
Epoch: 5/10. Train set: Average loss: 0.1618
Epoch: 5/10. Validation set: Average loss: 0.1563
Train: [0/2931 (0%)] Loss: 0.118587
Train: [1098/2931 (100%)] Loss: 0.159734
Epoch: 6/10. Train set: Average loss: 0.1596
Epoch: 6/10. Validation set: Average loss: 0.1572
Train: [0/2931 (0%)] Loss: 0.217206
Train: [1098/2931 (100%)] Loss: 0.161703
Epoch: 7/10. Train set: Average loss: 0.1619
Epoch: 7/10. Validation set: Average loss: 0.1568
Train: [0/2931 (0%)] Loss: 0.194384
Train: [1098/2931 (100%)] Loss: 0.160779
Epoch: 8/10. Train set: Average loss: 0.1609
Epoch: 8/10. Validation set: Average loss: 0.1596
Train: [0/2931 (0%)] Loss: 0.203464
Train: [1098/2931 (100%)] Loss: 0.157606
Epoch: 9/10. Train set: Average loss: 0.1577
Epoch: 9/10. Validation set: Average loss: 0.1357
Train: [0/2931 (0%)] Loss: 0.185702
Train: [1098/2931 (100%)] Loss: 0.153994
Epoch: 10/10. Train set: Average loss: 0.1541
Epoch: 10/10. Validation set: Average loss: 0.1346
model_2
Train: [0/2931 (0%)] Loss: 0.372829
Train: [1098/2931 (100%)] Loss: 0.177363
Epoch: 1/10. Train set: Average loss: 0.1779
Epoch: 1/10. Validation set: Average loss: 0.1640
Train: [0/2931 (0%)] Loss: 0.264099
Train: [1098/2931 (100%)] Loss: 0.171105
Epoch: 2/10. Train set: Average loss: 0.1714
Epoch: 2/10. Validation set: Average loss: 0.1556
Train: [0/2931 (0%)] Loss: 0.211866
Train: [1098/2931 (100%)] Loss: 0.183897
Epoch: 3/10. Train set: Average loss: 0.1840
Epoch: 3/10. Validation set: Average loss: 0.1890
Train: [0/2931 (0%)] Loss: 0.216176
Train: [1098/2931 (100%)] Loss: 0.166163
Epoch: 4/10. Train set: Average loss: 0.1663
Epoch: 4/10. Validation set: Average loss: 0.1632
Train: [0/2931 (0%)] Loss: 0.200188
Train: [1098/2931 (100%)] Loss: 0.163710
Epoch: 5/10. Train set: Average loss: 0.1638
Epoch: 5/10. Validation set: Average loss: 0.1689
Train: [0/2931 (0%)] Loss: 0.112324
Train: [1098/2931 (100%)] Loss: 0.164526
Epoch: 6/10. Train set: Average loss: 0.1644
Epoch: 6/10. Validation set: Average loss: 0.1919
Train: [0/2931 (0%)] Loss: 0.184597
Train: [1098/2931 (100%)] Loss: 0.173952
Epoch: 7/10. Train set: Average loss: 0.1740
Epoch: 7/10. Validation set: Average loss: 0.1836
Train: [0/2931 (0%)] Loss: 0.176221
Train: [1098/2931 (100%)] Loss: 0.167745
Epoch: 8/10. Train set: Average loss: 0.1678
Epoch: 8/10. Validation set: Average loss: 0.1569
Train: [0/2931 (0%)] Loss: 0.073566
Train: [1098/2931 (100%)] Loss: 0.155699
Epoch: 9/10. Train set: Average loss: 0.1555
Epoch: 9/10. Validation set: Average loss: 0.1548
Train: [0/2931 (0%)] Loss: 0.091202
Train: [1098/2931 (100%)] Loss: 0.149078
Epoch: 10/10. Train set: Average loss: 0.1489
Epoch: 10/10. Validation set: Average loss: 0.1524
Number features: 11
model_0
Train: [0/2931 (0%)] Loss: 0.312177
Train: [1098/2931 (100%)] Loss: 0.168041
Epoch: 1/10. Train set: Average loss: 0.1684
Epoch: 1/10. Validation set: Average loss: 0.1332
Train: [0/2931 (0%)] Loss: 0.132364
Train: [1098/2931 (100%)] Loss: 0.165831
Epoch: 2/10. Train set: Average loss: 0.1657
Epoch: 2/10. Validation set: Average loss: 0.1304
Train: [0/2931 (0%)] Loss: 0.112254
Train: [1098/2931 (100%)] Loss: 0.158665
Epoch: 3/10. Train set: Average loss: 0.1585
Epoch: 3/10. Validation set: Average loss: 0.1404
Train: [0/2931 (0%)] Loss: 0.180067
Train: [1098/2931 (100%)] Loss: 0.160611
Epoch: 4/10. Train set: Average loss: 0.1607
Epoch: 4/10. Validation set: Average loss: 0.1294
Train: [0/2931 (0%)] Loss: 0.163732
Train: [1098/2931 (100%)] Loss: 0.157914
Epoch: 5/10. Train set: Average loss: 0.1579
Epoch: 5/10. Validation set: Average loss: 0.1336
Train: [0/2931 (0%)] Loss: 0.189788
Train: [1098/2931 (100%)] Loss: 0.153006
Epoch: 6/10. Train set: Average loss: 0.1531
Epoch: 6/10. Validation set: Average loss: 0.1484
Train: [0/2931 (0%)] Loss: 0.202032
Train: [1098/2931 (100%)] Loss: 0.158801
Epoch: 7/10. Train set: Average loss: 0.1589
Epoch: 7/10. Validation set: Average loss: 0.1383
Train: [0/2931 (0%)] Loss: 0.166501
Train: [1098/2931 (100%)] Loss: 0.149896
Epoch: 8/10. Train set: Average loss: 0.1499
Epoch: 8/10. Validation set: Average loss: 0.1249
Train: [0/2931 (0%)] Loss: 0.213199
Train: [1098/2931 (100%)] Loss: 0.148180
Epoch: 9/10. Train set: Average loss: 0.1484
Epoch: 9/10. Validation set: Average loss: 0.1285
Train: [0/2931 (0%)] Loss: 0.114280
Train: [1098/2931 (100%)] Loss: 0.147572
Epoch: 10/10. Train set: Average loss: 0.1475
Epoch: 10/10. Validation set: Average loss: 0.1304
model_1
Train: [0/2931 (0%)] Loss: 0.312005
Train: [1098/2931 (100%)] Loss: 0.175944
Epoch: 1/10. Train set: Average loss: 0.1763
Epoch: 1/10. Validation set: Average loss: 0.2002
Train: [0/2931 (0%)] Loss: 0.312919
Train: [1098/2931 (100%)] Loss: 0.170425
Epoch: 2/10. Train set: Average loss: 0.1708
Epoch: 2/10. Validation set: Average loss: 0.1388
Train: [0/2931 (0%)] Loss: 0.197988
Train: [1098/2931 (100%)] Loss: 0.157850
Epoch: 3/10. Train set: Average loss: 0.1580
Epoch: 3/10. Validation set: Average loss: 0.1497
Train: [0/2931 (0%)] Loss: 0.136628
Train: [1098/2931 (100%)] Loss: 0.159658
Epoch: 4/10. Train set: Average loss: 0.1596
Epoch: 4/10. Validation set: Average loss: 0.1322
Train: [0/2931 (0%)] Loss: 0.096568
Train: [1098/2931 (100%)] Loss: 0.156240
Epoch: 5/10. Train set: Average loss: 0.1561
Epoch: 5/10. Validation set: Average loss: 0.1347
Train: [0/2931 (0%)] Loss: 0.174852
Train: [1098/2931 (100%)] Loss: 0.153904
Epoch: 6/10. Train set: Average loss: 0.1540
Epoch: 6/10. Validation set: Average loss: 0.1349
Train: [0/2931 (0%)] Loss: 0.202289
Train: [1098/2931 (100%)] Loss: 0.157255
Epoch: 7/10. Train set: Average loss: 0.1574
Epoch: 7/10. Validation set: Average loss: 0.1301
Train: [0/2931 (0%)] Loss: 0.089711
Train: [1098/2931 (100%)] Loss: 0.148369
Epoch: 8/10. Train set: Average loss: 0.1482
Epoch: 8/10. Validation set: Average loss: 0.1300
Train: [0/2931 (0%)] Loss: 0.192598
Train: [1098/2931 (100%)] Loss: 0.151589
Epoch: 9/10. Train set: Average loss: 0.1517
Epoch: 9/10. Validation set: Average loss: 0.1311
Train: [0/2931 (0%)] Loss: 0.179440
Train: [1098/2931 (100%)] Loss: 0.148712
Epoch: 10/10. Train set: Average loss: 0.1488
Epoch: 10/10. Validation set: Average loss: 0.1333
model_2
Train: [0/2931 (0%)] Loss: 0.124834
Train: [1098/2931 (100%)] Loss: 0.176548
Epoch: 1/10. Train set: Average loss: 0.1764
Epoch: 1/10. Validation set: Average loss: 0.1668
Train: [0/2931 (0%)] Loss: 0.102105
Train: [1098/2931 (100%)] Loss: 0.170898
Epoch: 2/10. Train set: Average loss: 0.1707
Epoch: 2/10. Validation set: Average loss: 0.1326
Train: [0/2931 (0%)] Loss: 0.255631
Train: [1098/2931 (100%)] Loss: 0.162758
Epoch: 3/10. Train set: Average loss: 0.1630
Epoch: 3/10. Validation set: Average loss: 0.1527
Train: [0/2931 (0%)] Loss: 0.408119
Train: [1098/2931 (100%)] Loss: 0.159014
Epoch: 4/10. Train set: Average loss: 0.1597
Epoch: 4/10. Validation set: Average loss: 0.1271
Train: [0/2931 (0%)] Loss: 0.157194
Train: [1098/2931 (100%)] Loss: 0.157520
Epoch: 5/10. Train set: Average loss: 0.1575
Epoch: 5/10. Validation set: Average loss: 0.1364
Train: [0/2931 (0%)] Loss: 0.211618
Train: [1098/2931 (100%)] Loss: 0.155954
Epoch: 6/10. Train set: Average loss: 0.1561
Epoch: 6/10. Validation set: Average loss: 0.1287
Train: [0/2931 (0%)] Loss: 0.181626
Train: [1098/2931 (100%)] Loss: 0.153132
Epoch: 7/10. Train set: Average loss: 0.1532
Epoch: 7/10. Validation set: Average loss: 0.1277
Train: [0/2931 (0%)] Loss: 0.142254
Train: [1098/2931 (100%)] Loss: 0.151576
Epoch: 8/10. Train set: Average loss: 0.1516
Epoch: 8/10. Validation set: Average loss: 0.1396
Train: [0/2931 (0%)] Loss: 0.157139
Train: [1098/2931 (100%)] Loss: 0.146200
Epoch: 9/10. Train set: Average loss: 0.1462
Epoch: 9/10. Validation set: Average loss: 0.1271
Train: [0/2931 (0%)] Loss: 0.169215
Train: [1098/2931 (100%)] Loss: 0.142956
Epoch: 10/10. Train set: Average loss: 0.1430
Epoch: 10/10. Validation set: Average loss: 0.1255
Number features: 12
model_0
Train: [0/2931 (0%)] Loss: 0.124846
Train: [1098/2931 (100%)] Loss: 0.167120
Epoch: 1/10. Train set: Average loss: 0.1670
Epoch: 1/10. Validation set: Average loss: 0.1552
Train: [0/2931 (0%)] Loss: 0.101031
Train: [1098/2931 (100%)] Loss: 0.164560
Epoch: 2/10. Train set: Average loss: 0.1644
Epoch: 2/10. Validation set: Average loss: 0.1443
Train: [0/2931 (0%)] Loss: 0.029425
Train: [1098/2931 (100%)] Loss: 0.160298
Epoch: 3/10. Train set: Average loss: 0.1599
Epoch: 3/10. Validation set: Average loss: 0.1541
Train: [0/2931 (0%)] Loss: 0.102553
Train: [1098/2931 (100%)] Loss: 0.159567
Epoch: 4/10. Train set: Average loss: 0.1594
Epoch: 4/10. Validation set: Average loss: 0.1585
Train: [0/2931 (0%)] Loss: 0.261287
Train: [1098/2931 (100%)] Loss: 0.158422
Epoch: 5/10. Train set: Average loss: 0.1587
Epoch: 5/10. Validation set: Average loss: 0.1630
Train: [0/2931 (0%)] Loss: 0.284492
Train: [1098/2931 (100%)] Loss: 0.162757
Epoch: 6/10. Train set: Average loss: 0.1631
Epoch: 6/10. Validation set: Average loss: 0.1531
Train: [0/2931 (0%)] Loss: 0.146258
Train: [1098/2931 (100%)] Loss: 0.150515
Epoch: 7/10. Train set: Average loss: 0.1505
Epoch: 7/10. Validation set: Average loss: 0.1495
Train: [0/2931 (0%)] Loss: 0.100882
Train: [1098/2931 (100%)] Loss: 0.152031
Epoch: 8/10. Train set: Average loss: 0.1519
Epoch: 8/10. Validation set: Average loss: 0.1584
Train: [0/2931 (0%)] Loss: 0.229376
Train: [1098/2931 (100%)] Loss: 0.152146
Epoch: 9/10. Train set: Average loss: 0.1524
Epoch: 9/10. Validation set: Average loss: 0.1494
Train: [0/2931 (0%)] Loss: 0.149636
Train: [1098/2931 (100%)] Loss: 0.143578
Epoch: 10/10. Train set: Average loss: 0.1436
Epoch: 10/10. Validation set: Average loss: 0.1515
model_1
Train: [0/2931 (0%)] Loss: 0.249808
Train: [1098/2931 (100%)] Loss: 0.161854
Epoch: 1/10. Train set: Average loss: 0.1621
Epoch: 1/10. Validation set: Average loss: 0.1633
Train: [0/2931 (0%)] Loss: 0.176933
Train: [1098/2931 (100%)] Loss: 0.168080
Epoch: 2/10. Train set: Average loss: 0.1681
Epoch: 2/10. Validation set: Average loss: 0.1771
Train: [0/2931 (0%)] Loss: 0.238057
Train: [1098/2931 (100%)] Loss: 0.159450
Epoch: 3/10. Train set: Average loss: 0.1597
Epoch: 3/10. Validation set: Average loss: 0.1550
Train: [0/2931 (0%)] Loss: 0.139022
Train: [1098/2931 (100%)] Loss: 0.173770
Epoch: 4/10. Train set: Average loss: 0.1737
Epoch: 4/10. Validation set: Average loss: 0.1647
Train: [0/2931 (0%)] Loss: 0.179869
Train: [1098/2931 (100%)] Loss: 0.164578
Epoch: 5/10. Train set: Average loss: 0.1646
Epoch: 5/10. Validation set: Average loss: 0.1557
Train: [0/2931 (0%)] Loss: 0.181220
Train: [1098/2931 (100%)] Loss: 0.153914
Epoch: 6/10. Train set: Average loss: 0.1540
Epoch: 6/10. Validation set: Average loss: 0.1463
Train: [0/2931 (0%)] Loss: 0.146850
Train: [1098/2931 (100%)] Loss: 0.157944
Epoch: 7/10. Train set: Average loss: 0.1579
Epoch: 7/10. Validation set: Average loss: 0.1461
Train: [0/2931 (0%)] Loss: 0.125385
Train: [1098/2931 (100%)] Loss: 0.157141
Epoch: 8/10. Train set: Average loss: 0.1571
Epoch: 8/10. Validation set: Average loss: 0.1478
Train: [0/2931 (0%)] Loss: 0.124587
Train: [1098/2931 (100%)] Loss: 0.148190
Epoch: 9/10. Train set: Average loss: 0.1481
Epoch: 9/10. Validation set: Average loss: 0.1388
Train: [0/2931 (0%)] Loss: 0.153556
Train: [1098/2931 (100%)] Loss: 0.148180
Epoch: 10/10. Train set: Average loss: 0.1482
Epoch: 10/10. Validation set: Average loss: 0.1388
model_2
Train: [0/2931 (0%)] Loss: 0.062471
Train: [1098/2931 (100%)] Loss: 0.176194
Epoch: 1/10. Train set: Average loss: 0.1759
Epoch: 1/10. Validation set: Average loss: 0.1647
Train: [0/2931 (0%)] Loss: 0.124979
Train: [1098/2931 (100%)] Loss: 0.164169
Epoch: 2/10. Train set: Average loss: 0.1641
Epoch: 2/10. Validation set: Average loss: 0.1591
Train: [0/2931 (0%)] Loss: 0.194013
Train: [1098/2931 (100%)] Loss: 0.161464
Epoch: 3/10. Train set: Average loss: 0.1616
Epoch: 3/10. Validation set: Average loss: 0.1696
Train: [0/2931 (0%)] Loss: 0.105732
Train: [1098/2931 (100%)] Loss: 0.167445
Epoch: 4/10. Train set: Average loss: 0.1673
Epoch: 4/10. Validation set: Average loss: 0.1929
Train: [0/2931 (0%)] Loss: 0.153379
Train: [1098/2931 (100%)] Loss: 0.155339
Epoch: 5/10. Train set: Average loss: 0.1553
Epoch: 5/10. Validation set: Average loss: 0.1499
Train: [0/2931 (0%)] Loss: 0.136056
Train: [1098/2931 (100%)] Loss: 0.157275
Epoch: 6/10. Train set: Average loss: 0.1572
Epoch: 6/10. Validation set: Average loss: 0.1546
Train: [0/2931 (0%)] Loss: 0.073974
Train: [1098/2931 (100%)] Loss: 0.158201
Epoch: 7/10. Train set: Average loss: 0.1580
Epoch: 7/10. Validation set: Average loss: 0.1635
Train: [0/2931 (0%)] Loss: 0.100368
Train: [1098/2931 (100%)] Loss: 0.153201
Epoch: 8/10. Train set: Average loss: 0.1531
Epoch: 8/10. Validation set: Average loss: 0.1580
Train: [0/2931 (0%)] Loss: 0.099074
Train: [1098/2931 (100%)] Loss: 0.149263
Epoch: 9/10. Train set: Average loss: 0.1491
Epoch: 9/10. Validation set: Average loss: 0.1470
Train: [0/2931 (0%)] Loss: 0.099755
Train: [1098/2931 (100%)] Loss: 0.148099
Epoch: 10/10. Train set: Average loss: 0.1480
Epoch: 10/10. Validation set: Average loss: 0.1457
Number features: 13
model_0
Train: [0/2931 (0%)] Loss: 0.249722
Train: [1098/2931 (100%)] Loss: 0.170928
Epoch: 1/10. Train set: Average loss: 0.1711
Epoch: 1/10. Validation set: Average loss: 0.1578
Train: [0/2931 (0%)] Loss: 0.125957
Train: [1098/2931 (100%)] Loss: 0.161343
Epoch: 2/10. Train set: Average loss: 0.1612
Epoch: 2/10. Validation set: Average loss: 0.1474
Train: [0/2931 (0%)] Loss: 0.133766
Train: [1098/2931 (100%)] Loss: 0.165905
Epoch: 3/10. Train set: Average loss: 0.1658
Epoch: 3/10. Validation set: Average loss: 0.1320
Train: [0/2931 (0%)] Loss: 0.127214
Train: [1098/2931 (100%)] Loss: 0.160706
Epoch: 4/10. Train set: Average loss: 0.1606
Epoch: 4/10. Validation set: Average loss: 0.1294
Train: [0/2931 (0%)] Loss: 0.182462
Train: [1098/2931 (100%)] Loss: 0.157801
Epoch: 5/10. Train set: Average loss: 0.1579
Epoch: 5/10. Validation set: Average loss: 0.1428
Train: [0/2931 (0%)] Loss: 0.101304
Train: [1098/2931 (100%)] Loss: 0.157238
Epoch: 6/10. Train set: Average loss: 0.1571
Epoch: 6/10. Validation set: Average loss: 0.1473
Train: [0/2931 (0%)] Loss: 0.159777
Train: [1098/2931 (100%)] Loss: 0.158900
Epoch: 7/10. Train set: Average loss: 0.1589
Epoch: 7/10. Validation set: Average loss: 0.1400
Train: [0/2931 (0%)] Loss: 0.172493
Train: [1098/2931 (100%)] Loss: 0.156865
Epoch: 8/10. Train set: Average loss: 0.1569
Epoch: 8/10. Validation set: Average loss: 0.1236
Train: [0/2931 (0%)] Loss: 0.166630
Train: [1098/2931 (100%)] Loss: 0.144938
Epoch: 9/10. Train set: Average loss: 0.1450
Epoch: 9/10. Validation set: Average loss: 0.1188
Train: [0/2931 (0%)] Loss: 0.127844
Train: [1098/2931 (100%)] Loss: 0.148606
Epoch: 10/10. Train set: Average loss: 0.1485
Epoch: 10/10. Validation set: Average loss: 0.1208
model_1
Train: [0/2931 (0%)] Loss: 0.249688
Train: [1098/2931 (100%)] Loss: 0.165473
Epoch: 1/10. Train set: Average loss: 0.1657
Epoch: 1/10. Validation set: Average loss: 0.1405
Train: [0/2931 (0%)] Loss: 0.154052
Train: [1098/2931 (100%)] Loss: 0.159096
Epoch: 2/10. Train set: Average loss: 0.1591
Epoch: 2/10. Validation set: Average loss: 0.1464
Train: [0/2931 (0%)] Loss: 0.243976
Train: [1098/2931 (100%)] Loss: 0.170764
Epoch: 3/10. Train set: Average loss: 0.1710
Epoch: 3/10. Validation set: Average loss: 0.1478
Train: [0/2931 (0%)] Loss: 0.224453
Train: [1098/2931 (100%)] Loss: 0.160204
Epoch: 4/10. Train set: Average loss: 0.1604
Epoch: 4/10. Validation set: Average loss: 0.1339
Train: [0/2931 (0%)] Loss: 0.140216
Train: [1098/2931 (100%)] Loss: 0.158101
Epoch: 5/10. Train set: Average loss: 0.1581
Epoch: 5/10. Validation set: Average loss: 0.1361
Train: [0/2931 (0%)] Loss: 0.193429
Train: [1098/2931 (100%)] Loss: 0.157666
Epoch: 6/10. Train set: Average loss: 0.1578
Epoch: 6/10. Validation set: Average loss: 0.1395
Train: [0/2931 (0%)] Loss: 0.198858
Train: [1098/2931 (100%)] Loss: 0.157941
Epoch: 7/10. Train set: Average loss: 0.1581
Epoch: 7/10. Validation set: Average loss: 0.1286
Train: [0/2931 (0%)] Loss: 0.179480
Train: [1098/2931 (100%)] Loss: 0.159295
Epoch: 8/10. Train set: Average loss: 0.1593
Epoch: 8/10. Validation set: Average loss: 0.1295
Train: [0/2931 (0%)] Loss: 0.105008
Train: [1098/2931 (100%)] Loss: 0.147565
Epoch: 9/10. Train set: Average loss: 0.1474
Epoch: 9/10. Validation set: Average loss: 0.1264
Train: [0/2931 (0%)] Loss: 0.117981
Train: [1098/2931 (100%)] Loss: 0.151202
Epoch: 10/10. Train set: Average loss: 0.1511
Epoch: 10/10. Validation set: Average loss: 0.1269
model_2
Train: [0/2931 (0%)] Loss: 0.186820
Train: [1098/2931 (100%)] Loss: 0.167451
Epoch: 1/10. Train set: Average loss: 0.1675
Epoch: 1/10. Validation set: Average loss: 0.1771
Train: [0/2931 (0%)] Loss: 0.132608
Train: [1098/2931 (100%)] Loss: 0.162559
Epoch: 2/10. Train set: Average loss: 0.1625
Epoch: 2/10. Validation set: Average loss: 0.1527
Train: [0/2931 (0%)] Loss: 0.133469
Train: [1098/2931 (100%)] Loss: 0.165529
Epoch: 3/10. Train set: Average loss: 0.1654
Epoch: 3/10. Validation set: Average loss: 0.1478
Train: [0/2931 (0%)] Loss: 0.187875
Train: [1098/2931 (100%)] Loss: 0.169626
Epoch: 4/10. Train set: Average loss: 0.1697
Epoch: 4/10. Validation set: Average loss: 0.1402
Train: [0/2931 (0%)] Loss: 0.101742
Train: [1098/2931 (100%)] Loss: 0.159960
Epoch: 5/10. Train set: Average loss: 0.1598
Epoch: 5/10. Validation set: Average loss: 0.1386
Train: [0/2931 (0%)] Loss: 0.129781
Train: [1098/2931 (100%)] Loss: 0.159523
Epoch: 6/10. Train set: Average loss: 0.1594
Epoch: 6/10. Validation set: Average loss: 0.1272
Train: [0/2931 (0%)] Loss: 0.139906
Train: [1098/2931 (100%)] Loss: 0.154326
Epoch: 7/10. Train set: Average loss: 0.1543
Epoch: 7/10. Validation set: Average loss: 0.1515
Train: [0/2931 (0%)] Loss: 0.142215
Train: [1098/2931 (100%)] Loss: 0.153193
Epoch: 8/10. Train set: Average loss: 0.1532
Epoch: 8/10. Validation set: Average loss: 0.1443
Train: [0/2931 (0%)] Loss: 0.122812
Train: [1098/2931 (100%)] Loss: 0.151648
Epoch: 9/10. Train set: Average loss: 0.1516
Epoch: 9/10. Validation set: Average loss: 0.1210
Train: [0/2931 (0%)] Loss: 0.125212
Train: [1098/2931 (100%)] Loss: 0.145386
Epoch: 10/10. Train set: Average loss: 0.1453
Epoch: 10/10. Validation set: Average loss: 0.1211
Number features: 14
model_0
Train: [0/2931 (0%)] Loss: 0.311782
Train: [1098/2931 (100%)] Loss: 0.171142
Epoch: 1/10. Train set: Average loss: 0.1715
Epoch: 1/10. Validation set: Average loss: 0.1523
Train: [0/2931 (0%)] Loss: 0.152628
Train: [1098/2931 (100%)] Loss: 0.169321
Epoch: 2/10. Train set: Average loss: 0.1693
Epoch: 2/10. Validation set: Average loss: 0.1799
Train: [0/2931 (0%)] Loss: 0.180688
Train: [1098/2931 (100%)] Loss: 0.170696
Epoch: 3/10. Train set: Average loss: 0.1707
Epoch: 3/10. Validation set: Average loss: 0.1479
Train: [0/2931 (0%)] Loss: 0.118194
Train: [1098/2931 (100%)] Loss: 0.160951
Epoch: 4/10. Train set: Average loss: 0.1608
Epoch: 4/10. Validation set: Average loss: 0.1461
Train: [0/2931 (0%)] Loss: 0.083107
Train: [1098/2931 (100%)] Loss: 0.160856
Epoch: 5/10. Train set: Average loss: 0.1606
Epoch: 5/10. Validation set: Average loss: 0.1436
Train: [0/2931 (0%)] Loss: 0.189100
Train: [1098/2931 (100%)] Loss: 0.162687
Epoch: 6/10. Train set: Average loss: 0.1628
Epoch: 6/10. Validation set: Average loss: 0.1398
Train: [0/2931 (0%)] Loss: 0.104530
Train: [1098/2931 (100%)] Loss: 0.162396
Epoch: 7/10. Train set: Average loss: 0.1622
Epoch: 7/10. Validation set: Average loss: 0.1480
Train: [0/2931 (0%)] Loss: 0.106146
Train: [1098/2931 (100%)] Loss: 0.159736
Epoch: 8/10. Train set: Average loss: 0.1596
Epoch: 8/10. Validation set: Average loss: 0.1394
Train: [0/2931 (0%)] Loss: 0.133384
Train: [1098/2931 (100%)] Loss: 0.155127
Epoch: 9/10. Train set: Average loss: 0.1551
Epoch: 9/10. Validation set: Average loss: 0.1385
Train: [0/2931 (0%)] Loss: 0.164516
Train: [1098/2931 (100%)] Loss: 0.151263
Epoch: 10/10. Train set: Average loss: 0.1513
Epoch: 10/10. Validation set: Average loss: 0.1401
model_1
Train: [0/2931 (0%)] Loss: 0.124548
Train: [1098/2931 (100%)] Loss: 0.169221
Epoch: 1/10. Train set: Average loss: 0.1691
Epoch: 1/10. Validation set: Average loss: 0.1493
Train: [0/2931 (0%)] Loss: 0.103781
Train: [1098/2931 (100%)] Loss: 0.165229
Epoch: 2/10. Train set: Average loss: 0.1651
Epoch: 2/10. Validation set: Average loss: 0.1621
Train: [0/2931 (0%)] Loss: 0.100728
Train: [1098/2931 (100%)] Loss: 0.160378
Epoch: 3/10. Train set: Average loss: 0.1602
Epoch: 3/10. Validation set: Average loss: 0.1252
Train: [0/2931 (0%)] Loss: 0.154313
Train: [1098/2931 (100%)] Loss: 0.157410
Epoch: 4/10. Train set: Average loss: 0.1574
Epoch: 4/10. Validation set: Average loss: 0.1440
Train: [0/2931 (0%)] Loss: 0.128331
Train: [1098/2931 (100%)] Loss: 0.164039
Epoch: 5/10. Train set: Average loss: 0.1639
Epoch: 5/10. Validation set: Average loss: 0.1883
Train: [0/2931 (0%)] Loss: 0.125215
Train: [1098/2931 (100%)] Loss: 0.168806
Epoch: 6/10. Train set: Average loss: 0.1687
Epoch: 6/10. Validation set: Average loss: 0.1860
Train: [0/2931 (0%)] Loss: 0.079327
Train: [1098/2931 (100%)] Loss: 0.167780
Epoch: 7/10. Train set: Average loss: 0.1675
Epoch: 7/10. Validation set: Average loss: 0.1378
Train: [0/2931 (0%)] Loss: 0.100648
Train: [1098/2931 (100%)] Loss: 0.155497
Epoch: 8/10. Train set: Average loss: 0.1553
Epoch: 8/10. Validation set: Average loss: 0.1363
Train: [0/2931 (0%)] Loss: 0.158128
Train: [1098/2931 (100%)] Loss: 0.149840
Epoch: 9/10. Train set: Average loss: 0.1499
Epoch: 9/10. Validation set: Average loss: 0.1341
Train: [0/2931 (0%)] Loss: 0.147025
Train: [1098/2931 (100%)] Loss: 0.150036
Epoch: 10/10. Train set: Average loss: 0.1500
Epoch: 10/10. Validation set: Average loss: 0.1342
model_2
Train: [0/2931 (0%)] Loss: 0.187077
Train: [1098/2931 (100%)] Loss: 0.166705
Epoch: 1/10. Train set: Average loss: 0.1668
Epoch: 1/10. Validation set: Average loss: 0.1637
Train: [0/2931 (0%)] Loss: 0.160034
Train: [1098/2931 (100%)] Loss: 0.175702
Epoch: 2/10. Train set: Average loss: 0.1757
Epoch: 2/10. Validation set: Average loss: 0.1681
Train: [0/2931 (0%)] Loss: 0.162259
Train: [1098/2931 (100%)] Loss: 0.163207
Epoch: 3/10. Train set: Average loss: 0.1632
Epoch: 3/10. Validation set: Average loss: 0.1774
Train: [0/2931 (0%)] Loss: 0.143641
Train: [1098/2931 (100%)] Loss: 0.166412
Epoch: 4/10. Train set: Average loss: 0.1664
Epoch: 4/10. Validation set: Average loss: 0.1765
Train: [0/2931 (0%)] Loss: 0.136546
Train: [1098/2931 (100%)] Loss: 0.159771
Epoch: 5/10. Train set: Average loss: 0.1597
Epoch: 5/10. Validation set: Average loss: 0.1533
Train: [0/2931 (0%)] Loss: 0.151770
Train: [1098/2931 (100%)] Loss: 0.160618
Epoch: 6/10. Train set: Average loss: 0.1606
Epoch: 6/10. Validation set: Average loss: 0.1504
Train: [0/2931 (0%)] Loss: 0.123361
Train: [1098/2931 (100%)] Loss: 0.151119
Epoch: 7/10. Train set: Average loss: 0.1510
Epoch: 7/10. Validation set: Average loss: 0.1467
Train: [0/2931 (0%)] Loss: 0.156258
Train: [1098/2931 (100%)] Loss: 0.156110
Epoch: 8/10. Train set: Average loss: 0.1561
Epoch: 8/10. Validation set: Average loss: 0.1686
Train: [0/2931 (0%)] Loss: 0.153889
Train: [1098/2931 (100%)] Loss: 0.155886
Epoch: 9/10. Train set: Average loss: 0.1559
Epoch: 9/10. Validation set: Average loss: 0.1414
Train: [0/2931 (0%)] Loss: 0.179934
Train: [1098/2931 (100%)] Loss: 0.144610
Epoch: 10/10. Train set: Average loss: 0.1447
Epoch: 10/10. Validation set: Average loss: 0.1414
Number features: 15
model_0
Train: [0/2931 (0%)] Loss: 0.187218
Train: [1098/2931 (100%)] Loss: 0.169886
Epoch: 1/10. Train set: Average loss: 0.1699
Epoch: 1/10. Validation set: Average loss: 0.1591
Train: [0/2931 (0%)] Loss: 0.207101
Train: [1098/2931 (100%)] Loss: 0.168929
Epoch: 2/10. Train set: Average loss: 0.1690
Epoch: 2/10. Validation set: Average loss: 0.1495
Train: [0/2931 (0%)] Loss: 0.071338
Train: [1098/2931 (100%)] Loss: 0.157009
Epoch: 3/10. Train set: Average loss: 0.1568
Epoch: 3/10. Validation set: Average loss: 0.1726
Train: [0/2931 (0%)] Loss: 0.189601
Train: [1098/2931 (100%)] Loss: 0.168629
Epoch: 4/10. Train set: Average loss: 0.1687
Epoch: 4/10. Validation set: Average loss: 0.1543
Train: [0/2931 (0%)] Loss: 0.163232
Train: [1098/2931 (100%)] Loss: 0.163086
Epoch: 5/10. Train set: Average loss: 0.1631
Epoch: 5/10. Validation set: Average loss: 0.1781
Train: [0/2931 (0%)] Loss: 0.204972
Train: [1098/2931 (100%)] Loss: 0.155482
Epoch: 6/10. Train set: Average loss: 0.1556
Epoch: 6/10. Validation set: Average loss: 0.1525
Train: [0/2931 (0%)] Loss: 0.198200
Train: [1098/2931 (100%)] Loss: 0.167773
Epoch: 7/10. Train set: Average loss: 0.1679
Epoch: 7/10. Validation set: Average loss: 0.1682
Train: [0/2931 (0%)] Loss: 0.204565
Train: [1098/2931 (100%)] Loss: 0.156119
Epoch: 8/10. Train set: Average loss: 0.1563
Epoch: 8/10. Validation set: Average loss: 0.1703
Train: [0/2931 (0%)] Loss: 0.190123
Train: [1098/2931 (100%)] Loss: 0.155336
Epoch: 9/10. Train set: Average loss: 0.1554
Epoch: 9/10. Validation set: Average loss: 0.1486
Train: [0/2931 (0%)] Loss: 0.141820
Train: [1098/2931 (100%)] Loss: 0.147669
Epoch: 10/10. Train set: Average loss: 0.1477
Epoch: 10/10. Validation set: Average loss: 0.1489
model_1
Train: [0/2931 (0%)] Loss: 0.248835
Train: [1098/2931 (100%)] Loss: 0.177620
Epoch: 1/10. Train set: Average loss: 0.1778
Epoch: 1/10. Validation set: Average loss: 0.1714
Train: [0/2931 (0%)] Loss: 0.128312
Train: [1098/2931 (100%)] Loss: 0.180385
Epoch: 2/10. Train set: Average loss: 0.1802
Epoch: 2/10. Validation set: Average loss: 0.1882
Train: [0/2931 (0%)] Loss: 0.197571
Train: [1098/2931 (100%)] Loss: 0.186177
Epoch: 3/10. Train set: Average loss: 0.1862
Epoch: 3/10. Validation set: Average loss: 0.1784
Train: [0/2931 (0%)] Loss: 0.180224
Train: [1098/2931 (100%)] Loss: 0.159931
Epoch: 4/10. Train set: Average loss: 0.1600
Epoch: 4/10. Validation set: Average loss: 0.1821
Train: [0/2931 (0%)] Loss: 0.133966
Train: [1098/2931 (100%)] Loss: 0.161610
Epoch: 5/10. Train set: Average loss: 0.1615
Epoch: 5/10. Validation set: Average loss: 0.1498
Train: [0/2931 (0%)] Loss: 0.180559
Train: [1098/2931 (100%)] Loss: 0.167781
Epoch: 6/10. Train set: Average loss: 0.1678
Epoch: 6/10. Validation set: Average loss: 0.1762
Train: [0/2931 (0%)] Loss: 0.149246
Train: [1098/2931 (100%)] Loss: 0.162563
Epoch: 7/10. Train set: Average loss: 0.1625
Epoch: 7/10. Validation set: Average loss: 0.1644
Train: [0/2931 (0%)] Loss: 0.175564
Train: [1098/2931 (100%)] Loss: 0.159047
Epoch: 8/10. Train set: Average loss: 0.1591
Epoch: 8/10. Validation set: Average loss: 0.1692
Train: [0/2931 (0%)] Loss: 0.186975
Train: [1098/2931 (100%)] Loss: 0.159223
Epoch: 9/10. Train set: Average loss: 0.1593
Epoch: 9/10. Validation set: Average loss: 0.1420
Train: [0/2931 (0%)] Loss: 0.189447
Train: [1098/2931 (100%)] Loss: 0.151837
Epoch: 10/10. Train set: Average loss: 0.1519
Epoch: 10/10. Validation set: Average loss: 0.1417
model_2
Train: [0/2931 (0%)] Loss: 0.124908
Train: [1098/2931 (100%)] Loss: 0.169033
Epoch: 1/10. Train set: Average loss: 0.1689
Epoch: 1/10. Validation set: Average loss: 0.1644
Train: [0/2931 (0%)] Loss: 0.169911
Train: [1098/2931 (100%)] Loss: 0.169521
Epoch: 2/10. Train set: Average loss: 0.1695
Epoch: 2/10. Validation set: Average loss: 0.1735
Train: [0/2931 (0%)] Loss: 0.152619
Train: [1098/2931 (100%)] Loss: 0.161033
Epoch: 3/10. Train set: Average loss: 0.1610
Epoch: 3/10. Validation set: Average loss: 0.1506
Train: [0/2931 (0%)] Loss: 0.097098
Train: [1098/2931 (100%)] Loss: 0.158109
Epoch: 4/10. Train set: Average loss: 0.1579
Epoch: 4/10. Validation set: Average loss: 0.1756
Train: [0/2931 (0%)] Loss: 0.089424
Train: [1098/2931 (100%)] Loss: 0.161251
Epoch: 5/10. Train set: Average loss: 0.1611
Epoch: 5/10. Validation set: Average loss: 0.1685
Train: [0/2931 (0%)] Loss: 0.062694
Train: [1098/2931 (100%)] Loss: 0.160819
Epoch: 6/10. Train set: Average loss: 0.1606
Epoch: 6/10. Validation set: Average loss: 0.1751
Train: [0/2931 (0%)] Loss: 0.133999
Train: [1098/2931 (100%)] Loss: 0.154364
Epoch: 7/10. Train set: Average loss: 0.1543
Epoch: 7/10. Validation set: Average loss: 0.1607
Train: [0/2931 (0%)] Loss: 0.111309
Train: [1098/2931 (100%)] Loss: 0.155447
Epoch: 8/10. Train set: Average loss: 0.1553
Epoch: 8/10. Validation set: Average loss: 0.1604
Train: [0/2931 (0%)] Loss: 0.105627
Train: [1098/2931 (100%)] Loss: 0.151532
Epoch: 9/10. Train set: Average loss: 0.1514
Epoch: 9/10. Validation set: Average loss: 0.1445
Train: [0/2931 (0%)] Loss: 0.105700
Train: [1098/2931 (100%)] Loss: 0.147182
Epoch: 10/10. Train set: Average loss: 0.1471
Epoch: 10/10. Validation set: Average loss: 0.1410
Number features: 16
model_0
Train: [0/2931 (0%)] Loss: 0.187312
Train: [1098/2931 (100%)] Loss: 0.175063
Epoch: 1/10. Train set: Average loss: 0.1751
Epoch: 1/10. Validation set: Average loss: 0.1490
Train: [0/2931 (0%)] Loss: 0.155324
Train: [1098/2931 (100%)] Loss: 0.161024
Epoch: 2/10. Train set: Average loss: 0.1610
Epoch: 2/10. Validation set: Average loss: 0.1248
Train: [0/2931 (0%)] Loss: 0.191281
Train: [1098/2931 (100%)] Loss: 0.161380
Epoch: 3/10. Train set: Average loss: 0.1615
Epoch: 3/10. Validation set: Average loss: 0.1321
Train: [0/2931 (0%)] Loss: 0.116309
Train: [1098/2931 (100%)] Loss: 0.158789
Epoch: 4/10. Train set: Average loss: 0.1587
Epoch: 4/10. Validation set: Average loss: 0.1297
Train: [0/2931 (0%)] Loss: 0.250284
Train: [1098/2931 (100%)] Loss: 0.158402
Epoch: 5/10. Train set: Average loss: 0.1587
Epoch: 5/10. Validation set: Average loss: 0.1341
Train: [0/2931 (0%)] Loss: 0.201631
Train: [1098/2931 (100%)] Loss: 0.160060
Epoch: 6/10. Train set: Average loss: 0.1602
Epoch: 6/10. Validation set: Average loss: 0.1482
Train: [0/2931 (0%)] Loss: 0.193952
Train: [1098/2931 (100%)] Loss: 0.163231
Epoch: 7/10. Train set: Average loss: 0.1633
Epoch: 7/10. Validation set: Average loss: 0.1518
Train: [0/2931 (0%)] Loss: 0.116618
Train: [1098/2931 (100%)] Loss: 0.159096
Epoch: 8/10. Train set: Average loss: 0.1590
Epoch: 8/10. Validation set: Average loss: 0.1566
Train: [0/2931 (0%)] Loss: 0.145078
Train: [1098/2931 (100%)] Loss: 0.153266
Epoch: 9/10. Train set: Average loss: 0.1532
Epoch: 9/10. Validation set: Average loss: 0.1293
Train: [0/2931 (0%)] Loss: 0.096912
Train: [1098/2931 (100%)] Loss: 0.152427
Epoch: 10/10. Train set: Average loss: 0.1523
Epoch: 10/10. Validation set: Average loss: 0.1276
model_1
Train: [0/2931 (0%)] Loss: 0.311945
Train: [1098/2931 (100%)] Loss: 0.182047
Epoch: 1/10. Train set: Average loss: 0.1824
Epoch: 1/10. Validation set: Average loss: 0.1542
Train: [0/2931 (0%)] Loss: 0.240431
Train: [1098/2931 (100%)] Loss: 0.181122
Epoch: 2/10. Train set: Average loss: 0.1813
Epoch: 2/10. Validation set: Average loss: 0.1645
Train: [0/2931 (0%)] Loss: 0.237757
Train: [1098/2931 (100%)] Loss: 0.172784
Epoch: 3/10. Train set: Average loss: 0.1730
Epoch: 3/10. Validation set: Average loss: 0.1530
Train: [0/2931 (0%)] Loss: 0.256449
Train: [1098/2931 (100%)] Loss: 0.162615
Epoch: 4/10. Train set: Average loss: 0.1629
Epoch: 4/10. Validation set: Average loss: 0.1427
Train: [0/2931 (0%)] Loss: 0.252131
Train: [1098/2931 (100%)] Loss: 0.160812
Epoch: 5/10. Train set: Average loss: 0.1611
Epoch: 5/10. Validation set: Average loss: 0.1253
Train: [0/2931 (0%)] Loss: 0.160119
Train: [1098/2931 (100%)] Loss: 0.163293
Epoch: 6/10. Train set: Average loss: 0.1633
Epoch: 6/10. Validation set: Average loss: 0.1532
Train: [0/2931 (0%)] Loss: 0.263887
Train: [1098/2931 (100%)] Loss: 0.164410
Epoch: 7/10. Train set: Average loss: 0.1647
Epoch: 7/10. Validation set: Average loss: 0.1547
Train: [0/2931 (0%)] Loss: 0.279145
Train: [1098/2931 (100%)] Loss: 0.160035
Epoch: 8/10. Train set: Average loss: 0.1604
Epoch: 8/10. Validation set: Average loss: 0.1613
Train: [0/2931 (0%)] Loss: 0.159373
Train: [1098/2931 (100%)] Loss: 0.156264
Epoch: 9/10. Train set: Average loss: 0.1563
Epoch: 9/10. Validation set: Average loss: 0.1348
Train: [0/2931 (0%)] Loss: 0.168038
Train: [1098/2931 (100%)] Loss: 0.153070
Epoch: 10/10. Train set: Average loss: 0.1531
Epoch: 10/10. Validation set: Average loss: 0.1374
model_2
Train: [0/2931 (0%)] Loss: 0.124958
Train: [1098/2931 (100%)] Loss: 0.168861
Epoch: 1/10. Train set: Average loss: 0.1687
Epoch: 1/10. Validation set: Average loss: 0.1674
Train: [0/2931 (0%)] Loss: 0.094877
Train: [1098/2931 (100%)] Loss: 0.169132
Epoch: 2/10. Train set: Average loss: 0.1689
Epoch: 2/10. Validation set: Average loss: 0.1640
Train: [0/2931 (0%)] Loss: 0.164889
Train: [1098/2931 (100%)] Loss: 0.156713
Epoch: 3/10. Train set: Average loss: 0.1567
Epoch: 3/10. Validation set: Average loss: 0.1616
Train: [0/2931 (0%)] Loss: 0.078786
Train: [1098/2931 (100%)] Loss: 0.163334
Epoch: 4/10. Train set: Average loss: 0.1631
Epoch: 4/10. Validation set: Average loss: 0.1508
Train: [0/2931 (0%)] Loss: 0.180394
Train: [1098/2931 (100%)] Loss: 0.173822
Epoch: 5/10. Train set: Average loss: 0.1738
Epoch: 5/10. Validation set: Average loss: 0.1653
Train: [0/2931 (0%)] Loss: 0.107438
Train: [1098/2931 (100%)] Loss: 0.169924
Epoch: 6/10. Train set: Average loss: 0.1698
Epoch: 6/10. Validation set: Average loss: 0.1538
Train: [0/2931 (0%)] Loss: 0.135732
Train: [1098/2931 (100%)] Loss: 0.157809
Epoch: 7/10. Train set: Average loss: 0.1577
Epoch: 7/10. Validation set: Average loss: 0.1658
Train: [0/2931 (0%)] Loss: 0.134964
Train: [1098/2931 (100%)] Loss: 0.159558
Epoch: 8/10. Train set: Average loss: 0.1595
Epoch: 8/10. Validation set: Average loss: 0.1444
Train: [0/2931 (0%)] Loss: 0.231067
Train: [1098/2931 (100%)] Loss: 0.157680
Epoch: 9/10. Train set: Average loss: 0.1579
Epoch: 9/10. Validation set: Average loss: 0.1322
Train: [0/2931 (0%)] Loss: 0.117573
Train: [1098/2931 (100%)] Loss: 0.152796
Epoch: 10/10. Train set: Average loss: 0.1527
Epoch: 10/10. Validation set: Average loss: 0.1331
Number features: 17
model_0
Train: [0/2931 (0%)] Loss: 0.374628
Train: [1098/2931 (100%)] Loss: 0.162002
Epoch: 1/10. Train set: Average loss: 0.1626
Epoch: 1/10. Validation set: Average loss: 0.1581
Train: [0/2931 (0%)] Loss: 0.180819
Train: [1098/2931 (100%)] Loss: 0.163059
Epoch: 2/10. Train set: Average loss: 0.1631
Epoch: 2/10. Validation set: Average loss: 0.1531
Train: [0/2931 (0%)] Loss: 0.184415
Train: [1098/2931 (100%)] Loss: 0.167436
Epoch: 3/10. Train set: Average loss: 0.1675
Epoch: 3/10. Validation set: Average loss: 0.1449
Train: [0/2931 (0%)] Loss: 0.170762
Train: [1098/2931 (100%)] Loss: 0.157348
Epoch: 4/10. Train set: Average loss: 0.1574
Epoch: 4/10. Validation set: Average loss: 0.1508
Train: [0/2931 (0%)] Loss: 0.146019
Train: [1098/2931 (100%)] Loss: 0.152132
Epoch: 5/10. Train set: Average loss: 0.1521
Epoch: 5/10. Validation set: Average loss: 0.1524
Train: [0/2931 (0%)] Loss: 0.235489
Train: [1098/2931 (100%)] Loss: 0.154585
Epoch: 6/10. Train set: Average loss: 0.1548
Epoch: 6/10. Validation set: Average loss: 0.1847
Train: [0/2931 (0%)] Loss: 0.220954
Train: [1098/2931 (100%)] Loss: 0.157061
Epoch: 7/10. Train set: Average loss: 0.1572
Epoch: 7/10. Validation set: Average loss: 0.1512
Train: [0/2931 (0%)] Loss: 0.150713
Train: [1098/2931 (100%)] Loss: 0.157604
Epoch: 8/10. Train set: Average loss: 0.1576
Epoch: 8/10. Validation set: Average loss: 0.1576
Train: [0/2931 (0%)] Loss: 0.151361
Train: [1098/2931 (100%)] Loss: 0.147709
Epoch: 9/10. Train set: Average loss: 0.1477
Epoch: 9/10. Validation set: Average loss: 0.1498
Train: [0/2931 (0%)] Loss: 0.103571
Train: [1098/2931 (100%)] Loss: 0.151553
Epoch: 10/10. Train set: Average loss: 0.1514
Epoch: 10/10. Validation set: Average loss: 0.1500
model_1
Train: [0/2931 (0%)] Loss: 0.374799
Train: [1098/2931 (100%)] Loss: 0.174493
Epoch: 1/10. Train set: Average loss: 0.1750
Epoch: 1/10. Validation set: Average loss: 0.1585
Train: [0/2931 (0%)] Loss: 0.252822
Train: [1098/2931 (100%)] Loss: 0.156563
Epoch: 2/10. Train set: Average loss: 0.1568
Epoch: 2/10. Validation set: Average loss: 0.1531
Train: [0/2931 (0%)] Loss: 0.160695
Train: [1098/2931 (100%)] Loss: 0.161935
Epoch: 3/10. Train set: Average loss: 0.1619
Epoch: 3/10. Validation set: Average loss: 0.1541
Train: [0/2931 (0%)] Loss: 0.200844
Train: [1098/2931 (100%)] Loss: 0.160702
Epoch: 4/10. Train set: Average loss: 0.1608
Epoch: 4/10. Validation set: Average loss: 0.1720
Train: [0/2931 (0%)] Loss: 0.152854
Train: [1098/2931 (100%)] Loss: 0.159028
Epoch: 5/10. Train set: Average loss: 0.1590
Epoch: 5/10. Validation set: Average loss: 0.1586
Train: [0/2931 (0%)] Loss: 0.168218
Train: [1098/2931 (100%)] Loss: 0.153289
Epoch: 6/10. Train set: Average loss: 0.1533
Epoch: 6/10. Validation set: Average loss: 0.1644
Train: [0/2931 (0%)] Loss: 0.154774
Train: [1098/2931 (100%)] Loss: 0.159331
Epoch: 7/10. Train set: Average loss: 0.1593
Epoch: 7/10. Validation set: Average loss: 0.1596
Train: [0/2931 (0%)] Loss: 0.253805
Train: [1098/2931 (100%)] Loss: 0.154245
Epoch: 8/10. Train set: Average loss: 0.1545
Epoch: 8/10. Validation set: Average loss: 0.1594
Train: [0/2931 (0%)] Loss: 0.067803
Train: [1098/2931 (100%)] Loss: 0.149459
Epoch: 9/10. Train set: Average loss: 0.1492
Epoch: 9/10. Validation set: Average loss: 0.1474
Train: [0/2931 (0%)] Loss: 0.176805
Train: [1098/2931 (100%)] Loss: 0.151878
Epoch: 10/10. Train set: Average loss: 0.1519
Epoch: 10/10. Validation set: Average loss: 0.1467
model_2
Train: [0/2931 (0%)] Loss: 0.374619
Train: [1098/2931 (100%)] Loss: 0.170318
Epoch: 1/10. Train set: Average loss: 0.1709
Epoch: 1/10. Validation set: Average loss: 0.1590
Train: [0/2931 (0%)] Loss: 0.165027
Train: [1098/2931 (100%)] Loss: 0.165866
Epoch: 2/10. Train set: Average loss: 0.1659
Epoch: 2/10. Validation set: Average loss: 0.1880
Train: [0/2931 (0%)] Loss: 0.452332
Train: [1098/2931 (100%)] Loss: 0.172410
Epoch: 3/10. Train set: Average loss: 0.1732
Epoch: 3/10. Validation set: Average loss: 0.1855
Train: [0/2931 (0%)] Loss: 0.142133
Train: [1098/2931 (100%)] Loss: 0.169023
Epoch: 4/10. Train set: Average loss: 0.1689
Epoch: 4/10. Validation set: Average loss: 0.1517
Train: [0/2931 (0%)] Loss: 0.227498
Train: [1098/2931 (100%)] Loss: 0.154219
Epoch: 5/10. Train set: Average loss: 0.1544
Epoch: 5/10. Validation set: Average loss: 0.1663
Train: [0/2931 (0%)] Loss: 0.250061
Train: [1098/2931 (100%)] Loss: 0.161862
Epoch: 6/10. Train set: Average loss: 0.1621
Epoch: 6/10. Validation set: Average loss: 0.1571
Train: [0/2931 (0%)] Loss: 0.198977
Train: [1098/2931 (100%)] Loss: 0.161722
Epoch: 7/10. Train set: Average loss: 0.1618
Epoch: 7/10. Validation set: Average loss: 0.1700
Train: [0/2931 (0%)] Loss: 0.232387
Train: [1098/2931 (100%)] Loss: 0.156251
Epoch: 8/10. Train set: Average loss: 0.1565
Epoch: 8/10. Validation set: Average loss: 0.1475
Train: [0/2931 (0%)] Loss: 0.248716
Train: [1098/2931 (100%)] Loss: 0.151702
Epoch: 9/10. Train set: Average loss: 0.1520
Epoch: 9/10. Validation set: Average loss: 0.1457
Train: [0/2931 (0%)] Loss: 0.103638
Train: [1098/2931 (100%)] Loss: 0.148452
Epoch: 10/10. Train set: Average loss: 0.1483
Epoch: 10/10. Validation set: Average loss: 0.1463
Number features: 18
model_0
Train: [0/2931 (0%)] Loss: 0.374099
Train: [1098/2931 (100%)] Loss: 0.165380
Epoch: 1/10. Train set: Average loss: 0.1659
Epoch: 1/10. Validation set: Average loss: 0.1660
Train: [0/2931 (0%)] Loss: 0.244431
Train: [1098/2931 (100%)] Loss: 0.157949
Epoch: 2/10. Train set: Average loss: 0.1582
Epoch: 2/10. Validation set: Average loss: 0.1575
Train: [0/2931 (0%)] Loss: 0.160236
Train: [1098/2931 (100%)] Loss: 0.147767
Epoch: 3/10. Train set: Average loss: 0.1478
Epoch: 3/10. Validation set: Average loss: 0.1511
Train: [0/2931 (0%)] Loss: 0.182919
Train: [1098/2931 (100%)] Loss: 0.152665
Epoch: 4/10. Train set: Average loss: 0.1527
Epoch: 4/10. Validation set: Average loss: 0.1758
Train: [0/2931 (0%)] Loss: 0.217551
Train: [1098/2931 (100%)] Loss: 0.151266
Epoch: 5/10. Train set: Average loss: 0.1514
Epoch: 5/10. Validation set: Average loss: 0.1660
Train: [0/2931 (0%)] Loss: 0.262580
Train: [1098/2931 (100%)] Loss: 0.149001
Epoch: 6/10. Train set: Average loss: 0.1493
Epoch: 6/10. Validation set: Average loss: 0.1576
Train: [0/2931 (0%)] Loss: 0.270618
Train: [1098/2931 (100%)] Loss: 0.149245
Epoch: 7/10. Train set: Average loss: 0.1496
Epoch: 7/10. Validation set: Average loss: 0.1671
Train: [0/2931 (0%)] Loss: 0.259048
Train: [1098/2931 (100%)] Loss: 0.153705
Epoch: 8/10. Train set: Average loss: 0.1540
Epoch: 8/10. Validation set: Average loss: 0.1836
Train: [0/2931 (0%)] Loss: 0.313416
Train: [1098/2931 (100%)] Loss: 0.160404
Epoch: 9/10. Train set: Average loss: 0.1608
Epoch: 9/10. Validation set: Average loss: 0.1543
Train: [0/2931 (0%)] Loss: 0.239434
Train: [1098/2931 (100%)] Loss: 0.148789
Epoch: 10/10. Train set: Average loss: 0.1490
Epoch: 10/10. Validation set: Average loss: 0.1407
model_1
Train: [0/2931 (0%)] Loss: 0.186902
Train: [1098/2931 (100%)] Loss: 0.161736
Epoch: 1/10. Train set: Average loss: 0.1618
Epoch: 1/10. Validation set: Average loss: 0.1630
Train: [0/2931 (0%)] Loss: 0.176008
Train: [1098/2931 (100%)] Loss: 0.161370
Epoch: 2/10. Train set: Average loss: 0.1614
Epoch: 2/10. Validation set: Average loss: 0.1810
Train: [0/2931 (0%)] Loss: 0.156174
Train: [1098/2931 (100%)] Loss: 0.157422
Epoch: 3/10. Train set: Average loss: 0.1574
Epoch: 3/10. Validation set: Average loss: 0.1877
Train: [0/2931 (0%)] Loss: 0.159963
Train: [1098/2931 (100%)] Loss: 0.165032
Epoch: 4/10. Train set: Average loss: 0.1650
Epoch: 4/10. Validation set: Average loss: 0.2191
Train: [0/2931 (0%)] Loss: 0.172581
Train: [1098/2931 (100%)] Loss: 0.157407
Epoch: 5/10. Train set: Average loss: 0.1574
Epoch: 5/10. Validation set: Average loss: 0.1704
Train: [0/2931 (0%)] Loss: 0.147848
Train: [1098/2931 (100%)] Loss: 0.146895
Epoch: 6/10. Train set: Average loss: 0.1469
Epoch: 6/10. Validation set: Average loss: 0.1673
Train: [0/2931 (0%)] Loss: 0.230451
Train: [1098/2931 (100%)] Loss: 0.153555
Epoch: 7/10. Train set: Average loss: 0.1538
Epoch: 7/10. Validation set: Average loss: 0.1832
Train: [0/2931 (0%)] Loss: 0.180007
Train: [1098/2931 (100%)] Loss: 0.149249
Epoch: 8/10. Train set: Average loss: 0.1493
Epoch: 8/10. Validation set: Average loss: 0.1862
Train: [0/2931 (0%)] Loss: 0.206393
Train: [1098/2931 (100%)] Loss: 0.155397
Epoch: 9/10. Train set: Average loss: 0.1555
Epoch: 9/10. Validation set: Average loss: 0.1601
Train: [0/2931 (0%)] Loss: 0.187902
Train: [1098/2931 (100%)] Loss: 0.148551
Epoch: 10/10. Train set: Average loss: 0.1487
Epoch: 10/10. Validation set: Average loss: 0.1353
model_2
Train: [0/2931 (0%)] Loss: 0.311659
Train: [1098/2931 (100%)] Loss: 0.165482
Epoch: 1/10. Train set: Average loss: 0.1659
Epoch: 1/10. Validation set: Average loss: 0.1890
Train: [0/2931 (0%)] Loss: 0.218419
Train: [1098/2931 (100%)] Loss: 0.147106
Epoch: 2/10. Train set: Average loss: 0.1473
Epoch: 2/10. Validation set: Average loss: 0.1581
Train: [0/2931 (0%)] Loss: 0.135239
Train: [1098/2931 (100%)] Loss: 0.148486
Epoch: 3/10. Train set: Average loss: 0.1485
Epoch: 3/10. Validation set: Average loss: 0.1963
Train: [0/2931 (0%)] Loss: 0.100641
Train: [1098/2931 (100%)] Loss: 0.155706
Epoch: 4/10. Train set: Average loss: 0.1556
Epoch: 4/10. Validation set: Average loss: 0.1751
Train: [0/2931 (0%)] Loss: 0.153470
Train: [1098/2931 (100%)] Loss: 0.146922
Epoch: 5/10. Train set: Average loss: 0.1469
Epoch: 5/10. Validation set: Average loss: 0.1583
Train: [0/2931 (0%)] Loss: 0.215489
Train: [1098/2931 (100%)] Loss: 0.155898
Epoch: 6/10. Train set: Average loss: 0.1561
Epoch: 6/10. Validation set: Average loss: 0.1905
Train: [0/2931 (0%)] Loss: 0.238558
Train: [1098/2931 (100%)] Loss: 0.149396
Epoch: 7/10. Train set: Average loss: 0.1496
Epoch: 7/10. Validation set: Average loss: 0.1695
Train: [0/2931 (0%)] Loss: 0.074113
Train: [1098/2931 (100%)] Loss: 0.146136
Epoch: 8/10. Train set: Average loss: 0.1459
Epoch: 8/10. Validation set: Average loss: 0.1638
Train: [0/2931 (0%)] Loss: 0.161287
Train: [1098/2931 (100%)] Loss: 0.144221
Epoch: 9/10. Train set: Average loss: 0.1443
Epoch: 9/10. Validation set: Average loss: 0.1325
Train: [0/2931 (0%)] Loss: 0.084114
Train: [1098/2931 (100%)] Loss: 0.139603
Epoch: 10/10. Train set: Average loss: 0.1395
Epoch: 10/10. Validation set: Average loss: 0.1375
Number features: 19
model_0
Train: [0/2931 (0%)] Loss: 0.000003
Train: [1098/2931 (100%)] Loss: 0.163966
Epoch: 1/10. Train set: Average loss: 0.1635
Epoch: 1/10. Validation set: Average loss: 0.1601
Train: [0/2931 (0%)] Loss: 0.095407
Train: [1098/2931 (100%)] Loss: 0.157911
Epoch: 2/10. Train set: Average loss: 0.1577
Epoch: 2/10. Validation set: Average loss: 0.1165
Train: [0/2931 (0%)] Loss: 0.062938
Train: [1098/2931 (100%)] Loss: 0.160182
Epoch: 3/10. Train set: Average loss: 0.1599
Epoch: 3/10. Validation set: Average loss: 0.1350
Train: [0/2931 (0%)] Loss: 0.157520
Train: [1098/2931 (100%)] Loss: 0.162006
Epoch: 4/10. Train set: Average loss: 0.1620
Epoch: 4/10. Validation set: Average loss: 0.1310
Train: [0/2931 (0%)] Loss: 0.221023
Train: [1098/2931 (100%)] Loss: 0.156551
Epoch: 5/10. Train set: Average loss: 0.1567
Epoch: 5/10. Validation set: Average loss: 0.1287
Train: [0/2931 (0%)] Loss: 0.229073
Train: [1098/2931 (100%)] Loss: 0.159371
Epoch: 6/10. Train set: Average loss: 0.1596
Epoch: 6/10. Validation set: Average loss: 0.1261
Train: [0/2931 (0%)] Loss: 0.128206
Train: [1098/2931 (100%)] Loss: 0.164015
Epoch: 7/10. Train set: Average loss: 0.1639
Epoch: 7/10. Validation set: Average loss: 0.1232
Train: [0/2931 (0%)] Loss: 0.137431
Train: [1098/2931 (100%)] Loss: 0.169250
Epoch: 8/10. Train set: Average loss: 0.1692
Epoch: 8/10. Validation set: Average loss: 0.1683
Train: [0/2931 (0%)] Loss: 0.070367
Train: [1098/2931 (100%)] Loss: 0.161515
Epoch: 9/10. Train set: Average loss: 0.1613
Epoch: 9/10. Validation set: Average loss: 0.1393
Train: [0/2931 (0%)] Loss: 0.104784
Train: [1098/2931 (100%)] Loss: 0.164694
Epoch: 10/10. Train set: Average loss: 0.1645
Epoch: 10/10. Validation set: Average loss: 0.1403
model_1
Train: [0/2931 (0%)] Loss: 0.311458
Train: [1098/2931 (100%)] Loss: 0.161688
Epoch: 1/10. Train set: Average loss: 0.1621
Epoch: 1/10. Validation set: Average loss: 0.1397
Train: [0/2931 (0%)] Loss: 0.202103
Train: [1098/2931 (100%)] Loss: 0.161657
Epoch: 2/10. Train set: Average loss: 0.1618
Epoch: 2/10. Validation set: Average loss: 0.1130
Train: [0/2931 (0%)] Loss: 0.103716
Train: [1098/2931 (100%)] Loss: 0.158635
Epoch: 3/10. Train set: Average loss: 0.1585
Epoch: 3/10. Validation set: Average loss: 0.1562
Train: [0/2931 (0%)] Loss: 0.140149
Train: [1098/2931 (100%)] Loss: 0.164984
Epoch: 4/10. Train set: Average loss: 0.1649
Epoch: 4/10. Validation set: Average loss: 0.1352
Train: [0/2931 (0%)] Loss: 0.148123
Train: [1098/2931 (100%)] Loss: 0.161216
Epoch: 5/10. Train set: Average loss: 0.1612
Epoch: 5/10. Validation set: Average loss: 0.1133
Train: [0/2931 (0%)] Loss: 0.217151
Train: [1098/2931 (100%)] Loss: 0.152310
Epoch: 6/10. Train set: Average loss: 0.1525
Epoch: 6/10. Validation set: Average loss: 0.1343
Train: [0/2931 (0%)] Loss: 0.101676
Train: [1098/2931 (100%)] Loss: 0.157975
Epoch: 7/10. Train set: Average loss: 0.1578
Epoch: 7/10. Validation set: Average loss: 0.1262
Train: [0/2931 (0%)] Loss: 0.286095
Train: [1098/2931 (100%)] Loss: 0.151791
Epoch: 8/10. Train set: Average loss: 0.1522
Epoch: 8/10. Validation set: Average loss: 0.1256
Train: [0/2931 (0%)] Loss: 0.246815
Train: [1098/2931 (100%)] Loss: 0.149226
Epoch: 9/10. Train set: Average loss: 0.1495
Epoch: 9/10. Validation set: Average loss: 0.1302
Train: [0/2931 (0%)] Loss: 0.110721
Train: [1098/2931 (100%)] Loss: 0.141262
Epoch: 10/10. Train set: Average loss: 0.1412
Epoch: 10/10. Validation set: Average loss: 0.1285
model_2
Train: [0/2931 (0%)] Loss: 0.249466
Train: [1098/2931 (100%)] Loss: 0.161627
Epoch: 1/10. Train set: Average loss: 0.1619
Epoch: 1/10. Validation set: Average loss: 0.1470
Train: [0/2931 (0%)] Loss: 0.221130
Train: [1098/2931 (100%)] Loss: 0.159445
Epoch: 2/10. Train set: Average loss: 0.1596
Epoch: 2/10. Validation set: Average loss: 0.1492
Train: [0/2931 (0%)] Loss: 0.198451
Train: [1098/2931 (100%)] Loss: 0.153658
Epoch: 3/10. Train set: Average loss: 0.1538
Epoch: 3/10. Validation set: Average loss: 0.1328
Train: [0/2931 (0%)] Loss: 0.155692
Train: [1098/2931 (100%)] Loss: 0.163997
Epoch: 4/10. Train set: Average loss: 0.1640
Epoch: 4/10. Validation set: Average loss: 0.1535
Train: [0/2931 (0%)] Loss: 0.183914
Train: [1098/2931 (100%)] Loss: 0.157993
Epoch: 5/10. Train set: Average loss: 0.1581
Epoch: 5/10. Validation set: Average loss: 0.1511
Train: [0/2931 (0%)] Loss: 0.161256
Train: [1098/2931 (100%)] Loss: 0.155796
Epoch: 6/10. Train set: Average loss: 0.1558
Epoch: 6/10. Validation set: Average loss: 0.1506
Train: [0/2931 (0%)] Loss: 0.200680
Train: [1098/2931 (100%)] Loss: 0.154723
Epoch: 7/10. Train set: Average loss: 0.1548
Epoch: 7/10. Validation set: Average loss: 0.1513
Train: [0/2931 (0%)] Loss: 0.112397
Train: [1098/2931 (100%)] Loss: 0.156938
Epoch: 8/10. Train set: Average loss: 0.1568
Epoch: 8/10. Validation set: Average loss: 0.1320
Train: [0/2931 (0%)] Loss: 0.172098
Train: [1098/2931 (100%)] Loss: 0.146801
Epoch: 9/10. Train set: Average loss: 0.1469
Epoch: 9/10. Validation set: Average loss: 0.1301
Train: [0/2931 (0%)] Loss: 0.122571
Train: [1098/2931 (100%)] Loss: 0.145802
Epoch: 10/10. Train set: Average loss: 0.1457
Epoch: 10/10. Validation set: Average loss: 0.1299
Number features: 20
model_0
Train: [0/2931 (0%)] Loss: 0.312092
Train: [1098/2931 (100%)] Loss: 0.161594
Epoch: 1/10. Train set: Average loss: 0.1620
Epoch: 1/10. Validation set: Average loss: 0.1780
Train: [0/2931 (0%)] Loss: 0.159799
Train: [1098/2931 (100%)] Loss: 0.150011
Epoch: 2/10. Train set: Average loss: 0.1500
Epoch: 2/10. Validation set: Average loss: 0.1708
Train: [0/2931 (0%)] Loss: 0.218066
Train: [1098/2931 (100%)] Loss: 0.156368
Epoch: 3/10. Train set: Average loss: 0.1565
Epoch: 3/10. Validation set: Average loss: 0.1685
Train: [0/2931 (0%)] Loss: 0.194393
Train: [1098/2931 (100%)] Loss: 0.150576
Epoch: 4/10. Train set: Average loss: 0.1507
Epoch: 4/10. Validation set: Average loss: 0.1614
Train: [0/2931 (0%)] Loss: 0.156045
Train: [1098/2931 (100%)] Loss: 0.145860
Epoch: 5/10. Train set: Average loss: 0.1459
Epoch: 5/10. Validation set: Average loss: 0.1776
Train: [0/2931 (0%)] Loss: 0.266631
Train: [1098/2931 (100%)] Loss: 0.154063
Epoch: 6/10. Train set: Average loss: 0.1544
Epoch: 6/10. Validation set: Average loss: 0.1572
Train: [0/2931 (0%)] Loss: 0.165483
Train: [1098/2931 (100%)] Loss: 0.148114
Epoch: 7/10. Train set: Average loss: 0.1482
Epoch: 7/10. Validation set: Average loss: 0.1705
Train: [0/2931 (0%)] Loss: 0.225475
Train: [1098/2931 (100%)] Loss: 0.150035
Epoch: 8/10. Train set: Average loss: 0.1502
Epoch: 8/10. Validation set: Average loss: 0.1607
Train: [0/2931 (0%)] Loss: 0.218015
Train: [1098/2931 (100%)] Loss: 0.143848
Epoch: 9/10. Train set: Average loss: 0.1440
Epoch: 9/10. Validation set: Average loss: 0.1489
Train: [0/2931 (0%)] Loss: 0.180250
Train: [1098/2931 (100%)] Loss: 0.144640
Epoch: 10/10. Train set: Average loss: 0.1447
Epoch: 10/10. Validation set: Average loss: 0.1480
model_1
Train: [0/2931 (0%)] Loss: 0.312177
Train: [1098/2931 (100%)] Loss: 0.164309
Epoch: 1/10. Train set: Average loss: 0.1647
Epoch: 1/10. Validation set: Average loss: 0.1593
Train: [0/2931 (0%)] Loss: 0.139682
Train: [1098/2931 (100%)] Loss: 0.149618
Epoch: 2/10. Train set: Average loss: 0.1496
Epoch: 2/10. Validation set: Average loss: 0.1681
Train: [0/2931 (0%)] Loss: 0.220141
Train: [1098/2931 (100%)] Loss: 0.159513
Epoch: 3/10. Train set: Average loss: 0.1597
Epoch: 3/10. Validation set: Average loss: 0.1583
Train: [0/2931 (0%)] Loss: 0.163896
Train: [1098/2931 (100%)] Loss: 0.155884
Epoch: 4/10. Train set: Average loss: 0.1559
Epoch: 4/10. Validation set: Average loss: 0.1580
Train: [0/2931 (0%)] Loss: 0.186783
Train: [1098/2931 (100%)] Loss: 0.154634
Epoch: 5/10. Train set: Average loss: 0.1547
Epoch: 5/10. Validation set: Average loss: 0.1489
Train: [0/2931 (0%)] Loss: 0.199909
Train: [1098/2931 (100%)] Loss: 0.157603
Epoch: 6/10. Train set: Average loss: 0.1577
Epoch: 6/10. Validation set: Average loss: 0.1801
Train: [0/2931 (0%)] Loss: 0.275641
Train: [1098/2931 (100%)] Loss: 0.152017
Epoch: 7/10. Train set: Average loss: 0.1524
Epoch: 7/10. Validation set: Average loss: 0.1745
Train: [0/2931 (0%)] Loss: 0.129814
Train: [1098/2931 (100%)] Loss: 0.150181
Epoch: 8/10. Train set: Average loss: 0.1501
Epoch: 8/10. Validation set: Average loss: 0.1735
Train: [0/2931 (0%)] Loss: 0.143338
Train: [1098/2931 (100%)] Loss: 0.152533
Epoch: 9/10. Train set: Average loss: 0.1525
Epoch: 9/10. Validation set: Average loss: 0.1624
Train: [0/2931 (0%)] Loss: 0.156953
Train: [1098/2931 (100%)] Loss: 0.146945
Epoch: 10/10. Train set: Average loss: 0.1470
Epoch: 10/10. Validation set: Average loss: 0.1585
model_2
Train: [0/2931 (0%)] Loss: 0.310534
Train: [1098/2931 (100%)] Loss: 0.156607
Epoch: 1/10. Train set: Average loss: 0.1570
Epoch: 1/10. Validation set: Average loss: 0.1648
Train: [0/2931 (0%)] Loss: 0.158676
Train: [1098/2931 (100%)] Loss: 0.161648
Epoch: 2/10. Train set: Average loss: 0.1616
Epoch: 2/10. Validation set: Average loss: 0.1446
Train: [0/2931 (0%)] Loss: 0.153257
Train: [1098/2931 (100%)] Loss: 0.156773
Epoch: 3/10. Train set: Average loss: 0.1568
Epoch: 3/10. Validation set: Average loss: 0.1770
Train: [0/2931 (0%)] Loss: 0.101691
Train: [1098/2931 (100%)] Loss: 0.154597
Epoch: 4/10. Train set: Average loss: 0.1545
Epoch: 4/10. Validation set: Average loss: 0.1616
Train: [0/2931 (0%)] Loss: 0.181199
Train: [1098/2931 (100%)] Loss: 0.149884
Epoch: 5/10. Train set: Average loss: 0.1500
Epoch: 5/10. Validation set: Average loss: 0.1557
Train: [0/2931 (0%)] Loss: 0.073662
Train: [1098/2931 (100%)] Loss: 0.152605
Epoch: 6/10. Train set: Average loss: 0.1524
Epoch: 6/10. Validation set: Average loss: 0.1488
Train: [0/2931 (0%)] Loss: 0.104809
Train: [1098/2931 (100%)] Loss: 0.150963
Epoch: 7/10. Train set: Average loss: 0.1508
Epoch: 7/10. Validation set: Average loss: 0.1701
Train: [0/2931 (0%)] Loss: 0.161126
Train: [1098/2931 (100%)] Loss: 0.151143
Epoch: 8/10. Train set: Average loss: 0.1512
Epoch: 8/10. Validation set: Average loss: 0.1616
Train: [0/2931 (0%)] Loss: 0.118273
Train: [1098/2931 (100%)] Loss: 0.146031
Epoch: 9/10. Train set: Average loss: 0.1460
Epoch: 9/10. Validation set: Average loss: 0.1527
Train: [0/2931 (0%)] Loss: 0.165978
Train: [1098/2931 (100%)] Loss: 0.139323
Epoch: 10/10. Train set: Average loss: 0.1394
Epoch: 10/10. Validation set: Average loss: 0.1528
Number features: 21
model_0
Train: [0/2931 (0%)] Loss: 0.311821
Train: [1098/2931 (100%)] Loss: 0.156139
Epoch: 1/10. Train set: Average loss: 0.1566
Epoch: 1/10. Validation set: Average loss: 0.1349
Train: [0/2931 (0%)] Loss: 0.103300
Train: [1098/2931 (100%)] Loss: 0.151093
Epoch: 2/10. Train set: Average loss: 0.1510
Epoch: 2/10. Validation set: Average loss: 0.1522
Train: [0/2931 (0%)] Loss: 0.206736
Train: [1098/2931 (100%)] Loss: 0.154567
Epoch: 3/10. Train set: Average loss: 0.1547
Epoch: 3/10. Validation set: Average loss: 0.1470
Train: [0/2931 (0%)] Loss: 0.069796
Train: [1098/2931 (100%)] Loss: 0.157099
Epoch: 4/10. Train set: Average loss: 0.1569
Epoch: 4/10. Validation set: Average loss: 0.1576
Train: [0/2931 (0%)] Loss: 0.209593
Train: [1098/2931 (100%)] Loss: 0.161792
Epoch: 5/10. Train set: Average loss: 0.1619
Epoch: 5/10. Validation set: Average loss: 0.1388
Train: [0/2931 (0%)] Loss: 0.192427
Train: [1098/2931 (100%)] Loss: 0.162663
Epoch: 6/10. Train set: Average loss: 0.1627
Epoch: 6/10. Validation set: Average loss: 0.1560
Train: [0/2931 (0%)] Loss: 0.171165
Train: [1098/2931 (100%)] Loss: 0.160294
Epoch: 7/10. Train set: Average loss: 0.1603
Epoch: 7/10. Validation set: Average loss: 0.1567
Train: [0/2931 (0%)] Loss: 0.199443
Train: [1098/2931 (100%)] Loss: 0.161632
Epoch: 8/10. Train set: Average loss: 0.1617
Epoch: 8/10. Validation set: Average loss: 0.1724
Train: [0/2931 (0%)] Loss: 0.129972
Train: [1098/2931 (100%)] Loss: 0.159815
Epoch: 9/10. Train set: Average loss: 0.1597
Epoch: 9/10. Validation set: Average loss: 0.1738
Train: [0/2931 (0%)] Loss: 0.229063
Train: [1098/2931 (100%)] Loss: 0.156525
Epoch: 10/10. Train set: Average loss: 0.1567
Epoch: 10/10. Validation set: Average loss: 0.1711
model_1
Train: [0/2931 (0%)] Loss: 0.311087
Train: [1098/2931 (100%)] Loss: 0.151448
Epoch: 1/10. Train set: Average loss: 0.1519
Epoch: 1/10. Validation set: Average loss: 0.1515
Train: [0/2931 (0%)] Loss: 0.230338
Train: [1098/2931 (100%)] Loss: 0.153459
Epoch: 2/10. Train set: Average loss: 0.1537
Epoch: 2/10. Validation set: Average loss: 0.1682
Train: [0/2931 (0%)] Loss: 0.141554
Train: [1098/2931 (100%)] Loss: 0.147873
Epoch: 3/10. Train set: Average loss: 0.1479
Epoch: 3/10. Validation set: Average loss: 0.1361
Train: [0/2931 (0%)] Loss: 0.176859
Train: [1098/2931 (100%)] Loss: 0.148870
Epoch: 4/10. Train set: Average loss: 0.1489
Epoch: 4/10. Validation set: Average loss: 0.1389
Train: [0/2931 (0%)] Loss: 0.221769
Train: [1098/2931 (100%)] Loss: 0.155274
Epoch: 5/10. Train set: Average loss: 0.1555
Epoch: 5/10. Validation set: Average loss: 0.1645
Train: [0/2931 (0%)] Loss: 0.194328
Train: [1098/2931 (100%)] Loss: 0.159970
Epoch: 6/10. Train set: Average loss: 0.1601
Epoch: 6/10. Validation set: Average loss: 0.1364
Train: [0/2931 (0%)] Loss: 0.201956
Train: [1098/2931 (100%)] Loss: 0.151084
Epoch: 7/10. Train set: Average loss: 0.1512
Epoch: 7/10. Validation set: Average loss: 0.1331
Train: [0/2931 (0%)] Loss: 0.141283
Train: [1098/2931 (100%)] Loss: 0.147487
Epoch: 8/10. Train set: Average loss: 0.1475
Epoch: 8/10. Validation set: Average loss: 0.1458
Train: [0/2931 (0%)] Loss: 0.174876
Train: [1098/2931 (100%)] Loss: 0.146295
Epoch: 9/10. Train set: Average loss: 0.1464
Epoch: 9/10. Validation set: Average loss: 0.1446
Train: [0/2931 (0%)] Loss: 0.229624
Train: [1098/2931 (100%)] Loss: 0.143104
Epoch: 10/10. Train set: Average loss: 0.1433
Epoch: 10/10. Validation set: Average loss: 0.1476
model_2
Train: [0/2931 (0%)] Loss: 0.124196
Train: [1098/2931 (100%)] Loss: 0.158693
Epoch: 1/10. Train set: Average loss: 0.1586
Epoch: 1/10. Validation set: Average loss: 0.1304
Train: [0/2931 (0%)] Loss: 0.169038
Train: [1098/2931 (100%)] Loss: 0.153569
Epoch: 2/10. Train set: Average loss: 0.1536
Epoch: 2/10. Validation set: Average loss: 0.1476
Train: [0/2931 (0%)] Loss: 0.124688
Train: [1098/2931 (100%)] Loss: 0.156377
Epoch: 3/10. Train set: Average loss: 0.1563
Epoch: 3/10. Validation set: Average loss: 0.1625
Train: [0/2931 (0%)] Loss: 0.220992
Train: [1098/2931 (100%)] Loss: 0.165940
Epoch: 4/10. Train set: Average loss: 0.1661
Epoch: 4/10. Validation set: Average loss: 0.1481
Train: [0/2931 (0%)] Loss: 0.120357
Train: [1098/2931 (100%)] Loss: 0.151562
Epoch: 5/10. Train set: Average loss: 0.1515
Epoch: 5/10. Validation set: Average loss: 0.1510
Train: [0/2931 (0%)] Loss: 0.208990
Train: [1098/2931 (100%)] Loss: 0.153370
Epoch: 6/10. Train set: Average loss: 0.1535
Epoch: 6/10. Validation set: Average loss: 0.1369
Train: [0/2931 (0%)] Loss: 0.143091
Train: [1098/2931 (100%)] Loss: 0.157740
Epoch: 7/10. Train set: Average loss: 0.1577
Epoch: 7/10. Validation set: Average loss: 0.1341
Train: [0/2931 (0%)] Loss: 0.077291
Train: [1098/2931 (100%)] Loss: 0.148966
Epoch: 8/10. Train set: Average loss: 0.1488
Epoch: 8/10. Validation set: Average loss: 0.1452
Train: [0/2931 (0%)] Loss: 0.147442
Train: [1098/2931 (100%)] Loss: 0.145572
Epoch: 9/10. Train set: Average loss: 0.1456
Epoch: 9/10. Validation set: Average loss: 0.1402
Train: [0/2931 (0%)] Loss: 0.094560
Train: [1098/2931 (100%)] Loss: 0.145362
Epoch: 10/10. Train set: Average loss: 0.1452
Epoch: 10/10. Validation set: Average loss: 0.1404
Number features: 22
model_0
Train: [0/2931 (0%)] Loss: 0.124696
Train: [1098/2931 (100%)] Loss: 0.159438
Epoch: 1/10. Train set: Average loss: 0.1593
Epoch: 1/10. Validation set: Average loss: 0.1550
Train: [0/2931 (0%)] Loss: 0.120150
Train: [1098/2931 (100%)] Loss: 0.159736
Epoch: 2/10. Train set: Average loss: 0.1596
Epoch: 2/10. Validation set: Average loss: 0.1239
Train: [0/2931 (0%)] Loss: 0.134753
Train: [1098/2931 (100%)] Loss: 0.160177
Epoch: 3/10. Train set: Average loss: 0.1601
Epoch: 3/10. Validation set: Average loss: 0.1202
Train: [0/2931 (0%)] Loss: 0.134195
Train: [1098/2931 (100%)] Loss: 0.156184
Epoch: 4/10. Train set: Average loss: 0.1561
Epoch: 4/10. Validation set: Average loss: 0.1521
Train: [0/2931 (0%)] Loss: 0.138874
Train: [1098/2931 (100%)] Loss: 0.158593
Epoch: 5/10. Train set: Average loss: 0.1585
Epoch: 5/10. Validation set: Average loss: 0.1402
Train: [0/2931 (0%)] Loss: 0.126166
Train: [1098/2931 (100%)] Loss: 0.160741
Epoch: 6/10. Train set: Average loss: 0.1606
Epoch: 6/10. Validation set: Average loss: 0.1470
Train: [0/2931 (0%)] Loss: 0.208170
Train: [1098/2931 (100%)] Loss: 0.150626
Epoch: 7/10. Train set: Average loss: 0.1508
Epoch: 7/10. Validation set: Average loss: 0.1325
Train: [0/2931 (0%)] Loss: 0.205419
Train: [1098/2931 (100%)] Loss: 0.155273
Epoch: 8/10. Train set: Average loss: 0.1554
Epoch: 8/10. Validation set: Average loss: 0.1467
Train: [0/2931 (0%)] Loss: 0.146881
Train: [1098/2931 (100%)] Loss: 0.147002
Epoch: 9/10. Train set: Average loss: 0.1470
Epoch: 9/10. Validation set: Average loss: 0.1370
Train: [0/2931 (0%)] Loss: 0.114335
Train: [1098/2931 (100%)] Loss: 0.144145
Epoch: 10/10. Train set: Average loss: 0.1441
Epoch: 10/10. Validation set: Average loss: 0.1370
model_1
Train: [0/2931 (0%)] Loss: 0.187314
Train: [1098/2931 (100%)] Loss: 0.157895
Epoch: 1/10. Train set: Average loss: 0.1580
Epoch: 1/10. Validation set: Average loss: 0.1285
Train: [0/2931 (0%)] Loss: 0.165673
Train: [1098/2931 (100%)] Loss: 0.159747
Epoch: 2/10. Train set: Average loss: 0.1598
Epoch: 2/10. Validation set: Average loss: 0.1284
Train: [0/2931 (0%)] Loss: 0.236949
Train: [1098/2931 (100%)] Loss: 0.170616
Epoch: 3/10. Train set: Average loss: 0.1708
Epoch: 3/10. Validation set: Average loss: 0.1471
Train: [0/2931 (0%)] Loss: 0.085285
Train: [1098/2931 (100%)] Loss: 0.165115
Epoch: 4/10. Train set: Average loss: 0.1649
Epoch: 4/10. Validation set: Average loss: 0.1444
Train: [0/2931 (0%)] Loss: 0.174532
Train: [1098/2931 (100%)] Loss: 0.156624
Epoch: 5/10. Train set: Average loss: 0.1567
Epoch: 5/10. Validation set: Average loss: 0.1464
Train: [0/2931 (0%)] Loss: 0.095620
Train: [1098/2931 (100%)] Loss: 0.158752
Epoch: 6/10. Train set: Average loss: 0.1586
Epoch: 6/10. Validation set: Average loss: 0.1345
Train: [0/2931 (0%)] Loss: 0.137457
Train: [1098/2931 (100%)] Loss: 0.158780
Epoch: 7/10. Train set: Average loss: 0.1587
Epoch: 7/10. Validation set: Average loss: 0.1355
Train: [0/2931 (0%)] Loss: 0.145613
Train: [1098/2931 (100%)] Loss: 0.150528
Epoch: 8/10. Train set: Average loss: 0.1505
Epoch: 8/10. Validation set: Average loss: 0.1284
Train: [0/2931 (0%)] Loss: 0.137733
Train: [1098/2931 (100%)] Loss: 0.143412
Epoch: 9/10. Train set: Average loss: 0.1434
Epoch: 9/10. Validation set: Average loss: 0.1315
Train: [0/2931 (0%)] Loss: 0.091347
Train: [1098/2931 (100%)] Loss: 0.145105
Epoch: 10/10. Train set: Average loss: 0.1450
Epoch: 10/10. Validation set: Average loss: 0.1367
model_2
Train: [0/2931 (0%)] Loss: 0.124571
Train: [1098/2931 (100%)] Loss: 0.163815
Epoch: 1/10. Train set: Average loss: 0.1637
Epoch: 1/10. Validation set: Average loss: 0.1410
Train: [0/2931 (0%)] Loss: 0.475120
Train: [1098/2931 (100%)] Loss: 0.162671
Epoch: 2/10. Train set: Average loss: 0.1635
Epoch: 2/10. Validation set: Average loss: 0.1428
Train: [0/2931 (0%)] Loss: 0.087581
Train: [1098/2931 (100%)] Loss: 0.158495
Epoch: 3/10. Train set: Average loss: 0.1583
Epoch: 3/10. Validation set: Average loss: 0.1215
Train: [0/2931 (0%)] Loss: 0.138482
Train: [1098/2931 (100%)] Loss: 0.155511
Epoch: 4/10. Train set: Average loss: 0.1555
Epoch: 4/10. Validation set: Average loss: 0.1366
Train: [0/2931 (0%)] Loss: 0.214536
Train: [1098/2931 (100%)] Loss: 0.156767
Epoch: 5/10. Train set: Average loss: 0.1569
Epoch: 5/10. Validation set: Average loss: 0.1910
Train: [0/2931 (0%)] Loss: 0.277139
Train: [1098/2931 (100%)] Loss: 0.159966
Epoch: 6/10. Train set: Average loss: 0.1603
Epoch: 6/10. Validation set: Average loss: 0.1402
Train: [0/2931 (0%)] Loss: 0.291065
Train: [1098/2931 (100%)] Loss: 0.166456
Epoch: 7/10. Train set: Average loss: 0.1668
Epoch: 7/10. Validation set: Average loss: 0.1503
Train: [0/2931 (0%)] Loss: 0.070217
Train: [1098/2931 (100%)] Loss: 0.163137
Epoch: 8/10. Train set: Average loss: 0.1629
Epoch: 8/10. Validation set: Average loss: 0.1474
Train: [0/2931 (0%)] Loss: 0.149355
Train: [1098/2931 (100%)] Loss: 0.149555
Epoch: 9/10. Train set: Average loss: 0.1496
Epoch: 9/10. Validation set: Average loss: 0.1318
Train: [0/2931 (0%)] Loss: 0.194628
Train: [1098/2931 (100%)] Loss: 0.143389
Epoch: 10/10. Train set: Average loss: 0.1435
Epoch: 10/10. Validation set: Average loss: 0.1308
Number features: 23
model_0
Train: [0/2931 (0%)] Loss: 0.062346
Train: [1098/2931 (100%)] Loss: 0.169242
Epoch: 1/10. Train set: Average loss: 0.1690
Epoch: 1/10. Validation set: Average loss: 0.1758
Train: [0/2931 (0%)] Loss: 0.509744
Train: [1098/2931 (100%)] Loss: 0.163322
Epoch: 2/10. Train set: Average loss: 0.1643
Epoch: 2/10. Validation set: Average loss: 0.1539
Train: [0/2931 (0%)] Loss: 0.067546
Train: [1098/2931 (100%)] Loss: 0.156652
Epoch: 3/10. Train set: Average loss: 0.1564
Epoch: 3/10. Validation set: Average loss: 0.1852
Train: [0/2931 (0%)] Loss: 0.285536
Train: [1098/2931 (100%)] Loss: 0.155351
Epoch: 4/10. Train set: Average loss: 0.1557
Epoch: 4/10. Validation set: Average loss: 0.1678
Train: [0/2931 (0%)] Loss: 0.146143
Train: [1098/2931 (100%)] Loss: 0.156128
Epoch: 5/10. Train set: Average loss: 0.1561
Epoch: 5/10. Validation set: Average loss: 0.2217
Train: [0/2931 (0%)] Loss: 0.502355
Train: [1098/2931 (100%)] Loss: 0.171076
Epoch: 6/10. Train set: Average loss: 0.1720
Epoch: 6/10. Validation set: Average loss: 0.2204
Train: [0/2931 (0%)] Loss: 0.170802
Train: [1098/2931 (100%)] Loss: 0.167204
Epoch: 7/10. Train set: Average loss: 0.1672
Epoch: 7/10. Validation set: Average loss: 0.1684
Train: [0/2931 (0%)] Loss: 0.119273
Train: [1098/2931 (100%)] Loss: 0.166338
Epoch: 8/10. Train set: Average loss: 0.1662
Epoch: 8/10. Validation set: Average loss: 0.1671
Train: [0/2931 (0%)] Loss: 0.294824
Train: [1098/2931 (100%)] Loss: 0.153682
Epoch: 9/10. Train set: Average loss: 0.1541
Epoch: 9/10. Validation set: Average loss: 0.1508
Train: [0/2931 (0%)] Loss: 0.073026
Train: [1098/2931 (100%)] Loss: 0.146199
Epoch: 10/10. Train set: Average loss: 0.1460
Epoch: 10/10. Validation set: Average loss: 0.1504
model_1
Train: [0/2931 (0%)] Loss: 0.249586
Train: [1098/2931 (100%)] Loss: 0.173407
Epoch: 1/10. Train set: Average loss: 0.1736
Epoch: 1/10. Validation set: Average loss: 0.2010
Train: [0/2931 (0%)] Loss: 0.084303
Train: [1098/2931 (100%)] Loss: 0.178212
Epoch: 2/10. Train set: Average loss: 0.1780
Epoch: 2/10. Validation set: Average loss: 0.1848
Train: [0/2931 (0%)] Loss: 0.094448
Train: [1098/2931 (100%)] Loss: 0.170586
Epoch: 3/10. Train set: Average loss: 0.1704
Epoch: 3/10. Validation set: Average loss: 0.3427
Train: [0/2931 (0%)] Loss: 0.193522
Train: [1098/2931 (100%)] Loss: 0.165428
Epoch: 4/10. Train set: Average loss: 0.1655
Epoch: 4/10. Validation set: Average loss: 0.2841
Train: [0/2931 (0%)] Loss: 0.356819
Train: [1098/2931 (100%)] Loss: 0.166530
Epoch: 5/10. Train set: Average loss: 0.1670
Epoch: 5/10. Validation set: Average loss: 0.2324
Train: [0/2931 (0%)] Loss: 0.312755
Train: [1098/2931 (100%)] Loss: 0.157574
Epoch: 6/10. Train set: Average loss: 0.1580
Epoch: 6/10. Validation set: Average loss: 0.1973
Train: [0/2931 (0%)] Loss: 0.070177
Train: [1098/2931 (100%)] Loss: 0.154287
Epoch: 7/10. Train set: Average loss: 0.1541
Epoch: 7/10. Validation set: Average loss: 0.2115
Train: [0/2931 (0%)] Loss: 0.247115
Train: [1098/2931 (100%)] Loss: 0.149679
Epoch: 8/10. Train set: Average loss: 0.1499
Epoch: 8/10. Validation set: Average loss: 0.2435
Train: [0/2931 (0%)] Loss: 0.400878
Train: [1098/2931 (100%)] Loss: 0.152308
Epoch: 9/10. Train set: Average loss: 0.1530
Epoch: 9/10. Validation set: Average loss: 0.1515
Train: [0/2931 (0%)] Loss: 0.176842
Train: [1098/2931 (100%)] Loss: 0.146074
Epoch: 10/10. Train set: Average loss: 0.1462
Epoch: 10/10. Validation set: Average loss: 0.1499
model_2
Train: [0/2931 (0%)] Loss: 0.312018
Train: [1098/2931 (100%)] Loss: 0.168900
Epoch: 1/10. Train set: Average loss: 0.1693
Epoch: 1/10. Validation set: Average loss: 0.3932
Train: [0/2931 (0%)] Loss: 1.666468
Train: [1098/2931 (100%)] Loss: 0.168528
Epoch: 2/10. Train set: Average loss: 0.1726
Epoch: 2/10. Validation set: Average loss: 0.1648
Train: [0/2931 (0%)] Loss: 0.165081
Train: [1098/2931 (100%)] Loss: 0.176593
Epoch: 3/10. Train set: Average loss: 0.1766
Epoch: 3/10. Validation set: Average loss: 0.1693
Train: [0/2931 (0%)] Loss: 0.141771
Train: [1098/2931 (100%)] Loss: 0.166012
Epoch: 4/10. Train set: Average loss: 0.1659
Epoch: 4/10. Validation set: Average loss: 0.3335
Train: [0/2931 (0%)] Loss: 0.262634
Train: [1098/2931 (100%)] Loss: 0.172872
Epoch: 5/10. Train set: Average loss: 0.1731
Epoch: 5/10. Validation set: Average loss: 0.1810
Train: [0/2931 (0%)] Loss: 0.110803
Train: [1098/2931 (100%)] Loss: 0.167474
Epoch: 6/10. Train set: Average loss: 0.1673
Epoch: 6/10. Validation set: Average loss: 0.1697
Train: [0/2931 (0%)] Loss: 0.155922
Train: [1098/2931 (100%)] Loss: 0.162157
Epoch: 7/10. Train set: Average loss: 0.1621
Epoch: 7/10. Validation set: Average loss: 0.2046
Train: [0/2931 (0%)] Loss: 0.237859
Train: [1098/2931 (100%)] Loss: 0.158385
Epoch: 8/10. Train set: Average loss: 0.1586
Epoch: 8/10. Validation set: Average loss: 0.2245
Train: [0/2931 (0%)] Loss: 0.157416
Train: [1098/2931 (100%)] Loss: 0.150219
Epoch: 9/10. Train set: Average loss: 0.1502
Epoch: 9/10. Validation set: Average loss: 0.1499
Train: [0/2931 (0%)] Loss: 0.148553
Train: [1098/2931 (100%)] Loss: 0.147616
Epoch: 10/10. Train set: Average loss: 0.1476
Epoch: 10/10. Validation set: Average loss: 0.1510
Number features: 24
model_0
Train: [0/2931 (0%)] Loss: 0.187213
Train: [1098/2931 (100%)] Loss: 0.168461
Epoch: 1/10. Train set: Average loss: 0.1685
Epoch: 1/10. Validation set: Average loss: 0.1490
Train: [0/2931 (0%)] Loss: 0.169652
Train: [1098/2931 (100%)] Loss: 0.172000
Epoch: 2/10. Train set: Average loss: 0.1720
Epoch: 2/10. Validation set: Average loss: 0.1601
Train: [0/2931 (0%)] Loss: 0.090425
Train: [1098/2931 (100%)] Loss: 0.171559
Epoch: 3/10. Train set: Average loss: 0.1713
Epoch: 3/10. Validation set: Average loss: 0.1328
Train: [0/2931 (0%)] Loss: 0.121009
Train: [1098/2931 (100%)] Loss: 0.165797
Epoch: 4/10. Train set: Average loss: 0.1657
Epoch: 4/10. Validation set: Average loss: 0.1403
Train: [0/2931 (0%)] Loss: 0.132607
Train: [1098/2931 (100%)] Loss: 0.165958
Epoch: 5/10. Train set: Average loss: 0.1659
Epoch: 5/10. Validation set: Average loss: 0.1308
Train: [0/2931 (0%)] Loss: 0.134930
Train: [1098/2931 (100%)] Loss: 0.158596
Epoch: 6/10. Train set: Average loss: 0.1585
Epoch: 6/10. Validation set: Average loss: 0.1435
Train: [0/2931 (0%)] Loss: 0.181504
Train: [1098/2931 (100%)] Loss: 0.165291
Epoch: 7/10. Train set: Average loss: 0.1653
Epoch: 7/10. Validation set: Average loss: 0.1602
Train: [0/2931 (0%)] Loss: 0.167018
Train: [1098/2931 (100%)] Loss: 0.163799
Epoch: 8/10. Train set: Average loss: 0.1638
Epoch: 8/10. Validation set: Average loss: 0.1691
Train: [0/2931 (0%)] Loss: 0.151216
Train: [1098/2931 (100%)] Loss: 0.157533
Epoch: 9/10. Train set: Average loss: 0.1575
Epoch: 9/10. Validation set: Average loss: 0.1438
Train: [0/2931 (0%)] Loss: 0.115693
Train: [1098/2931 (100%)] Loss: 0.148854
Epoch: 10/10. Train set: Average loss: 0.1488
Epoch: 10/10. Validation set: Average loss: 0.1487
model_1
Train: [0/2931 (0%)] Loss: 0.186360
Train: [1098/2931 (100%)] Loss: 0.171983
Epoch: 1/10. Train set: Average loss: 0.1720
Epoch: 1/10. Validation set: Average loss: 0.1515
Train: [0/2931 (0%)] Loss: 0.173244
Train: [1098/2931 (100%)] Loss: 0.159895
Epoch: 2/10. Train set: Average loss: 0.1599
Epoch: 2/10. Validation set: Average loss: 0.1300
Train: [0/2931 (0%)] Loss: 0.114230
Train: [1098/2931 (100%)] Loss: 0.162674
Epoch: 3/10. Train set: Average loss: 0.1625
Epoch: 3/10. Validation set: Average loss: 0.1331
Train: [0/2931 (0%)] Loss: 0.184205
Train: [1098/2931 (100%)] Loss: 0.156392
Epoch: 4/10. Train set: Average loss: 0.1565
Epoch: 4/10. Validation set: Average loss: 0.1431
Train: [0/2931 (0%)] Loss: 0.107182
Train: [1098/2931 (100%)] Loss: 0.151586
Epoch: 5/10. Train set: Average loss: 0.1515
Epoch: 5/10. Validation set: Average loss: 0.1264
Train: [0/2931 (0%)] Loss: 0.140219
Train: [1098/2931 (100%)] Loss: 0.158467
Epoch: 6/10. Train set: Average loss: 0.1584
Epoch: 6/10. Validation set: Average loss: 0.1369
Train: [0/2931 (0%)] Loss: 0.209621
Train: [1098/2931 (100%)] Loss: 0.155362
Epoch: 7/10. Train set: Average loss: 0.1555
Epoch: 7/10. Validation set: Average loss: 0.1425
Train: [0/2931 (0%)] Loss: 0.103968
Train: [1098/2931 (100%)] Loss: 0.153822
Epoch: 8/10. Train set: Average loss: 0.1537
Epoch: 8/10. Validation set: Average loss: 0.1464
Train: [0/2931 (0%)] Loss: 0.119293
Train: [1098/2931 (100%)] Loss: 0.149289
Epoch: 9/10. Train set: Average loss: 0.1492
Epoch: 9/10. Validation set: Average loss: 0.1379
Train: [0/2931 (0%)] Loss: 0.127050
Train: [1098/2931 (100%)] Loss: 0.148340
Epoch: 10/10. Train set: Average loss: 0.1483
Epoch: 10/10. Validation set: Average loss: 0.1352
model_2
Train: [0/2931 (0%)] Loss: 0.311953
Train: [1098/2931 (100%)] Loss: 0.180374
Epoch: 1/10. Train set: Average loss: 0.1807
Epoch: 1/10. Validation set: Average loss: 0.1247
Train: [0/2931 (0%)] Loss: 0.157428
Train: [1098/2931 (100%)] Loss: 0.160970
Epoch: 2/10. Train set: Average loss: 0.1610
Epoch: 2/10. Validation set: Average loss: 0.1413
Train: [0/2931 (0%)] Loss: 0.174404
Train: [1098/2931 (100%)] Loss: 0.168358
Epoch: 3/10. Train set: Average loss: 0.1684
Epoch: 3/10. Validation set: Average loss: 0.1250
Train: [0/2931 (0%)] Loss: 0.260466
Train: [1098/2931 (100%)] Loss: 0.160974
Epoch: 4/10. Train set: Average loss: 0.1612
Epoch: 4/10. Validation set: Average loss: 0.1334
Train: [0/2931 (0%)] Loss: 0.156847
Train: [1098/2931 (100%)] Loss: 0.164449
Epoch: 5/10. Train set: Average loss: 0.1644
Epoch: 5/10. Validation set: Average loss: 0.1516
Train: [0/2931 (0%)] Loss: 0.113917
Train: [1098/2931 (100%)] Loss: 0.166551
Epoch: 6/10. Train set: Average loss: 0.1664
Epoch: 6/10. Validation set: Average loss: 0.1447
Train: [0/2931 (0%)] Loss: 0.169778
Train: [1098/2931 (100%)] Loss: 0.168351
Epoch: 7/10. Train set: Average loss: 0.1684
Epoch: 7/10. Validation set: Average loss: 0.1357
Train: [0/2931 (0%)] Loss: 0.178612
Train: [1098/2931 (100%)] Loss: 0.166244
Epoch: 8/10. Train set: Average loss: 0.1663
Epoch: 8/10. Validation set: Average loss: 0.1311
Train: [0/2931 (0%)] Loss: 0.133452
Train: [1098/2931 (100%)] Loss: 0.148852
Epoch: 9/10. Train set: Average loss: 0.1488
Epoch: 9/10. Validation set: Average loss: 0.1352
Train: [0/2931 (0%)] Loss: 0.201153
Train: [1098/2931 (100%)] Loss: 0.151658
Epoch: 10/10. Train set: Average loss: 0.1518
Epoch: 10/10. Validation set: Average loss: 0.1363
Number features: 25
model_0
Train: [0/2931 (0%)] Loss: 0.249505
Train: [1098/2931 (100%)] Loss: 0.166777
Epoch: 1/10. Train set: Average loss: 0.1670
Epoch: 1/10. Validation set: Average loss: 0.1375
Train: [0/2931 (0%)] Loss: 0.172798
Train: [1098/2931 (100%)] Loss: 0.162926
Epoch: 2/10. Train set: Average loss: 0.1630
Epoch: 2/10. Validation set: Average loss: 0.1622
Train: [0/2931 (0%)] Loss: 0.161284
Train: [1098/2931 (100%)] Loss: 0.169247
Epoch: 3/10. Train set: Average loss: 0.1692
Epoch: 3/10. Validation set: Average loss: 0.1661
Train: [0/2931 (0%)] Loss: 0.184359
Train: [1098/2931 (100%)] Loss: 0.153864
Epoch: 4/10. Train set: Average loss: 0.1539
Epoch: 4/10. Validation set: Average loss: 0.1416
Train: [0/2931 (0%)] Loss: 0.150405
Train: [1098/2931 (100%)] Loss: 0.154939
Epoch: 5/10. Train set: Average loss: 0.1549
Epoch: 5/10. Validation set: Average loss: 0.1320
Train: [0/2931 (0%)] Loss: 0.178688
Train: [1098/2931 (100%)] Loss: 0.154491
Epoch: 6/10. Train set: Average loss: 0.1546
Epoch: 6/10. Validation set: Average loss: 0.1260
Train: [0/2931 (0%)] Loss: 0.235860
Train: [1098/2931 (100%)] Loss: 0.151567
Epoch: 7/10. Train set: Average loss: 0.1518
Epoch: 7/10. Validation set: Average loss: 0.1214
Train: [0/2931 (0%)] Loss: 0.158517
Train: [1098/2931 (100%)] Loss: 0.150299
Epoch: 8/10. Train set: Average loss: 0.1503
Epoch: 8/10. Validation set: Average loss: 0.1259
Train: [0/2931 (0%)] Loss: 0.129701
Train: [1098/2931 (100%)] Loss: 0.148928
Epoch: 9/10. Train set: Average loss: 0.1489
Epoch: 9/10. Validation set: Average loss: 0.1222
Train: [0/2931 (0%)] Loss: 0.204649
Train: [1098/2931 (100%)] Loss: 0.143269
Epoch: 10/10. Train set: Average loss: 0.1434
Epoch: 10/10. Validation set: Average loss: 0.1191
model_1
Train: [0/2931 (0%)] Loss: 0.312173
Train: [1098/2931 (100%)] Loss: 0.164818
Epoch: 1/10. Train set: Average loss: 0.1652
Epoch: 1/10. Validation set: Average loss: 0.1685
Train: [0/2931 (0%)] Loss: 0.146586
Train: [1098/2931 (100%)] Loss: 0.154665
Epoch: 2/10. Train set: Average loss: 0.1546
Epoch: 2/10. Validation set: Average loss: 0.1600
Train: [0/2931 (0%)] Loss: 0.153409
Train: [1098/2931 (100%)] Loss: 0.166401
Epoch: 3/10. Train set: Average loss: 0.1664
Epoch: 3/10. Validation set: Average loss: 0.1968
Train: [0/2931 (0%)] Loss: 0.167349
Train: [1098/2931 (100%)] Loss: 0.156237
Epoch: 4/10. Train set: Average loss: 0.1563
Epoch: 4/10. Validation set: Average loss: 0.1413
Train: [0/2931 (0%)] Loss: 0.258663
Train: [1098/2931 (100%)] Loss: 0.157172
Epoch: 5/10. Train set: Average loss: 0.1574
Epoch: 5/10. Validation set: Average loss: 0.1589
Train: [0/2931 (0%)] Loss: 0.087240
Train: [1098/2931 (100%)] Loss: 0.157485
Epoch: 6/10. Train set: Average loss: 0.1573
Epoch: 6/10. Validation set: Average loss: 0.1396
Train: [0/2931 (0%)] Loss: 0.147783
Train: [1098/2931 (100%)] Loss: 0.154882
Epoch: 7/10. Train set: Average loss: 0.1549
Epoch: 7/10. Validation set: Average loss: 0.1470
Train: [0/2931 (0%)] Loss: 0.145846
Train: [1098/2931 (100%)] Loss: 0.151903
Epoch: 8/10. Train set: Average loss: 0.1519
Epoch: 8/10. Validation set: Average loss: 0.1430
Train: [0/2931 (0%)] Loss: 0.216498
Train: [1098/2931 (100%)] Loss: 0.144621
Epoch: 9/10. Train set: Average loss: 0.1448
Epoch: 9/10. Validation set: Average loss: 0.1281
Train: [0/2931 (0%)] Loss: 0.148740
Train: [1098/2931 (100%)] Loss: 0.144274
Epoch: 10/10. Train set: Average loss: 0.1443
Epoch: 10/10. Validation set: Average loss: 0.1282
model_2
Train: [0/2931 (0%)] Loss: 0.249204
Train: [1098/2931 (100%)] Loss: 0.168339
Epoch: 1/10. Train set: Average loss: 0.1686
Epoch: 1/10. Validation set: Average loss: 0.1565
Train: [0/2931 (0%)] Loss: 0.201386
Train: [1098/2931 (100%)] Loss: 0.153754
Epoch: 2/10. Train set: Average loss: 0.1539
Epoch: 2/10. Validation set: Average loss: 0.1349
Train: [0/2931 (0%)] Loss: 0.217741
Train: [1098/2931 (100%)] Loss: 0.160106
Epoch: 3/10. Train set: Average loss: 0.1603
Epoch: 3/10. Validation set: Average loss: 0.1665
Train: [0/2931 (0%)] Loss: 0.233445
Train: [1098/2931 (100%)] Loss: 0.153743
Epoch: 4/10. Train set: Average loss: 0.1540
Epoch: 4/10. Validation set: Average loss: 0.1656
Train: [0/2931 (0%)] Loss: 0.139048
Train: [1098/2931 (100%)] Loss: 0.158087
Epoch: 5/10. Train set: Average loss: 0.1580
Epoch: 5/10. Validation set: Average loss: 0.1389
Train: [0/2931 (0%)] Loss: 0.132739
Train: [1098/2931 (100%)] Loss: 0.154876
Epoch: 6/10. Train set: Average loss: 0.1548
Epoch: 6/10. Validation set: Average loss: 0.1262
Train: [0/2931 (0%)] Loss: 0.219492
Train: [1098/2931 (100%)] Loss: 0.153706
Epoch: 7/10. Train set: Average loss: 0.1539
Epoch: 7/10. Validation set: Average loss: 0.1573
Train: [0/2931 (0%)] Loss: 0.147850
Train: [1098/2931 (100%)] Loss: 0.156438
Epoch: 8/10. Train set: Average loss: 0.1564
Epoch: 8/10. Validation set: Average loss: 0.1581
Train: [0/2931 (0%)] Loss: 0.142134
Train: [1098/2931 (100%)] Loss: 0.149638
Epoch: 9/10. Train set: Average loss: 0.1496
Epoch: 9/10. Validation set: Average loss: 0.1305
Train: [0/2931 (0%)] Loss: 0.169827
Train: [1098/2931 (100%)] Loss: 0.143885
Epoch: 10/10. Train set: Average loss: 0.1440
Epoch: 10/10. Validation set: Average loss: 0.1292
Number features: 26
model_0
Train: [0/2931 (0%)] Loss: 0.124704
Train: [1098/2931 (100%)] Loss: 0.171226
Epoch: 1/10. Train set: Average loss: 0.1711
Epoch: 1/10. Validation set: Average loss: 0.1580
Train: [0/2931 (0%)] Loss: 0.112250
Train: [1098/2931 (100%)] Loss: 0.175344
Epoch: 2/10. Train set: Average loss: 0.1752
Epoch: 2/10. Validation set: Average loss: 0.1330
Train: [0/2931 (0%)] Loss: 0.124082
Train: [1098/2931 (100%)] Loss: 0.172134
Epoch: 3/10. Train set: Average loss: 0.1720
Epoch: 3/10. Validation set: Average loss: 0.1312
Train: [0/2931 (0%)] Loss: 0.136928
Train: [1098/2931 (100%)] Loss: 0.171933
Epoch: 4/10. Train set: Average loss: 0.1718
Epoch: 4/10. Validation set: Average loss: 0.1352
Train: [0/2931 (0%)] Loss: 0.067254
Train: [1098/2931 (100%)] Loss: 0.158309
Epoch: 5/10. Train set: Average loss: 0.1581
Epoch: 5/10. Validation set: Average loss: 0.1460
Train: [0/2931 (0%)] Loss: 0.242276
Train: [1098/2931 (100%)] Loss: 0.160582
Epoch: 6/10. Train set: Average loss: 0.1608
Epoch: 6/10. Validation set: Average loss: 0.1358
Train: [0/2931 (0%)] Loss: 0.174257
Train: [1098/2931 (100%)] Loss: 0.156370
Epoch: 7/10. Train set: Average loss: 0.1564
Epoch: 7/10. Validation set: Average loss: 0.1234
Train: [0/2931 (0%)] Loss: 0.080392
Train: [1098/2931 (100%)] Loss: 0.158954
Epoch: 8/10. Train set: Average loss: 0.1587
Epoch: 8/10. Validation set: Average loss: 0.1455
Train: [0/2931 (0%)] Loss: 0.117181
Train: [1098/2931 (100%)] Loss: 0.150362
Epoch: 9/10. Train set: Average loss: 0.1503
Epoch: 9/10. Validation set: Average loss: 0.1327
Train: [0/2931 (0%)] Loss: 0.136321
Train: [1098/2931 (100%)] Loss: 0.146369
Epoch: 10/10. Train set: Average loss: 0.1463
Epoch: 10/10. Validation set: Average loss: 0.1311
model_1
Train: [0/2931 (0%)] Loss: 0.062274
Train: [1098/2931 (100%)] Loss: 0.182203
Epoch: 1/10. Train set: Average loss: 0.1819
Epoch: 1/10. Validation set: Average loss: 0.1349
Train: [0/2931 (0%)] Loss: 0.238996
Train: [1098/2931 (100%)] Loss: 0.171139
Epoch: 2/10. Train set: Average loss: 0.1713
Epoch: 2/10. Validation set: Average loss: 0.1284
Train: [0/2931 (0%)] Loss: 0.128562
Train: [1098/2931 (100%)] Loss: 0.162321
Epoch: 3/10. Train set: Average loss: 0.1622
Epoch: 3/10. Validation set: Average loss: 0.1278
Train: [0/2931 (0%)] Loss: 0.115398
Train: [1098/2931 (100%)] Loss: 0.164754
Epoch: 4/10. Train set: Average loss: 0.1646
Epoch: 4/10. Validation set: Average loss: 0.1496
Train: [0/2931 (0%)] Loss: 0.236575
Train: [1098/2931 (100%)] Loss: 0.165085
Epoch: 5/10. Train set: Average loss: 0.1653
Epoch: 5/10. Validation set: Average loss: 0.1343
Train: [0/2931 (0%)] Loss: 0.137180
Train: [1098/2931 (100%)] Loss: 0.155550
Epoch: 6/10. Train set: Average loss: 0.1555
Epoch: 6/10. Validation set: Average loss: 0.1198
Train: [0/2931 (0%)] Loss: 0.080653
Train: [1098/2931 (100%)] Loss: 0.159697
Epoch: 7/10. Train set: Average loss: 0.1595
Epoch: 7/10. Validation set: Average loss: 0.1521
Train: [0/2931 (0%)] Loss: 0.092243
Train: [1098/2931 (100%)] Loss: 0.163641
Epoch: 8/10. Train set: Average loss: 0.1634
Epoch: 8/10. Validation set: Average loss: 0.1357
Train: [0/2931 (0%)] Loss: 0.104282
Train: [1098/2931 (100%)] Loss: 0.149658
Epoch: 9/10. Train set: Average loss: 0.1495
Epoch: 9/10. Validation set: Average loss: 0.1468
Train: [0/2931 (0%)] Loss: 0.085362
Train: [1098/2931 (100%)] Loss: 0.154410
Epoch: 10/10. Train set: Average loss: 0.1542
Epoch: 10/10. Validation set: Average loss: 0.1269
model_2
Train: [0/2931 (0%)] Loss: 0.311960
Train: [1098/2931 (100%)] Loss: 0.166634
Epoch: 1/10. Train set: Average loss: 0.1670
Epoch: 1/10. Validation set: Average loss: 0.1327
Train: [0/2931 (0%)] Loss: 0.122402
Train: [1098/2931 (100%)] Loss: 0.173756
Epoch: 2/10. Train set: Average loss: 0.1736
Epoch: 2/10. Validation set: Average loss: 0.1691
Train: [0/2931 (0%)] Loss: 0.201551
Train: [1098/2931 (100%)] Loss: 0.183097
Epoch: 3/10. Train set: Average loss: 0.1831
Epoch: 3/10. Validation set: Average loss: 0.1276
Train: [0/2931 (0%)] Loss: 0.237154
Train: [1098/2931 (100%)] Loss: 0.158898
Epoch: 4/10. Train set: Average loss: 0.1591
Epoch: 4/10. Validation set: Average loss: 0.1266
Train: [0/2931 (0%)] Loss: 0.116748
Train: [1098/2931 (100%)] Loss: 0.158465
Epoch: 5/10. Train set: Average loss: 0.1584
Epoch: 5/10. Validation set: Average loss: 0.1331
Train: [0/2931 (0%)] Loss: 0.159547
Train: [1098/2931 (100%)] Loss: 0.158592
Epoch: 6/10. Train set: Average loss: 0.1586
Epoch: 6/10. Validation set: Average loss: 0.1382
Train: [0/2931 (0%)] Loss: 0.142852
Train: [1098/2931 (100%)] Loss: 0.161273
Epoch: 7/10. Train set: Average loss: 0.1612
Epoch: 7/10. Validation set: Average loss: 0.1474
Train: [0/2931 (0%)] Loss: 0.085167
Train: [1098/2931 (100%)] Loss: 0.157473
Epoch: 8/10. Train set: Average loss: 0.1573
Epoch: 8/10. Validation set: Average loss: 0.1288
Train: [0/2931 (0%)] Loss: 0.186916
Train: [1098/2931 (100%)] Loss: 0.145952
Epoch: 9/10. Train set: Average loss: 0.1461
Epoch: 9/10. Validation set: Average loss: 0.1172
Train: [0/2931 (0%)] Loss: 0.063336
Train: [1098/2931 (100%)] Loss: 0.150492
Epoch: 10/10. Train set: Average loss: 0.1503
Epoch: 10/10. Validation set: Average loss: 0.1192
Number features: 27
model_0
Train: [0/2931 (0%)] Loss: 0.249470
Train: [1098/2931 (100%)] Loss: 0.159181
Epoch: 1/10. Train set: Average loss: 0.1594
Epoch: 1/10. Validation set: Average loss: 0.1384
Train: [0/2931 (0%)] Loss: 0.174331
Train: [1098/2931 (100%)] Loss: 0.162680
Epoch: 2/10. Train set: Average loss: 0.1627
Epoch: 2/10. Validation set: Average loss: 0.1398
Train: [0/2931 (0%)] Loss: 0.138225
Train: [1098/2931 (100%)] Loss: 0.163358
Epoch: 3/10. Train set: Average loss: 0.1633
Epoch: 3/10. Validation set: Average loss: 0.1639
Train: [0/2931 (0%)] Loss: 0.209103
Train: [1098/2931 (100%)] Loss: 0.157782
Epoch: 4/10. Train set: Average loss: 0.1579
Epoch: 4/10. Validation set: Average loss: 0.1317
Train: [0/2931 (0%)] Loss: 0.234477
Train: [1098/2931 (100%)] Loss: 0.152426
Epoch: 5/10. Train set: Average loss: 0.1526
Epoch: 5/10. Validation set: Average loss: 0.1489
Train: [0/2931 (0%)] Loss: 0.136066
Train: [1098/2931 (100%)] Loss: 0.145746
Epoch: 6/10. Train set: Average loss: 0.1457
Epoch: 6/10. Validation set: Average loss: 0.1444
Train: [0/2931 (0%)] Loss: 0.221527
Train: [1098/2931 (100%)] Loss: 0.154401
Epoch: 7/10. Train set: Average loss: 0.1546
Epoch: 7/10. Validation set: Average loss: 0.1169
Train: [0/2931 (0%)] Loss: 0.189269
Train: [1098/2931 (100%)] Loss: 0.149211
Epoch: 8/10. Train set: Average loss: 0.1493
Epoch: 8/10. Validation set: Average loss: 0.1359
Train: [0/2931 (0%)] Loss: 0.197279
Train: [1098/2931 (100%)] Loss: 0.140353
Epoch: 9/10. Train set: Average loss: 0.1405
Epoch: 9/10. Validation set: Average loss: 0.1263
Train: [0/2931 (0%)] Loss: 0.159202
Train: [1098/2931 (100%)] Loss: 0.143515
Epoch: 10/10. Train set: Average loss: 0.1436
Epoch: 10/10. Validation set: Average loss: 0.1287
model_1
Train: [0/2931 (0%)] Loss: 0.249363
Train: [1098/2931 (100%)] Loss: 0.159831
Epoch: 1/10. Train set: Average loss: 0.1601
Epoch: 1/10. Validation set: Average loss: 0.1701
Train: [0/2931 (0%)] Loss: 0.129866
Train: [1098/2931 (100%)] Loss: 0.161317
Epoch: 2/10. Train set: Average loss: 0.1612
Epoch: 2/10. Validation set: Average loss: 0.1522
Train: [0/2931 (0%)] Loss: 0.167693
Train: [1098/2931 (100%)] Loss: 0.151470
Epoch: 3/10. Train set: Average loss: 0.1515
Epoch: 3/10. Validation set: Average loss: 0.1341
Train: [0/2931 (0%)] Loss: 0.181839
Train: [1098/2931 (100%)] Loss: 0.146119
Epoch: 4/10. Train set: Average loss: 0.1462
Epoch: 4/10. Validation set: Average loss: 0.1501
Train: [0/2931 (0%)] Loss: 0.158023
Train: [1098/2931 (100%)] Loss: 0.159043
Epoch: 5/10. Train set: Average loss: 0.1590
Epoch: 5/10. Validation set: Average loss: 0.1429
Train: [0/2931 (0%)] Loss: 0.093739
Train: [1098/2931 (100%)] Loss: 0.149973
Epoch: 6/10. Train set: Average loss: 0.1498
Epoch: 6/10. Validation set: Average loss: 0.1313
Train: [0/2931 (0%)] Loss: 0.150517
Train: [1098/2931 (100%)] Loss: 0.146856
Epoch: 7/10. Train set: Average loss: 0.1469
Epoch: 7/10. Validation set: Average loss: 0.1407
Train: [0/2931 (0%)] Loss: 0.153756
Train: [1098/2931 (100%)] Loss: 0.148574
Epoch: 8/10. Train set: Average loss: 0.1486
Epoch: 8/10. Validation set: Average loss: 0.1549
Train: [0/2931 (0%)] Loss: 0.127515
Train: [1098/2931 (100%)] Loss: 0.144436
Epoch: 9/10. Train set: Average loss: 0.1444
Epoch: 9/10. Validation set: Average loss: 0.1333
Train: [0/2931 (0%)] Loss: 0.161387
Train: [1098/2931 (100%)] Loss: 0.144136
Epoch: 10/10. Train set: Average loss: 0.1442
Epoch: 10/10. Validation set: Average loss: 0.1345
model_2
Train: [0/2931 (0%)] Loss: 0.186725
Train: [1098/2931 (100%)] Loss: 0.159494
Epoch: 1/10. Train set: Average loss: 0.1596
Epoch: 1/10. Validation set: Average loss: 0.1826
Train: [0/2931 (0%)] Loss: 0.113985
Train: [1098/2931 (100%)] Loss: 0.156810
Epoch: 2/10. Train set: Average loss: 0.1567
Epoch: 2/10. Validation set: Average loss: 0.1558
Train: [0/2931 (0%)] Loss: 0.102837
Train: [1098/2931 (100%)] Loss: 0.149513
Epoch: 3/10. Train set: Average loss: 0.1494
Epoch: 3/10. Validation set: Average loss: 0.1632
Train: [0/2931 (0%)] Loss: 0.093470
Train: [1098/2931 (100%)] Loss: 0.156041
Epoch: 4/10. Train set: Average loss: 0.1559
Epoch: 4/10. Validation set: Average loss: 0.1457
Train: [0/2931 (0%)] Loss: 0.097932
Train: [1098/2931 (100%)] Loss: 0.146024
Epoch: 5/10. Train set: Average loss: 0.1459
Epoch: 5/10. Validation set: Average loss: 0.1324
Train: [0/2931 (0%)] Loss: 0.128518
Train: [1098/2931 (100%)] Loss: 0.148148
Epoch: 6/10. Train set: Average loss: 0.1481
Epoch: 6/10. Validation set: Average loss: 0.1348
Train: [0/2931 (0%)] Loss: 0.176906
Train: [1098/2931 (100%)] Loss: 0.147265
Epoch: 7/10. Train set: Average loss: 0.1473
Epoch: 7/10. Validation set: Average loss: 0.1586
Train: [0/2931 (0%)] Loss: 0.118084
Train: [1098/2931 (100%)] Loss: 0.150712
Epoch: 8/10. Train set: Average loss: 0.1506
Epoch: 8/10. Validation set: Average loss: 0.1434
Train: [0/2931 (0%)] Loss: 0.128895
Train: [1098/2931 (100%)] Loss: 0.152512
Epoch: 9/10. Train set: Average loss: 0.1524
Epoch: 9/10. Validation set: Average loss: 0.1237
Train: [0/2931 (0%)] Loss: 0.226390
Train: [1098/2931 (100%)] Loss: 0.142281
Epoch: 10/10. Train set: Average loss: 0.1425
Epoch: 10/10. Validation set: Average loss: 0.1203
Number features: 28
model_0
Train: [0/2931 (0%)] Loss: 0.187094
Train: [1098/2931 (100%)] Loss: 0.164324
Epoch: 1/10. Train set: Average loss: 0.1644
Epoch: 1/10. Validation set: Average loss: 0.1298
Train: [0/2931 (0%)] Loss: 0.196688
Train: [1098/2931 (100%)] Loss: 0.158668
Epoch: 2/10. Train set: Average loss: 0.1588
Epoch: 2/10. Validation set: Average loss: 0.1600
Train: [0/2931 (0%)] Loss: 0.202048
Train: [1098/2931 (100%)] Loss: 0.153975
Epoch: 3/10. Train set: Average loss: 0.1541
Epoch: 3/10. Validation set: Average loss: 0.1338
Train: [0/2931 (0%)] Loss: 0.188108
Train: [1098/2931 (100%)] Loss: 0.156860
Epoch: 4/10. Train set: Average loss: 0.1569
Epoch: 4/10. Validation set: Average loss: 0.1678
Train: [0/2931 (0%)] Loss: 0.219401
Train: [1098/2931 (100%)] Loss: 0.152633
Epoch: 5/10. Train set: Average loss: 0.1528
Epoch: 5/10. Validation set: Average loss: 0.1623
Train: [0/2931 (0%)] Loss: 0.192210
Train: [1098/2931 (100%)] Loss: 0.147832
Epoch: 6/10. Train set: Average loss: 0.1480
Epoch: 6/10. Validation set: Average loss: 0.1697
Train: [0/2931 (0%)] Loss: 0.247310
Train: [1098/2931 (100%)] Loss: 0.155243
Epoch: 7/10. Train set: Average loss: 0.1555
Epoch: 7/10. Validation set: Average loss: 0.1409
Train: [0/2931 (0%)] Loss: 0.188480
Train: [1098/2931 (100%)] Loss: 0.148560
Epoch: 8/10. Train set: Average loss: 0.1487
Epoch: 8/10. Validation set: Average loss: 0.1773
Train: [0/2931 (0%)] Loss: 0.197589
Train: [1098/2931 (100%)] Loss: 0.154166
Epoch: 9/10. Train set: Average loss: 0.1543
Epoch: 9/10. Validation set: Average loss: 0.1337
Train: [0/2931 (0%)] Loss: 0.115733
Train: [1098/2931 (100%)] Loss: 0.147844
Epoch: 10/10. Train set: Average loss: 0.1478
Epoch: 10/10. Validation set: Average loss: 0.1323
model_1
Train: [0/2931 (0%)] Loss: 0.186960
Train: [1098/2931 (100%)] Loss: 0.165393
Epoch: 1/10. Train set: Average loss: 0.1655
Epoch: 1/10. Validation set: Average loss: 0.1474
Train: [0/2931 (0%)] Loss: 0.194405
Train: [1098/2931 (100%)] Loss: 0.159772
Epoch: 2/10. Train set: Average loss: 0.1599
Epoch: 2/10. Validation set: Average loss: 0.1843
Train: [0/2931 (0%)] Loss: 0.171187
Train: [1098/2931 (100%)] Loss: 0.159946
Epoch: 3/10. Train set: Average loss: 0.1600
Epoch: 3/10. Validation set: Average loss: 0.1762
Train: [0/2931 (0%)] Loss: 0.148010
Train: [1098/2931 (100%)] Loss: 0.154043
Epoch: 4/10. Train set: Average loss: 0.1540
Epoch: 4/10. Validation set: Average loss: 0.1660
Train: [0/2931 (0%)] Loss: 0.110486
Train: [1098/2931 (100%)] Loss: 0.153204
Epoch: 5/10. Train set: Average loss: 0.1531
Epoch: 5/10. Validation set: Average loss: 0.1624
Train: [0/2931 (0%)] Loss: 0.133013
Train: [1098/2931 (100%)] Loss: 0.153561
Epoch: 6/10. Train set: Average loss: 0.1535
Epoch: 6/10. Validation set: Average loss: 0.1624
Train: [0/2931 (0%)] Loss: 0.150881
Train: [1098/2931 (100%)] Loss: 0.150514
Epoch: 7/10. Train set: Average loss: 0.1505
Epoch: 7/10. Validation set: Average loss: 0.1643
Train: [0/2931 (0%)] Loss: 0.119197
Train: [1098/2931 (100%)] Loss: 0.150980
Epoch: 8/10. Train set: Average loss: 0.1509
Epoch: 8/10. Validation set: Average loss: 0.1676
Train: [0/2931 (0%)] Loss: 0.153296
Train: [1098/2931 (100%)] Loss: 0.147561
Epoch: 9/10. Train set: Average loss: 0.1476
Epoch: 9/10. Validation set: Average loss: 0.1307
Train: [0/2931 (0%)] Loss: 0.103125
Train: [1098/2931 (100%)] Loss: 0.142013
Epoch: 10/10. Train set: Average loss: 0.1419
Epoch: 10/10. Validation set: Average loss: 0.1317
model_2
Train: [0/2931 (0%)] Loss: 0.187086
Train: [1098/2931 (100%)] Loss: 0.163230
Epoch: 1/10. Train set: Average loss: 0.1633
Epoch: 1/10. Validation set: Average loss: 0.1881
Train: [0/2931 (0%)] Loss: 0.273574
Train: [1098/2931 (100%)] Loss: 0.159456
Epoch: 2/10. Train set: Average loss: 0.1598
Epoch: 2/10. Validation set: Average loss: 0.1599
Train: [0/2931 (0%)] Loss: 0.129542
Train: [1098/2931 (100%)] Loss: 0.160175
Epoch: 3/10. Train set: Average loss: 0.1601
Epoch: 3/10. Validation set: Average loss: 0.1885
Train: [0/2931 (0%)] Loss: 0.190118
Train: [1098/2931 (100%)] Loss: 0.162417
Epoch: 4/10. Train set: Average loss: 0.1625
Epoch: 4/10. Validation set: Average loss: 0.1752
Train: [0/2931 (0%)] Loss: 0.211888
Train: [1098/2931 (100%)] Loss: 0.159933
Epoch: 5/10. Train set: Average loss: 0.1601
Epoch: 5/10. Validation set: Average loss: 0.1525
Train: [0/2931 (0%)] Loss: 0.149306
Train: [1098/2931 (100%)] Loss: 0.157651
Epoch: 6/10. Train set: Average loss: 0.1576
Epoch: 6/10. Validation set: Average loss: 0.1589
Train: [0/2931 (0%)] Loss: 0.192896
Train: [1098/2931 (100%)] Loss: 0.150509
Epoch: 7/10. Train set: Average loss: 0.1506
Epoch: 7/10. Validation set: Average loss: 0.1560
Train: [0/2931 (0%)] Loss: 0.132654
Train: [1098/2931 (100%)] Loss: 0.149817
Epoch: 8/10. Train set: Average loss: 0.1498
Epoch: 8/10. Validation set: Average loss: 0.1594
Train: [0/2931 (0%)] Loss: 0.252325
Train: [1098/2931 (100%)] Loss: 0.148748
Epoch: 9/10. Train set: Average loss: 0.1490
Epoch: 9/10. Validation set: Average loss: 0.1368
Train: [0/2931 (0%)] Loss: 0.122695
Train: [1098/2931 (100%)] Loss: 0.146967
Epoch: 10/10. Train set: Average loss: 0.1469
Epoch: 10/10. Validation set: Average loss: 0.1363
Number features: 29
model_0
Train: [0/2931 (0%)] Loss: 0.187151
Train: [1098/2931 (100%)] Loss: 0.169946
Epoch: 1/10. Train set: Average loss: 0.1700
Epoch: 1/10. Validation set: Average loss: 0.1599
Train: [0/2931 (0%)] Loss: 0.139641
Train: [1098/2931 (100%)] Loss: 0.158696
Epoch: 2/10. Train set: Average loss: 0.1586
Epoch: 2/10. Validation set: Average loss: 0.1526
Train: [0/2931 (0%)] Loss: 0.038579
Train: [1098/2931 (100%)] Loss: 0.157308
Epoch: 3/10. Train set: Average loss: 0.1570
Epoch: 3/10. Validation set: Average loss: 0.1329
Train: [0/2931 (0%)] Loss: 0.162208
Train: [1098/2931 (100%)] Loss: 0.154768
Epoch: 4/10. Train set: Average loss: 0.1548
Epoch: 4/10. Validation set: Average loss: 0.1414
Train: [0/2931 (0%)] Loss: 0.117717
Train: [1098/2931 (100%)] Loss: 0.157052
Epoch: 5/10. Train set: Average loss: 0.1569
Epoch: 5/10. Validation set: Average loss: 0.1520
Train: [0/2931 (0%)] Loss: 0.081430
Train: [1098/2931 (100%)] Loss: 0.154324
Epoch: 6/10. Train set: Average loss: 0.1541
Epoch: 6/10. Validation set: Average loss: 0.1461
Train: [0/2931 (0%)] Loss: 0.141874
Train: [1098/2931 (100%)] Loss: 0.155396
Epoch: 7/10. Train set: Average loss: 0.1554
Epoch: 7/10. Validation set: Average loss: 0.1461
Train: [0/2931 (0%)] Loss: 0.148374
Train: [1098/2931 (100%)] Loss: 0.148227
Epoch: 8/10. Train set: Average loss: 0.1482
Epoch: 8/10. Validation set: Average loss: 0.1369
Train: [0/2931 (0%)] Loss: 0.094895
Train: [1098/2931 (100%)] Loss: 0.143871
Epoch: 9/10. Train set: Average loss: 0.1437
Epoch: 9/10. Validation set: Average loss: 0.1272
Train: [0/2931 (0%)] Loss: 0.141674
Train: [1098/2931 (100%)] Loss: 0.143974
Epoch: 10/10. Train set: Average loss: 0.1440
Epoch: 10/10. Validation set: Average loss: 0.1270
model_1
Train: [0/2931 (0%)] Loss: 0.186830
Train: [1098/2931 (100%)] Loss: 0.163896
Epoch: 1/10. Train set: Average loss: 0.1640
Epoch: 1/10. Validation set: Average loss: 0.1362
Train: [0/2931 (0%)] Loss: 0.264611
Train: [1098/2931 (100%)] Loss: 0.161377
Epoch: 2/10. Train set: Average loss: 0.1617
Epoch: 2/10. Validation set: Average loss: 0.1423
Train: [0/2931 (0%)] Loss: 0.138614
Train: [1098/2931 (100%)] Loss: 0.159175
Epoch: 3/10. Train set: Average loss: 0.1591
Epoch: 3/10. Validation set: Average loss: 0.1338
Train: [0/2931 (0%)] Loss: 0.146621
Train: [1098/2931 (100%)] Loss: 0.163240
Epoch: 4/10. Train set: Average loss: 0.1632
Epoch: 4/10. Validation set: Average loss: 0.1466
Train: [0/2931 (0%)] Loss: 0.113783
Train: [1098/2931 (100%)] Loss: 0.153152
Epoch: 5/10. Train set: Average loss: 0.1530
Epoch: 5/10. Validation set: Average loss: 0.1358
Train: [0/2931 (0%)] Loss: 0.096575
Train: [1098/2931 (100%)] Loss: 0.149658
Epoch: 6/10. Train set: Average loss: 0.1495
Epoch: 6/10. Validation set: Average loss: 0.1412
Train: [0/2931 (0%)] Loss: 0.179798
Train: [1098/2931 (100%)] Loss: 0.149637
Epoch: 7/10. Train set: Average loss: 0.1497
Epoch: 7/10. Validation set: Average loss: 0.1390
Train: [0/2931 (0%)] Loss: 0.097009
Train: [1098/2931 (100%)] Loss: 0.153575
Epoch: 8/10. Train set: Average loss: 0.1534
Epoch: 8/10. Validation set: Average loss: 0.1336
Train: [0/2931 (0%)] Loss: 0.251368
Train: [1098/2931 (100%)] Loss: 0.148225
Epoch: 9/10. Train set: Average loss: 0.1485
Epoch: 9/10. Validation set: Average loss: 0.1322
Train: [0/2931 (0%)] Loss: 0.070218
Train: [1098/2931 (100%)] Loss: 0.143443
Epoch: 10/10. Train set: Average loss: 0.1432
Epoch: 10/10. Validation set: Average loss: 0.1325
model_2
Train: [0/2931 (0%)] Loss: 0.187329
Train: [1098/2931 (100%)] Loss: 0.161806
Epoch: 1/10. Train set: Average loss: 0.1619
Epoch: 1/10. Validation set: Average loss: 0.1414
Train: [0/2931 (0%)] Loss: 0.173998
Train: [1098/2931 (100%)] Loss: 0.156668
Epoch: 2/10. Train set: Average loss: 0.1567
Epoch: 2/10. Validation set: Average loss: 0.1346
Train: [0/2931 (0%)] Loss: 0.164049
Train: [1098/2931 (100%)] Loss: 0.163945
Epoch: 3/10. Train set: Average loss: 0.1639
Epoch: 3/10. Validation set: Average loss: 0.1298
Train: [0/2931 (0%)] Loss: 0.154958
Train: [1098/2931 (100%)] Loss: 0.155633
Epoch: 4/10. Train set: Average loss: 0.1556
Epoch: 4/10. Validation set: Average loss: 0.1251
Train: [0/2931 (0%)] Loss: 0.108935
Train: [1098/2931 (100%)] Loss: 0.152857
Epoch: 5/10. Train set: Average loss: 0.1527
Epoch: 5/10. Validation set: Average loss: 0.1289
Train: [0/2931 (0%)] Loss: 0.075868
Train: [1098/2931 (100%)] Loss: 0.155791
Epoch: 6/10. Train set: Average loss: 0.1556
Epoch: 6/10. Validation set: Average loss: 0.1540
Train: [0/2931 (0%)] Loss: 0.154149
Train: [1098/2931 (100%)] Loss: 0.151899
Epoch: 7/10. Train set: Average loss: 0.1519
Epoch: 7/10. Validation set: Average loss: 0.1378
Train: [0/2931 (0%)] Loss: 0.129345
Train: [1098/2931 (100%)] Loss: 0.149898
Epoch: 8/10. Train set: Average loss: 0.1498
Epoch: 8/10. Validation set: Average loss: 0.1228
Train: [0/2931 (0%)] Loss: 0.145263
Train: [1098/2931 (100%)] Loss: 0.145774
Epoch: 9/10. Train set: Average loss: 0.1458
Epoch: 9/10. Validation set: Average loss: 0.1263
Train: [0/2931 (0%)] Loss: 0.087092
Train: [1098/2931 (100%)] Loss: 0.147097
Epoch: 10/10. Train set: Average loss: 0.1469
Epoch: 10/10. Validation set: Average loss: 0.1264
Number features: 30
model_0
Train: [0/2931 (0%)] Loss: 0.124691
Train: [1098/2931 (100%)] Loss: 0.169133
Epoch: 1/10. Train set: Average loss: 0.1690
Epoch: 1/10. Validation set: Average loss: 0.1721
Train: [0/2931 (0%)] Loss: 0.103310
Train: [1098/2931 (100%)] Loss: 0.158031
Epoch: 2/10. Train set: Average loss: 0.1579
Epoch: 2/10. Validation set: Average loss: 0.1485
Train: [0/2931 (0%)] Loss: 0.102753
Train: [1098/2931 (100%)] Loss: 0.153030
Epoch: 3/10. Train set: Average loss: 0.1529
Epoch: 3/10. Validation set: Average loss: 0.1535
Train: [0/2931 (0%)] Loss: 0.055598
Train: [1098/2931 (100%)] Loss: 0.160293
Epoch: 4/10. Train set: Average loss: 0.1600
Epoch: 4/10. Validation set: Average loss: 0.1688
Train: [0/2931 (0%)] Loss: 0.057415
Train: [1098/2931 (100%)] Loss: 0.156311
Epoch: 5/10. Train set: Average loss: 0.1560
Epoch: 5/10. Validation set: Average loss: 0.1649
Train: [0/2931 (0%)] Loss: 0.088550
Train: [1098/2931 (100%)] Loss: 0.158179
Epoch: 6/10. Train set: Average loss: 0.1580
Epoch: 6/10. Validation set: Average loss: 0.1460
Train: [0/2931 (0%)] Loss: 0.075446
Train: [1098/2931 (100%)] Loss: 0.148325
Epoch: 7/10. Train set: Average loss: 0.1481
Epoch: 7/10. Validation set: Average loss: 0.1554
Train: [0/2931 (0%)] Loss: 0.100993
Train: [1098/2931 (100%)] Loss: 0.146673
Epoch: 8/10. Train set: Average loss: 0.1465
Epoch: 8/10. Validation set: Average loss: 0.1666
Train: [0/2931 (0%)] Loss: 0.078778
Train: [1098/2931 (100%)] Loss: 0.154838
Epoch: 9/10. Train set: Average loss: 0.1546
Epoch: 9/10. Validation set: Average loss: 0.1416
Train: [0/2931 (0%)] Loss: 0.051940
Train: [1098/2931 (100%)] Loss: 0.142689
Epoch: 10/10. Train set: Average loss: 0.1424
Epoch: 10/10. Validation set: Average loss: 0.1378
model_1
Train: [0/2931 (0%)] Loss: 0.062425
Train: [1098/2931 (100%)] Loss: 0.164526
Epoch: 1/10. Train set: Average loss: 0.1642
Epoch: 1/10. Validation set: Average loss: 0.2111
Train: [0/2931 (0%)] Loss: 0.218390
Train: [1098/2931 (100%)] Loss: 0.156381
Epoch: 2/10. Train set: Average loss: 0.1566
Epoch: 2/10. Validation set: Average loss: 0.1735
Train: [0/2931 (0%)] Loss: 0.099059
Train: [1098/2931 (100%)] Loss: 0.151440
Epoch: 3/10. Train set: Average loss: 0.1513
Epoch: 3/10. Validation set: Average loss: 0.1487
Train: [0/2931 (0%)] Loss: 0.125248
Train: [1098/2931 (100%)] Loss: 0.151136
Epoch: 4/10. Train set: Average loss: 0.1511
Epoch: 4/10. Validation set: Average loss: 0.1570
Train: [0/2931 (0%)] Loss: 0.109496
Train: [1098/2931 (100%)] Loss: 0.152981
Epoch: 5/10. Train set: Average loss: 0.1529
Epoch: 5/10. Validation set: Average loss: 0.1644
Train: [0/2931 (0%)] Loss: 0.062087
Train: [1098/2931 (100%)] Loss: 0.151760
Epoch: 6/10. Train set: Average loss: 0.1515
Epoch: 6/10. Validation set: Average loss: 0.1728
Train: [0/2931 (0%)] Loss: 0.133733
Train: [1098/2931 (100%)] Loss: 0.147460
Epoch: 7/10. Train set: Average loss: 0.1474
Epoch: 7/10. Validation set: Average loss: 0.1560
Train: [0/2931 (0%)] Loss: 0.179337
Train: [1098/2931 (100%)] Loss: 0.150683
Epoch: 8/10. Train set: Average loss: 0.1508
Epoch: 8/10. Validation set: Average loss: 0.1669
Train: [0/2931 (0%)] Loss: 0.169967
Train: [1098/2931 (100%)] Loss: 0.149116
Epoch: 9/10. Train set: Average loss: 0.1492
Epoch: 9/10. Validation set: Average loss: 0.1410
Train: [0/2931 (0%)] Loss: 0.144343
Train: [1098/2931 (100%)] Loss: 0.143989
Epoch: 10/10. Train set: Average loss: 0.1440
Epoch: 10/10. Validation set: Average loss: 0.1406
model_2
Train: [0/2931 (0%)] Loss: 0.310828
Train: [1098/2931 (100%)] Loss: 0.166207
Epoch: 1/10. Train set: Average loss: 0.1666
Epoch: 1/10. Validation set: Average loss: 0.1622
Train: [0/2931 (0%)] Loss: 0.096186
Train: [1098/2931 (100%)] Loss: 0.153853
Epoch: 2/10. Train set: Average loss: 0.1537
Epoch: 2/10. Validation set: Average loss: 0.1625
Train: [0/2931 (0%)] Loss: 0.115149
Train: [1098/2931 (100%)] Loss: 0.152969
Epoch: 3/10. Train set: Average loss: 0.1529
Epoch: 3/10. Validation set: Average loss: 0.1734
Train: [0/2931 (0%)] Loss: 0.158804
Train: [1098/2931 (100%)] Loss: 0.146850
Epoch: 4/10. Train set: Average loss: 0.1469
Epoch: 4/10. Validation set: Average loss: 0.1736
Train: [0/2931 (0%)] Loss: 0.119094
Train: [1098/2931 (100%)] Loss: 0.150322
Epoch: 5/10. Train set: Average loss: 0.1502
Epoch: 5/10. Validation set: Average loss: 0.1666
Train: [0/2931 (0%)] Loss: 0.149572
Train: [1098/2931 (100%)] Loss: 0.158912
Epoch: 6/10. Train set: Average loss: 0.1589
Epoch: 6/10. Validation set: Average loss: 0.1765
Train: [0/2931 (0%)] Loss: 0.076768
Train: [1098/2931 (100%)] Loss: 0.151203
Epoch: 7/10. Train set: Average loss: 0.1510
Epoch: 7/10. Validation set: Average loss: 0.1636
Train: [0/2931 (0%)] Loss: 0.202072
Train: [1098/2931 (100%)] Loss: 0.143306
Epoch: 8/10. Train set: Average loss: 0.1435
Epoch: 8/10. Validation set: Average loss: 0.1582
Train: [0/2931 (0%)] Loss: 0.154329
Train: [1098/2931 (100%)] Loss: 0.141083
Epoch: 9/10. Train set: Average loss: 0.1411
Epoch: 9/10. Validation set: Average loss: 0.1479
Train: [0/2931 (0%)] Loss: 0.151627
Train: [1098/2931 (100%)] Loss: 0.143546
Epoch: 10/10. Train set: Average loss: 0.1436
Epoch: 10/10. Validation set: Average loss: 0.1475
Number features: 31
model_0
Train: [0/2931 (0%)] Loss: 0.248521
Train: [1098/2931 (100%)] Loss: 0.166950
Epoch: 1/10. Train set: Average loss: 0.1672
Epoch: 1/10. Validation set: Average loss: 0.1457
Train: [0/2931 (0%)] Loss: 0.182320
Train: [1098/2931 (100%)] Loss: 0.156441
Epoch: 2/10. Train set: Average loss: 0.1565
Epoch: 2/10. Validation set: Average loss: 0.1462
Train: [0/2931 (0%)] Loss: 0.185170
Train: [1098/2931 (100%)] Loss: 0.155619
Epoch: 3/10. Train set: Average loss: 0.1557
Epoch: 3/10. Validation set: Average loss: 0.1644
Train: [0/2931 (0%)] Loss: 0.105308
Train: [1098/2931 (100%)] Loss: 0.156844
Epoch: 4/10. Train set: Average loss: 0.1567
Epoch: 4/10. Validation set: Average loss: 0.1973
Train: [0/2931 (0%)] Loss: 0.163831
Train: [1098/2931 (100%)] Loss: 0.158001
Epoch: 5/10. Train set: Average loss: 0.1580
Epoch: 5/10. Validation set: Average loss: 0.1698
Train: [0/2931 (0%)] Loss: 0.181152
Train: [1098/2931 (100%)] Loss: 0.157006
Epoch: 6/10. Train set: Average loss: 0.1571
Epoch: 6/10. Validation set: Average loss: 0.1367
Train: [0/2931 (0%)] Loss: 0.209435
Train: [1098/2931 (100%)] Loss: 0.149188
Epoch: 7/10. Train set: Average loss: 0.1494
Epoch: 7/10. Validation set: Average loss: 0.1610
Train: [0/2931 (0%)] Loss: 0.129841
Train: [1098/2931 (100%)] Loss: 0.149563
Epoch: 8/10. Train set: Average loss: 0.1495
Epoch: 8/10. Validation set: Average loss: 0.1446
Train: [0/2931 (0%)] Loss: 0.147648
Train: [1098/2931 (100%)] Loss: 0.149522
Epoch: 9/10. Train set: Average loss: 0.1495
Epoch: 9/10. Validation set: Average loss: 0.1375
Train: [0/2931 (0%)] Loss: 0.145646
Train: [1098/2931 (100%)] Loss: 0.136413
Epoch: 10/10. Train set: Average loss: 0.1364
Epoch: 10/10. Validation set: Average loss: 0.1355
model_1
Train: [0/2931 (0%)] Loss: 0.124579
Train: [1098/2931 (100%)] Loss: 0.164295
Epoch: 1/10. Train set: Average loss: 0.1642
Epoch: 1/10. Validation set: Average loss: 0.1395
Train: [0/2931 (0%)] Loss: 0.126717
Train: [1098/2931 (100%)] Loss: 0.158481
Epoch: 2/10. Train set: Average loss: 0.1584
Epoch: 2/10. Validation set: Average loss: 0.1513
Train: [0/2931 (0%)] Loss: 0.061076
Train: [1098/2931 (100%)] Loss: 0.152363
Epoch: 3/10. Train set: Average loss: 0.1521
Epoch: 3/10. Validation set: Average loss: 0.1411
Train: [0/2931 (0%)] Loss: 0.121035
Train: [1098/2931 (100%)] Loss: 0.149102
Epoch: 4/10. Train set: Average loss: 0.1490
Epoch: 4/10. Validation set: Average loss: 0.1338
Train: [0/2931 (0%)] Loss: 0.206215
Train: [1098/2931 (100%)] Loss: 0.152189
Epoch: 5/10. Train set: Average loss: 0.1523
Epoch: 5/10. Validation set: Average loss: 0.1385
Train: [0/2931 (0%)] Loss: 0.087041
Train: [1098/2931 (100%)] Loss: 0.151436
Epoch: 6/10. Train set: Average loss: 0.1513
Epoch: 6/10. Validation set: Average loss: 0.1473
Train: [0/2931 (0%)] Loss: 0.087386
Train: [1098/2931 (100%)] Loss: 0.155614
Epoch: 7/10. Train set: Average loss: 0.1554
Epoch: 7/10. Validation set: Average loss: 0.1404
Train: [0/2931 (0%)] Loss: 0.080025
Train: [1098/2931 (100%)] Loss: 0.143509
Epoch: 8/10. Train set: Average loss: 0.1433
Epoch: 8/10. Validation set: Average loss: 0.1415
Train: [0/2931 (0%)] Loss: 0.095473
Train: [1098/2931 (100%)] Loss: 0.137173
Epoch: 9/10. Train set: Average loss: 0.1371
Epoch: 9/10. Validation set: Average loss: 0.1334
Train: [0/2931 (0%)] Loss: 0.107733
Train: [1098/2931 (100%)] Loss: 0.139883
Epoch: 10/10. Train set: Average loss: 0.1398
Epoch: 10/10. Validation set: Average loss: 0.1337
model_2
Train: [0/2931 (0%)] Loss: 0.186934
Train: [1098/2931 (100%)] Loss: 0.158998
Epoch: 1/10. Train set: Average loss: 0.1591
Epoch: 1/10. Validation set: Average loss: 0.1657
Train: [0/2931 (0%)] Loss: 0.086062
Train: [1098/2931 (100%)] Loss: 0.159568
Epoch: 2/10. Train set: Average loss: 0.1594
Epoch: 2/10. Validation set: Average loss: 0.1436
Train: [0/2931 (0%)] Loss: 0.220575
Train: [1098/2931 (100%)] Loss: 0.154894
Epoch: 3/10. Train set: Average loss: 0.1551
Epoch: 3/10. Validation set: Average loss: 0.1369
Train: [0/2931 (0%)] Loss: 0.142200
Train: [1098/2931 (100%)] Loss: 0.151321
Epoch: 4/10. Train set: Average loss: 0.1513
Epoch: 4/10. Validation set: Average loss: 0.1328
Train: [0/2931 (0%)] Loss: 0.149073
Train: [1098/2931 (100%)] Loss: 0.148801
Epoch: 5/10. Train set: Average loss: 0.1488
Epoch: 5/10. Validation set: Average loss: 0.1397
Train: [0/2931 (0%)] Loss: 0.102809
Train: [1098/2931 (100%)] Loss: 0.146777
Epoch: 6/10. Train set: Average loss: 0.1467
Epoch: 6/10. Validation set: Average loss: 0.1468
Train: [0/2931 (0%)] Loss: 0.094628
Train: [1098/2931 (100%)] Loss: 0.145475
Epoch: 7/10. Train set: Average loss: 0.1453
Epoch: 7/10. Validation set: Average loss: 0.1496
Train: [0/2931 (0%)] Loss: 0.122882
Train: [1098/2931 (100%)] Loss: 0.143522
Epoch: 8/10. Train set: Average loss: 0.1435
Epoch: 8/10. Validation set: Average loss: 0.1651
Train: [0/2931 (0%)] Loss: 0.143122
Train: [1098/2931 (100%)] Loss: 0.140964
Epoch: 9/10. Train set: Average loss: 0.1410
Epoch: 9/10. Validation set: Average loss: 0.1403
Train: [0/2931 (0%)] Loss: 0.145220
Train: [1098/2931 (100%)] Loss: 0.137472
Epoch: 10/10. Train set: Average loss: 0.1375
Epoch: 10/10. Validation set: Average loss: 0.1421
Number features: 32
model_0
Train: [0/2931 (0%)] Loss: 0.124792
Train: [1098/2931 (100%)] Loss: 0.170625
Epoch: 1/10. Train set: Average loss: 0.1705
Epoch: 1/10. Validation set: Average loss: 0.1553
Train: [0/2931 (0%)] Loss: 0.124237
Train: [1098/2931 (100%)] Loss: 0.159808
Epoch: 2/10. Train set: Average loss: 0.1597
Epoch: 2/10. Validation set: Average loss: 0.1581
Train: [0/2931 (0%)] Loss: 0.213428
Train: [1098/2931 (100%)] Loss: 0.155619
Epoch: 3/10. Train set: Average loss: 0.1558
Epoch: 3/10. Validation set: Average loss: 0.1505
Train: [0/2931 (0%)] Loss: 0.129372
Train: [1098/2931 (100%)] Loss: 0.156187
Epoch: 4/10. Train set: Average loss: 0.1561
Epoch: 4/10. Validation set: Average loss: 0.1489
Train: [0/2931 (0%)] Loss: 0.176683
Train: [1098/2931 (100%)] Loss: 0.154962
Epoch: 5/10. Train set: Average loss: 0.1550
Epoch: 5/10. Validation set: Average loss: 0.1547
Train: [0/2931 (0%)] Loss: 0.139751
Train: [1098/2931 (100%)] Loss: 0.152004
Epoch: 6/10. Train set: Average loss: 0.1520
Epoch: 6/10. Validation set: Average loss: 0.1655
Train: [0/2931 (0%)] Loss: 0.129002
Train: [1098/2931 (100%)] Loss: 0.147168
Epoch: 7/10. Train set: Average loss: 0.1471
Epoch: 7/10. Validation set: Average loss: 0.1448
Train: [0/2931 (0%)] Loss: 0.117316
Train: [1098/2931 (100%)] Loss: 0.151833
Epoch: 8/10. Train set: Average loss: 0.1517
Epoch: 8/10. Validation set: Average loss: 0.1671
Train: [0/2931 (0%)] Loss: 0.213926
Train: [1098/2931 (100%)] Loss: 0.147364
Epoch: 9/10. Train set: Average loss: 0.1475
Epoch: 9/10. Validation set: Average loss: 0.1437
Train: [0/2931 (0%)] Loss: 0.089912
Train: [1098/2931 (100%)] Loss: 0.141491
Epoch: 10/10. Train set: Average loss: 0.1414
Epoch: 10/10. Validation set: Average loss: 0.1411
model_1
Train: [0/2931 (0%)] Loss: 0.186993
Train: [1098/2931 (100%)] Loss: 0.167124
Epoch: 1/10. Train set: Average loss: 0.1672
Epoch: 1/10. Validation set: Average loss: 0.1585
Train: [0/2931 (0%)] Loss: 0.158679
Train: [1098/2931 (100%)] Loss: 0.156631
Epoch: 2/10. Train set: Average loss: 0.1566
Epoch: 2/10. Validation set: Average loss: 0.1683
Train: [0/2931 (0%)] Loss: 0.205695
Train: [1098/2931 (100%)] Loss: 0.164443
Epoch: 3/10. Train set: Average loss: 0.1646
Epoch: 3/10. Validation set: Average loss: 0.1494
Train: [0/2931 (0%)] Loss: 0.102792
Train: [1098/2931 (100%)] Loss: 0.153869
Epoch: 4/10. Train set: Average loss: 0.1537
Epoch: 4/10. Validation set: Average loss: 0.1426
Train: [0/2931 (0%)] Loss: 0.158867
Train: [1098/2931 (100%)] Loss: 0.151130
Epoch: 5/10. Train set: Average loss: 0.1512
Epoch: 5/10. Validation set: Average loss: 0.1535
Train: [0/2931 (0%)] Loss: 0.156237
Train: [1098/2931 (100%)] Loss: 0.146861
Epoch: 6/10. Train set: Average loss: 0.1469
Epoch: 6/10. Validation set: Average loss: 0.1568
Train: [0/2931 (0%)] Loss: 0.139261
Train: [1098/2931 (100%)] Loss: 0.148405
Epoch: 7/10. Train set: Average loss: 0.1484
Epoch: 7/10. Validation set: Average loss: 0.1573
Train: [0/2931 (0%)] Loss: 0.110176
Train: [1098/2931 (100%)] Loss: 0.147770
Epoch: 8/10. Train set: Average loss: 0.1477
Epoch: 8/10. Validation set: Average loss: 0.1644
Train: [0/2931 (0%)] Loss: 0.147693
Train: [1098/2931 (100%)] Loss: 0.149522
Epoch: 9/10. Train set: Average loss: 0.1495
Epoch: 9/10. Validation set: Average loss: 0.1356
Train: [0/2931 (0%)] Loss: 0.171455
Train: [1098/2931 (100%)] Loss: 0.142475
Epoch: 10/10. Train set: Average loss: 0.1426
Epoch: 10/10. Validation set: Average loss: 0.1357
model_2
Train: [0/2931 (0%)] Loss: 0.249710
Train: [1098/2931 (100%)] Loss: 0.163486
Epoch: 1/10. Train set: Average loss: 0.1637
Epoch: 1/10. Validation set: Average loss: 0.1406
Train: [0/2931 (0%)] Loss: 0.151565
Train: [1098/2931 (100%)] Loss: 0.160338
Epoch: 2/10. Train set: Average loss: 0.1603
Epoch: 2/10. Validation set: Average loss: 0.1521
Train: [0/2931 (0%)] Loss: 0.111501
Train: [1098/2931 (100%)] Loss: 0.154286
Epoch: 3/10. Train set: Average loss: 0.1542
Epoch: 3/10. Validation set: Average loss: 0.1526
Train: [0/2931 (0%)] Loss: 0.142037
Train: [1098/2931 (100%)] Loss: 0.150081
Epoch: 4/10. Train set: Average loss: 0.1501
Epoch: 4/10. Validation set: Average loss: 0.1545
Train: [0/2931 (0%)] Loss: 0.117892
Train: [1098/2931 (100%)] Loss: 0.153028
Epoch: 5/10. Train set: Average loss: 0.1529
Epoch: 5/10. Validation set: Average loss: 0.1590
Train: [0/2931 (0%)] Loss: 0.181125
Train: [1098/2931 (100%)] Loss: 0.147474
Epoch: 6/10. Train set: Average loss: 0.1476
Epoch: 6/10. Validation set: Average loss: 0.1542
Train: [0/2931 (0%)] Loss: 0.134787
Train: [1098/2931 (100%)] Loss: 0.146949
Epoch: 7/10. Train set: Average loss: 0.1469
Epoch: 7/10. Validation set: Average loss: 0.1455
Train: [0/2931 (0%)] Loss: 0.124792
Train: [1098/2931 (100%)] Loss: 0.147673
Epoch: 8/10. Train set: Average loss: 0.1476
Epoch: 8/10. Validation set: Average loss: 0.1497
Train: [0/2931 (0%)] Loss: 0.164092
Train: [1098/2931 (100%)] Loss: 0.145936
Epoch: 9/10. Train set: Average loss: 0.1460
Epoch: 9/10. Validation set: Average loss: 0.1371
Train: [0/2931 (0%)] Loss: 0.148835
Train: [1098/2931 (100%)] Loss: 0.139359
Epoch: 10/10. Train set: Average loss: 0.1394
Epoch: 10/10. Validation set: Average loss: 0.1368
Number features: 33
model_0
Train: [0/2931 (0%)] Loss: 0.311555
Train: [1098/2931 (100%)] Loss: 0.181689
Epoch: 1/10. Train set: Average loss: 0.1820
Epoch: 1/10. Validation set: Average loss: 0.1782
Train: [0/2931 (0%)] Loss: 0.245067
Train: [1098/2931 (100%)] Loss: 0.163333
Epoch: 2/10. Train set: Average loss: 0.1636
Epoch: 2/10. Validation set: Average loss: 0.1573
Train: [0/2931 (0%)] Loss: 0.187579
Train: [1098/2931 (100%)] Loss: 0.162447
Epoch: 3/10. Train set: Average loss: 0.1625
Epoch: 3/10. Validation set: Average loss: 0.1648
Train: [0/2931 (0%)] Loss: 0.197714
Train: [1098/2931 (100%)] Loss: 0.160323
Epoch: 4/10. Train set: Average loss: 0.1604
Epoch: 4/10. Validation set: Average loss: 0.1961
Train: [0/2931 (0%)] Loss: 0.229260
Train: [1098/2931 (100%)] Loss: 0.158844
Epoch: 5/10. Train set: Average loss: 0.1590
Epoch: 5/10. Validation set: Average loss: 0.1617
Train: [0/2931 (0%)] Loss: 0.203248
Train: [1098/2931 (100%)] Loss: 0.159427
Epoch: 6/10. Train set: Average loss: 0.1595
Epoch: 6/10. Validation set: Average loss: 0.1762
Train: [0/2931 (0%)] Loss: 0.234571
Train: [1098/2931 (100%)] Loss: 0.154569
Epoch: 7/10. Train set: Average loss: 0.1548
Epoch: 7/10. Validation set: Average loss: 0.1501
Train: [0/2931 (0%)] Loss: 0.199279
Train: [1098/2931 (100%)] Loss: 0.148488
Epoch: 8/10. Train set: Average loss: 0.1486
Epoch: 8/10. Validation set: Average loss: 0.1489
Train: [0/2931 (0%)] Loss: 0.149129
Train: [1098/2931 (100%)] Loss: 0.150771
Epoch: 9/10. Train set: Average loss: 0.1508
Epoch: 9/10. Validation set: Average loss: 0.1356
Train: [0/2931 (0%)] Loss: 0.076484
Train: [1098/2931 (100%)] Loss: 0.143744
Epoch: 10/10. Train set: Average loss: 0.1436
Epoch: 10/10. Validation set: Average loss: 0.1332
model_1
Train: [0/2931 (0%)] Loss: 0.312039
Train: [1098/2931 (100%)] Loss: 0.171468
Epoch: 1/10. Train set: Average loss: 0.1719
Epoch: 1/10. Validation set: Average loss: 0.1520
Train: [0/2931 (0%)] Loss: 0.205132
Train: [1098/2931 (100%)] Loss: 0.180827
Epoch: 2/10. Train set: Average loss: 0.1809
Epoch: 2/10. Validation set: Average loss: 0.1568
Train: [0/2931 (0%)] Loss: 0.223264
Train: [1098/2931 (100%)] Loss: 0.174050
Epoch: 3/10. Train set: Average loss: 0.1742
Epoch: 3/10. Validation set: Average loss: 0.1663
Train: [0/2931 (0%)] Loss: 0.282959
Train: [1098/2931 (100%)] Loss: 0.171359
Epoch: 4/10. Train set: Average loss: 0.1717
Epoch: 4/10. Validation set: Average loss: 0.1687
Train: [0/2931 (0%)] Loss: 0.112082
Train: [1098/2931 (100%)] Loss: 0.161597
Epoch: 5/10. Train set: Average loss: 0.1615
Epoch: 5/10. Validation set: Average loss: 0.1611
Train: [0/2931 (0%)] Loss: 0.159162
Train: [1098/2931 (100%)] Loss: 0.161314
Epoch: 6/10. Train set: Average loss: 0.1613
Epoch: 6/10. Validation set: Average loss: 0.1691
Train: [0/2931 (0%)] Loss: 0.231847
Train: [1098/2931 (100%)] Loss: 0.154679
Epoch: 7/10. Train set: Average loss: 0.1549
Epoch: 7/10. Validation set: Average loss: 0.1647
Train: [0/2931 (0%)] Loss: 0.254145
Train: [1098/2931 (100%)] Loss: 0.157473
Epoch: 8/10. Train set: Average loss: 0.1577
Epoch: 8/10. Validation set: Average loss: 0.1654
Train: [0/2931 (0%)] Loss: 0.135722
Train: [1098/2931 (100%)] Loss: 0.147874
Epoch: 9/10. Train set: Average loss: 0.1478
Epoch: 9/10. Validation set: Average loss: 0.1382
Train: [0/2931 (0%)] Loss: 0.107603
Train: [1098/2931 (100%)] Loss: 0.145651
Epoch: 10/10. Train set: Average loss: 0.1455
Epoch: 10/10. Validation set: Average loss: 0.1396
model_2
Train: [0/2931 (0%)] Loss: 0.124685
Train: [1098/2931 (100%)] Loss: 0.178927
Epoch: 1/10. Train set: Average loss: 0.1788
Epoch: 1/10. Validation set: Average loss: 0.1581
Train: [0/2931 (0%)] Loss: 0.160745
Train: [1098/2931 (100%)] Loss: 0.171517
Epoch: 2/10. Train set: Average loss: 0.1715
Epoch: 2/10. Validation set: Average loss: 0.1503
Train: [0/2931 (0%)] Loss: 0.124435
Train: [1098/2931 (100%)] Loss: 0.167214
Epoch: 3/10. Train set: Average loss: 0.1671
Epoch: 3/10. Validation set: Average loss: 0.1908
Train: [0/2931 (0%)] Loss: 0.203212
Train: [1098/2931 (100%)] Loss: 0.158691
Epoch: 4/10. Train set: Average loss: 0.1588
Epoch: 4/10. Validation set: Average loss: 0.1517
Train: [0/2931 (0%)] Loss: 0.191518
Train: [1098/2931 (100%)] Loss: 0.156507
Epoch: 5/10. Train set: Average loss: 0.1566
Epoch: 5/10. Validation set: Average loss: 0.1351
Train: [0/2931 (0%)] Loss: 0.119405
Train: [1098/2931 (100%)] Loss: 0.170215
Epoch: 6/10. Train set: Average loss: 0.1701
Epoch: 6/10. Validation set: Average loss: 0.1810
Train: [0/2931 (0%)] Loss: 0.199800
Train: [1098/2931 (100%)] Loss: 0.168013
Epoch: 7/10. Train set: Average loss: 0.1681
Epoch: 7/10. Validation set: Average loss: 0.1592
Train: [0/2931 (0%)] Loss: 0.191394
Train: [1098/2931 (100%)] Loss: 0.173104
Epoch: 8/10. Train set: Average loss: 0.1732
Epoch: 8/10. Validation set: Average loss: 0.1554
Train: [0/2931 (0%)] Loss: 0.107772
Train: [1098/2931 (100%)] Loss: 0.157716
Epoch: 9/10. Train set: Average loss: 0.1576
Epoch: 9/10. Validation set: Average loss: 0.1369
Train: [0/2931 (0%)] Loss: 0.192774
Train: [1098/2931 (100%)] Loss: 0.149948
Epoch: 10/10. Train set: Average loss: 0.1501
Epoch: 10/10. Validation set: Average loss: 0.1358
Number features: 34
model_0
Train: [0/2931 (0%)] Loss: 0.062457
Train: [1098/2931 (100%)] Loss: 0.165004
Epoch: 1/10. Train set: Average loss: 0.1647
Epoch: 1/10. Validation set: Average loss: 0.1294
Train: [0/2931 (0%)] Loss: 0.175003
Train: [1098/2931 (100%)] Loss: 0.172725
Epoch: 2/10. Train set: Average loss: 0.1727
Epoch: 2/10. Validation set: Average loss: 0.1557
Train: [0/2931 (0%)] Loss: 0.132159
Train: [1098/2931 (100%)] Loss: 0.167866
Epoch: 3/10. Train set: Average loss: 0.1678
Epoch: 3/10. Validation set: Average loss: 0.1622
Train: [0/2931 (0%)] Loss: 0.192787
Train: [1098/2931 (100%)] Loss: 0.162413
Epoch: 4/10. Train set: Average loss: 0.1625
Epoch: 4/10. Validation set: Average loss: 0.1416
Train: [0/2931 (0%)] Loss: 0.068253
Train: [1098/2931 (100%)] Loss: 0.166362
Epoch: 5/10. Train set: Average loss: 0.1661
Epoch: 5/10. Validation set: Average loss: 0.1322
Train: [0/2931 (0%)] Loss: 0.127605
Train: [1098/2931 (100%)] Loss: 0.157693
Epoch: 6/10. Train set: Average loss: 0.1576
Epoch: 6/10. Validation set: Average loss: 0.1317
Train: [0/2931 (0%)] Loss: 0.274311
Train: [1098/2931 (100%)] Loss: 0.160635
Epoch: 7/10. Train set: Average loss: 0.1609
Epoch: 7/10. Validation set: Average loss: 0.1214
Train: [0/2931 (0%)] Loss: 0.044392
Train: [1098/2931 (100%)] Loss: 0.162736
Epoch: 8/10. Train set: Average loss: 0.1624
Epoch: 8/10. Validation set: Average loss: 0.1251
Train: [0/2931 (0%)] Loss: 0.146603
Train: [1098/2931 (100%)] Loss: 0.153547
Epoch: 9/10. Train set: Average loss: 0.1535
Epoch: 9/10. Validation set: Average loss: 0.1216
Train: [0/2931 (0%)] Loss: 0.172937
Train: [1098/2931 (100%)] Loss: 0.145991
Epoch: 10/10. Train set: Average loss: 0.1461
Epoch: 10/10. Validation set: Average loss: 0.1201
model_1
Train: [0/2931 (0%)] Loss: 0.436338
Train: [1098/2931 (100%)] Loss: 0.176047
Epoch: 1/10. Train set: Average loss: 0.1768
Epoch: 1/10. Validation set: Average loss: 0.1370
Train: [0/2931 (0%)] Loss: 0.228215
Train: [1098/2931 (100%)] Loss: 0.174336
Epoch: 2/10. Train set: Average loss: 0.1745
Epoch: 2/10. Validation set: Average loss: 0.1292
Train: [0/2931 (0%)] Loss: 0.110806
Train: [1098/2931 (100%)] Loss: 0.168301
Epoch: 3/10. Train set: Average loss: 0.1681
Epoch: 3/10. Validation set: Average loss: 0.1162
Train: [0/2931 (0%)] Loss: 0.141143
Train: [1098/2931 (100%)] Loss: 0.158369
Epoch: 4/10. Train set: Average loss: 0.1583
Epoch: 4/10. Validation set: Average loss: 0.1316
Train: [0/2931 (0%)] Loss: 0.151365
Train: [1098/2931 (100%)] Loss: 0.169423
Epoch: 5/10. Train set: Average loss: 0.1694
Epoch: 5/10. Validation set: Average loss: 0.1154
Train: [0/2931 (0%)] Loss: 0.254184
Train: [1098/2931 (100%)] Loss: 0.158071
Epoch: 6/10. Train set: Average loss: 0.1583
Epoch: 6/10. Validation set: Average loss: 0.1356
Train: [0/2931 (0%)] Loss: 0.280129
Train: [1098/2931 (100%)] Loss: 0.162013
Epoch: 7/10. Train set: Average loss: 0.1623
Epoch: 7/10. Validation set: Average loss: 0.1215
Train: [0/2931 (0%)] Loss: 0.146424
Train: [1098/2931 (100%)] Loss: 0.153105
Epoch: 8/10. Train set: Average loss: 0.1531
Epoch: 8/10. Validation set: Average loss: 0.1366
Train: [0/2931 (0%)] Loss: 0.189054
Train: [1098/2931 (100%)] Loss: 0.150295
Epoch: 9/10. Train set: Average loss: 0.1504
Epoch: 9/10. Validation set: Average loss: 0.1216
Train: [0/2931 (0%)] Loss: 0.155357
Train: [1098/2931 (100%)] Loss: 0.149919
Epoch: 10/10. Train set: Average loss: 0.1499
Epoch: 10/10. Validation set: Average loss: 0.1203
model_2
Train: [0/2931 (0%)] Loss: 0.436170
Train: [1098/2931 (100%)] Loss: 0.168749
Epoch: 1/10. Train set: Average loss: 0.1695
Epoch: 1/10. Validation set: Average loss: 0.1230
Train: [0/2931 (0%)] Loss: 0.142632
Train: [1098/2931 (100%)] Loss: 0.170083
Epoch: 2/10. Train set: Average loss: 0.1700
Epoch: 2/10. Validation set: Average loss: 0.1155
Train: [0/2931 (0%)] Loss: 0.115458
Train: [1098/2931 (100%)] Loss: 0.153247
Epoch: 3/10. Train set: Average loss: 0.1531
Epoch: 3/10. Validation set: Average loss: 0.1252
Train: [0/2931 (0%)] Loss: 0.150961
Train: [1098/2931 (100%)] Loss: 0.164710
Epoch: 4/10. Train set: Average loss: 0.1647
Epoch: 4/10. Validation set: Average loss: 0.1282
Train: [0/2931 (0%)] Loss: 0.143600
Train: [1098/2931 (100%)] Loss: 0.157371
Epoch: 5/10. Train set: Average loss: 0.1573
Epoch: 5/10. Validation set: Average loss: 0.1312
Train: [0/2931 (0%)] Loss: 0.217085
Train: [1098/2931 (100%)] Loss: 0.152936
Epoch: 6/10. Train set: Average loss: 0.1531
Epoch: 6/10. Validation set: Average loss: 0.1242
Train: [0/2931 (0%)] Loss: 0.253683
Train: [1098/2931 (100%)] Loss: 0.157539
Epoch: 7/10. Train set: Average loss: 0.1578
Epoch: 7/10. Validation set: Average loss: 0.1304
Train: [0/2931 (0%)] Loss: 0.243777
Train: [1098/2931 (100%)] Loss: 0.159105
Epoch: 8/10. Train set: Average loss: 0.1593
Epoch: 8/10. Validation set: Average loss: 0.1265
Train: [0/2931 (0%)] Loss: 0.133187
Train: [1098/2931 (100%)] Loss: 0.144832
Epoch: 9/10. Train set: Average loss: 0.1448
Epoch: 9/10. Validation set: Average loss: 0.1294
Train: [0/2931 (0%)] Loss: 0.114142
Train: [1098/2931 (100%)] Loss: 0.145371
Epoch: 10/10. Train set: Average loss: 0.1453
Epoch: 10/10. Validation set: Average loss: 0.1297
Number features: 35
model_0
Train: [0/2931 (0%)] Loss: 0.187167
Train: [1098/2931 (100%)] Loss: 0.165577
Epoch: 1/10. Train set: Average loss: 0.1656
Epoch: 1/10. Validation set: Average loss: 0.1462
Train: [0/2931 (0%)] Loss: 0.161368
Train: [1098/2931 (100%)] Loss: 0.158705
Epoch: 2/10. Train set: Average loss: 0.1587
Epoch: 2/10. Validation set: Average loss: 0.1274
Train: [0/2931 (0%)] Loss: 0.206349
Train: [1098/2931 (100%)] Loss: 0.166990
Epoch: 3/10. Train set: Average loss: 0.1671
Epoch: 3/10. Validation set: Average loss: 0.1374
Train: [0/2931 (0%)] Loss: 0.143094
Train: [1098/2931 (100%)] Loss: 0.170386
Epoch: 4/10. Train set: Average loss: 0.1703
Epoch: 4/10. Validation set: Average loss: 0.1437
Train: [0/2931 (0%)] Loss: 0.121423
Train: [1098/2931 (100%)] Loss: 0.166122
Epoch: 5/10. Train set: Average loss: 0.1660
Epoch: 5/10. Validation set: Average loss: 0.1550
Train: [0/2931 (0%)] Loss: 0.126751
Train: [1098/2931 (100%)] Loss: 0.163985
Epoch: 6/10. Train set: Average loss: 0.1639
Epoch: 6/10. Validation set: Average loss: 0.1438
Train: [0/2931 (0%)] Loss: 0.142331
Train: [1098/2931 (100%)] Loss: 0.156948
Epoch: 7/10. Train set: Average loss: 0.1569
Epoch: 7/10. Validation set: Average loss: 0.1223
Train: [0/2931 (0%)] Loss: 0.103553
Train: [1098/2931 (100%)] Loss: 0.156223
Epoch: 8/10. Train set: Average loss: 0.1561
Epoch: 8/10. Validation set: Average loss: 0.1381
Train: [0/2931 (0%)] Loss: 0.140961
Train: [1098/2931 (100%)] Loss: 0.150303
Epoch: 9/10. Train set: Average loss: 0.1503
Epoch: 9/10. Validation set: Average loss: 0.1231
Train: [0/2931 (0%)] Loss: 0.151432
Train: [1098/2931 (100%)] Loss: 0.143701
Epoch: 10/10. Train set: Average loss: 0.1437
Epoch: 10/10. Validation set: Average loss: 0.1245
model_1
Train: [0/2931 (0%)] Loss: 0.186880
Train: [1098/2931 (100%)] Loss: 0.169931
Epoch: 1/10. Train set: Average loss: 0.1700
Epoch: 1/10. Validation set: Average loss: 0.1390
Train: [0/2931 (0%)] Loss: 0.201012
Train: [1098/2931 (100%)] Loss: 0.162249
Epoch: 2/10. Train set: Average loss: 0.1624
Epoch: 2/10. Validation set: Average loss: 0.1479
Train: [0/2931 (0%)] Loss: 0.105832
Train: [1098/2931 (100%)] Loss: 0.162010
Epoch: 3/10. Train set: Average loss: 0.1619
Epoch: 3/10. Validation set: Average loss: 0.1617
Train: [0/2931 (0%)] Loss: 0.105352
Train: [1098/2931 (100%)] Loss: 0.158815
Epoch: 4/10. Train set: Average loss: 0.1587
Epoch: 4/10. Validation set: Average loss: 0.1520
Train: [0/2931 (0%)] Loss: 0.172469
Train: [1098/2931 (100%)] Loss: 0.157941
Epoch: 5/10. Train set: Average loss: 0.1580
Epoch: 5/10. Validation set: Average loss: 0.1543
Train: [0/2931 (0%)] Loss: 0.127207
Train: [1098/2931 (100%)] Loss: 0.160353
Epoch: 6/10. Train set: Average loss: 0.1603
Epoch: 6/10. Validation set: Average loss: 0.1374
Train: [0/2931 (0%)] Loss: 0.193114
Train: [1098/2931 (100%)] Loss: 0.152416
Epoch: 7/10. Train set: Average loss: 0.1525
Epoch: 7/10. Validation set: Average loss: 0.1355
Train: [0/2931 (0%)] Loss: 0.243055
Train: [1098/2931 (100%)] Loss: 0.147828
Epoch: 8/10. Train set: Average loss: 0.1481
Epoch: 8/10. Validation set: Average loss: 0.1421
Train: [0/2931 (0%)] Loss: 0.142753
Train: [1098/2931 (100%)] Loss: 0.144622
Epoch: 9/10. Train set: Average loss: 0.1446
Epoch: 9/10. Validation set: Average loss: 0.1354
Train: [0/2931 (0%)] Loss: 0.088517
Train: [1098/2931 (100%)] Loss: 0.145148
Epoch: 10/10. Train set: Average loss: 0.1450
Epoch: 10/10. Validation set: Average loss: 0.1366
model_2
Train: [0/2931 (0%)] Loss: 0.249401
Train: [1098/2931 (100%)] Loss: 0.164106
Epoch: 1/10. Train set: Average loss: 0.1643
Epoch: 1/10. Validation set: Average loss: 0.1407
Train: [0/2931 (0%)] Loss: 0.121642
Train: [1098/2931 (100%)] Loss: 0.157301
Epoch: 2/10. Train set: Average loss: 0.1572
Epoch: 2/10. Validation set: Average loss: 0.1543
Train: [0/2931 (0%)] Loss: 0.137923
Train: [1098/2931 (100%)] Loss: 0.153939
Epoch: 3/10. Train set: Average loss: 0.1539
Epoch: 3/10. Validation set: Average loss: 0.1448
Train: [0/2931 (0%)] Loss: 0.142807
Train: [1098/2931 (100%)] Loss: 0.154789
Epoch: 4/10. Train set: Average loss: 0.1548
Epoch: 4/10. Validation set: Average loss: 0.1424
Train: [0/2931 (0%)] Loss: 0.111903
Train: [1098/2931 (100%)] Loss: 0.152794
Epoch: 5/10. Train set: Average loss: 0.1527
Epoch: 5/10. Validation set: Average loss: 0.1528
Train: [0/2931 (0%)] Loss: 0.119518
Train: [1098/2931 (100%)] Loss: 0.153051
Epoch: 6/10. Train set: Average loss: 0.1530
Epoch: 6/10. Validation set: Average loss: 0.1447
Train: [0/2931 (0%)] Loss: 0.135923
Train: [1098/2931 (100%)] Loss: 0.146148
Epoch: 7/10. Train set: Average loss: 0.1461
Epoch: 7/10. Validation set: Average loss: 0.1542
Train: [0/2931 (0%)] Loss: 0.081777
Train: [1098/2931 (100%)] Loss: 0.154478
Epoch: 8/10. Train set: Average loss: 0.1543
Epoch: 8/10. Validation set: Average loss: 0.1395
Train: [0/2931 (0%)] Loss: 0.125604
Train: [1098/2931 (100%)] Loss: 0.145708
Epoch: 9/10. Train set: Average loss: 0.1457
Epoch: 9/10. Validation set: Average loss: 0.1322
Train: [0/2931 (0%)] Loss: 0.118570
Train: [1098/2931 (100%)] Loss: 0.143592
Epoch: 10/10. Train set: Average loss: 0.1435
Epoch: 10/10. Validation set: Average loss: 0.1318
Number features: 36
model_0
Train: [0/2931 (0%)] Loss: 0.187145
Train: [1098/2931 (100%)] Loss: 0.164235
Epoch: 1/10. Train set: Average loss: 0.1643
Epoch: 1/10. Validation set: Average loss: 0.1333
Train: [0/2931 (0%)] Loss: 0.174641
Train: [1098/2931 (100%)] Loss: 0.171380
Epoch: 2/10. Train set: Average loss: 0.1714
Epoch: 2/10. Validation set: Average loss: 0.1393
Train: [0/2931 (0%)] Loss: 0.114669
Train: [1098/2931 (100%)] Loss: 0.154125
Epoch: 3/10. Train set: Average loss: 0.1540
Epoch: 3/10. Validation set: Average loss: 0.1428
Train: [0/2931 (0%)] Loss: 0.136039
Train: [1098/2931 (100%)] Loss: 0.148483
Epoch: 4/10. Train set: Average loss: 0.1484
Epoch: 4/10. Validation set: Average loss: 0.1228
Train: [0/2931 (0%)] Loss: 0.145055
Train: [1098/2931 (100%)] Loss: 0.152375
Epoch: 5/10. Train set: Average loss: 0.1524
Epoch: 5/10. Validation set: Average loss: 0.1442
Train: [0/2931 (0%)] Loss: 0.065265
Train: [1098/2931 (100%)] Loss: 0.156484
Epoch: 6/10. Train set: Average loss: 0.1562
Epoch: 6/10. Validation set: Average loss: 0.1512
Train: [0/2931 (0%)] Loss: 0.097215
Train: [1098/2931 (100%)] Loss: 0.155069
Epoch: 7/10. Train set: Average loss: 0.1549
Epoch: 7/10. Validation set: Average loss: 0.1477
Train: [0/2931 (0%)] Loss: 0.093106
Train: [1098/2931 (100%)] Loss: 0.153638
Epoch: 8/10. Train set: Average loss: 0.1535
Epoch: 8/10. Validation set: Average loss: 0.1469
Train: [0/2931 (0%)] Loss: 0.193940
Train: [1098/2931 (100%)] Loss: 0.149969
Epoch: 9/10. Train set: Average loss: 0.1501
Epoch: 9/10. Validation set: Average loss: 0.1366
Train: [0/2931 (0%)] Loss: 0.112200
Train: [1098/2931 (100%)] Loss: 0.145052
Epoch: 10/10. Train set: Average loss: 0.1450
Epoch: 10/10. Validation set: Average loss: 0.1340
model_1
Train: [0/2931 (0%)] Loss: 0.124879
Train: [1098/2931 (100%)] Loss: 0.169418
Epoch: 1/10. Train set: Average loss: 0.1693
Epoch: 1/10. Validation set: Average loss: 0.1464
Train: [0/2931 (0%)] Loss: 0.127423
Train: [1098/2931 (100%)] Loss: 0.164429
Epoch: 2/10. Train set: Average loss: 0.1643
Epoch: 2/10. Validation set: Average loss: 0.1416
Train: [0/2931 (0%)] Loss: 0.131680
Train: [1098/2931 (100%)] Loss: 0.163823
Epoch: 3/10. Train set: Average loss: 0.1637
Epoch: 3/10. Validation set: Average loss: 0.1756
Train: [0/2931 (0%)] Loss: 0.474896
Train: [1098/2931 (100%)] Loss: 0.157335
Epoch: 4/10. Train set: Average loss: 0.1582
Epoch: 4/10. Validation set: Average loss: 0.1661
Train: [0/2931 (0%)] Loss: 0.092826
Train: [1098/2931 (100%)] Loss: 0.154375
Epoch: 5/10. Train set: Average loss: 0.1542
Epoch: 5/10. Validation set: Average loss: 0.1326
Train: [0/2931 (0%)] Loss: 0.277112
Train: [1098/2931 (100%)] Loss: 0.155465
Epoch: 6/10. Train set: Average loss: 0.1558
Epoch: 6/10. Validation set: Average loss: 0.1429
Train: [0/2931 (0%)] Loss: 0.153607
Train: [1098/2931 (100%)] Loss: 0.153119
Epoch: 7/10. Train set: Average loss: 0.1531
Epoch: 7/10. Validation set: Average loss: 0.1864
Train: [0/2931 (0%)] Loss: 0.180777
Train: [1098/2931 (100%)] Loss: 0.153696
Epoch: 8/10. Train set: Average loss: 0.1538
Epoch: 8/10. Validation set: Average loss: 0.1283
Train: [0/2931 (0%)] Loss: 0.194324
Train: [1098/2931 (100%)] Loss: 0.149259
Epoch: 9/10. Train set: Average loss: 0.1494
Epoch: 9/10. Validation set: Average loss: 0.1324
Train: [0/2931 (0%)] Loss: 0.110892
Train: [1098/2931 (100%)] Loss: 0.145672
Epoch: 10/10. Train set: Average loss: 0.1456
Epoch: 10/10. Validation set: Average loss: 0.1317
model_2
Train: [0/2931 (0%)] Loss: 0.311134
Train: [1098/2931 (100%)] Loss: 0.160977
Epoch: 1/10. Train set: Average loss: 0.1614
Epoch: 1/10. Validation set: Average loss: 0.1150
Train: [0/2931 (0%)] Loss: 0.201692
Train: [1098/2931 (100%)] Loss: 0.158651
Epoch: 2/10. Train set: Average loss: 0.1588
Epoch: 2/10. Validation set: Average loss: 0.1325
Train: [0/2931 (0%)] Loss: 0.240207
Train: [1098/2931 (100%)] Loss: 0.164264
Epoch: 3/10. Train set: Average loss: 0.1645
Epoch: 3/10. Validation set: Average loss: 0.1429
Train: [0/2931 (0%)] Loss: 0.056202
Train: [1098/2931 (100%)] Loss: 0.169171
Epoch: 4/10. Train set: Average loss: 0.1689
Epoch: 4/10. Validation set: Average loss: 0.1149
Train: [0/2931 (0%)] Loss: 0.156901
Train: [1098/2931 (100%)] Loss: 0.160805
Epoch: 5/10. Train set: Average loss: 0.1608
Epoch: 5/10. Validation set: Average loss: 0.1247
Train: [0/2931 (0%)] Loss: 0.145813
Train: [1098/2931 (100%)] Loss: 0.162189
Epoch: 6/10. Train set: Average loss: 0.1621
Epoch: 6/10. Validation set: Average loss: 0.1206
Train: [0/2931 (0%)] Loss: 0.178289
Train: [1098/2931 (100%)] Loss: 0.155908
Epoch: 7/10. Train set: Average loss: 0.1560
Epoch: 7/10. Validation set: Average loss: 0.1138
Train: [0/2931 (0%)] Loss: 0.089758
Train: [1098/2931 (100%)] Loss: 0.155382
Epoch: 8/10. Train set: Average loss: 0.1552
Epoch: 8/10. Validation set: Average loss: 0.1207
Train: [0/2931 (0%)] Loss: 0.067720
Train: [1098/2931 (100%)] Loss: 0.151655
Epoch: 9/10. Train set: Average loss: 0.1514
Epoch: 9/10. Validation set: Average loss: 0.1231
Train: [0/2931 (0%)] Loss: 0.078304
Train: [1098/2931 (100%)] Loss: 0.143651
Epoch: 10/10. Train set: Average loss: 0.1435
Epoch: 10/10. Validation set: Average loss: 0.1230
Number features: 37
model_0
Train: [0/2931 (0%)] Loss: 0.124904
Train: [1098/2931 (100%)] Loss: 0.173594
Epoch: 1/10. Train set: Average loss: 0.1735
Epoch: 1/10. Validation set: Average loss: 0.1401
Train: [0/2931 (0%)] Loss: 0.374623
Train: [1098/2931 (100%)] Loss: 0.157901
Epoch: 2/10. Train set: Average loss: 0.1585
Epoch: 2/10. Validation set: Average loss: 0.1356
Train: [0/2931 (0%)] Loss: 0.195865
Train: [1098/2931 (100%)] Loss: 0.157412
Epoch: 3/10. Train set: Average loss: 0.1575
Epoch: 3/10. Validation set: Average loss: 0.1549
Train: [0/2931 (0%)] Loss: 0.062250
Train: [1098/2931 (100%)] Loss: 0.173095
Epoch: 4/10. Train set: Average loss: 0.1728
Epoch: 4/10. Validation set: Average loss: 0.1521
Train: [0/2931 (0%)] Loss: 0.183280
Train: [1098/2931 (100%)] Loss: 0.157040
Epoch: 5/10. Train set: Average loss: 0.1571
Epoch: 5/10. Validation set: Average loss: 0.1416
Train: [0/2931 (0%)] Loss: 0.178847
Train: [1098/2931 (100%)] Loss: 0.159511
Epoch: 6/10. Train set: Average loss: 0.1596
Epoch: 6/10. Validation set: Average loss: 0.1553
Train: [0/2931 (0%)] Loss: 0.135965
Train: [1098/2931 (100%)] Loss: 0.154278
Epoch: 7/10. Train set: Average loss: 0.1542
Epoch: 7/10. Validation set: Average loss: 0.1543
Train: [0/2931 (0%)] Loss: 0.103346
Train: [1098/2931 (100%)] Loss: 0.154749
Epoch: 8/10. Train set: Average loss: 0.1546
Epoch: 8/10. Validation set: Average loss: 0.1461
Train: [0/2931 (0%)] Loss: 0.141792
Train: [1098/2931 (100%)] Loss: 0.145361
Epoch: 9/10. Train set: Average loss: 0.1454
Epoch: 9/10. Validation set: Average loss: 0.1480
Train: [0/2931 (0%)] Loss: 0.154545
Train: [1098/2931 (100%)] Loss: 0.150745
Epoch: 10/10. Train set: Average loss: 0.1508
Epoch: 10/10. Validation set: Average loss: 0.1474
model_1
Train: [0/2931 (0%)] Loss: 0.249605
Train: [1098/2931 (100%)] Loss: 0.161057
Epoch: 1/10. Train set: Average loss: 0.1613
Epoch: 1/10. Validation set: Average loss: 0.1206
Train: [0/2931 (0%)] Loss: 0.137545
Train: [1098/2931 (100%)] Loss: 0.151744
Epoch: 2/10. Train set: Average loss: 0.1517
Epoch: 2/10. Validation set: Average loss: 0.1354
Train: [0/2931 (0%)] Loss: 0.277964
Train: [1098/2931 (100%)] Loss: 0.160319
Epoch: 3/10. Train set: Average loss: 0.1606
Epoch: 3/10. Validation set: Average loss: 0.1274
Train: [0/2931 (0%)] Loss: 0.181225
Train: [1098/2931 (100%)] Loss: 0.152072
Epoch: 4/10. Train set: Average loss: 0.1522
Epoch: 4/10. Validation set: Average loss: 0.1532
Train: [0/2931 (0%)] Loss: 0.130306
Train: [1098/2931 (100%)] Loss: 0.152315
Epoch: 5/10. Train set: Average loss: 0.1523
Epoch: 5/10. Validation set: Average loss: 0.1336
Train: [0/2931 (0%)] Loss: 0.082418
Train: [1098/2931 (100%)] Loss: 0.156693
Epoch: 6/10. Train set: Average loss: 0.1565
Epoch: 6/10. Validation set: Average loss: 0.1515
Train: [0/2931 (0%)] Loss: 0.149462
Train: [1098/2931 (100%)] Loss: 0.154960
Epoch: 7/10. Train set: Average loss: 0.1549
Epoch: 7/10. Validation set: Average loss: 0.1460
Train: [0/2931 (0%)] Loss: 0.130043
Train: [1098/2931 (100%)] Loss: 0.151264
Epoch: 8/10. Train set: Average loss: 0.1512
Epoch: 8/10. Validation set: Average loss: 0.1541
Train: [0/2931 (0%)] Loss: 0.174236
Train: [1098/2931 (100%)] Loss: 0.142538
Epoch: 9/10. Train set: Average loss: 0.1426
Epoch: 9/10. Validation set: Average loss: 0.1490
Train: [0/2931 (0%)] Loss: 0.134005
Train: [1098/2931 (100%)] Loss: 0.145009
Epoch: 10/10. Train set: Average loss: 0.1450
Epoch: 10/10. Validation set: Average loss: 0.1495
model_2
Train: [0/2931 (0%)] Loss: 0.311720
Train: [1098/2931 (100%)] Loss: 0.168823
Epoch: 1/10. Train set: Average loss: 0.1692
Epoch: 1/10. Validation set: Average loss: 0.1666
Train: [0/2931 (0%)] Loss: 0.186440
Train: [1098/2931 (100%)] Loss: 0.160802
Epoch: 2/10. Train set: Average loss: 0.1609
Epoch: 2/10. Validation set: Average loss: 0.1543
Train: [0/2931 (0%)] Loss: 0.165093
Train: [1098/2931 (100%)] Loss: 0.160324
Epoch: 3/10. Train set: Average loss: 0.1603
Epoch: 3/10. Validation set: Average loss: 0.1392
Train: [0/2931 (0%)] Loss: 0.164449
Train: [1098/2931 (100%)] Loss: 0.161411
Epoch: 4/10. Train set: Average loss: 0.1614
Epoch: 4/10. Validation set: Average loss: 0.1492
Train: [0/2931 (0%)] Loss: 0.221218
Train: [1098/2931 (100%)] Loss: 0.151665
Epoch: 5/10. Train set: Average loss: 0.1519
Epoch: 5/10. Validation set: Average loss: 0.1362
Train: [0/2931 (0%)] Loss: 0.216765
Train: [1098/2931 (100%)] Loss: 0.158543
Epoch: 6/10. Train set: Average loss: 0.1587
Epoch: 6/10. Validation set: Average loss: 0.1541
Train: [0/2931 (0%)] Loss: 0.171907
Train: [1098/2931 (100%)] Loss: 0.156598
Epoch: 7/10. Train set: Average loss: 0.1566
Epoch: 7/10. Validation set: Average loss: 0.1391
Train: [0/2931 (0%)] Loss: 0.125405
Train: [1098/2931 (100%)] Loss: 0.156337
Epoch: 8/10. Train set: Average loss: 0.1563
Epoch: 8/10. Validation set: Average loss: 0.1445
Train: [0/2931 (0%)] Loss: 0.220299
Train: [1098/2931 (100%)] Loss: 0.152550
Epoch: 9/10. Train set: Average loss: 0.1527
Epoch: 9/10. Validation set: Average loss: 0.1405
Train: [0/2931 (0%)] Loss: 0.211647
Train: [1098/2931 (100%)] Loss: 0.145686
Epoch: 10/10. Train set: Average loss: 0.1459
Epoch: 10/10. Validation set: Average loss: 0.1464
Number features: 38
model_0
Train: [0/2931 (0%)] Loss: 0.374015
Train: [1098/2931 (100%)] Loss: 0.170900
Epoch: 1/10. Train set: Average loss: 0.1715
Epoch: 1/10. Validation set: Average loss: 0.1189
Train: [0/2931 (0%)] Loss: 0.186039
Train: [1098/2931 (100%)] Loss: 0.157581
Epoch: 2/10. Train set: Average loss: 0.1577
Epoch: 2/10. Validation set: Average loss: 0.1341
Train: [0/2931 (0%)] Loss: 0.197636
Train: [1098/2931 (100%)] Loss: 0.158327
Epoch: 3/10. Train set: Average loss: 0.1584
Epoch: 3/10. Validation set: Average loss: 0.1285
Train: [0/2931 (0%)] Loss: 0.136584
Train: [1098/2931 (100%)] Loss: 0.158994
Epoch: 4/10. Train set: Average loss: 0.1589
Epoch: 4/10. Validation set: Average loss: 0.1171
Train: [0/2931 (0%)] Loss: 0.122069
Train: [1098/2931 (100%)] Loss: 0.154802
Epoch: 5/10. Train set: Average loss: 0.1547
Epoch: 5/10. Validation set: Average loss: 0.1263
Train: [0/2931 (0%)] Loss: 0.154101
Train: [1098/2931 (100%)] Loss: 0.156248
Epoch: 6/10. Train set: Average loss: 0.1562
Epoch: 6/10. Validation set: Average loss: 0.1197
Train: [0/2931 (0%)] Loss: 0.107832
Train: [1098/2931 (100%)] Loss: 0.149526
Epoch: 7/10. Train set: Average loss: 0.1494
Epoch: 7/10. Validation set: Average loss: 0.1390
Train: [0/2931 (0%)] Loss: 0.197323
Train: [1098/2931 (100%)] Loss: 0.149429
Epoch: 8/10. Train set: Average loss: 0.1496
Epoch: 8/10. Validation set: Average loss: 0.1546
Train: [0/2931 (0%)] Loss: 0.265598
Train: [1098/2931 (100%)] Loss: 0.156032
Epoch: 9/10. Train set: Average loss: 0.1563
Epoch: 9/10. Validation set: Average loss: 0.1254
Train: [0/2931 (0%)] Loss: 0.168248
Train: [1098/2931 (100%)] Loss: 0.146522
Epoch: 10/10. Train set: Average loss: 0.1466
Epoch: 10/10. Validation set: Average loss: 0.1286
model_1
Train: [0/2931 (0%)] Loss: 0.249486
Train: [1098/2931 (100%)] Loss: 0.163218
Epoch: 1/10. Train set: Average loss: 0.1635
Epoch: 1/10. Validation set: Average loss: 0.1415
Train: [0/2931 (0%)] Loss: 0.138787
Train: [1098/2931 (100%)] Loss: 0.153329
Epoch: 2/10. Train set: Average loss: 0.1533
Epoch: 2/10. Validation set: Average loss: 0.1238
Train: [0/2931 (0%)] Loss: 0.141318
Train: [1098/2931 (100%)] Loss: 0.152498
Epoch: 3/10. Train set: Average loss: 0.1525
Epoch: 3/10. Validation set: Average loss: 0.1325
Train: [0/2931 (0%)] Loss: 0.125936
Train: [1098/2931 (100%)] Loss: 0.163375
Epoch: 4/10. Train set: Average loss: 0.1633
Epoch: 4/10. Validation set: Average loss: 0.1378
Train: [0/2931 (0%)] Loss: 0.182872
Train: [1098/2931 (100%)] Loss: 0.153710
Epoch: 5/10. Train set: Average loss: 0.1538
Epoch: 5/10. Validation set: Average loss: 0.1394
Train: [0/2931 (0%)] Loss: 0.126086
Train: [1098/2931 (100%)] Loss: 0.153043
Epoch: 6/10. Train set: Average loss: 0.1530
Epoch: 6/10. Validation set: Average loss: 0.1327
Train: [0/2931 (0%)] Loss: 0.192564
Train: [1098/2931 (100%)] Loss: 0.154274
Epoch: 7/10. Train set: Average loss: 0.1544
Epoch: 7/10. Validation set: Average loss: 0.1450
Train: [0/2931 (0%)] Loss: 0.191864
Train: [1098/2931 (100%)] Loss: 0.146446
Epoch: 8/10. Train set: Average loss: 0.1466
Epoch: 8/10. Validation set: Average loss: 0.1235
Train: [0/2931 (0%)] Loss: 0.150450
Train: [1098/2931 (100%)] Loss: 0.143710
Epoch: 9/10. Train set: Average loss: 0.1437
Epoch: 9/10. Validation set: Average loss: 0.1343
Train: [0/2931 (0%)] Loss: 0.090538
Train: [1098/2931 (100%)] Loss: 0.142512
Epoch: 10/10. Train set: Average loss: 0.1424
Epoch: 10/10. Validation set: Average loss: 0.1280
model_2
Train: [0/2931 (0%)] Loss: 0.311360
Train: [1098/2931 (100%)] Loss: 0.161207
Epoch: 1/10. Train set: Average loss: 0.1616
Epoch: 1/10. Validation set: Average loss: 0.1190
Train: [0/2931 (0%)] Loss: 0.126628
Train: [1098/2931 (100%)] Loss: 0.155627
Epoch: 2/10. Train set: Average loss: 0.1555
Epoch: 2/10. Validation set: Average loss: 0.1443
Train: [0/2931 (0%)] Loss: 0.309209
Train: [1098/2931 (100%)] Loss: 0.159180
Epoch: 3/10. Train set: Average loss: 0.1596
Epoch: 3/10. Validation set: Average loss: 0.1385
Train: [0/2931 (0%)] Loss: 0.116303
Train: [1098/2931 (100%)] Loss: 0.169306
Epoch: 4/10. Train set: Average loss: 0.1692
Epoch: 4/10. Validation set: Average loss: 0.1189
Train: [0/2931 (0%)] Loss: 0.106532
Train: [1098/2931 (100%)] Loss: 0.152887
Epoch: 5/10. Train set: Average loss: 0.1528
Epoch: 5/10. Validation set: Average loss: 0.1452
Train: [0/2931 (0%)] Loss: 0.211309
Train: [1098/2931 (100%)] Loss: 0.154559
Epoch: 6/10. Train set: Average loss: 0.1547
Epoch: 6/10. Validation set: Average loss: 0.1267
Train: [0/2931 (0%)] Loss: 0.152117
Train: [1098/2931 (100%)] Loss: 0.150025
Epoch: 7/10. Train set: Average loss: 0.1500
Epoch: 7/10. Validation set: Average loss: 0.1209
Train: [0/2931 (0%)] Loss: 0.112135
Train: [1098/2931 (100%)] Loss: 0.150626
Epoch: 8/10. Train set: Average loss: 0.1505
Epoch: 8/10. Validation set: Average loss: 0.1223
Train: [0/2931 (0%)] Loss: 0.085205
Train: [1098/2931 (100%)] Loss: 0.141495
Epoch: 9/10. Train set: Average loss: 0.1413
Epoch: 9/10. Validation set: Average loss: 0.1251
Train: [0/2931 (0%)] Loss: 0.115540
Train: [1098/2931 (100%)] Loss: 0.140546
Epoch: 10/10. Train set: Average loss: 0.1405
Epoch: 10/10. Validation set: Average loss: 0.1231
Number features: 39
model_0
Train: [0/2931 (0%)] Loss: 0.311825
Train: [1098/2931 (100%)] Loss: 0.161946
Epoch: 1/10. Train set: Average loss: 0.1624
Epoch: 1/10. Validation set: Average loss: 0.1380
Train: [0/2931 (0%)] Loss: 0.135542
Train: [1098/2931 (100%)] Loss: 0.172361
Epoch: 2/10. Train set: Average loss: 0.1723
Epoch: 2/10. Validation set: Average loss: 0.1621
Train: [0/2931 (0%)] Loss: 0.156691
Train: [1098/2931 (100%)] Loss: 0.154289
Epoch: 3/10. Train set: Average loss: 0.1543
Epoch: 3/10. Validation set: Average loss: 0.1688
Train: [0/2931 (0%)] Loss: 0.150875
Train: [1098/2931 (100%)] Loss: 0.152154
Epoch: 4/10. Train set: Average loss: 0.1522
Epoch: 4/10. Validation set: Average loss: 0.1592
Train: [0/2931 (0%)] Loss: 0.161415
Train: [1098/2931 (100%)] Loss: 0.148924
Epoch: 5/10. Train set: Average loss: 0.1490
Epoch: 5/10. Validation set: Average loss: 0.1418
Train: [0/2931 (0%)] Loss: 0.105070
Train: [1098/2931 (100%)] Loss: 0.143753
Epoch: 6/10. Train set: Average loss: 0.1436
Epoch: 6/10. Validation set: Average loss: 0.1325
Train: [0/2931 (0%)] Loss: 0.184294
Train: [1098/2931 (100%)] Loss: 0.141899
Epoch: 7/10. Train set: Average loss: 0.1420
Epoch: 7/10. Validation set: Average loss: 0.1275
Train: [0/2931 (0%)] Loss: 0.130162
Train: [1098/2931 (100%)] Loss: 0.145793
Epoch: 8/10. Train set: Average loss: 0.1458
Epoch: 8/10. Validation set: Average loss: 0.1438
Train: [0/2931 (0%)] Loss: 0.103289
Train: [1098/2931 (100%)] Loss: 0.142804
Epoch: 9/10. Train set: Average loss: 0.1427
Epoch: 9/10. Validation set: Average loss: 0.1411
Train: [0/2931 (0%)] Loss: 0.132904
Train: [1098/2931 (100%)] Loss: 0.141884
Epoch: 10/10. Train set: Average loss: 0.1419
Epoch: 10/10. Validation set: Average loss: 0.1389
model_1
Train: [0/2931 (0%)] Loss: 0.186824
Train: [1098/2931 (100%)] Loss: 0.171816
Epoch: 1/10. Train set: Average loss: 0.1719
Epoch: 1/10. Validation set: Average loss: 0.1529
Train: [0/2931 (0%)] Loss: 0.165474
Train: [1098/2931 (100%)] Loss: 0.158320
Epoch: 2/10. Train set: Average loss: 0.1583
Epoch: 2/10. Validation set: Average loss: 0.1516
Train: [0/2931 (0%)] Loss: 0.110868
Train: [1098/2931 (100%)] Loss: 0.154972
Epoch: 3/10. Train set: Average loss: 0.1549
Epoch: 3/10. Validation set: Average loss: 0.1416
Train: [0/2931 (0%)] Loss: 0.168083
Train: [1098/2931 (100%)] Loss: 0.146895
Epoch: 4/10. Train set: Average loss: 0.1470
Epoch: 4/10. Validation set: Average loss: 0.1360
Train: [0/2931 (0%)] Loss: 0.112856
Train: [1098/2931 (100%)] Loss: 0.150601
Epoch: 5/10. Train set: Average loss: 0.1505
Epoch: 5/10. Validation set: Average loss: 0.1545
Train: [0/2931 (0%)] Loss: 0.098737
Train: [1098/2931 (100%)] Loss: 0.148410
Epoch: 6/10. Train set: Average loss: 0.1483
Epoch: 6/10. Validation set: Average loss: 0.1383
Train: [0/2931 (0%)] Loss: 0.116536
Train: [1098/2931 (100%)] Loss: 0.147573
Epoch: 7/10. Train set: Average loss: 0.1475
Epoch: 7/10. Validation set: Average loss: 0.1297
Train: [0/2931 (0%)] Loss: 0.161279
Train: [1098/2931 (100%)] Loss: 0.151922
Epoch: 8/10. Train set: Average loss: 0.1519
Epoch: 8/10. Validation set: Average loss: 0.1335
Train: [0/2931 (0%)] Loss: 0.170205
Train: [1098/2931 (100%)] Loss: 0.143010
Epoch: 9/10. Train set: Average loss: 0.1431
Epoch: 9/10. Validation set: Average loss: 0.1304
Train: [0/2931 (0%)] Loss: 0.199948
Train: [1098/2931 (100%)] Loss: 0.146452
Epoch: 10/10. Train set: Average loss: 0.1466
Epoch: 10/10. Validation set: Average loss: 0.1288
model_2
Train: [0/2931 (0%)] Loss: 0.374067
Train: [1098/2931 (100%)] Loss: 0.166669
Epoch: 1/10. Train set: Average loss: 0.1672
Epoch: 1/10. Validation set: Average loss: 0.1513
Train: [0/2931 (0%)] Loss: 0.134457
Train: [1098/2931 (100%)] Loss: 0.164409
Epoch: 2/10. Train set: Average loss: 0.1643
Epoch: 2/10. Validation set: Average loss: 0.1531
Train: [0/2931 (0%)] Loss: 0.180841
Train: [1098/2931 (100%)] Loss: 0.168865
Epoch: 3/10. Train set: Average loss: 0.1689
Epoch: 3/10. Validation set: Average loss: 0.1486
Train: [0/2931 (0%)] Loss: 0.202250
Train: [1098/2931 (100%)] Loss: 0.152154
Epoch: 4/10. Train set: Average loss: 0.1523
Epoch: 4/10. Validation set: Average loss: 0.1438
Train: [0/2931 (0%)] Loss: 0.141968
Train: [1098/2931 (100%)] Loss: 0.151632
Epoch: 5/10. Train set: Average loss: 0.1516
Epoch: 5/10. Validation set: Average loss: 0.1382
Train: [0/2931 (0%)] Loss: 0.154754
Train: [1098/2931 (100%)] Loss: 0.148672
Epoch: 6/10. Train set: Average loss: 0.1487
Epoch: 6/10. Validation set: Average loss: 0.1513
Train: [0/2931 (0%)] Loss: 0.202865
Train: [1098/2931 (100%)] Loss: 0.152252
Epoch: 7/10. Train set: Average loss: 0.1524
Epoch: 7/10. Validation set: Average loss: 0.1454
Train: [0/2931 (0%)] Loss: 0.235193
Train: [1098/2931 (100%)] Loss: 0.147270
Epoch: 8/10. Train set: Average loss: 0.1475
Epoch: 8/10. Validation set: Average loss: 0.1417
Train: [0/2931 (0%)] Loss: 0.170854
Train: [1098/2931 (100%)] Loss: 0.147205
Epoch: 9/10. Train set: Average loss: 0.1473
Epoch: 9/10. Validation set: Average loss: 0.1410
Train: [0/2931 (0%)] Loss: 0.218591
Train: [1098/2931 (100%)] Loss: 0.144878
Epoch: 10/10. Train set: Average loss: 0.1451
Epoch: 10/10. Validation set: Average loss: 0.1462
Number features: 40
model_0
Train: [0/2931 (0%)] Loss: 0.187376
Train: [1098/2931 (100%)] Loss: 0.172890
Epoch: 1/10. Train set: Average loss: 0.1729
Epoch: 1/10. Validation set: Average loss: 0.1655
Train: [0/2931 (0%)] Loss: 0.173633
Train: [1098/2931 (100%)] Loss: 0.167800
Epoch: 2/10. Train set: Average loss: 0.1678
Epoch: 2/10. Validation set: Average loss: 0.1510
Train: [0/2931 (0%)] Loss: 0.154663
Train: [1098/2931 (100%)] Loss: 0.155995
Epoch: 3/10. Train set: Average loss: 0.1560
Epoch: 3/10. Validation set: Average loss: 0.1455
Train: [0/2931 (0%)] Loss: 0.116748
Train: [1098/2931 (100%)] Loss: 0.152127
Epoch: 4/10. Train set: Average loss: 0.1520
Epoch: 4/10. Validation set: Average loss: 0.1378
Train: [0/2931 (0%)] Loss: 0.103726
Train: [1098/2931 (100%)] Loss: 0.149029
Epoch: 5/10. Train set: Average loss: 0.1489
Epoch: 5/10. Validation set: Average loss: 0.1392
Train: [0/2931 (0%)] Loss: 0.156946
Train: [1098/2931 (100%)] Loss: 0.150946
Epoch: 6/10. Train set: Average loss: 0.1510
Epoch: 6/10. Validation set: Average loss: 0.1373
Train: [0/2931 (0%)] Loss: 0.194885
Train: [1098/2931 (100%)] Loss: 0.153025
Epoch: 7/10. Train set: Average loss: 0.1531
Epoch: 7/10. Validation set: Average loss: 0.1561
Train: [0/2931 (0%)] Loss: 0.146791
Train: [1098/2931 (100%)] Loss: 0.152269
Epoch: 8/10. Train set: Average loss: 0.1523
Epoch: 8/10. Validation set: Average loss: 0.1513
Train: [0/2931 (0%)] Loss: 0.151004
Train: [1098/2931 (100%)] Loss: 0.153233
Epoch: 9/10. Train set: Average loss: 0.1532
Epoch: 9/10. Validation set: Average loss: 0.1367
Train: [0/2931 (0%)] Loss: 0.146672
Train: [1098/2931 (100%)] Loss: 0.145045
Epoch: 10/10. Train set: Average loss: 0.1450
Epoch: 10/10. Validation set: Average loss: 0.1356
model_1
Train: [0/2931 (0%)] Loss: 0.187028
Train: [1098/2931 (100%)] Loss: 0.178555
Epoch: 1/10. Train set: Average loss: 0.1786
Epoch: 1/10. Validation set: Average loss: 0.1530
Train: [0/2931 (0%)] Loss: 0.074034
Train: [1098/2931 (100%)] Loss: 0.160765
Epoch: 2/10. Train set: Average loss: 0.1605
Epoch: 2/10. Validation set: Average loss: 0.1203
Train: [0/2931 (0%)] Loss: 0.144902
Train: [1098/2931 (100%)] Loss: 0.156239
Epoch: 3/10. Train set: Average loss: 0.1562
Epoch: 3/10. Validation set: Average loss: 0.1573
Train: [0/2931 (0%)] Loss: 0.116086
Train: [1098/2931 (100%)] Loss: 0.154190
Epoch: 4/10. Train set: Average loss: 0.1541
Epoch: 4/10. Validation set: Average loss: 0.1400
Train: [0/2931 (0%)] Loss: 0.134161
Train: [1098/2931 (100%)] Loss: 0.166827
Epoch: 5/10. Train set: Average loss: 0.1667
Epoch: 5/10. Validation set: Average loss: 0.1480
Train: [0/2931 (0%)] Loss: 0.163438
Train: [1098/2931 (100%)] Loss: 0.148924
Epoch: 6/10. Train set: Average loss: 0.1490
Epoch: 6/10. Validation set: Average loss: 0.1549
Train: [0/2931 (0%)] Loss: 0.114868
Train: [1098/2931 (100%)] Loss: 0.148936
Epoch: 7/10. Train set: Average loss: 0.1488
Epoch: 7/10. Validation set: Average loss: 0.1497
Train: [0/2931 (0%)] Loss: 0.088322
Train: [1098/2931 (100%)] Loss: 0.152249
Epoch: 8/10. Train set: Average loss: 0.1521
Epoch: 8/10. Validation set: Average loss: 0.1326
Train: [0/2931 (0%)] Loss: 0.125784
Train: [1098/2931 (100%)] Loss: 0.147777
Epoch: 9/10. Train set: Average loss: 0.1477
Epoch: 9/10. Validation set: Average loss: 0.1361
Train: [0/2931 (0%)] Loss: 0.187914
Train: [1098/2931 (100%)] Loss: 0.142468
Epoch: 10/10. Train set: Average loss: 0.1426
Epoch: 10/10. Validation set: Average loss: 0.1309
model_2
Train: [0/2931 (0%)] Loss: 0.249663
Train: [1098/2931 (100%)] Loss: 0.167130
Epoch: 1/10. Train set: Average loss: 0.1674
Epoch: 1/10. Validation set: Average loss: 0.1253
Train: [0/2931 (0%)] Loss: 0.131634
Train: [1098/2931 (100%)] Loss: 0.153790
Epoch: 2/10. Train set: Average loss: 0.1537
Epoch: 2/10. Validation set: Average loss: 0.1328
Train: [0/2931 (0%)] Loss: 0.143843
Train: [1098/2931 (100%)] Loss: 0.151634
Epoch: 3/10. Train set: Average loss: 0.1516
Epoch: 3/10. Validation set: Average loss: 0.1252
Train: [0/2931 (0%)] Loss: 0.173569
Train: [1098/2931 (100%)] Loss: 0.156962
Epoch: 4/10. Train set: Average loss: 0.1570
Epoch: 4/10. Validation set: Average loss: 0.1445
Train: [0/2931 (0%)] Loss: 0.122052
Train: [1098/2931 (100%)] Loss: 0.165557
Epoch: 5/10. Train set: Average loss: 0.1654
Epoch: 5/10. Validation set: Average loss: 0.1300
Train: [0/2931 (0%)] Loss: 0.097903
Train: [1098/2931 (100%)] Loss: 0.165786
Epoch: 6/10. Train set: Average loss: 0.1656
Epoch: 6/10. Validation set: Average loss: 0.1496
Train: [0/2931 (0%)] Loss: 0.147194
Train: [1098/2931 (100%)] Loss: 0.152778
Epoch: 7/10. Train set: Average loss: 0.1528
Epoch: 7/10. Validation set: Average loss: 0.1405
Train: [0/2931 (0%)] Loss: 0.206992
Train: [1098/2931 (100%)] Loss: 0.155909
Epoch: 8/10. Train set: Average loss: 0.1560
Epoch: 8/10. Validation set: Average loss: 0.1297
Train: [0/2931 (0%)] Loss: 0.192942
Train: [1098/2931 (100%)] Loss: 0.145894
</code>
|
{
"repository": "wconnell/metrx",
"path": "notebook/.ipynb_checkpoints/2020.03.30_feat_sel_shuff_dynamic-checkpoint.ipynb",
"matched_keywords": [
"gene expression"
],
"stars": null,
"size": 280611,
"hexsha": "cb41e283bc42e55e6f88d4320173da0093cf2720",
"max_line_length": 1723,
"avg_line_length": 52.4898989899,
"alphanum_fraction": 0.5646678142
}
|
# Notebook from IoannisGeorgousis/CharityML-Project
Path: project.ipynb
# Introduction to Machine Learning Nanodegree
## Project: Finding Donors for *CharityML*_____no_output_____In this project, we employ several supervised algorithms to accurately model individuals' income using data collected from the 1994 U.S. Census. The best candidate algorithm is then chosen from preliminary results and is further optimized to best model the data. The goal with this implementation is to construct a model that accurately predicts whether an individual makes more than \$50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features._____no_output_____
The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income). The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_. You can find the article by Ron Kohavi [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here consists of small changes to the original dataset, such as removing the `'fnlwgt'` feature and records with missing or ill-formatted entries._____no_output_____----
## Exploring the Data
Run the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, `'income'`, will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database._____no_output_____
<code>
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Display the first five records
display(data.head(5))_____no_output_____
</code>
### Implementation: Data Exploration
A cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, the following information is computed:
- The total number of records, `'n_records'`
- The number of individuals making more than \$50,000 annually, `'n_greater_50k'`.
- The number of individuals making at most \$50,000 annually, `'n_at_most_50k'`.
- The percentage of individuals making more than \$50,000 annually, `'greater_percent'`._____no_output_____
<code>
# Total number of records
n_records = data.shape[0]
# Number of records where individual's income is more than $50,000
n_greater_50k = data['income'].value_counts()[1]
# Number of records where individual's income is at most $50,000
n_at_most_50k = data['income'].value_counts()[0]
# Percentage of individuals whose income is more than $50,000
greater_percent = 100 * (n_greater_50k / (n_greater_50k + n_at_most_50k))
# Print the results
print("Total number of records: {}".format(n_records))
print("Individuals making more than $50,000: {}".format(n_greater_50k))
print("Individuals making at most $50,000: {}".format(n_at_most_50k))
print("Percentage of individuals making more than $50,000: {}%".format(greater_percent))
# Check whether records are consistent
if n_records == (n_greater_50k + n_at_most_50k):
print('Records are consistent!')
Total number of records: 45222
Individuals making more than $50,000: 11208
Individuals making at most $50,000: 34014
Percentage of individuals making more than $50,000: 24.78439697492371%
Records are consistent!
</code>
**Featureset Exploration**
* **age**: continuous.
* **workclass**: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
* **education**: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
* **education-num**: continuous.
* **marital-status**: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
* **occupation**: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
* **relationship**: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
* **race**: Black, White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other.
* **sex**: Female, Male.
* **capital-gain**: continuous.
* **capital-loss**: continuous.
* **hours-per-week**: continuous.
* **native-country**: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands._____no_output_____----
## Preparing the Data
Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as **preprocessing**. Fortunately, for this dataset, there are no invalid or missing entries we must deal with, however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms._____no_output_____### Transforming Skewed Continuous Features
A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset two features fit this description: '`capital-gain'` and `'capital-loss'`.
Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed._____no_output_____
<code>
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)C:\Users\johng\python-projects\Udacity Project Supervised ML\intro-to-ml-tensorflow\projects\p1_charityml\visuals.py:48: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
fig.show()
</code>
For highly-skewed feature distributions such as `'capital-gain'` and `'capital-loss'`, it is common practice to apply a <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation however: The logarithm of `0` is undefined, so we must translate the values by a small amount above `0` to apply the logarithm successfully.
Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed. _____no_output_____
<code>
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_log_transformed = pd.DataFrame(data = features_raw)
features_log_transformed[skewed] = features_raw[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_log_transformed, transformed = True)_____no_output_____
</code>
### Normalizing Numerical Features
In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-loss'` above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as shown in the example below.
Run the code cell below to normalize each numerical feature. We will use [`sklearn.preprocessing.MinMaxScaler`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) for this._____no_output_____
<code>
# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler() # default=(0, 1)
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_log_minmax_transform = pd.DataFrame(data = features_log_transformed)
features_log_minmax_transform[numerical] = scaler.fit_transform(features_log_transformed[numerical])
# Show an example of a record with scaling applied
display(features_log_minmax_transform.head(n = 5))_____no_output_____
</code>
### Data Preprocessing
From the table in **Exploring the Data** above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called *categorical variables*) be converted. One popular way to convert categorical variables is by using the **one-hot encoding** scheme. One-hot encoding creates a _"dummy"_ variable for each possible category of each non-numeric feature. For example, assume `someFeature` has three possible entries: `A`, `B`, or `C`. We then encode this feature into `someFeature_A`, `someFeature_B` and `someFeature_C`.
| | someFeature | | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: | :-: | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |
Additionally, as with the non-numeric features, we need to convert the non-numeric target label, `'income'`, to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid using one-hot encoding and simply encode these two categories as `0` and `1`, respectively. In the code cell below, you will need to implement the following:
- Use [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) to perform one-hot encoding on the `'features_log_minmax_transform'` data.
- Convert the target label `'income_raw'` to numerical entries.
- Set records with "<=50K" to `0` and records with ">50K" to `1`._____no_output_____
<code>
# One-hot encode the 'features_log_minmax_transform' data using pandas.get_dummies()
features_final = pd.get_dummies(features_log_minmax_transform)
# Encode the 'income_raw' data to numerical values
income = income_raw.replace(to_replace = {'<=50K': 0, '>50K': 1})
# Print the number of features after one-hot encoding
encoded = list(features_final.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))
# Uncomment the following line to see the encoded feature names
#print(encoded)103 total features after one-hot encoding.
['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week', 'workclass_ Federal-gov', 'workclass_ Local-gov', 'workclass_ Private', 'workclass_ Self-emp-inc', 'workclass_ Self-emp-not-inc', 'workclass_ State-gov', 'workclass_ Without-pay', 'education_level_ 10th', 'education_level_ 11th', 'education_level_ 12th', 'education_level_ 1st-4th', 'education_level_ 5th-6th', 'education_level_ 7th-8th', 'education_level_ 9th', 'education_level_ Assoc-acdm', 'education_level_ Assoc-voc', 'education_level_ Bachelors', 'education_level_ Doctorate', 'education_level_ HS-grad', 'education_level_ Masters', 'education_level_ Preschool', 'education_level_ Prof-school', 'education_level_ Some-college', 'marital-status_ Divorced', 'marital-status_ Married-AF-spouse', 'marital-status_ Married-civ-spouse', 'marital-status_ Married-spouse-absent', 'marital-status_ Never-married', 'marital-status_ Separated', 'marital-status_ Widowed', 'occupation_ Adm-clerical', 'occupation_ Armed-Forces', 'occupation_ Craft-repair', 'occupation_ Exec-managerial', 'occupation_ Farming-fishing', 'occupation_ Handlers-cleaners', 'occupation_ Machine-op-inspct', 'occupation_ Other-service', 'occupation_ Priv-house-serv', 'occupation_ Prof-specialty', 'occupation_ Protective-serv', 'occupation_ Sales', 'occupation_ Tech-support', 'occupation_ Transport-moving', 'relationship_ Husband', 'relationship_ Not-in-family', 'relationship_ Other-relative', 'relationship_ Own-child', 'relationship_ Unmarried', 'relationship_ Wife', 'race_ Amer-Indian-Eskimo', 'race_ Asian-Pac-Islander', 'race_ Black', 'race_ Other', 'race_ White', 'sex_ Female', 'sex_ Male', 'native-country_ Cambodia', 'native-country_ Canada', 'native-country_ China', 'native-country_ Columbia', 'native-country_ Cuba', 'native-country_ Dominican-Republic', 'native-country_ Ecuador', 'native-country_ El-Salvador', 'native-country_ England', 'native-country_ France', 'native-country_ Germany', 'native-country_ Greece', 'native-country_ Guatemala', 'native-country_ Haiti', 'native-country_ Holand-Netherlands', 'native-country_ Honduras', 'native-country_ Hong', 'native-country_ Hungary', 'native-country_ India', 'native-country_ Iran', 'native-country_ Ireland', 'native-country_ Italy', 'native-country_ Jamaica', 'native-country_ Japan', 'native-country_ Laos', 'native-country_ Mexico', 'native-country_ Nicaragua', 'native-country_ Outlying-US(Guam-USVI-etc)', 'native-country_ Peru', 'native-country_ Philippines', 'native-country_ Poland', 'native-country_ Portugal', 'native-country_ Puerto-Rico', 'native-country_ Scotland', 'native-country_ South', 'native-country_ Taiwan', 'native-country_ Thailand', 'native-country_ Trinadad&Tobago', 'native-country_ United-States', 'native-country_ Vietnam', 'native-country_ Yugoslavia']
</code>
### Shuffle and Split Data
Now all _categorical variables_ have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.
Run the code cell below to perform this split._____no_output_____
<code>
# Import train_test_split
from sklearn.model_selection import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features_final,
income,
test_size = 0.2,
random_state = 0)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))Training set has 36177 samples.
Testing set has 9045 samples.
</code>
----
## Evaluating Model Performance
In this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of our choice, and the fourth algorithm is known as a *naive predictor*._____no_output_____### Metrics and the Naive Predictor
*CharityML*, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, *CharityML* is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using **accuracy** as a metric for evaluating a particular model's performance would be appropriate. Additionally, identifying someone that *does not* make more than \$50,000 as someone who does would be detrimental to *CharityML*, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is *more important* than the model's ability to **recall** those individuals. We can use **F-beta score** as a metric that considers both precision and recall:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the **F$_{0.5}$ score** (or F-score for simplicity).
Looking at the distribution of classes (those who make at most 50,000, and those who make more), it's clear most individuals do not make more than 50,000. This can greatly affect accuracy, since we could simply say "this person does not make more than 50,000" and generally be right, without ever looking at the data! Making such a statement would be called naive, since we have not considered any information to substantiate the claim. It is always important to consider the naive prediction for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: If we predicted all people made less than 50,000, CharityML would identify no one as donors.
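As a quick, illustrative sketch (not part of the original project code), the cell below computes accuracy, precision, recall and the F$_{0.5}$ score on a small set of made-up labels using `sklearn.metrics`, and checks the F$_{0.5}$ value against the formula above. The toy label arrays are invented purely for demonstration.
<code>
# Hedged example: verify the F-beta formula on hypothetical labels.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, fbeta_score

y_true = np.array([1, 0, 1, 1, 0, 0, 0, 1])  # hypothetical ground-truth labels
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 1])  # hypothetical predictions

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)

print("Accuracy:        {:.4f}".format(accuracy_score(y_true, y_pred)))
print("Precision:       {:.4f}".format(precision))
print("Recall:          {:.4f}".format(recall))
print("F_0.5 (sklearn): {:.4f}".format(fbeta_score(y_true, y_pred, beta=0.5)))

# Same value from the formula: (1 + beta^2) * P * R / (beta^2 * P + R)
beta = 0.5
print("F_0.5 (formula): {:.4f}".format((1 + beta**2) * precision * recall / (beta**2 * precision + recall)))
</code>
Because beta = 0.5 weights precision more heavily than recall, the F$_{0.5}$ value from this toy example lands closer to the precision value than the F$_1$ score would.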
#### Note: Recap of accuracy, precision, recall
**Accuracy** measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).
**Precision** tells us what proportion of the messages we classified as spam actually were spam.
It is the ratio of true positives (messages classified as spam that really are spam) to all positives (all messages classified as spam, irrespective of whether that classification was correct), in other words it is the ratio of
`[True Positives/(True Positives + False Positives)]`
**Recall (sensitivity)** tells us what proportion of the messages that actually were spam we classified as spam.
It is the ratio of true positives (messages classified as spam that really are spam) to all the messages that actually were spam, in other words it is the ratio of
`[True Positives/(True Positives + False Negatives)]`
For classification problems that are skewed in their class distributions, as in our case, accuracy by itself is not a very good metric. For example, if we had 100 text messages and only 2 were spam while the remaining 98 were not, we could classify 90 messages as not spam (including the 2 that were spam, which would then be false negatives) and 10 as spam (all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to give the F1 score, which is a weighted average (harmonic mean) of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score (we take the harmonic mean because we are dealing with ratios)._____no_output_____### Naive Predictor Performance
If we chose a model that always predicted an individual made more than $50,000, what would that model's accuracy and F-score be on this dataset? You must use the code cell below and assign your results to `'accuracy'` and `'fscore'` to be used later.
**Please note** that the purpose of generating a naive predictor is simply to show what a base model without any intelligence would look like. In the real world, ideally your base model would be either the results of a previous model or could be based on a research paper upon which you are looking to improve. When there is no benchmark model set, getting a result better than random choice is a place you could start from.
**Notes:**
* When we have a model that always predicts '1' (i.e. the individual makes more than 50k), our model will have no True Negatives (TN) or False Negatives (FN), as we are not making any negative ('0' value) predictions. Therefore, our Accuracy in this case becomes the same as our Precision (True Positives/(True Positives + False Positives)), because every prediction of '1' that should have been '0' becomes a False Positive; the denominator is then the total number of records.
* Our Recall score (True Positives/(True Positives + False Negatives)) in this setting becomes 1, as we have no False Negatives._____no_output_____
<code>
'''
TP = np.sum(income) # Counting the ones as this is the naive case. Note that 'income' is the 'income_raw' data
encoded to numerical values done in the data preprocessing step.
FP = income.count() - TP # Specific to the naive case
TN = 0 # No predicted negatives in the naive case
FN = 0 # No predicted negatives in the naive case
'''
# Calculate accuracy, precision and recall
TP = np.sum(income)
FP = income.count() - TP
TN, FN = 0, 0
accuracy = (TP + TN) / (TP + TN + FP + FN)
recall = TP / (TP + FN)
precision = TP / (TP + FP)
# Calculate F-score using the formula above for beta = 0.5 and correct values for precision and recall.
beta = 0.5 # Define beta
fscore = (1 + beta**2) * (precision * recall) / (beta**2 * precision + recall)
# Print the results
print("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore))Naive Predictor: [Accuracy score: 0.2478, F-score: 0.2917]
</code>
### Supervised Learning Models
**The following are some of the supervised learning models that are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression_____no_output_____### Model Application
List three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen
- Describe one real-world application in industry where the model can be applied.
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
_____no_output_____### Decision Trees
**Describe one real-world application in industry where the model can be applied.**
Decision trees can be used for "Identifying Defective Products in the Manufacturing Process". [1]
In this regard, decision trees are used as a classification algorithm that is trained on data containing features of the products that the company manufactures, together with the labels "Defective" and "Non-defective".
After the training process, the model should be able to group products into the "Defective" and "Non-defective" categories and predict whether a newly manufactured product is defective or not.
**What are the strengths of the model; when does it perform well?**
1. The data pre-processing step for decision trees requires less effort compared to other algorithms (e.g. no need to normalize/scale data or impute missing values). [2]
2. The way the algorithm works is very intuitive, and thus easier to understand and explain. In addition, they can be used as a white box model. [3]
**What are the weaknesses of the model; when does it perform poorly?**
1. Because decision trees are so simple there is often a need for more complex algorithms (e.g. Random Forest) to achieve a higher accuracy. [3]
2. Decision trees have the tendency to overfit the training set. [3]
3. Decision trees are unstable. The reproducibility of a decision tree model is unreliable, since the structure is sensitive even to small changes in the data. [3]
4. Decision trees can get complex and computationally expensive. [3]
**What makes this model a good candidate for the problem, given what you know about the data?**
I think this model is a good candidate in this situation because, as a white-box model working on well-defined features, it might provide further insights which CharityML can rely on.
For example, CharityML identified that the most relevant parameter when it comes to determining donation likelihood is individual income.
A decision tree model may find highly accurate predictors of income that can simplify the current process and help draw more valuable conclusions such as this one.
Moreover, due to the algorithm's simplicity, the charity members will be able to intuitively understand its basic internal processes.
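To illustrate the white-box point, here is a minimal sketch (added for clarity, not from the original notebook) that fits a shallow tree on the preprocessed training data used later in this project (`X_train`, `y_train`) and prints its decision rules; it assumes a scikit-learn version that provides `export_text`:
<code>
# Minimal sketch: a shallow decision tree whose rules can be printed and inspected,
# illustrating the "white box" interpretability mentioned above.
# Assumes X_train / y_train are the preprocessed census features and labels.
from sklearn.tree import DecisionTreeClassifier, export_text

shallow_tree = DecisionTreeClassifier(max_depth=3, random_state=42)
shallow_tree.fit(X_train, y_train)
print(export_text(shallow_tree, feature_names=list(X_train.columns)))
</code>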
**References**
[[1]](http://www.kpubs.org/article/articleDownload.kpubs?downType=pdf&articleANo=E1CTBR_2017_v13n2_57)
[[2]](https://medium.com/@dhiraj8899/top-5-advantages-and-disadvantages-of-decision-tree-algorithm-428ebd199d9a)
[[3]](https://botbark.com/2019/12/19/top-6-advantages-and-disadvantages-of-decision-tree-algorithm/)
_____no_output_____### Ensemble Methods (AdaBoost)
**Describe one real-world application in industry where the model can be applied.**
The AdaBoost algorithm can be applied for "Telecommunication Fraud Detection". [1]
The model is trained on attributes of past telecommunication messages (the features) along with whether they ended up being fraudulent or not (the labels).
Then, the AdaBoost model should be able to predict whether future telecommunication material is fraudulent or not.
**What are the strengths of the model; when does it perform well?**
1. High flexibility. Different classification algorithms (decision trees, SVMs, etc.) can be used as weak learners to finally constitute a strong learner (final model). [2]
2. High precision. Experiments have shown AdaBoost models to achieve relatively high precision when making predictions. [3]
3. Simple preprocessing. AdaBoost algorithms are not too demanding when it comes to preprocessed data, thus more time is saved during the pre-processing step. [4]
**What are the weaknesses of the model; when does it perform poorly?**
1. Sensitive to noise data and outliers. [4]
2. Requires quality data because the boosting technique learns progressively and is prone to error. [4]
3. Low Accuracy when Data is Imbalanced. [3]
4. Training is mildly computationally expensive, and thus it can be time-consuming. [3]
**What makes this model a good candidate for the problem, given what you know about the data?**
AdaBoost will be tried as an alternative to decision trees that offers stronger predictive capacity.
An AdaBoost model is a good candidate because it can provide improvements over decision trees to valuable metrics such as accuracy and precision.
Since it has been shown that this algorithm can achieve relatively high precision (which is what we are looking for in this problem), this aspect of it will also benefit us.
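As a small sketch of the flexibility mentioned above (added for illustration, not part of the original analysis), AdaBoost can wrap different weak learners; the `base_estimator` argument matches the older scikit-learn API that this notebook uses (newer releases rename it to `estimator`):
<code>
# Minimal sketch: AdaBoost with a slightly deeper decision tree as its weak learner.
# Assumes X_train / y_train are the preprocessed census features and labels.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

ada = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=2),
                         n_estimators=200, learning_rate=0.5, random_state=42)
# ada.fit(X_train, y_train)  # would train the ensemble on the census data
</code>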
**References**
[[1]](https://download.atlantis-press.com/article/25896505.pdf)
[[2]](https://www.educba.com/adaboost-algorithm/)
[[3]](https://easyai.tech/en/ai-definition/adaboost/#:~:text=AdaBoost%20is%20adaptive%20in%20a,problems%20than%20other%20learning%20algorithms.)
[[4]](https://blog.paperspace.com/adaboost-optimizer/)_____no_output_____### Support Vector Machines
**Describe one real-world application in industry where the model can be applied.**
SVMs can be applied in bioinformatics. [1]
For example, an SVM model can be trained on data involving features of cancer tumours and then be able to identify whether a tumour is benign or malignant (labels).
**What are the strengths of the model; when does it perform well?**
1. Effective in high dimensional spaces (i.e. when there are numerous features). [2]
2. Generally a good algorithm. SVMs perform well when we have almost no information about the data. [3]
3. Relatively low risk of overfitting. This is due to its L2 Regularisation feature. [4]
4. High flexibility. Can handle linear & non-linear data due to variety added by different kernel functions. [3]
5. Stability. Since a small change to the data does not greatly affect the hyperplane. [4]
6. SVM is defined by a convex optimisation problem (i.e. no local minima) [4]
**What are the weaknesses of the model; when does it perform poorly?**
1. Training is very computationally expensive (high memory requirement) and thus it can be time-consuming, especially for large datasets [3]
2. Sensitive to noisy data, i.e. when the target classes are overlapping [2]
3. Hyperparameters can be difficult to tune. (Kernel, C parameter, gamma)
e.g. when choosing a Kernel, if you always go with high-dimensional ones you might generate too many support vectors and reduce training speed drastically. [4]
4. Difficult to understand and interpret, particularly with high dimensional data. Also, the final model is not easy to see, so we cannot do small calibrations based on business intuition. [3]
5. Requires feature scaling. [4]
**What makes this model a good candidate for the problem, given what you know about the data?**
Given what we know about the data, an SVM would be a good choice since it can handle the dataset's many dimensions.
It will also add variety when compared to decision trees and AdaBoost, potentially yielding better results due to its vastly different mechanism.
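As a minimal sketch (added for illustration, not part of the original analysis), the feature-scaling requirement and the main hyperparameters (kernel, C, gamma) can be handled in a single scikit-learn pipeline:
<code>
# Minimal sketch: scale features before fitting an RBF-kernel SVM.
# Assumes X_train / y_train are the preprocessed census features and labels.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

svm_clf = make_pipeline(StandardScaler(),
                        SVC(kernel='rbf', C=1.0, gamma='scale', random_state=42))
# svm_clf.fit(X_train, y_train)  # would scale the features and train the SVM
</code>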
**References**
[[1]](https://data-flair.training/blogs/applications-of-svm/)
[[2]](https://medium.com/@dhiraj8899/top-4-advantages-and-disadvantages-of-support-vector-machine-or-svm-a3c06a2b107)
[[3]](https://statinfer.com/204-6-8-svm-advantages-disadvantages-applications/)
[[4]](http://theprofessionalspoint.blogspot.com/2019/03/advantages-and-disadvantages-of-svm.html)_____no_output_____### Creating a Training and Predicting Pipeline
To properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section.
In the code block below, you will need to implement the following:
- Import `fbeta_score` and `accuracy_score` from [`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics).
- Fit the learner to the sampled training data and record the training time.
- Perform predictions on the test data `X_test`, and also on the first 300 training points `X_train[:300]`.
- Record the total prediction time.
- Calculate the accuracy score for both the training subset and testing set.
- Calculate the F-score for both the training subset and testing set.
- Make sure that you set the `beta` parameter!_____no_output_____
<code>
# Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# Fit the learner to the training data using slicing with 'sample_size' using .fit(training_features[:], training_labels[:])
start = time() # Get start time
learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
end = time() # Get end time
# Calculate the training time
results['train_time'] = end - start
# Get the predictions on the test set(X_test),
# then get predictions on the first 300 training samples(X_train) using .predict()
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# Calculate the total prediction time
results['pred_time'] = end - start
# Compute accuracy on the first 300 training samples
results['acc_train'] = accuracy_score(y_train[:300], predictions_train)
# Compute accuracy on test set using accuracy_score()
results['acc_test'] = accuracy_score(y_test, predictions_test)
# Compute F-score on the the first 300 training samples using fbeta_score()
results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=beta)
# Compute F-score on the test set which is y_test
results['f_test'] = fbeta_score(y_test, predictions_test, beta=beta)
# Success
print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
_____no_output_____
</code>
### Initial Model Evaluation
In the code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in `'clf_A'`, `'clf_B'`, and `'clf_C'`.
- Use a `'random_state'` for each model you use, if provided.
- **Note:** Use the default settings for each model — you will tune one specific model in a later section.
- Calculate the number of records equal to 1%, 10%, and 100% of the training data.
- Store those values in `'samples_1'`, `'samples_10'`, and `'samples_100'` respectively.
**Note:** Depending on which algorithms you chose, the following implementation may take some time to run!_____no_output_____
<code>
# Import the three supervised learning models from sklearn
# Import Algorithms
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
# Initialize the three models
clf_A = DecisionTreeClassifier(random_state=42)
clf_B = AdaBoostClassifier(random_state=42)
clf_C = SVC(random_state=42)
# Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_100 = len(y_train)
samples_10 = int(0.1*len(y_train))
samples_1 = int(0.01*len(y_train))
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)DecisionTreeClassifier trained on 361 samples.
DecisionTreeClassifier trained on 3617 samples.
DecisionTreeClassifier trained on 36177 samples.
AdaBoostClassifier trained on 361 samples.
AdaBoostClassifier trained on 3617 samples.
AdaBoostClassifier trained on 36177 samples.
SVC trained on 361 samples.
SVC trained on 3617 samples.
SVC trained on 36177 samples.
</code>
----
## Improving Results
In this final section, you will choose from the three supervised learning models the *best* model to use on the census data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F-score. _____no_output_____### Choosing the Best Model
Based on the evaluation you performed earlier, in one to two paragraphs, explain to *CharityML* which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000. _____no_output_____
##### AdaBoost
According to the analysis, the most appropriate model for identifying individuals who make more than \$50,000 is the AdaBoost model, for the following reasons:
- AdaBoost yields the best accuracy and F-score on the testing data, meaning that to maximise the number of true potential donors, it is the ideal model to choose.
- The 2nd best competitor (namely, SVM) has a slightly higher tendency to overfit, and is significantly more time-consuming to train.
- AdaBoost is suitable for the given dataset because it yields high precision (i.e. few false positives, which is what we want), and will allow us to interpret the results for potential calibrations more so than an SVM model would. _____no_output_____### Describing the Model in Layman's Terms
In one to two paragraphs, explain to *CharityML*, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical jargon, such as describing equations._____no_output_____##### Introduction
AdaBoost is a model that belongs to a group of models called "Ensemble Methods".
As the name suggests, the model trains weaker models on the data (also known as "weak learners"), and then combines them into a single, more powerful model (which we call a "strong learner").
##### Training the AdaBoost Model
In our case, we feed the model the training data from our dataset, and it fits a simple "weak learner" to the data. Then, it puts extra emphasis on the examples the first learner got wrong, and fits a second learner to correct those mistakes. A third weak learner then does the same for the second one, and this process repeats until enough learners have been trained.
Then, the algorithm assigns a weight to each weak learner based on its performance, and combines all the weak learners into a single **Strong Learner**.
When combining the weak learners, the ones with the larger weights (i.e. the more successful ones) get more of a say in the final model's predictions.
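For readers who want to see the reweighting idea concretely, below is a rough, simplified sketch (added for illustration; this is not the scikit-learn implementation used in this project). It assumes the two classes are encoded as -1 and +1.
<code>
# Rough sketch of the AdaBoost reweighting loop (binary labels encoded as -1 / +1).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def simple_adaboost(X, y, n_rounds=50):
    y = np.asarray(y)                          # labels assumed to be -1 / +1
    n = len(y)
    weights = np.full(n, 1.0 / n)              # start with equal sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=weights)
        pred = stump.predict(X)
        err = np.sum(weights * (pred != y)) / np.sum(weights)
        if err >= 0.5:                         # weak learner no better than chance
            break
        err = max(err, 1e-10)                  # avoid division by zero for a perfect learner
        alpha = 0.5 * np.log((1 - err) / err)  # weight (the "say") of this learner
        weights *= np.exp(-alpha * y * pred)   # increase weight of misclassified samples
        weights /= weights.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def simple_adaboost_predict(X, learners, alphas):
    # weighted vote of all weak learners
    votes = sum(a * l.predict(X) for a, l in zip(alphas, learners))
    return np.sign(votes)
</code>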
##### AdaBoost Predictions
After training the model, we will be able to feed it unseen examples (i.e. new individuals), and the model will use what it learned from the previous individuals to predict whether or not they make more than \$50,000 per year. _____no_output_____### Model Tuning
Fine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).
- Initialize the classifier you've chosen and store it in `clf`.
- Set a `random_state` if one is available to the same state you set before.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: `parameters = {'parameter' : [list of values]}`.
- **Note:** Avoid tuning the `max_features` parameter of your learner if that parameter is available!
- Use `make_scorer` to create an `fbeta_score` scoring object (with $\beta = 0.5$).
- Perform grid search on the classifier `clf` using the `'scorer'`, and store it in `grid_obj`.
- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_fit`.
**Note:** Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!_____no_output_____
<code>
# Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
# Initialize the classifier
clf = AdaBoostClassifier(random_state=42)
# Create the parameters list you wish to tune, using a dictionary if needed.
parameters = {'n_estimators': [500, 1000, 1500, 2000], 'learning_rate': np.linspace(0.001, 1, 10)}
# Make an fbeta_score scoring object using make_scorer()
scorer = make_scorer(fbeta_score, beta=beta)
# Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_obj = GridSearchCV(clf, parameters, scoring=scorer, n_jobs = -1)
# Fit the grid search object to the training data and find the optimal parameters using fit()
start = time()
grid_fit = grid_obj.fit(X_train, y_train)
end = time()
print('Time to tune: ', end - start)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and optimized models
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Check hyperparameters
print(clf)
print(best_clf)
# Report the before-and-after scores
print("Unoptimized model\n------")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)))
print("\nOptimized Model\n------")
print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))Time to tune: 2075.489259004593
AdaBoostClassifier(algorithm='SAMME.R', base_estimator=None, learning_rate=1.0,
n_estimators=50, random_state=42)
AdaBoostClassifier(algorithm='SAMME.R', base_estimator=None,
learning_rate=0.667, n_estimators=1500, random_state=42)
Unoptimized model
------
Accuracy score on testing data: 0.8576
F-score on testing data: 0.7246
Optimized Model
------
Final accuracy score on the testing data: 0.8676
Final F-score on the testing data: 0.7456
</code>
### Final Model Evaluation
* What is your optimized model's accuracy and F-score on the testing data?
* Are these scores better or worse than the unoptimized model?
* How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in **Question 1**?_____no_output_____#### Results:
| Metric | Unoptimized Model | Optimized Model |
| :------------: | :---------------: | :-------------: |
| Accuracy Score | 0.8576 | 0.8676 |
| F-score | 0.7246 | 0.7456 |
_____no_output_____**Discussion**
My optimised model's accuracy is 86.76% and its F-score (beta = 0.5) is 0.7456.
These scores are slightly better than the unoptimised model's: accuracy improved by ~1.2% and the F-score by ~2.9%.
The scores are also far better than the naive predictor's: accuracy is roughly 3.5 times higher and the F-score roughly 2.6 times higher.
_____no_output_____----
## Feature Importance
An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label, we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000.
Here, we choose a scikit-learn classifier (e.g., AdaBoost, random forests) that has a `feature_importances_` attribute, which ranks the importance of each feature according to the chosen classifier. In the next Python cell, we fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset._____no_output_____### Feature Relevance Observation
When **Exploring the Data**, it was shown there are thirteen available features for each individual on record in the census data. Of these thirteen records, which five features do you believe to be most important for prediction, and in what order would you rank them and why?_____no_output_____**Answer:**
1. **Occupation**. I would expect the job that a person has to be a good predictor of income.
2. **Hours per week**. The more hours you work, the more you earn.
3. **Education Number**. Because of the positive correlation between education level and income.
4. **Age**. Usually older people who've had longer careers have a higher income.
5. **Native Country**. Because a US worker earns significantly more than, say, an Argentinian one. _____no_output_____### Feature Importance
Choose a `scikit-learn` supervised learning algorithm that has a `feature_importances_` attribute available for it. This attribute ranks the importance of each feature when making predictions based on the chosen algorithm.
In the code cell below, you will need to implement the following:
- Import a supervised learning model from sklearn if it is different from the three used earlier.
- Train the supervised model on the entire training set.
- Extract the feature importances using `'.feature_importances_'`._____no_output_____
<code>
# Import a supervised learning model that has 'feature_importances_'
from sklearn.ensemble import AdaBoostClassifier
# Train the supervised model on the training set using .fit(X_train, y_train)
model = AdaBoostClassifier().fit(X_train, y_train)
# Extract the feature importances using .feature_importances_
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)_____no_output_____
</code>
### Extracting Feature Importance
Observe the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000.
* How do these five features compare to the five features you discussed in **Question 6**?
* If you were close to the same answer, how does this visualization confirm your thoughts?
* If you were not close, why do you think these features are more relevant?_____no_output_____**Answer:**
* *How do these five features compare to the five features you discussed in **Question 6**?*
These five features are significantly different to what I predicted in question 6. While I did mention age, hours-per-week and education-num, I failed to mention two of the most significant features: capital-loss and capital-gain, which together amount to about 37% cumulative feature weight.
* *If you were close to the same answer, how does this visualization confirm your thoughts?*
This visualisation confirms that age plays a large role and that hours-per-week and education-num are among the most relevant features.
This is because of the direct and strong correlation between these variables and individual income.
* *If you were not close, why do you think these features are more relevant?*
I was genuinely surprised that occupation did not make it into the top 5. I suppose this is because the listed occupations just do not differ that much in income, whereas capital-loss and capital-gain vary more among individuals and more directly reflect their income. Similarly, regarding native-country, I suppose most people were from the US or a similarly developed country, and hence the feature didn't have great predictive power. _____no_output_____### Feature Selection
How does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction time is much lower — at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of **all** features present in the data. This hints that we can attempt to *reduce the feature space* and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set *with only the top five important features*. _____no_output_____
<code>
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print("Final Model trained on full data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
print("\nFinal Model trained on reduced data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5)))Final Model trained on full data
------
Accuracy on testing data: 0.8676
F-score on testing data: 0.7456
Final Model trained on reduced data
------
Accuracy on testing data: 0.8422
F-score on testing data: 0.7021
</code>
### Effects of Feature Selection
* How does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?
* If training time was a factor, would you consider using the reduced data as your training set?_____no_output_____**Answer:**
The model trained on the reduced data gets roughly an extra 2% of the testing examples wrong, and its F-score is ~0.04 lower.
If training time were a factor, I would probably still not use the reduced data as my training set.
However, if more training examples yielded a significant improvement, I would recommend using lower-dimensional data so that we could accommodate more training examples._____no_output_____
|
{
"repository": "IoannisGeorgousis/CharityML-Project",
"path": "project.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": 2,
"size": 187929,
"hexsha": "cb427503286794829cf49f83c0ea11f7dc0e3e88",
"max_line_length": 52432,
"avg_line_length": 139.828125,
"alphanum_fraction": 0.8453830968
}
|
# Notebook from f-grimaldi/Linear-Interactive-Peptides-LIPs-Predictor
Path: .ipynb_checkpoints/dataset-checkpoint.ipynb
### Importing libraries_____no_output_____
<code>
# Import default libraries
import pandas as pd
import numpy as np
import math
from matplotlib import pyplot as plt
import seaborn as sns
import time
import random
import warnings
import os
import requests
import json
import zipfile
import logging
# Import Biopython utils
from Bio.PDB import PDBList, calc_angle, calc_dihedral, PPBuilder, is_aa, PDBIO, NeighborSearch, DSSP, HSExposureCB
from Bio.PDB.PDBParser import PDBParser
from Bio.SeqUtils import IUPACData
from Bio.PDB.PDBIO import Select
# Import custom libraries
from modules.feature_extraction import *_____no_output_____# Set debug info
logging.basicConfig(level=logging.DEBUG)_____no_output_____
</code>
### Helping functions_____no_output_____### Importing original dataset (LIP tagged sequences)_____no_output_____
<code>
# Randomly down-sample the non-LIP residues so that `number_of_samples` of them remain
# (all LIP-tagged rows are kept). Assumes the dataframe has a default 0..n-1 integer index.
def down_sampling(df, number_of_samples, seed=42):
    random.seed(seed)  # make the sampling reproducible
    noLIP_index = set(df[df['LIP'] == 0].index)
    indexes = set(np.arange(0, np.shape(df)[0]))
    # rows to discard; random.sample requires a sequence, not a set
    sample = random.sample(sorted(noLIP_index), len(noLIP_index) - number_of_samples)
    new_index = indexes.difference(sample)
    df1 = df.iloc[list(new_index), :]
    return df1
# Turns an angle from radians to degrees
def rad_to_deg(rad_angle):
# If the input is None, then it returns None.
# For numerical input, the output is mapped to [-180,180]
if rad_angle is None :
return None
# Computes angle in degrees
angle = rad_angle * 180 / math.pi
# Wrap the angle into the [-180, 180] range
while angle > 180 :
angle = angle - 360
while angle < -180 :
angle = angle + 360
return angle_____no_output_____# Read original dataset (lips_dataset)
ds_original = pd.read_csv('./datasets/lips_dataset_02.txt', sep='\t')
# Define new dataset
ds_original.head()_____no_output_____
</code>
### Downloading proteins (automatically skips a protein if it has already been downloaded)_____no_output_____
<code>
# Select all proteins (pdb column)
pdb_ids = ds_original.pdb.unique()
# Define pdb files dir
pdb_dir = './pdb_files'
# Define pdb file fetching class
pdbl = PDBList()
# Fetch every protein
for pdb_id in pdb_ids:
# Execute fetching of the protein (pdb file)
pdbl.retrieve_pdb_file(pdb_id, pdir=pdb_dir, file_format='pdb')Structure exists: './pdb_files\pdb1cee.ent'
Structure exists: './pdb_files\pdb1dev.ent'
Structure exists: './pdb_files\pdb1dow.ent'
Structure exists: './pdb_files\pdb1fqj.ent'
Structure exists: './pdb_files\pdb1g3j.ent'
Structure exists: './pdb_files\pdb1hrt.ent'
Structure exists: './pdb_files\pdb1i7w.ent'
Structure exists: './pdb_files\pdb1j2j.ent'
Structure exists: './pdb_files\pdb1jsu.ent'
Structure exists: './pdb_files\pdb1kil.ent'
Structure exists: './pdb_files\pdb1l8c.ent'
Structure exists: './pdb_files\pdb1p4q.ent'
Structure exists: './pdb_files\pdb1pq1.ent'
Structure exists: './pdb_files\pdb1q68.ent'
Structure exists: './pdb_files\pdb1rf8.ent'
Structure exists: './pdb_files\pdb1sc5.ent'
Structure exists: './pdb_files\pdb1sqq.ent'
Structure exists: './pdb_files\pdb1tba.ent'
Structure exists: './pdb_files\pdb1th1.ent'
Structure exists: './pdb_files\pdb1xtg.ent'
Structure exists: './pdb_files\pdb1ymh.ent'
Structure exists: './pdb_files\pdb1zoq.ent'
Structure exists: './pdb_files\pdb2a6q.ent'
Structure exists: './pdb_files\pdb2auh.ent'
Structure exists: './pdb_files\pdb2c1t.ent'
Structure exists: './pdb_files\pdb2o8a.ent'
Structure exists: './pdb_files\pdb3b71.ent'
Structure exists: './pdb_files\pdb1a3b.ent'
Structure exists: './pdb_files\pdb1k2d.ent'
Structure exists: './pdb_files\pdb1ej4.ent'
Structure exists: './pdb_files\pdb1mv0.ent'
Structure exists: './pdb_files\pdb1t08.ent'
Structure exists: './pdb_files\pdb1hv2.ent'
Structure exists: './pdb_files\pdb1p16.ent'
Structure exists: './pdb_files\pdb1ee5.ent'
Structure exists: './pdb_files\pdb1ozs.ent'
Structure exists: './pdb_files\pdb2phe.ent'
Structure exists: './pdb_files\pdb1sb0.ent'
Structure exists: './pdb_files\pdb1j2x.ent'
Structure exists: './pdb_files\pdb1axc.ent'
Structure exists: './pdb_files\pdb2gl7.ent'
Structure exists: './pdb_files\pdb1h2k.ent'
Structure exists: './pdb_files\pdb1ycq.ent'
Structure exists: './pdb_files\pdb1p22.ent'
Structure exists: './pdb_files\pdb2iv8.ent'
Structure exists: './pdb_files\pdb1tce.ent'
Structure exists: './pdb_files\pdb1r1r.ent'
Structure exists: './pdb_files\pdb1mxl.ent'
Structure exists: './pdb_files\pdb2fym.ent'
Structure exists: './pdb_files\pdb1iwq.ent'
Structure exists: './pdb_files\pdb1fv1.ent'
Structure exists: './pdb_files\pdb1dpj.ent'
Structure exists: './pdb_files\pdb2b3g.ent'
Structure exists: './pdb_files\pdb2nl9.ent'
Structure exists: './pdb_files\pdb1o9a.ent'
Structure exists: './pdb_files\pdb1sqk.ent'
Structure exists: './pdb_files\pdb1nx1.ent'
Structure exists: './pdb_files\pdb2gsi.ent'
Structure exists: './pdb_files\pdb1i8h.ent'
Structure exists: './pdb_files\pdb1p4b.ent'
Structure exists: './pdb_files\pdb2ivz.ent'
Structure exists: './pdb_files\pdb1lm8.ent'
Structure exists: './pdb_files\pdb1emu.ent'
Structure exists: './pdb_files\pdb1un0.ent'
Structure exists: './pdb_files\pdb1a81.ent'
Structure exists: './pdb_files\pdb2oq1.ent'
Structure exists: './pdb_files\pdb1kdx.ent'
Structure exists: './pdb_files\pdb1h8b.ent'
Structure exists: './pdb_files\pdb1dt7.ent'
Structure exists: './pdb_files\pdb2pg1.ent'
Structure exists: './pdb_files\pdb1apm.ent'
Structure exists: './pdb_files\pdb1cqt.ent'
</code>
### Creating redidues dataset_____no_output_____
<code>
# Select all proteins (pdb column)
pdb_ids = ds_original.pdb.unique()
# Define pdb files dir
pdb_dir = './pdb_files'
# Define pdb file fetching class
pdbl = PDBList()_____no_output_____# Define a set containing (pdb_id, chain_id)
valid_chains = set([(row['pdb'], row['chain']) for idx, row in ds_original.iterrows()])_____no_output_____# New list for residues
ds_residues = list()
# Loop thorugh every protein
for pdb_id in ds_original.pdb.unique():
# Get structure of the protein
structure = PDBParser(QUIET=True).get_structure(pdb_id, pdb_dir + '/pdb{}.ent'.format(pdb_id))
# We select only the 0-th model
model = structure[0]
# Loop through every model's chain
for chain in model:
# Skip if the chain is not valid
if (pdb_id, chain.id) not in valid_chains:
continue
for residue in chain:
# Do not take into account non-aminoacidic residues (e.g. water molecules)
if(not is_aa(residue)):
continue
# Add an entry to the residues list
ds_residues.append((pdb_id, model.id, chain.id, residue.id[1], residue.get_resname(), 0, 0))
# Turn list into dataframe
ds_residues = pd.DataFrame(ds_residues)
# Define dataset column names
ds_residues.columns = ['PDB_ID', 'MODEL_ID', 'CHAIN_ID', 'RES_ID', 'RES_NAME', 'LIP_SCORE', 'LIP']
# Show some info about the dataset
print("Numbers of proteins: {}".format(np.shape(ds_original)[0]))
print("Numbers of res: {}".format(np.shape(ds_residues)[0]))
# Show first rows
ds_residues.head()Numbers of proteins: 143
Numbers of res: 17911
</code>
### Tagging LIP residues_____no_output_____
<code>
# Launch the tagging algorithm (before this step, no residues are positively tagged)
LIP_tag(ds_original, ds_residues)
# Check that the number of residues positively LIP-tagged is higher than 0
assert any(ds_residues['LIP'] == 1)
# Show first positively tagged LIP residues
ds_residues[ds_residues.LIP == 1].head()_____no_output_____
</code>
### Check dataset balancement
We check if we have the same numerosity of LIP and npn-LIP tagged residues._____no_output_____
<code>
# Compute numerosity of LIP tagged residues
print('Numerosity of LIP tagged residues: {}'.format(ds_residues[ds_residues.LIP == 1].shape[0]))
# Compute numerosity of non-LIP tagged residues
print('Numerosity of non-LIP tagged residues: {}'.format(ds_residues[ds_residues.LIP == 0].shape[0]))Numerosity of LIP tagged residues: 1883
Numerosity of non-LIP tagged residues: 16028
# Add plot
fig, ax = plt.subplots(1, 1)
# Add frequency plot
ax = plt.hist(ds_residues['LIP'], bins=2)DEBUG:matplotlib.font_manager:findfont: Matching :family=sans-serif:style=normal:variant=normal:weight=normal:stretch=normal:size=10.0 to DejaVu Sans ('C:\\Users\\fgrim\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\matplotlib\\mpl-data\\fonts\\ttf\\DejaVuSans.ttf') with score of 0.050000
</code>
## Feature extraction_____no_output_____### DSSP features (angles, etc.)_____no_output_____
<code>
# Get DSSP dataframe
ds_dssp = get_DSSP(ds_original.pdb.unique(), pdb_dir)
# Show dataframe
ds_dssp.head()DEBUG:root:PDB ids:
DEBUG:root:['1cee' '1dev' '1dow' '1fqj' '1g3j' '1hrt' '1i7w' '1j2j' '1jsu' '1kil'
'1l8c' '1p4q' '1pq1' '1q68' '1rf8' '1sc5' '1sqq' '1tba' '1th1' '1xtg'
'1ymh' '1zoq' '2a6q' '2auh' '2c1t' '2o8a' '3b71' '1a3b' '1k2d' '1ej4'
'1mv0' '1t08' '1hv2' '1p16' '1ee5' '1ozs' '2phe' '1sb0' '1j2x' '1axc'
'2gl7' '1h2k' '1ycq' '1p22' '2iv8' '1tce' '1r1r' '1mxl' '2fym' '1iwq'
'1fv1' '1dpj' '2b3g' '2nl9' '1o9a' '1sqk' '1nx1' '2gsi' '1i8h' '1p4b'
'2ivz' '1lm8' '1emu' '1un0' '1a81' '2oq1' '1kdx' '1h8b' '1dt7' '2pg1'
'1apm' '1cqt']
DEBUG:root:PDB directory: './pdb_files'
# Check NULL values in PHI and PSI columns
assert not ds_dssp.PHI.isnull().any()_____no_output_____# Drop useless features
ds_dssp.drop(['DSSP_ID', 'AA'], axis=1, inplace=True)
ds_dssp.head()_____no_output_____# Drop useless columns from residues dataset
if 'PHI' in ds_residues.columns:
ds_residues.drop(['PHI', 'PSI'], axis=1, inplace=True)
# Merge DSSP features in ds_residues dataset
ds_residues = ds_residues.merge(ds_dssp, on=['PDB_ID', 'CHAIN_ID', 'RES_ID'], how='left')
# Check new datset
ds_residues.head()_____no_output_____fig, ax = plt.subplots(1, 2)
sns.boxplot(x='LIP', y='PHI',data=ds_residues, ax=ax[0])
sns.boxplot(x='LIP', y='PSI',data=ds_residues, ax=ax[1])_____no_output_____
</code>
### RING features_____no_output_____
<code>
# Define folder for ring files
ring_dir = './ring_files'
# Define PDB files for which RING feature extraction is required
pdb_ids = ds_original.pdb.unique()
# Define contact treshold to consider
contact_threshold = 3.5
# Flag for actually extract RING files
enable_ring = False_____no_output_____if enable_ring:
# Download chunk of 5 files per time
for i in range(0, len(pdb_ids), 5):
# Download required RING files
download_RING(pdb_ids[i:i+5], ring_dir)_____no_output_____# Get edges info from RING
ds_ring = get_RING(pdb_ids, pdb_dir, ring_dir, contact_threshold)
ds_ring.head()_____no_output_____# Get the number of intra chains contacts for every residue
intra_contacts = (ds_ring[ds_ring.CHAIN_ID_A == ds_ring.CHAIN_ID_B]
.groupby(['PDB_ID', 'CHAIN_ID_A', 'RES_ID_A'], as_index=False)
.size()
.reset_index(name='COUNTS'))
intra_contacts.columns = ['PDB_ID', 'CHAIN_ID', 'RES_ID', 'INTRA_CONTACTS']
intra_contacts.RES_ID = intra_contacts.RES_ID.astype(int)
intra_contacts.head()_____no_output_____# Get the number of inter chains contacts for every residue
inter_contacts = (ds_ring[ds_ring.CHAIN_ID_A != ds_ring.CHAIN_ID_B]
.groupby(['PDB_ID', 'CHAIN_ID_A', 'RES_ID_A'], as_index=False)
.size()
.reset_index(name='COUNTS'))
inter_contacts.columns = ['PDB_ID', 'CHAIN_ID', 'RES_ID', 'INTER_CONTACTS']
inter_contacts.RES_ID = inter_contacts.RES_ID.astype(int)
inter_contacts.head()_____no_output_____# Merge intra chain contacts into the main dataset
ds_residues = pd.merge(ds_residues, intra_contacts, how="left", on=['PDB_ID', 'CHAIN_ID', 'RES_ID'])
ds_residues.head()_____no_output_____# Merge inter chain contacts into the main dataset
ds_residues = pd.merge(ds_residues, inter_contacts, how="left", on=['PDB_ID', 'CHAIN_ID', 'RES_ID'])
ds_residues.head()_____no_output_____# Fill Nan with zeroes
ds_residues.fillna(0, inplace=True)
ds_residues.head()_____no_output_____# Group every contact by residue
groupby = ds_ring.groupby(['PDB_ID', 'CHAIN_ID_A', 'RES_ID_A'], as_index=False)
# Get edge locations
edges_loc = groupby['EDGE_LOC'].apply(lambda x: ' '.join(x)).reset_index(name='EDGE_LOC')
# Get edge types
edges_type = groupby['EDGE_TYPE'].apply(lambda x: ' '.join(x)).reset_index(name='EDGE_TYPE')
# Merge loc and type
edges = pd.merge(edges_loc, edges_type, on=['PDB_ID', 'CHAIN_ID_A', 'RES_ID_A'])
edges.columns = ['PDB_ID', 'CHAIN_ID', 'RES_ID', 'EDGE_LOC', 'EDGE_TYPE']
edges.RES_ID = edges.RES_ID.astype(int)
edges.head()_____no_output_____# Merge edges locations and types into the main dataframe
ds_residues = ds_residues.merge(edges, how='left', on=['PDB_ID', 'CHAIN_ID', 'RES_ID'])
# Handle NaNs
ds_residues.EDGE_LOC = ds_residues.EDGE_LOC.fillna('')
ds_residues.EDGE_TYPE = ds_residues.EDGE_TYPE.fillna('')
# Show new dataset
ds_residues.head()_____no_output_____# Save residues dataset to disk
ds_residues.to_csv('./datasets/residues.csv')_____no_output_____
</code>
|
{
"repository": "f-grimaldi/Linear-Interactive-Peptides-LIPs-Predictor",
"path": ".ipynb_checkpoints/dataset-checkpoint.ipynb",
"matched_keywords": [
"BioPython"
],
"stars": null,
"size": 102489,
"hexsha": "cb42a5acc035de46a8ba37afaf924f7a49b08de2",
"max_line_length": 7896,
"avg_line_length": 38.5007513148,
"alphanum_fraction": 0.4365248954
}
|
# Notebook from AbeelLab/phasm-benchmarks
Path: analysis/Phasing Probabilistic Model Analysis.ipynb
<code>
%matplotlib inline
import sys
import os
import json
from glob import glob
from collections import defaultdict, OrderedDict
import dinopy
import yaml
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
import seaborn
import numpy
import pandas as pd
import networkx
from scipy.special import binom
from scipy import stats
from IPython.display import Image, display
from phasm.io import gfa
from phasm.alignments import AlignmentType
from phasm.assembly_graph import AssemblyGraph
from phasm.bubbles import find_superbubbles
BASE_DIR = os.path.realpath(os.path.join(os.getcwd(), '..'))
with open(os.path.join(BASE_DIR, "config.yml")) as f:
config = yaml.load(f)
seaborn.set_style('whitegrid')_____no_output_____spanning_read_stats = []
candidate_prob_stats = []
bubble_map = defaultdict(dict)
for assembly, asm_config in config['assemblies'].items():
parts = assembly.split('-')
ploidy = int(parts[0].replace("ploidy", ""))
coverage = int(parts[1].replace("x", ""))
asm_folder = os.path.join(BASE_DIR, "assemblies", assembly)
for debugdata in glob("{}/04_phase/component[0-9].bubblechain[0-9]-debugdata.json".format(asm_folder)):
print(debugdata)
graphml = debugdata.replace("04_phase", "03_chain").replace("-debugdata.json", ".graphml")
g = AssemblyGraph(networkx.read_graphml(graphml))
curr_bubble = None
bubble_num = 0
num_candidates = -1
with open(debugdata) as f:
for line in f:
data = json.loads(line)
if data['type'] == "new_bubble":
curr_bubble = data
bubble_map[ploidy, coverage][(data['entrance'], data['exit'])] = data
if data['start_of_block'] == True:
bubble_num = 1
else:
dist_between_bubbles = (
min(e[2] for e in g.out_edges_iter(data['entrance'], data=g.edge_len))
)
spanning_read_stats.append({
'dist': dist_between_bubbles,
'spanning_reads': len(data['rel_read_info']),
'ploidy': ploidy
})
bubble_num += 1
if data['type'] == "candidate_set":
p_sr = data['p_sr']
prior = data['prior']
prob = 10**(p_sr + prior)
entrance = curr_bubble['entrance']
exit = curr_bubble['exit']
candidate_prob_stats.append({
'bubble': (entrance, exit),
'bubble_num': bubble_num,
'candidate_prob': prob,
'ploidy': ploidy,
'coverage': coverage
})/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy2-60x-error-free/04_phase/component0.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy2-60x-error-free/04_phase/component1.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy3-60x-error-free/04_phase/component0.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy3-60x-error-free/04_phase/component1.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy4-60x-error-free/04_phase/component0.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy4-60x-error-free/04_phase/component1.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy6-60x-error-free/04_phase/component0.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy6-60x-error-free/04_phase/component1.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy8-60x-error-free/04_phase/component0.bubblechain0-debugdata.json
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy8-60x-error-free/04_phase/component1.bubblechain0-debugdata.json
srdf = pd.DataFrame(spanning_read_stats)
srdf['spanning_reads_norm'] = srdf['spanning_reads'] / srdf['ploidy']
g = seaborn.JointGrid(x="dist", y="spanning_reads_norm", data=srdf, size=7)
x_bin_size = 2500
g.ax_marg_x.hist(srdf['dist'], alpha=0.6, bins=numpy.arange(0, srdf['dist'].max()+x_bin_size, x_bin_size))
y_bin_size = 10
g.ax_marg_y.hist(srdf['spanning_reads_norm'], alpha=0.6, orientation="horizontal",
bins=numpy.arange(0, srdf['spanning_reads_norm'].max()+y_bin_size, y_bin_size))
g.plot_joint(seaborn.regplot)
g.annotate(stats.pearsonr)
seaborn.plt.suptitle("Number of spanning reads against the distance between two bubbles,\n normalised for ploidy")
plt.ylim(ymin=0)
plt.xlabel("Distance between two bubbles [bases]")
plt.ylabel("Number of spanning reads")
plt.subplots_adjust(top=0.9)
plt.savefig(os.path.join(BASE_DIR, 'figures', 'spanning-reads.png'), transparent=True, dpi=256)
_____no_output_____candidate_df = pd.DataFrame(candidate_prob_stats)
candidate_df.set_index('bubble')
plt.figure()
seaborn.distplot(candidate_df['candidate_prob'], kde=False, hist_kws={"alpha": 0.8})
plt.title("Distribution of candidate extension relative likelihoods")
plt.xlabel("Relative likelihood of an extension")
plt.ylabel("Count")
# plt.xlim(xmax=1.0)
plt.axvline(1e-3, linestyle='--', color='black')
plt.savefig(os.path.join(BASE_DIR, 'figures', 'rel-likelihood-abs.png'), transparent=True, dpi=256)_____no_output_____grouped = candidate_df.groupby(['bubble', 'ploidy'])['candidate_prob']
max_probs = grouped.max()
for bubble, ploidy in grouped.groups.keys():
candidate_df.loc[grouped.groups[bubble, ploidy], 'max_prob'] = max_probs[bubble, ploidy]
candidate_df['relative_prob'] = candidate_df['candidate_prob'] / candidate_df['max_prob']
candidate_df
plt.figure()
seaborn.distplot(candidate_df[candidate_df['relative_prob'] < 1.0]['relative_prob'], kde=False, hist_kws={"alpha": 0.8})
plt.title("Distribution of relative probabilities for each candidate extension\n"
"at each superbubble")
plt.xlabel(r"$RL[E|H]\ /\ \omega$")
plt.ylabel("Count")
plt.savefig(os.path.join(BASE_DIR, "figures", "rl-relative-dist.png"), transparent=True, dpi=256)_____no_output_____c1, c2, c3, c4, c5 = seaborn.color_palette(n_colors=5)
pruning_stats = []
for assembly, asm_config in config['assemblies'].items():
parts = assembly.split('-')
ploidy = int(parts[0].replace("ploidy", ""))
coverage = int(parts[1].replace("x", ""))
if coverage != 60:
continue
asm_folder = os.path.join(BASE_DIR, "assemblies", assembly)
for chain_num, graphml in enumerate(glob("{}/03_chain/component[0-9].bubblechain[0-9].graphml".format(asm_folder))):
print(graphml)
# Calculate effect of pruning
g = AssemblyGraph(networkx.read_graphml(graphml))
bubbles = OrderedDict(find_superbubbles(g, report_nested=False))
bubble_num = 0
for i, bubble in enumerate(reversed(bubbles.items())):
entrance, exit = bubble
num_paths = len(list(networkx.all_simple_paths(g, entrance, exit)))
if not bubble in bubble_map[ploidy, coverage]:
continue
bubble_data = bubble_map[ploidy, coverage][bubble]
if bubble_data['start_of_block']:
bubble_num = 1
else:
bubble_num += 1
kappa = 0.0
pruned = 0
num_candidates_left = sys.maxsize
while num_candidates_left > 500 and kappa < 1.0:
kappa += 0.1
num_candidates_left = len(
candidate_df.query('(bubble == @bubble) and (ploidy == @ploidy) and (relative_prob >= @kappa)')
)
pruned = len(
candidate_df.query('(bubble == @bubble) and (ploidy == @ploidy) and (relative_prob < @kappa)')
)
pruning_stats.append({
'ploidy': ploidy,
'coverage': coverage,
'bubble_num': bubble_num,
'pruned': pruned,
'kappa': kappa
})
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy2-60x-error-free/03_chain/component0.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy2-60x-error-free/03_chain/component1.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy3-60x-error-free/03_chain/component0.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy3-60x-error-free/03_chain/component1.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy4-60x-error-free/03_chain/component0.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy4-60x-error-free/03_chain/component1.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy6-60x-error-free/03_chain/component0.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy6-60x-error-free/03_chain/component1.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy8-60x-error-free/03_chain/component0.bubblechain0.graphml
/run/media/lucas/data/bioinformatics/thesis-data/assemblies/ploidy8-60x-error-free/03_chain/component1.bubblechain0.graphml
pruning_df = pd.DataFrame(pruning_stats)
agg_df = pd.DataFrame(pruning_df.groupby(['bubble_num', 'kappa']).size().rename('counts'))
agg_df.reset_index(level=agg_df.index.names, inplace=True)
agg_df = agg_df.query('kappa <= 1.0')
_____no_output_____sum_df = pd.DataFrame(agg_df.groupby('bubble_num')['counts'].sum()).reset_index()
sum_df
for i in sum_df['bubble_num'].unique():
agg_df.loc[agg_df['bubble_num'] == i, 'total'] = int(sum_df['counts'].loc[sum_df['bubble_num'] == i].values[0])
agg_df['fraction'] = agg_df['counts'] / agg_df['total']
agg_df_____no_output_____plt.figure()
g = seaborn.factorplot(x="kappa", y="fraction", col="bubble_num",
kind="bar", col_wrap=3, sharex=False, color=c1,
data=agg_df.query('(bubble_num < 7) and (kappa <= 1.0)'))
seaborn.plt.suptitle('The maximum pruning factor $\kappa$ at different stages of the phasing process')
plt.subplots_adjust(top=0.9, hspace=0.3)
for i, ax in enumerate(g.axes):
ax.set_xlabel("$\kappa$")
if i % 3 == 0:
ax.set_ylabel("Fraction")
ax.set_title("Superbubble {}".format(i+1))
plt.savefig(os.path.join(BASE_DIR, 'figures', 'pruning.png'), transparent=True, dpi=256)_____no_output_____
</code>
|
{
"repository": "AbeelLab/phasm-benchmarks",
"path": "analysis/Phasing Probabilistic Model Analysis.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": 1,
"size": 158698,
"hexsha": "cb431c193431149aa1088aed266d478d94e3f1eb",
"max_line_length": 55538,
"avg_line_length": 193.5341463415,
"alphanum_fraction": 0.8558772007
}
|
# Notebook from jjc2718/mutation-fn
Path: 6_survival_analysis/expression_eda.ipynb
## Explore one-hit vs. two-hit samples in expression space_____no_output_____
<code>
from pathlib import Path
import pickle as pkl
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
import sys; sys.path.append('..')
import config as cfg
from data_utilities import load_cnv_data
%load_ext autoreload
%autoreload 2_____no_output_____# park et al. geneset info
park_loss_data = cfg.data_dir / 'park_loss_df.tsv'
park_gain_data = cfg.data_dir / 'park_gain_df.tsv'
# park et al. significant gene info
park_loss_sig_data = cfg.data_dir / 'park_loss_df_sig_only.tsv'
park_gain_sig_data = cfg.data_dir / 'park_gain_df_sig_only.tsv'
# park et al. gene/cancer type predictions
park_preds_dir = cfg.data_dir / 'park_genes_all_preds'
# mutation and copy number data
pancancer_pickle = Path('/home/jake/research/mpmp/data/pancancer_data.pkl')
# gene expression/rppa data files
data_type = 'gene expression'
subset_feats = 10000
gene_expression_data_file = Path(
'/home/jake/research/mpmp/data/tcga_expression_matrix_processed.tsv.gz'
)
rppa_data_file = Path(
'/home/jake/research/mpmp/data/tcga_rppa_matrix_processed.tsv'
)_____no_output_____
</code>
### Load mutation info
For now, just use binary mutation status from the pancancer repo. In the future we could pull more granular info from MC3, but it would take some engineering of `1_get_mutation_counts` to do this for lots of genes._____no_output_____
<code>
park_loss_df = pd.read_csv(park_loss_data, sep='\t', index_col=0)
park_loss_df.head()_____no_output_____park_gain_df = pd.read_csv(park_gain_data, sep='\t', index_col=0)
park_gain_df.head()_____no_output_____with open(pancancer_pickle, 'rb') as f:
pancancer_data = pkl.load(f)_____no_output_____# get (binary) mutation data
# 1 = observed non-silent mutation in this gene for this sample, 0 otherwise
mutation_df = pancancer_data[1]
print(mutation_df.shape)
mutation_df.iloc[:5, :5](9074, 20938)
</code>
### Load copy number info
Get copy loss/gain info directly from GISTIC "thresholded" output. This should be the same as (or very similar to) what the Park et al. study uses._____no_output_____
<code>
sample_freeze_df = pancancer_data[0]
copy_samples = set(sample_freeze_df.SAMPLE_BARCODE)
print(len(copy_samples))9074
copy_loss_df, copy_gain_df = load_cnv_data(
cfg.data_dir / 'pancan_GISTIC_threshold.tsv',
copy_samples
)
print(copy_loss_df.shape)
copy_loss_df.iloc[:5, :5](9068, 25128)
print(copy_gain_df.shape)
copy_gain_df.iloc[:5, :5](9068, 25128)
sample_freeze_df.head()_____no_output_____
</code>
### Load expression data
We'll also standardize each feature, and subset to the top features by mean absolute deviation if `subset_feats` is set._____no_output_____
<code>
if data_type == 'gene expression':
exp_df = pd.read_csv(gene_expression_data_file, sep='\t', index_col=0)
elif data_type == 'rppa':
exp_df = pd.read_csv(rppa_data_file, sep='\t', index_col=0)
print(exp_df.shape)
exp_df.iloc[:5, :5](11060, 15369)
# standardize features first
exp_df = pd.DataFrame(
StandardScaler().fit_transform(exp_df),
index=exp_df.index.copy(),
columns=exp_df.columns.copy()
)
print(exp_df.shape)
exp_df.iloc[:5, :5](11060, 15369)
# subset to subset_feats features by mean absolute deviation
if subset_feats is not None:
mad_ranking = (
exp_df.mad(axis=0)
.sort_values(ascending=False)
)
top_feats = mad_ranking[:subset_feats].index.astype(str).values
exp_mad_df = exp_df.reindex(top_feats, axis='columns')
else:
exp_mad_df = exp_df
print(exp_mad_df.shape)
exp_mad_df.iloc[:5, :5](11060, 10000)
</code>
### Get sample info and hit groups for gene/cancer type_____no_output_____
<code>
def get_hits_for_gene_and_tissue(identifier, cancer_classification):
"""Given a gene and tissue, load the relevant mutation/CNV information,
and divide the samples into groups to compare survival.
"""
# get patient ids in given cancer type
gene, tissue = identifier.split('_')
tissue_ids = (sample_freeze_df
.query('DISEASE == @tissue')
.SAMPLE_BARCODE
)
# get mutation and copy status
mutation_status = mutation_df.loc[tissue_ids, gene]
if cancer_classification == 'TSG':
copy_status = copy_loss_df.loc[tissue_ids, gene]
elif cancer_classification == 'Oncogene':
copy_status = copy_gain_df.loc[tissue_ids, gene]
# get hit groups from mutation/CNV data
two_hit_samples = (mutation_status & copy_status).astype(int)
one_hit_samples = (mutation_status | copy_status).astype(int)
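    # summing the two indicators encodes the hit count: 0 = neither, 1 = mutation or CNV, 2 = both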
return pd.DataFrame(
{'group': one_hit_samples + two_hit_samples}
)_____no_output_____identifier = 'ATRX_LGG'
cancer_classification = 'Oncogene'
sample_mut_df = get_hits_for_gene_and_tissue(identifier, cancer_classification)
# make sure sample data overlaps exactly with expression data
overlap_ixs = sample_mut_df.index.intersection(exp_mad_df.index)
sample_mut_df = sample_mut_df.loc[overlap_ixs, :].copy()
exp_mad_df = exp_mad_df.loc[overlap_ixs, :].copy()
# add group info for legends
sample_mut_df['group'] = sample_mut_df.group.map({
0: 'wild-type',
1: 'one-hit',
2: 'two-hit'
})
print(sample_mut_df.shape)
print(sample_mut_df.group.unique())
sample_mut_df.iloc[:5, :5](507, 1)
['wild-type' 'two-hit' 'one-hit']
</code>
### Plot samples by hit group_____no_output_____
<code>
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_proj_pca = pca.fit_transform(exp_mad_df)
print(X_proj_pca.shape)
X_proj_pca[:5, :5](507, 2)
sns.set({'figure.figsize': (8, 6)})
sns.scatterplot(x=X_proj_pca[:, 0],
y=X_proj_pca[:, 1],
hue=sample_mut_df.group,
hue_order=['wild-type', 'one-hit', 'two-hit'])
plt.title('PCA of {} {} features, colored by {} status'.format(
subset_feats, data_type, identifier))
plt.xlabel('PC1')
plt.ylabel('PC2')_____no_output_____from umap import UMAP
reducer = UMAP(n_components=2, random_state=42)
X_proj_umap = reducer.fit_transform(exp_mad_df)
print(X_proj_umap.shape)
X_proj_umap[:5, :5]/home/jake/anaconda3/envs/mutation_fn/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
sns.set({'figure.figsize': (8, 6)})
sns.scatterplot(x=X_proj_umap[:, 0],
y=X_proj_umap[:, 1],
hue=sample_mut_df.group,
hue_order=['wild-type', 'one-hit', 'two-hit'])
plt.title('UMAP of {} {} features, colored by {} status'.format(
subset_feats, data_type, identifier))
plt.xlabel('UMAP1')
plt.ylabel('UMAP2')_____no_output_____
</code>
### Plot samples by hit group, using features selected by pan-cancer classifiers_____no_output_____
<code>
coefs_file = Path(
'/home/jake/research/mpmp/data/final_models/final_expression_all_merged_coefs.tsv'
)
coefs_df = pd.read_csv(coefs_file, sep='\t', index_col=0)
coefs_df.iloc[:5, :5]/home/jake/anaconda3/envs/mutation_fn/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3457: DtypeWarning: Columns (0) have mixed types.Specify dtype option on import or set low_memory=False.
exec(code_obj, self.user_global_ns, self.user_ns)
gene, tissue = identifier.split('_')
coefs_gene = coefs_df.loc[:, gene]
coefs_gene = coefs_gene[(~coefs_gene.isna()) &
(~(coefs_gene == 0.0)) &
# get rid of log10_mut and cancer type covariates
(coefs_gene.index.astype(str).str.isdigit())]
coefs_gene.index = coefs_gene.index.astype(str)
print(coefs_gene.shape)
coefs_gene.head()(465,)
print(coefs_gene.index)
print(coefs_gene.index.isna().sum())Index(['100134938', '10014', '10018', '10072', '10094', '10142', '10217',
'10228', '10417', '10423',
...
'9761', '9843', '9846', '9858', '9877', '9905', '9909', '9922', '9924',
'9926'],
dtype='object', length=465)
0
exp_coefs_df = exp_df.loc[overlap_ixs, coefs_gene.index].copy()
print(exp_coefs_df.shape)
exp_coefs_df.iloc[:5, :5](507, 465)
sns.set({'figure.figsize': (8, 6)})
pca = PCA(n_components=2)
X_proj_pca = pca.fit_transform(exp_coefs_df)
sns.scatterplot(x=X_proj_pca[:, 0],
y=X_proj_pca[:, 1],
hue=sample_mut_df.group,
hue_order=['wild-type', 'one-hit', 'two-hit'])
plt.title('PCA of non-zero {} features, colored by {} status'.format(
data_type, identifier))
plt.xlabel('PC1')
plt.ylabel('PC2')_____no_output_____sns.set({'figure.figsize': (8, 6)})
reducer = UMAP(n_components=2, random_state=42)
X_proj_umap = reducer.fit_transform(exp_coefs_df)
sns.scatterplot(x=X_proj_umap[:, 0],
y=X_proj_umap[:, 1],
hue=sample_mut_df.group,
hue_order=['wild-type', 'one-hit', 'two-hit'])
plt.title('UMAP of nonzero {} features, colored by {} status'.format(
data_type, identifier))
plt.xlabel('UMAP1')
plt.ylabel('UMAP2')_____no_output_____
</code>
|
{
"repository": "jjc2718/mutation-fn",
"path": "6_survival_analysis/expression_eda.ipynb",
"matched_keywords": [
"gene expression"
],
"stars": null,
"size": 410097,
"hexsha": "cb43732872ebf75d87ad65d8ec67835c3be84bee",
"max_line_length": 95724,
"avg_line_length": 221.5542949757,
"alphanum_fraction": 0.8917621928
}
|
# Notebook from tandelDipak/pymc3
Path: docs/source/notebooks/ODE_with_manual_gradients.ipynb
<code>
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import theano
from scipy.integrate import odeint
from theano import *
THEANO_FLAGS = "optimizer=fast_compile"_____no_output_____
</code>
# Lotka-Volterra with manual gradients
by [Sanmitra Ghosh](https://www.mrc-bsu.cam.ac.uk/people/in-alphabetical-order/a-to-g/sanmitra-ghosh/)_____no_output_____Mathematical models are used ubiquitously in a variety of science and engineering domains to model the time evolution of physical variables. These mathematical models are often described as ODEs that are characterised by model structure - the functions of the dynamical variables - and model parameters. However, for the vast majority of systems of practical interest it is necessary to infer both the model parameters and an appropriate model structure from experimental observations. This experimental data is often scarce and incomplete. Furthermore, a large variety of models described as dynamical systems show traits of sloppiness (see [Gutenkunst et al., 2007](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.0030189)) and have unidentifiable parameter combinations. The task of inferring model parameters and structure from experimental data is of paramount importance for reliably analysing the behaviour of dynamical systems and drawing faithful predictions, in light of the difficulties posed by their complexity. Moreover, any future model prediction should encompass and propagate variability and uncertainty in model parameters and/or structure. Thus, it is also important that the inference methods are equipped to quantify and propagate the aforementioned uncertainties from the model descriptions to model predictions. As a natural choice to handle uncertainty, at least in the parameters, Bayesian inference is increasingly used to fit ODE models to experimental data ([Mark Girolami, 2008](https://www.sciencedirect.com/science/article/pii/S030439750800501X)). However, due to some of the difficulties that I pointed out above, fitting an ODE model using Bayesian inference is a challenging task. In this tutorial I am going to take up that challenge and will show how PyMC3 could potentially be used for this purpose.
I must point out that model fitting (inference of the unknown parameters) is just one of many crucial tasks that a modeller has to complete in order to gain a deeper understanding of a physical process. However, success in this task is crucial and this is where PyMC3, and probabilistic programming (ppl) in general, is extremely useful. The modeller can take full advantage of the variety of samplers and distributions provided by PyMC3 to automate inference.
In this tutorial I will focus on the fitting exercise, that is estimating the posterior distribution of the parameters given some noisy experimental time series. _____no_output_____## Bayesian inference of the parameters of an ODE
I begin by first introducing the Bayesian framework for inference in a coupled non-linear ODE defined as
$$
\frac{d X(t)}{dt}=\boldsymbol{f}\big(X(t),\boldsymbol{\theta}\big),
$$
where $X(t)\in\mathbb{R}^K$ is the solution, at each time point, of the system composed of $K$ coupled ODEs - the state vector - and $\boldsymbol{\theta}\in\mathbb{R}^D$ is the parameter vector that we wish to infer. $\boldsymbol{f}(\cdot)$ is a non-linear function that describes the governing dynamics. Also, in case of an initial value problem, let the matrix $\boldsymbol{X}(\boldsymbol{\theta}, \mathbf{x_0})$ denote the solution of the above system of equations at some specified time points for the parameters $\boldsymbol{\theta}$ and initial conditions $\mathbf{x_0}$.
Consider a set of noisy experimental observations $\boldsymbol{Y} \in \mathbb{R}^{T\times K}$ observed at $T$ experimental time points for the $K$ states. We can obtain the likelihood $p(\boldsymbol{Y}|\boldsymbol{X})$, where I use the symbol $\boldsymbol{X}:=\boldsymbol{X}(\boldsymbol{\theta}, \mathbf{x_0})$, and combine that with a prior distribution $p(\boldsymbol{\theta})$ on the parameters, using the Bayes theorem, to obtain the posterior distribution as
$$
p(\boldsymbol{\theta}|\boldsymbol{Y})=\frac{1}{Z}p(\boldsymbol{Y}|\boldsymbol{X})p(\boldsymbol{\theta}),
$$
where $Z=\int p(\boldsymbol{Y}|\boldsymbol{X})p(\boldsymbol{\theta}) d\boldsymbol{\theta} $ is the intractable marginal likelihood. Due to this intractability we resort to approximate inference and apply MCMC.
For this tutorial I have chosen two ODEs:
1. The [__Lotka-Volterra predator prey model__ ](http://www.scholarpedia.org/article/Predator-prey_model)
2. The [__Fitzhugh-Nagumo action potential model__](http://www.scholarpedia.org/article/FitzHugh-Nagumo_model)
I will showcase two distinctive approaches (__NUTS__ and __SMC__ step methods), supported by PyMC3, for the estimation of unknown parameters in these models. _____no_output_____## Lotka-Volterra predator prey model
The Lotka Volterra model depicts an ecological system that is used to describe the interaction between a predator and prey species. This ODE given by
$$
\begin{aligned}
\frac{d x}{dt} &=\alpha x -\beta xy \\
\frac{d y}{dt} &=-\gamma y + \delta xy,
\end{aligned}
$$
shows limit cycle behaviour and has often been used for benchmarking Bayesian inference methods. $\boldsymbol{\theta}=(\alpha,\beta,\gamma,\delta, x(0),y(0))$ is the set of unknown parameters that we wish to infer from experimental observations of the state vector $X(t)=(x(t),y(t))$ comprising the concentrations of the prey and the predator species respectively. $x(0), y(0)$ are the initial values of the states needed to solve the ODE, which are also treated as unknown quantities. The predator prey model was recently used to demonstrate the applicability of the NUTS sampler, and the Stan ppl in general, for inference in ODE models. I will closely follow [this](https://mc-stan.org/users/documentation/case-studies/lotka-volterra-predator-prey.html) Stan tutorial and thus I will set up this model and the associated inference problem (including the data) exactly as was done for the Stan tutorial. Let me first write down the code to solve this ODE using SciPy's `odeint`. Note that the methods in this tutorial are not limited or tied to `odeint`. Here I have chosen `odeint` simply to stay within PyMC3's dependencies (SciPy in this case).
<code>
class LotkaVolterraModel:
def __init__(self, y0=None):
self._y0 = y0
def simulate(self, parameters, times):
alpha, beta, gamma, delta, Xt0, Yt0 = [x for x in parameters]
def rhs(y, t, p):
X, Y = y
dX_dt = alpha * X - beta * X * Y
dY_dt = -gamma * Y + delta * X * Y
return dX_dt, dY_dt
values = odeint(rhs, [Xt0, Yt0], times, (parameters,))
return values
ode_model = LotkaVolterraModel()_____no_output_____
</code>
## Handling ODE gradients
NUTS requires the gradient of the log of the target density w.r.t. the unknown parameters, $\nabla_{\boldsymbol{\theta}}p(\boldsymbol{\theta}|\boldsymbol{Y})$, which can be evaluated using the chain rule of differentiation as
$$ \nabla_{\boldsymbol{\theta}}p(\boldsymbol{\theta}|\boldsymbol{Y}) = \frac{\partial p(\boldsymbol{\theta}|\boldsymbol{Y})}{\partial \boldsymbol{X}}^T \frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}.$$
The gradient of an ODE w.r.t. its parameters, the term $\frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}$, can be obtained using local sensitivity analysis, although this is not the only method to obtain gradients. However, just like solving an ODE (a non-linear one to be precise), evaluation of the gradients can only be carried out using some sort of numerical method, say for example the famous Runge-Kutta method for non-stiff ODEs. PyMC3 uses Theano as the automatic differentiation engine and thus all models are implemented by stitching together available primitive operations (Ops) supported by Theano. Even to extend PyMC3 we need to compose models that can be expressed as symbolic combinations of Theano's Ops. However, if we take a step back and think about Theano then it is apparent that neither the ODE solution nor its gradient w.r.t. the parameters can be expressed symbolically as combinations of Theano's primitive Ops. Hence, from Theano's perspective an ODE (and for that matter any other form of a non-linear differential equation) is a non-differentiable black-box function. However, one might argue that if a numerical method is coded up in Theano (using say the `scan` Op), then it is possible to symbolically express the numerical method that evaluates the ODE states, and then we can easily use Theano's automatic differentiation engine to obtain the gradients as well by differentiating through the numerical solver itself. I would like to point out that the former, obtaining the solution, is indeed possible this way, but the obtained gradient would be error-prone. Additionally, this amounts to completely 're-inventing the wheel', as one would have to implement decades-old sophisticated numerical algorithms again from scratch in Theano.
Thus, in this tutorial I am going to present the alternative approach, which consists of defining new [custom Theano Ops](http://deeplearning.net/software/theano_versions/dev/extending/extending_theano.html), extending Theano, that will wrap both the numerical solution and the vector-matrix product, $ \frac{\partial p(\boldsymbol{\theta}|\boldsymbol{Y})}{\partial \boldsymbol{X}}^T \frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}$, often known as the _**vector-Jacobian product**_ (VJP) in the automatic differentiation literature. I would like to point out here that in the context of non-linear ODEs the term Jacobian is used to denote the gradients of the ODE dynamics $\boldsymbol{f}$ w.r.t. the ODE states $X(t)$. Thus, to avoid confusion, from now on I will use the term _**vector-sensitivity product**_ (VSP) to denote the same quantity that the term VJP denotes.
I will start by introducing the forward sensitivity analysis.
## ODE sensitivity analysis
For a coupled ODE system $\frac{d X(t)}{dt} = \boldsymbol{f}(X(t),\boldsymbol{\theta})$, the local sensitivity of the solution to a parameter is defined by how much the solution would change with changes in the parameter, i.e. the sensitivity of the $k$-th state is, simply put, the time evolution of its gradient w.r.t. the $d$-th parameter. This quantity, denoted as $Z_{kd}(t)$, is given by
$$Z_{kd}(t)=\frac{d }{d t} \left\{\frac{\partial X_k (t)}{\partial \theta_d}\right\} = \sum_{i=1}^K \frac{\partial f_k}{\partial X_i (t)}\frac{\partial X_i (t)}{\partial \theta_d} + \frac{\partial f_k}{\partial \theta_d}.$$
Using forward sensitivity analysis we can obtain both the state $X(t)$ and its derivative w.r.t. the parameters, at each time point, as the solution to an initial value problem by augmenting the original ODE system with the sensitivity equations $Z_{kd}$. The augmented ODE system $\big(X(t), Z(t)\big)$ can then be solved together using a chosen numerical method. The augmented ODE system needs the initial values for the sensitivity equations. All of these should be set to zero except the ones where the sensitivity of a state w.r.t. its own initial value is sought, that is $ \frac{\partial X_k(t)}{\partial X_k (0)} =1 $. Note that in order to solve this augmented system we have to embark on the tedious process of deriving the $ \frac{\partial f_k}{\partial X_i (t)}$ terms, also known as the Jacobian of the ODE, and the $\frac{\partial f_k}{\partial \theta_d}$ terms. Thankfully, many ODE solvers calculate these terms and solve the augmented system when asked to by the user. An example would be the [SUNDIALS CVODES solver suite](https://computation.llnl.gov/projects/sundials/cvodes). A Python wrapper for CVODES can be found [here](https://jmodelica.org/assimulo/).
However, for this tutorial I will go ahead and derive the terms mentioned above manually, and solve the Lotka-Volterra ODEs along with the sensitivities in the following code block. The functions `jac` and `dfdp` below calculate $ \frac{\partial f_k}{\partial X_i (t)}$ and $\frac{\partial f_k}{\partial \theta_d}$ respectively for the Lotka-Volterra model. For convenience I have transformed the sensitivity equations into matrix form. Here I have extended the solver code snippet above to include the sensitivities when asked for._____no_output_____
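Written out explicitly for the Lotka-Volterra right-hand side, these are the two matrices that the `jac` and `dfdp` functions below compute (the columns corresponding to the two initial values are zero and are omitted here):
$$
\frac{\partial \boldsymbol{f}}{\partial X}=
\begin{pmatrix}
\alpha-\beta Y & -\beta X\\
\delta Y & -\gamma+\delta X
\end{pmatrix},
\qquad
\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{\theta}}=
\begin{pmatrix}
X & -XY & 0 & 0\\
0 & 0 & -Y & XY
\end{pmatrix}.
$$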
<code>
n_states = 2
n_odeparams = 4
n_ivs = 2
class LotkaVolterraModel:
def __init__(self, n_states, n_odeparams, n_ivs, y0=None):
self._n_states = n_states
self._n_odeparams = n_odeparams
self._n_ivs = n_ivs
self._y0 = y0
def simulate(self, parameters, times):
return self._simulate(parameters, times, False)
def simulate_with_sensitivities(self, parameters, times):
return self._simulate(parameters, times, True)
def _simulate(self, parameters, times, sensitivities):
alpha, beta, gamma, delta, Xt0, Yt0 = [x for x in parameters]
def r(y, t, p):
X, Y = y
dX_dt = alpha * X - beta * X * Y
dY_dt = -gamma * Y + delta * X * Y
return dX_dt, dY_dt
if sensitivities:
def jac(y):
X, Y = y
ret = np.zeros((self._n_states, self._n_states))
ret[0, 0] = alpha - beta * Y
ret[0, 1] = -beta * X
ret[1, 0] = delta * Y
ret[1, 1] = -gamma + delta * X
return ret
def dfdp(y):
X, Y = y
ret = np.zeros(
(self._n_states, self._n_odeparams + self._n_ivs)
) # except the following entries
ret[
0, 0
] = X # \frac{\partial [\alpha X - \beta XY]}{\partial \alpha}, and so on...
ret[0, 1] = -X * Y
ret[1, 2] = -Y
ret[1, 3] = X * Y
return ret
def rhs(y_and_dydp, t, p):
y = y_and_dydp[0 : self._n_states]
dydp = y_and_dydp[self._n_states :].reshape(
(self._n_states, self._n_odeparams + self._n_ivs)
)
dydt = r(y, t, p)
d_dydp_dt = np.matmul(jac(y), dydp) + dfdp(y)
return np.concatenate((dydt, d_dydp_dt.reshape(-1)))
y0 = np.zeros((2 * (n_odeparams + n_ivs)) + n_states)
y0[6] = 1.0 # \frac{\partial [X]}{\partial Xt0} at t==0, and same below for Y
y0[13] = 1.0
y0[0:n_states] = [Xt0, Yt0]
result = odeint(rhs, y0, times, (parameters,), rtol=1e-6, atol=1e-5)
values = result[:, 0 : self._n_states]
dvalues_dp = result[:, self._n_states :].reshape(
(len(times), self._n_states, self._n_odeparams + self._n_ivs)
)
return values, dvalues_dp
else:
values = odeint(r, [Xt0, Yt0], times, (parameters,), rtol=1e-6, atol=1e-5)
return values
ode_model = LotkaVolterraModel(n_states, n_odeparams, n_ivs)_____no_output_____
</code>
For this model I have set the relative and absolute tolerances to $10^{-6}$ and $10^{-5}$ respectively, as was suggested in the Stan tutorial. This will produce sufficiently accurate solutions. Further reducing the tolerances will increase accuracy, but at the cost of increasing the computational time. A thorough discussion on the choice and use of a numerical method for solving the ODE is outside the scope of this tutorial. However, I must point out that the inaccuracies of the ODE solver do affect the likelihood and, as a result, the inference. This is all the more the case for stiff systems. I would point interested readers to this nice blog article where this effect is discussed thoroughly for a [cardiac ODE model](https://mirams.wordpress.com/2018/10/17/ode-errors-and-optimisation/). There is also an emerging area of uncertainty quantification that attacks the problem of noise arising from the imprecision of numerical algorithms, [probabilistic numerics](http://probabilistic-numerics.org/). This is indeed an elegant framework for carrying out inference while taking into account the errors coming from the numerical ODE solvers.
## Custom ODE Op
In order to define the custom `Op` I have written down two `theano.Op` classes, `ODEGradop` and `ODEop`. `ODEop` essentially wraps the ODE solution and will be called by PyMC3. The `ODEGradop` wraps the numerical VSP, and this op is then in turn used inside the `grad` method of the `ODEop` to return the VSP. Note that we pass in two functions, `state` and `numpy_vsp`, as arguments to the respective Ops. I will define these functions later. These functions act as shims through which we connect the Python code for the numerical solution of the ODE states and the VSP to Theano, and thus to PyMC3._____no_output_____
<code>
class ODEGradop(theano.Op):
def __init__(self, numpy_vsp):
self._numpy_vsp = numpy_vsp
def make_node(self, x, g):
x = theano.tensor.as_tensor_variable(x)
g = theano.tensor.as_tensor_variable(g)
node = theano.Apply(self, [x, g], [g.type()])
return node
def perform(self, node, inputs_storage, output_storage):
x = inputs_storage[0]
g = inputs_storage[1]
out = output_storage[0]
out[0] = self._numpy_vsp(x, g) # get the numerical VSP
class ODEop(theano.Op):
def __init__(self, state, numpy_vsp):
self._state = state
self._numpy_vsp = numpy_vsp
def make_node(self, x):
x = theano.tensor.as_tensor_variable(x)
return theano.Apply(self, [x], [x.type()])
def perform(self, node, inputs_storage, output_storage):
x = inputs_storage[0]
out = output_storage[0]
out[0] = self._state(x) # get the numerical solution of ODE states
def grad(self, inputs, output_grads):
x = inputs[0]
g = output_grads[0]
grad_op = ODEGradop(self._numpy_vsp) # pass the VSP when asked for gradient
grad_op_apply = grad_op(x, g)
return [grad_op_apply]_____no_output_____
</code>
I must point out that, with the custom ODE Ops defined as above, there is the possibility that the ODE is solved twice for the same parameter values, once for the states and again for the VSP. To avoid this behaviour I have written a helper class which stops this double evaluation._____no_output_____
<code>
class solveCached:
def __init__(self, times, n_params, n_outputs):
self._times = times
self._n_params = n_params
self._n_outputs = n_outputs
self._cachedParam = np.zeros(n_params)
self._cachedSens = np.zeros((len(times), n_outputs, n_params))
self._cachedState = np.zeros((len(times), n_outputs))
def __call__(self, x):
if np.all(x == self._cachedParam):
state, sens = self._cachedState, self._cachedSens
else:
state, sens = ode_model.simulate_with_sensitivities(x, times)
return state, sens
times = np.arange(0, 21) # number of measurement points (see below)
cached_solver = solveCached(times, n_odeparams + n_ivs, n_states)_____no_output_____
</code>
### The ODE state & VSP evaluation
Most ODE systems of practical interest will have multiple states and thus the output of the solver, which I have denoted so far as $\boldsymbol{X}$, for a system with $K$ states solved on $T$ time points, would be a $T \times K$-dimensional matrix. For the Lotka-Volterra model the columns of this matrix represent the time evolution of the individual species concentrations. I flatten this matrix to a $TK$-dimensional vector $vec(\boldsymbol{X})$, and also rearrange the sensitivities accordingly to obtain the desired vector-matrix product. It is beneficial at this point to test the custom Op as described [here](http://deeplearning.net/software/theano_versions/dev/extending/extending_theano.html#how-to-test-it)._____no_output_____
<code>
def state(x):
State, Sens = cached_solver(np.array(x, dtype=np.float64))
cached_solver._cachedState, cached_solver._cachedSens, cached_solver._cachedParam = (
State,
Sens,
x,
)
return State.reshape((2 * len(State),))
def numpy_vsp(x, g):
numpy_sens = cached_solver(np.array(x, dtype=np.float64))[1].reshape(
(n_states * len(times), len(x))
)
return numpy_sens.T.dot(g)_____no_output_____
</code>
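As a quick, optional sanity check of the custom Op (following the testing suggestion above), the analytic VSP can be compared against Theano's finite-difference gradient test. The sketch below is an illustrative assumption rather than part of the original notebook: the test point and tolerances are made up, and the check is slow because every numerical perturbation re-solves the augmented ODE.

<code>
# Illustrative sanity check (not in the original notebook): verify_grad perturbs the
# inputs numerically and compares the result against the VSP returned by ODEGradop.
test_op = ODEop(state, numpy_vsp)
test_point = [np.array([0.5, 0.02, 0.8, 0.02, 30.0, 5.0])]  # alpha, beta, gamma, delta, X(0), Y(0)
theano.gradient.verify_grad(test_op, test_point,
                            rng=np.random.RandomState(42),
                            abs_tol=1e-4, rel_tol=1e-4)
</code>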
## The Hudson's Bay Company data
The Lotka-Volterra predator prey model has been used previously to successfully explain the dynamics of natural populations of predators and prey, such as the lynx and snowshoe hare data of the Hudson's Bay Company. This is the same data (that was shared [here](https://github.com/stan-dev/example-models/tree/master/knitr/lotka-volterra)) used in the Stan example and thus I will use this data-set as the experimental observations $\boldsymbol{Y}(t)$ to infer the parameters. _____no_output_____
<code>
Year = np.arange(1900, 1921, 1)
# fmt: off
Lynx = np.array([4.0, 6.1, 9.8, 35.2, 59.4, 41.7, 19.0, 13.0, 8.3, 9.1, 7.4,
8.0, 12.3, 19.5, 45.7, 51.1, 29.7, 15.8, 9.7, 10.1, 8.6])
Hare = np.array([30.0, 47.2, 70.2, 77.4, 36.3, 20.6, 18.1, 21.4, 22.0, 25.4,
27.1, 40.3, 57.0, 76.6, 52.3, 19.5, 11.2, 7.6, 14.6, 16.2, 24.7])
# fmt: on
plt.figure(figsize=(15, 7.5))
plt.plot(Year, Lynx, color="b", lw=4, label="Lynx")
plt.plot(Year, Hare, color="g", lw=4, label="Hare")
plt.legend(fontsize=15)
plt.xlim([1900, 1920])
plt.xlabel("Year", fontsize=15)
plt.ylabel("Concentrations", fontsize=15)
plt.xticks(Year, rotation=45)
plt.title("Lynx (predator) - Hare (prey): oscillatory dynamics", fontsize=25);_____no_output_____
</code>
## The probabilistic model
I have now got all the ingredients needed in order to define the probabilistic model in PyMC3. As I have mentioned previously I will set up the probabilistic model with the exact same likelihood and priors used in the Stan example. The observed data is defined as follows:
$$\log (\boldsymbol{Y(t)}) = \log (\boldsymbol{X(t)}) + \eta(t),$$
where $\eta(t)$ is assumed to be zero mean i.i.d Gaussian noise with an unknown standard deviation $\sigma$, that needs to be estimated. The above multiplicative (on the natural scale) noise model encodes a lognormal distribution as the likelihood:
$$\boldsymbol{Y(t)} \sim \mathcal{L}\mathcal{N}(\log (\boldsymbol{X(t)}), \sigma^2).$$
The following priors are then placed on the parameters:
$$
\begin{aligned}
x(0), y(0) &\sim \mathcal{L}\mathcal{N}(\log(10),1),\\
\alpha, \gamma &\sim \mathcal{N}(1,0.5),\\
\beta, \delta &\sim \mathcal{N}(0.05,0.05),\\
\sigma &\sim \mathcal{L}\mathcal{N}(-1,1).
\end{aligned}
$$
For an intuitive explanation, which I am omitting for brevity, regarding the choice of priors as well as the likelihood model, I would recommend the Stan example mentioned above. The above probabilistic model is defined in PyMC3 below. Note that the flattened state vector is reshaped to match the data dimensionality.
Finally, I use the `pm.sample` method to run NUTS by default and obtain $1500$ post warm-up samples from the posterior._____no_output_____
<code>
theano.config.exception_verbosity = "high"
theano.config.floatX = "float64"
# Define the data matrix
Y = np.vstack((Hare, Lynx)).T
# Now instantiate the theano custom ODE op
my_ODEop = ODEop(state, numpy_vsp)
# The probabilistic model
with pm.Model() as LV_model:
# Priors for unknown model parameters
alpha = pm.Normal("alpha", mu=1, sd=0.5)
beta = pm.Normal("beta", mu=0.05, sd=0.05)
gamma = pm.Normal("gamma", mu=1, sd=0.5)
delta = pm.Normal("delta", mu=0.05, sd=0.05)
xt0 = pm.Lognormal("xto", mu=np.log(10), sd=1)
yt0 = pm.Lognormal("yto", mu=np.log(10), sd=1)
sigma = pm.Lognormal("sigma", mu=-1, sd=1, shape=2)
# Forward model
all_params = pm.math.stack([alpha, beta, gamma, delta, xt0, yt0], axis=0)
ode_sol = my_ODEop(all_params)
forward = ode_sol.reshape(Y.shape)
# Likelihood
Y_obs = pm.Lognormal("Y_obs", mu=pm.math.log(forward), sd=sigma, observed=Y)
trace = pm.sample(1500, tune=1000, init="adapt_diag")
trace["diverging"].sum()Auto-assigning NUTS sampler...
Initializing NUTS using adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [sigma, yto, xto, delta, gamma, beta, alpha]
Sampling 2 chains, 0 divergences: 2%|▏ | 94/5000 [01:02<59:45, 1.37draws/s] /Users/demetri/anaconda3/envs/gsoc/lib/python3.6/site-packages/scipy/integrate/odepack.py:247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
Sampling 2 chains, 0 divergences: 2%|▏ | 108/5000 [01:09<54:44, 1.49draws/s] /Users/demetri/anaconda3/envs/gsoc/lib/python3.6/site-packages/scipy/integrate/odepack.py:247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
Sampling 2 chains, 0 divergences: 100%|██████████| 5000/5000 [12:57<00:00, 4.16draws/s]
The acceptance probability does not match the target. It is 0.6992852935132228, but should be close to 0.8. Try to increase the number of tuning steps.
with LV_model:
pm.traceplot(trace);_____no_output_____import pandas as pd
summary = pm.summary(trace)
STAN_mus = [0.549, 0.028, 0.797, 0.024, 33.960, 5.949, 0.248, 0.252]
STAN_sds = [0.065, 0.004, 0.091, 0.004, 2.909, 0.533, 0.045, 0.044]
summary["STAN_mus"] = pd.Series(np.array(STAN_mus), index=summary.index)
summary["STAN_sds"] = pd.Series(np.array(STAN_sds), index=summary.index)
summary/Users/demetri/Documents/GitHub/pymc3/pymc3/stats.py:991: FutureWarning: The join_axes-keyword is deprecated. Use .reindex or .reindex_like on the result to achieve the same functionality.
axis=1, join_axes=[dforg.index])
</code>
These estimates are almost identical to those obtained in the Stan tutorial (see the last two columns above), which is what we can expect. Posterior predictives can be drawn as below. _____no_output_____
<code>
ppc_samples = pm.sample_posterior_predictive(trace, samples=1000, model=LV_model)["Y_obs"]
mean_ppc = ppc_samples.mean(axis=0)
CriL_ppc = np.percentile(ppc_samples, q=2.5, axis=0)
CriU_ppc = np.percentile(ppc_samples, q=97.5, axis=0)/Users/demetri/Documents/GitHub/pymc3/pymc3/sampling.py:1078: UserWarning: samples parameter is smaller than nchains times ndraws, some draws and/or chains may not be represented in the returned posterior predictive sample
warnings.warn("samples parameter is smaller than nchains times ndraws, some draws "
100%|██████████| 1000/1000 [00:10<00:00, 98.26it/s]
plt.figure(figsize=(15, 2 * (5)))
plt.subplot(2, 1, 1)
plt.plot(Year, Lynx, "o", color="b", lw=4, ms=10.5)
plt.plot(Year, mean_ppc[:, 1], color="b", lw=4)
plt.plot(Year, CriL_ppc[:, 1], "--", color="b", lw=2)
plt.plot(Year, CriU_ppc[:, 1], "--", color="b", lw=2)
plt.xlim([1900, 1920])
plt.ylabel("Lynx conc", fontsize=15)
plt.xticks(Year, rotation=45)
plt.subplot(2, 1, 2)
plt.plot(Year, Hare, "o", color="g", lw=4, ms=10.5, label="Observed")
plt.plot(Year, mean_ppc[:, 0], color="g", lw=4, label="mean of ppc")
plt.plot(Year, CriL_ppc[:, 0], "--", color="g", lw=2, label="credible intervals")
plt.plot(Year, CriU_ppc[:, 0], "--", color="g", lw=2)
plt.legend(fontsize=15)
plt.xlim([1900, 1920])
plt.xlabel("Year", fontsize=15)
plt.ylabel("Hare conc", fontsize=15)
plt.xticks(Year, rotation=45);_____no_output_____
</code>
# Efficient exploration of the posterior landscape with SMC
It has been pointed out in several papers that the complex non-linear dynamics of an ODE results in a posterior landscape that is extremely difficult to navigate efficiently by many MCMC samplers. Thus, recently the curvature information of the posterior surface has been used to construct powerful geometrically aware samplers ([Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x)) that perform extremely well in ODE inference problems. Another set of ideas suggest breaking down a complex inference task into a sequence of simpler tasks. In essence the idea is to use sequential-importance-sampling to sample from an artificial sequence of increasingly complex distributions where the first in the sequence is a distribution that is easy to sample from, the prior, and the last in the sequence is the actual complex target distribution. The associated importance distribution is constructed by moving the set of particles sampled at the previous step using a Markov kernel, say for example the MH kernel.
A simple way of building the sequence of distributions is to use a temperature $\beta$, that is raised slowly from $0$ to $1$. Using this temperature variable $\beta$ we can write down the annealed intermediate distribution as
$$p_{\beta}(\boldsymbol{\theta}|\boldsymbol{y})\propto p(\boldsymbol{y}|\boldsymbol{\theta})^{\beta} p(\boldsymbol{\theta}).$$
Samplers that carry out sequential-importance-sampling from this artificial sequence of distributions, to avoid the difficult task of sampling directly from $p(\boldsymbol{\theta}|\boldsymbol{y})$, are known as Sequential Monte Carlo (SMC) samplers ([P Del Moral et al., 2006](https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1467-9868.2006.00553.x)). The performance of these samplers is sensitive to the choice of the temperature schedule, that is, the set of user-defined increasing values of $\beta$ between $0$ and $1$. Fortunately, PyMC3 provides a version of the SMC sampler ([Jianye Ching and Yi-Chu Chen, 2007](https://ascelibrary.org/doi/10.1061/%28ASCE%290733-9399%282007%29133%3A7%28816%29)) that automatically figures out this temperature schedule. Moreover, PyMC3's SMC sampler does not require the gradient of the log target density. As a result it is extremely easy to use this sampler for inference in ODE models. In the next example I will apply this SMC sampler to estimate the parameters of the Fitzhugh-Nagumo model. _____no_output_____## The Fitzhugh-Nagumo model
The Fitzhugh-Nagumo model given by
$$
\begin{aligned}
\frac{dV}{dt}&=(V - \frac{V^3}{3} + R)c\\
\frac{dR}{dt}&=\frac{-(V-a+bR)}{c},
\end{aligned}
$$
consisting of a membrane voltage variable $V(t)$ and a recovery variable $R(t)$ is a two-dimensional simplification of the [Hodgkin-Huxley](http://www.scholarpedia.org/article/Conductance-based_models) model of spike (action potential) generation in squid giant axons and where $a$, $b$, $c$ are the model parameters. This model produces a rich dynamics and as a result a complex geometry of the posterior surface that often leads to poor performance of many MCMC samplers. As a result this model was used to test the efficacy of the discussed geometric MCMC scheme and since then has been used to benchmark other novel MCMC methods. Following [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x) I will also use artificially generated data from this model to setup the inference task for estimating $\boldsymbol{\theta}=(a,b,c)$._____no_output_____
<code>
class FitzhughNagumoModel:
def __init__(self, times, y0=None):
self._y0 = np.array([-1, 1], dtype=np.float64)
self._times = times
def _simulate(self, parameters, times):
a, b, c = [float(x) for x in parameters]
def rhs(y, t, p):
V, R = y
dV_dt = (V - V ** 3 / 3 + R) * c
dR_dt = (V - a + b * R) / -c
return dV_dt, dR_dt
values = odeint(rhs, self._y0, times, (parameters,), rtol=1e-6, atol=1e-6)
return values
def simulate(self, x):
return self._simulate(x, self._times)_____no_output_____
</code>
## Simulated Data
For this example I am going to use simulated data; that is, I will generate noisy traces from the forward model defined above, with the parameters $\theta$ set to $(0.2,0.2,3)$ respectively and corrupted by i.i.d. Gaussian noise with a standard deviation $\sigma=0.5$. The initial values are set to $V(0)=-1$ and $R(0)=1$ respectively. Again following [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x) I will assume that the initial values are known. These parameter values push the model into the oscillatory regime._____no_output_____
<code>
n_states = 2
n_times = 200
true_params = [0.2, 0.2, 3.0]
noise_sigma = 0.5
FN_solver_times = np.linspace(0, 20, n_times)
ode_model = FitzhughNagumoModel(FN_solver_times)
sim_data = ode_model.simulate(true_params)
np.random.seed(42)
Y_sim = sim_data + np.random.randn(n_times, n_states) * noise_sigma
plt.figure(figsize=(15, 7.5))
plt.plot(FN_solver_times, sim_data[:, 0], color="darkblue", lw=4, label=r"$V(t)$")
plt.plot(FN_solver_times, sim_data[:, 1], color="darkgreen", lw=4, label=r"$R(t)$")
plt.plot(FN_solver_times, Y_sim[:, 0], "o", color="darkblue", ms=4.5, label="Noisy traces")
plt.plot(FN_solver_times, Y_sim[:, 1], "o", color="darkgreen", ms=4.5)
plt.legend(fontsize=15)
plt.xlabel("Time", fontsize=15)
plt.ylabel("Values", fontsize=15)
plt.title("Fitzhugh-Nagumo Action Potential Model", fontsize=25);_____no_output_____
</code>
## Define a non-differentiable black-box op using Theano @as_op
Remember that, as I mentioned, the SMC sampler does not require gradients; this is, by the way, also the case for other samplers supported in PyMC3, such as Metropolis-Hastings and the Slice sampler. For all these gradient-free samplers I will show a simple and quick way of wrapping the forward model, i.e. the ODE solution, in Theano. All we have to do is use the decorator `as_op`, which converts a Python function into a basic Theano Op. We also tell Theano, via the `as_op` decorator, that we have three parameters, each being a Theano scalar. The output is then a Theano matrix whose columns are the state vectors._____no_output_____
<code>
import theano.tensor as tt
from theano.compile.ops import as_op
@as_op(itypes=[tt.dscalar, tt.dscalar, tt.dscalar], otypes=[tt.dmatrix])
def th_forward_model(param1, param2, param3):
param = [param1, param2, param3]
th_states = ode_model.simulate(param)
return th_states_____no_output_____
</code>
## Generative model
Since I have corrupted the original traces with i.i.d. Gaussian noise, the likelihood is given by
$$p(\boldsymbol{Y}|\boldsymbol{X}) = \prod_{i=1}^T \mathcal{N}\big(\boldsymbol{Y}(t_i)\,|\,\boldsymbol{X}(t_i), \sigma^2\mathbb{I}\big),$$
where $\mathbb{I}\in \mathbb{R}^{K \times K}$ is the identity matrix. We place a Gamma, Normal, Uniform prior on $(a,b,c)$ and a HalfNormal prior on $\sigma$ as follows:
$$
\begin{aligned}
a & \sim \mathcal{Gamma}(2,1),\\
b & \sim \mathcal{N}(0,1),\\
c & \sim \mathcal{U}(0.1,10),\\
\sigma & \sim \mathcal{H}(1).
\end{aligned}
$$
Notice how I have used the `start` argument for this example. Just like `pm.sample`, `pm.sample_smc` has a number of settings, but I found the default ones good enough for simple models such as this one._____no_output_____
<code>
draws = 1000
with pm.Model() as FN_model:
a = pm.Gamma("a", alpha=2, beta=1)
b = pm.Normal("b", mu=0, sd=1)
c = pm.Uniform("c", lower=0.1, upper=10)
sigma = pm.HalfNormal("sigma", sd=1)
forward = th_forward_model(a, b, c)
cov = np.eye(2) * sigma ** 2
Y_obs = pm.MvNormal("Y_obs", mu=forward, cov=cov, observed=Y_sim)
startsmc = {v.name: np.random.uniform(1e-3, 2, size=draws) for v in FN_model.free_RVs}
trace_FN = pm.sample_smc(draws, start=startsmc)Sample initial stage: ...
Stage: 0 Beta: 0.009 Steps: 25
Stage: 1 Beta: 0.015 Steps: 8
Stage: 2 Beta: 0.020 Steps: 4
Stage: 3 Beta: 0.030 Steps: 13
Stage: 4 Beta: 0.049 Steps: 3
Stage: 5 Beta: 0.089 Steps: 10
Stage: 6 Beta: 0.178 Steps: 3
Stage: 7 Beta: 0.368 Steps: 8
Stage: 8 Beta: 0.782 Steps: 3
Stage: 9 Beta: 1.000 Steps: 7
pm.plot_posterior(trace_FN, kind="hist", bins=30, color="seagreen");_____no_output_____
</code>
## Inference summary
With `pm.SMC`, do I get similar performance to geometric MCMC samplers (see [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x))? I think so!_____no_output_____
<code>
results = [
pm.summary(trace_FN, ["a"]),
pm.summary(trace_FN, ["b"]),
pm.summary(trace_FN, ["c"]),
pm.summary(trace_FN, ["sigma"]),
]
results = pd.concat(results)
true_params.append(noise_sigma)
results["True values"] = pd.Series(np.array(true_params), index=results.index)
true_params.pop()
results_____no_output_____
</code>
## Reconstruction of the phase portrait
It's good to check that we can reconstruct the (famous) phase portrait for this model from the obtained samples._____no_output_____
<code>
params = np.array([trace_FN.get_values("a"), trace_FN.get_values("b"), trace_FN.get_values("c")]).T
params.shape
new_values = []
for ind in range(len(params)):
ppc_sol = ode_model.simulate(params[ind])
new_values.append(ppc_sol)
new_values = np.array(new_values)
mean_values = np.mean(new_values, axis=0)
plt.figure(figsize=(15, 7.5))
plt.plot(
mean_values[:, 0],
mean_values[:, 1],
color="black",
lw=4,
label="Inferred (mean of sampled) phase portrait",
)
plt.plot(
sim_data[:, 0], sim_data[:, 1], "--", color="#ff7f0e", lw=4, ms=6, label="True phase portrait"
)
plt.legend(fontsize=15)
plt.xlabel(r"$V(t)$", fontsize=15)
plt.ylabel(r"$R(t)$", fontsize=15);_____no_output_____
</code>
# Perspectives
### Using some other ODE models
I have tried to keep everything as general as possible. So, my custom ODE Op, the state and VSP evaluators, as well as the cached solver, are not tied to a specific ODE model. Thus, to use any other ODE model one only needs to implement a `simulate_with_sensitivities` method for that specific model, returning the states and sensitivities in the shapes sketched below.
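Below is a toy illustration of that interface (an assumption for illustration, not code from the notebook): an exponential decay model $dx/dt=-kx$ whose solution and sensitivities are analytic, returning the states with shape `(T, n_states)` and the sensitivities with shape `(T, n_states, n_params)`, just as the Lotka-Volterra `_simulate` method does.

<code>
# Toy interface example (assumption, not from the notebook): dx/dt = -k*x has the
# analytic solution x(t) = x0*exp(-k*t), so the sensitivities are available in closed form.
import numpy as np

class ExponentialDecayModel:
    def simulate_with_sensitivities(self, parameters, times):
        k, x0 = parameters
        x = x0 * np.exp(-k * times)        # state, shape (T,)
        dx_dk = -times * x                 # d x(t) / d k
        dx_dx0 = np.exp(-k * times)        # d x(t) / d x(0)
        values = x.reshape(-1, 1)          # (T, n_states) with n_states = 1
        dvalues_dp = np.stack([dx_dk, dx_dx0], axis=-1).reshape(len(times), 1, 2)
        return values, dvalues_dp
</code>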
### Other forms of differential equation (DDE, DAE, PDE)
I hope the two examples have elucidated the applicability of PyMC3 in regards to fitting ODE models. Although ODEs are the most fundamental constituent of a mathematical model, there are indeed other forms of dynamical systems such as a delay differential equation (DDE), a differential algebraic equation (DAE) and the partial differential equation (PDE) whose parameter estimation is equally important. The SMC and for that matter any other non-gradient sampler supported by PyMC3 can be used to fit all these forms of differential equation, of course using the `as_op`. However, just like an ODE we can solve augmented systems of DDE/DAE along with their sensitivity equations. The sensitivity equations for a DDE and a DAE can be found in this recent paper, [C Rackauckas et al., 2018](https://arxiv.org/abs/1812.01892) (Equation 9 and 10). Thus we can easily apply NUTS sampler to these models.
### Stan already supports ODEs
Well, there are many problems where I believe an SMC sampler would be more suitable than NUTS, and thus it's good to have that option.
### Model selection
Most ODE inference literature since [Vladislav Vyshemirsky and Mark Girolami, 2008](https://academic.oup.com/bioinformatics/article/24/6/833/192524) recommends the use of Bayes factors for the purpose of model selection/comparison. This involves the calculation of the marginal likelihood, which is a much more nuanced topic, and I would refrain from any discussion about that. Fortunately, the SMC sampler calculates the marginal likelihood as a by-product, so this can be used for obtaining Bayes factors. Follow PyMC3's other tutorials for further information regarding how to obtain the marginal likelihood after running the SMC sampler.
Since we generally frame the ODE inference as a regression problem (along with the i.i.d. measurement noise assumption in most cases), we can straight away use any of the supported information criteria, such as the widely applicable information criterion (WAIC), irrespective of what sampler is used for inference. See the PyMC3 API for further information regarding WAIC.
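For instance, a one-line sketch of such a check for the Fitzhugh-Nagumo fit above might look like the following. This is an illustrative assumption, not part of the original notebook: the call can be slow here because computing the pointwise log-likelihood re-evaluates the black-box forward model for each posterior draw, and the exact output format depends on the PyMC3/ArviZ versions.

<code>
# Illustrative sketch (not from the notebook): compute WAIC for the SMC fit.
print(pm.waic(trace_FN, FN_model))
</code>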
### Other AD packages
Although this is a slight digression nonetheless I would still like to point out my observations on this issue. The approach that I have presented here for embedding an ODE (also extends to DDE/DAE) as a custom Op can be trivially carried forward to other AD packages such as TensorFlow and PyTorch. I had been able to use TensorFlow's [py_func](https://www.tensorflow.org/api_docs/python/tf/py_func) to build a custom TensorFlow ODE Op and then use that in the [Edward](http://edwardlib.org/) ppl. I would recommend [this](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html) tutorial, for writing PyTorch extensions, to those who are interested in using the [Pyro](http://pyro.ai/) ppl.
_____no_output_____
<code>
%load_ext watermark
%watermark -n -u -v -iv -wpymc3 3.8
arviz 0.7.0
pandas 0.25.3
seaborn 0.9.0
numpy 1.17.5
last updated: Wed Apr 22 2020
CPython 3.8.0
IPython 7.11.0
watermark 2.0.2
</code>
|
{
"repository": "tandelDipak/pymc3",
"path": "docs/source/notebooks/ODE_with_manual_gradients.ipynb",
"matched_keywords": [
"bioinformatics",
"evolution"
],
"stars": 1,
"size": 680111,
"hexsha": "cb43a7f34aa04f33515a3013ccb5ad51fa909e73",
"max_line_length": 332208,
"avg_line_length": 602.9352836879,
"alphanum_fraction": 0.9363250999
}
|
# Notebook from ZYVE255/ebm-optimizer
Path: MiscNotebookFiles/Testing.ipynb
<code>
#==========Imports==========
import numpy as np
import matplotlib.pyplot as plt
import astropy.constants as const
import time
from scipy import interpolate
import Zach_OPTIMIZER.EBMFunctions as opt
import Bell_EBM as ebm_____no_output_____#==========Set Up System==========
planet = ebm.Planet(rad=1.500*const.R_jup.value, mass=1.170*const.M_jup.value,
Porb=1.09142030, a=0.02340*2*const.au.value, inc=83.37, vWind=5e3, nlat = 8, e=0.2)
star = ebm.Star(teff=6300., rad=1.59, mass=1.20)
system = ebm.System(star, planet)_____no_output_____def CreateBaseline(star, planet, temporal=5000, spacial=32,orbit=2):
_star = star
_planet = planet
_system = ebm.System(_star, _planet)
Teq = _system.get_teq()
T0 = np.ones_like(_system.planet.map.values)*Teq
t0 = 0.
t1 = t0+_system.planet.Porb*orbit
dt = _system.planet.Porb/temporal
baselineTimes, baselineMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=False)
if (planet.orbit.e != 0.):
T0 = baselineMaps[-1]
t0 = baselineTimes[-1]
t1 = t0+system.planet.Porb
dt = (system.planet.Porb)/1000.
baselineTimes, baselineMaps = system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True)
baselineLightcurve = system.lightcurve(baselineTimes, baselineMaps, bolo=False, wav=4.5e-6)
# phaseBaseline = system.get_phase(baselineTimes).flatten()
# order = np.argsort(phaseBaseline)
# baselineLightcurve = baselineLightcurve[order]
# phaseBaseline = phaseBaseline[order]
else:
baselineLightcurve = system.lightcurve(bolo=False, wav=4.5e-6)
return baselineTimes, baselineMaps, baselineLightcurve_____no_output_____blt, blm, blc = opt.CreateBaseline(star,planet)_____no_output_____plt.plot(blc)_____no_output_____def RunTests(star, planet, points, base):
data = np.zeros(shape=(points.shape[0],4))
_star = star
_planet = planet
_system = ebm.System(_star,_planet)
for i in range(0, points.shape[0]):
_star = star
_planet = planet
_planet.map = ebm.Map.Map(nlat=points[i,1])
_system = ebm.System(_star, _planet)
data[i,0] = points[i,0]
data[i,1] = points[i,1]
tInt = time.time()
Teq = _system.get_teq()
T0 = np.ones_like(_system.planet.map.values)*Teq
t0 = 0.
t1 = t0+_system.planet.Porb
dt = _system.planet.Porb/points[i,0]
testTimes, testMaps = system.run_model(T0, t0, t1, dt, verbose=False)
if (_planet.orbit.e != 0):
T0 = testMaps[-1]
t0 = testTimes[-1]
t1 = t0+_system.planet.Porb
dt = system.planet.Porb/points[i,0]
testTimes, testMaps = system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True)
testLightcurve = system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6)
phaseTest = _system.get_phase(testTimes).flatten()
order = np.argsort(phaseTest)
testLightcurve = testLightcurve[order]
phaseTest = phaseTest[order]
testLightcurve = np.interp(base, phaseTest, testLightcurve)
else:
testLightcurve = system.lightcurve(bolo=False, wav=4.5e-6)
tFin = time.time()
data[i,3] = (1e6)*(np.amax(np.absolute(base - testLightcurve)))
data[i,2] = (tFin - tInt)*(1e3)
return testLightcurve, data
_____no_output_____p = np.zeros(shape=((10),2))_____no_output_____p[:,0]=500
p[:,1]=8
p[9,0] = 500
p[9,1] = 8_____no_output_____p_____no_output_____lc, data = opt.RunTests(star,planet,p,blc,blt)_____no_output_____plt.plot(lc)_____no_output_____plt.plot(lc, c='g')
plt.plot(blc, c='b')_____no_output_____plt.plot((blc-lc)*(1e6))_____no_output_____def Optimize(star, planet, error, verbose=False):
_planet = planet
_star = star
aError = error
#==========High Res Baseline Creation==========
if (verbose == True):
print("Starting baseline generation...")
tInt = time.time()
blt, blm, blc = CreateBaseline(_star, _planet)
tFin = time.time()
if (verbose == True):
print("Baseline generation complete; Time to Compute: " + str(round(tFin-tInt,2)) + "s")
#===========Initial data creationg================
space_points = 5
temp_points = 5
data = np.zeros(shape=((space_points*temp_points),4))
for i in range (0, temp_points):
for j in range (0, space_points):
data[(i*space_points)+j,0]= ((i+1)*250)+0
data[(i*space_points)+j,1] = ((j+1)*4)+0
if (verbose == True):
print("First pass data points assigned")
#==================First pass testing Area======================
if (verbose == True):
print("Starting first pass...")
tInt = time.time()
lc, data = RunTests(_star, _planet, data, blc)
tFin = time.time()
if (verbose == True):
print("First pass finished : Time to compute: " + str(round(tFin-tInt,2)) + "s")
#=================First pass best point===================
#print(data) #For debugging purposes
if (verbose == True):
print("Processing first pass data...")
iBest = None
for i in range(0,space_points*temp_points):
if (data[i,3]<=(aError*1.05)):
if (iBest == None):
iBest = i
if(data[i,2] < data[iBest,2]):
iBest = i
#===========Second pass data creation================
space_points = 5
temp_points = 5
dataDouble = np.zeros(shape=((space_points*temp_points),2))
for i in range (0, temp_points):
for j in range (0, space_points):
dataDouble[(i*space_points)+j,0] = ((i)*50)+(data[iBest,0]-100)
if (dataDouble[(i*space_points)+j,0]<100):
dataDouble[(i*space_points)+j,0] = 100
dataDouble[(i*space_points)+j,1] = ((j)*2)+(data[iBest,1]-4)
if (dataDouble[(i*space_points)+j,1]<2):
dataDouble[(i*space_points)+j,1] = 2
if (verbose == True):
print("Second pass data points assigned")
#==================Second pass testing Area======================
if (verbose == True):
print("Starting second pass...")
tInt = time.time()
lc, dataDouble = RunTests(_star, _planet, dataDouble, blc)
tFin = time.time()
if (verbose == True):
print("Second pass finished : Time to compute: " + str(round(tFin-tInt,2)) + "s")
#=================Finding best second pass point===================
#print(data) #For debugging purposes
if (verbose == True):
print("Processing second pass data...")
iBest = None
for i in range(0,space_points*temp_points):
if (dataDouble[i,3]<=aError):
if (iBest == None):
iBest = i
if(dataDouble[i,2] < dataDouble[iBest,2]):
iBest = i
if (iBest == None):
print("No points match requested error")
else:
print("Temporal: " + str(dataDouble[iBest,0]) + " Spacial: " + str(dataDouble[iBest,1]))
print("Time for compute: " + str(round(dataDouble[iBest, 2],2)) +"ms : Error: " + str(round(dataDouble[iBest, 3],2)) + "ppm")
print("Expected compute time @ 1,000,000 cycles: " + str((round((dataDouble[iBest, 2]*1e3/60)/60,2))) + " Hrs")
return dataDouble[iBest,0], dataDouble[iBest,1]
# #print(data) #For debugging
# #print(dataDouble) #For debugging
# #=========Create Maps==================
# if (verbose == True):
# planet.map = ebm.Map.Map(nlat=dataDouble[iBest,1])
# system = ebm.System(star, planet)
# TotalTimeToCompute = 0.
# Teq = system.get_teq()
# T0 = np.ones_like(system.planet.map.values)*Teq
# t0 = 0.
# t1 = t0+system.planet.Porb*1
# dt = system.planet.Porb/dataDouble[iBest,0]
# times, maps, ttc = system.run_model_tester(T0, t0, t1, dt, verbose=False)
# TotalTimeToCompute += ttc
# if (planet.orbit.e != 0):
# T0 = maps[-1]
# t0 = times[-1]
# t1 = t0+system.planet.Porb
# dt = system.planet.Porb/dataDouble[iBest,0]
# times, maps, ttc = system.run_model_tester(T0, t0, t1, dt, verbose=False, intermediates=True)
# TotalTimeToCompute += ttc
# testLightcurve = system.lightcurve(times, maps, bolo=False, wav=4.5e-6)
# phaseTest = system.get_phase(times).flatten()
# order = np.argsort(phaseTest)
# testLightcurve = testLightcurve[order]
# phaseTest = phaseTest[order]
# testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve)
# plt.plot((baselineLightcurve)*1e6, lw=2, c='g')
# plt.plot((testLightcurve)*1e6, lw=1, c='r')
# plt.title("Lightcurves of baseline (green) compared to recommended values (red)")
# plt.show()
_____no_output_____temp, space = opt.Optimize(star, planet, 100, verbose=True)Starting baseline generation...
Baseline generation complete; Time to Compute: 2.76s
First pass data points assigned
Starting first pass...
First pass finished : Time to compute: 25.06s
Processing first pass data...
Second pass data points assigned
Starting second pass...
Second pass finished : Time to compute: 3.04s
Processing second pass data...
Temporal: 150.0 Spacial: 6.0
Time for compute: 66.81ms : Error: 44.1ppm
Expected compute time @ 1,000,000 cycles: 18.56 Hrs
phaseBaseline = system.get_phase(blt).flatten()
order = np.argsort(phaseBaseline)
baselineLightcurve = blc[order]
phaseBaseline = phaseBaseline[order]
_star = star
_planet = planet
_planet.map = ebm.Map.Map(nlat=8)
_system = ebm.System(_star, _planet)
tInt = time.time()
Teq = _system.get_teq()
T0 = np.ones_like(_system.planet.map.values)*Teq
t0 = 0.
t1 = t0+_system.planet.Porb
dt = _system.planet.Porb/500
testTimes, testMaps = system.run_model(T0, t0, t1, dt, verbose=False)
if (_planet.orbit.e != 0):
T0 = testMaps[-1]
t0 = testTimes[-1]
t1 = t0+_system.planet.Porb
dt = system.planet.Porb/500
testTimes, testMaps = system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True)
testLightcurve = system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6)
testbeta = testLightcurve
phaseTest = _system.get_phase(testTimes).flatten()
order = np.argsort(phaseTest)
testLightcurve = testLightcurve[order]
testalpha = testLightcurve
phaseTest = phaseTest[order]
testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve)
else:
testLightcurve = system.lightcurve(bolo=False, wav=4.5e-6)
tFin = time.time()
_____no_output_____plt.plot(blc)_____no_output_____plt.plot(testbeta)_____no_output_____plt.plot(testalpha)_____no_output_____plt.plot(testLightcurve)_____no_output_____phaseTest_____no_output_____testTimes_____no_output_____blt_____no_output_____blt.shape_____no_output_____testTimes.shape_____no_output_____def RunTests(star, planet, points, base, basetimes, basemap):
"""
Runs several test of a system and returns time
to compute and error as comapared to baseline for each test.
Args:
star (ebm.Star): The star to runs the tests on
planet (ebm.Planet): The planet to run the tests on
points (2darray (n by 2)): The array of points to be tested by the model,
each point must contain [temporal, spacial], n points are provided
base (ndarray): Baseline lightcurve as generated by the CreateBaseline function
Return:
ndarray: Latest tested lightcurve, mainly used for debugging purposes
ndarray: (n by 4), n points of format [temporal, spacial, time_to_compute, error_in_ppm]
"""
data = np.zeros(shape=(points.shape[0],4))
_star = star
_planet = planet
_system = ebm.System(_star,_planet)
if (_planet.orbit.e != 0):
phaseBaseline = _system.get_phase(basetimes).flatten()
order = np.argsort(phaseBaseline)
baselineLightcurve = base[order]
phaseBaseline = phaseBaseline[order]
for i in range(0, points.shape[0]):
_star = star
_planet = planet
_planet.map = ebm.Map.Map(nlat=points[i,1])
_system = ebm.System(_star, _planet)
data[i,0] = points[i,0]
data[i,1] = points[i,1]
tInt = time.time()
Teq = _system.get_teq()
T0 = np.ones_like(_system.planet.map.values)*Teq
t0 = 0.
t1 = t0+_system.planet.Porb
dt = _system.planet.Porb/points[i,0]
testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False)
# if (_planet.orbit.e != 0):
# T0 = testMaps[-1]
# t0 = testTimes[-1]
# t1 = t0+_system.planet.Porb
# dt = _system.planet.Porb/points[i,0]
# testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True)
# testLightcurve = _system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6)
# phaseTest = _system.get_phase(testTimes).flatten()
# order = np.argsort(phaseTest)
# testLightcurve = testLightcurve[order]
# phaseTest = phaseTest[order]
# testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve)
# else:
# testLightcurve = _system.lightcurve(bolo=False, wav=4.5e-6)
if (_planet.orbit.e != 0):
T0 = testMaps[-1]
t0 = testTimes[-1]
t1 = t0+_system.planet.Porb
dt = system.planet.Porb/points[i,0]
testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True)
testLightcurve = _system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6)
#testbeta = testLightcurve
phaseTest = _system.get_phase(testTimes).flatten()
order = np.argsort(phaseTest)
testLightcurve = testLightcurve[order]
#testalpha = testLightcurve
phaseTest = phaseTest[order]
testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve)
else:
            testLightcurve = _system.lightcurve(bolo=False, wav=4.5e-6)
tFin = time.time()
data[i,3] = (1e6)*(np.amax(np.absolute(base - testLightcurve)))
data[i,2] = (tFin - tInt)*(1e3)
return testLightcurve, data_____no_output_____light, ded = RunTests(star,planet,p,blc,blt,blm)_____no_output_____plt.plot(light)_____no_output_____phaseBaseline = system.get_phase(blt).flatten()
order = np.argsort(phaseBaseline)
baselineLightcurve = blc[order]
phaseBaseline = phaseBaseline[order]
_____no_output_____#==========Imports==========
import numpy as np
import matplotlib.pyplot as plt
import astropy.constants as const
import time
from scipy import interpolate
import Zach_OPTIMIZER.EBMFunctions as opt
import Bell_EBM as ebm_____no_output_____
def RunTests(star, planet, points, base, basetimes):
"""
    Runs several tests of a system and returns the time
    to compute and the error compared to the baseline for each test.
    Args:
        star (ebm.Star): The star to run the tests on
        planet (ebm.Planet): The planet to run the tests on
        points (2darray (n by 2)): The array of points to be tested by the model,
            each point must contain [temporal, spatial], n points are provided
        base (ndarray): Baseline lightcurve as generated by the CreateBaseline function
        basetimes (ndarray): Baseline times as generated by the CreateBaseline function
    Return:
        ndarray: Latest tested lightcurve, mainly used for debugging purposes
        ndarray: (n by 4), n points of format [temporal, spatial, time_to_compute, error_in_ppm]
"""
data = np.zeros(shape=(points.shape[0],4))
_star = star
_planet = planet
_system = ebm.System(_star,_planet)
if (_planet.orbit.e != 0):
phaseBaseline = _system.get_phase(basetimes).flatten()
order = np.argsort(phaseBaseline)
baselineLightcurve = base[order]
phaseBaseline = phaseBaseline[order]
for i in range(0, points.shape[0]):
_star = star
_planet = planet
_planet.map = ebm.Map.Map(nlat=points[i,1])
_system = ebm.System(_star, _planet)
data[i,0] = points[i,0]
data[i,1] = points[i,1]
tInt = time.time()
Teq = _system.get_teq()
T0 = np.ones_like(_system.planet.map.values)*Teq
t0 = 0.
t1 = t0+_system.planet.Porb
dt = _system.planet.Porb/points[i,0]
testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False)
if (_planet.orbit.e != 0):
T0 = testMaps[-1]
t0 = testTimes[-1]
t1 = t0+_system.planet.Porb
dt = _system.planet.Porb/points[i,0]
testTimes, testMaps = _system.run_model(T0, t0, t1, dt, verbose=False, intermediates=True)
testLightcurve = _system.lightcurve(testTimes, testMaps, bolo=False, wav=4.5e-6)
phaseTest = _system.get_phase(testTimes).flatten()
order = np.argsort(phaseTest)
testLightcurve = testLightcurve[order]
phaseTest = phaseTest[order]
testLightcurve = np.interp(phaseBaseline, phaseTest, testLightcurve)
else:
testLightcurve = _system.lightcurve(bolo=False, wav=4.5e-6)
tFin = time.time()
data[i,3] = (1e6)*(np.amax(np.absolute(base - testLightcurve)))
data[i,2] = (tFin - tInt)*(1e3)
return testLightcurve, data
_____no_output_____
</code>
|
{
"repository": "ZYVE255/ebm-optimizer",
"path": "MiscNotebookFiles/Testing.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 254907,
"hexsha": "cb43e50e45368a3f99aafc86f7a0f96371099e03",
"max_line_length": 18072,
"avg_line_length": 88.3559792028,
"alphanum_fraction": 0.745091347
}
|
# Notebook from SystemsBiologyUniandes/PyEcoLib
Path: .ipynb_checkpoints/SSA_slow-checkpoint.ipynb
<code>
import math
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy.stats import bayes_mvs as bayesest
import os
import time
from szsimulator import Szsimulator
%matplotlib inline_____no_output_____mean_size = 3 # micron
doubling_time = 18 #min
tmax = 180 #min
sample_time = 2 #min
div_steps = 10
ncells = 1000_____no_output_____gr = np.log(2)/doubling_time
kd = div_steps*gr/(mean_size)_____no_output_____
ncells = 2000
sampling_time = sample_time
rprom = 10 # RNA mean concentration
pprom = 1000 # prot mean concentration
gammar = 5*gr # RNA Active degradation rate
kr = rprom*(gr+gammar) # RNA transcription rate
kp = pprom*gr/rprom # Protein translation rate
pop = np.zeros([ncells,6])
indexes = np.int(tmax/sampling_time)
rarray = np.zeros([ncells,indexes])
parray = np.zeros([ncells,indexes])
tarray = np.zeros([indexes])
szarray = np.zeros([ncells,indexes])
cellindex = 0
indexref = 0
start = time.time()
for cell in pop:
if ncells > 100:
if cellindex/ncells > indexref:
print(str(np.int(100*cellindex/ncells))+"%")
indexref += 0.1
#Initialize the simulator
sim = Szsimulator(tmax = tmax, sample_time = sample_time, ncells=1, gr = gr, k = kd, steps = div_steps)
#_______________
#Example of a direct SSA simulation
cell[0] = mean_size #Initial size
cell[1] = mean_size*rprom #Initial RNA number
cell[2] = mean_size*pprom #Initial Protein number
    cell[3] = (1/gr)*np.log(1-(gr/(kr*cell[0]))*np.log(np.random.rand())) #time to the next rna creation
cell[4] = -np.log(np.random.rand())/(gammar*cell[1]) #time to the next rna degradation
cell[5] = -np.log(np.random.rand())/(kp*cell[1]) #time to next protein creation
t=0
reactions=[[0,1,0,0,0,0],[0,-1,0,0,0,0],[0,0,1,0,0,0]] #Reactions (RNA creation, RNA active degradation, Protein creation)
nextt = 0
index = 0
ndiv = 0
while t<tmax: #iterating over time
nr = cell[1]
nprot = cell[2]
sz = cell[0]
tnextarr = [cell[3],cell[4],cell[5]]
tau = np.min(tnextarr)
cell += reactions[np.argmin(tnextarr)]
#------------------
sim.simulate(tmax=tau,export = False) #Simulate size dynamics for that given time
#--------------------
cell[0] = sim.get_sz(0) #Taking the cell size after that simulation
if sim.get_ndiv(0) > ndiv: #Check if cell got divided
cell[1] = np.random.binomial(nr,0.5) # RNA segregated binomially
cell[2] = np.random.binomial(nprot,0.5) # Protein segregated binomially
ndiv += 1 # New number of divisions
nr = cell[1] #Refreshing RNA number
nprot = cell[2] #Refreshing Protein number
sz = cell[0] #Refreshing size number
        cell[3] = (1/gr)*np.log(1-(gr/(kr*cell[0]))*np.log(np.random.rand())) #time to the next rna creation
cell[4] = -np.log(np.random.rand())/(gammar*cell[1]) #time to the next rna degradation
cell[5] = -np.log(np.random.rand())/(kp*cell[1]) #time to next protein creation
t+=tau
if t > nextt and index<len(tarray): #storing data
rarray[cellindex,index] = nr/sz # RNA concentration
parray[cellindex,index] = nprot/sz # Protein concentration
szarray[cellindex,index] = sz # Cell size
tarray[index] = t # Time
index += 1
nextt += sampling_time
cellindex += 1
print('It took', np.int(time.time()-start), 'seconds.')0%
10%
20%
30%
40%
50%
60%
70%
80%
90%
It took 2835.5868401527405 seconds.
data=pd.DataFrame(np.transpose(np.array(szarray)))
ind=0
newcol=[]
for name in data.columns:
newcol.append("mom"+str(ind))
ind+=1
data.columns=newcol
mnszarray=[]
cvszarray=[]
errcv2sz=[]
errmnsz=[]
for m in range(len(data)):
szs=data.loc[m, :].values.tolist()
mean_cntr, var_cntr, std_cntr = bayesest(szs,alpha=0.95)
mnszarray.append(mean_cntr[0])
errmnsz.append(mean_cntr[1][1]-mean_cntr[0])
cvszarray.append(var_cntr[0]/mean_cntr[0]**2)
errv=(var_cntr[1][1]-var_cntr[0])/mean_cntr[0]**2+2*(mean_cntr[1][1]-mean_cntr[0])*var_cntr[0]/mean_cntr[0]**3
errcv2sz.append(errv)
data['time'] = tarray
data['Mean_sz'] = mnszarray
data['Error_mean'] = errmnsz
data['sz_CV2'] = cvszarray
data['Error_CV2'] = errcv2sz
if not os.path.exists('./data/SSA'):
os.makedirs('./data/SSA')
data.to_csv("./data/SSA/szsim.csv")_____no_output_____tmax=9*doubling_time
dt=0.0001*doubling_time
lamb=1
a=gr
nsteps=div_steps
k=kd
v0=mean_size
#psz1=[]
ndivs=10
t=0
bigdeltat=0.1
steps=int(np.floor(tmax/dt))
u=np.zeros([ndivs,nsteps])#(DIVS,STEPS)
u[0]=np.zeros(nsteps)
u[0][0]=1#P_00
allmeandivs4=[]#average divisions along the time
allvardiv4=[] # variace of pn along the time
allmeansz4=[]
allvarsz4=[]
time4=[]#time array
yenvol=[]
xenvol=[]
start=0
count=int(np.floor(tmax/(dt*1000)))-1
count2=0
start = time.time()
for l in range(steps):
utemp=u
for n in range(len(utemp)):#n=divs,
for m in range(len(utemp[n])):#m=steps
if (m==0):#m=steps
if(n==0):#n=divs
dun=-k*v0**lamb*np.exp(lamb*a*t)*(utemp[0][0])
u[n][m]+=dun*dt
else:
arg=lamb*(a*t-n*np.log(2))
dun=k*v0**lamb*np.exp(arg)*((2**lamb)*utemp[n-1][len(utemp[n])-1]-utemp[n][0])
u[n][m]+=dun*dt
elif(m==len(utemp[n])-1):
if(n==len(utemp)-1):
arg=lamb*(a*t-n*np.log(2))
dun=k*v0**lamb*np.exp(arg)*(utemp[n][len(utemp[n])-2])
u[n][m]+=dun*dt
else:
arg=lamb*(a*t-n*np.log(2))
dun=k*v0**lamb*np.exp(arg)*(utemp[n][m-1]-utemp[n][m])
u[n][m]+=dun*dt
else:
arg=lamb*(a*t-n*np.log(2))
dun=k*v0**lamb*np.exp(arg)*(utemp[n][m-1]-utemp[n][m])
u[n][m]+=dun*dt
t+=dt
count=count+1
if count==int(np.floor(tmax/(dt*1000))):
time4.append(t/doubling_time)
mean=0
for n in range(len(utemp)):
pnval=np.sum(u[n])
mean+=n*pnval
allmeandivs4.append(mean/mean_size)
var=0
for n in range(len(utemp)):#divs
pnval=np.sum(u[n])
var+=(n-mean)**2*pnval
allvardiv4.append(np.sqrt(var))
pn=np.zeros(ndivs)
sizen=np.zeros(ndivs)
meansz=0
for ll in range(len(utemp)):
pnltemp=np.sum(u[ll])#prob of n divs
pn[ll]=pnltemp#
sizen[ll]=np.exp(a*t)/2**ll#
meansz+=pnltemp*v0*np.exp(a*t)/2**ll
allmeansz4.append(meansz)
varsz=0
for ll in range(len(utemp)):
pnltemp=np.sum(u[ll])
varsz+=(v0*np.exp(a*t)/2**ll-meansz)**2*pnltemp
allvarsz4.append(varsz)
count=0
count2+=1
if(count2==100):
print(str(int(100*t/tmax))+"%")
count2=0
print('It took', np.int(time.time()-start), 'seconds.')9%
19%
29%
39%
49%
59%
69%
79%
88%
98%
It took 46 seconds.
fig, ax = plt.subplots(1,2, figsize=(12,4))
#ax[0].plot(tarray,mnszarray)
ax[0].fill_between(np.array(tarray)/doubling_time,np.array(mnszarray)-np.array(errmnsz),np.array(mnszarray)+np.array(errmnsz),
alpha=1, edgecolor='#4db8ff', facecolor='#4db8ff',linewidth=0,label='SSA')
#ax[1].plot(tarray,cvszarray)
ax[1].fill_between(np.array(tarray)/doubling_time,np.array(cvszarray)-np.array(errcv2sz),np.array(cvszarray)+np.array(errcv2sz),
alpha=1, edgecolor='#4db8ff', facecolor='#4db8ff',linewidth=0)
ax[0].plot(np.array(time4),np.array(allmeansz4),lw=2,c='#006599',label="Numerical")
ax[1].plot(np.array(time4),np.array(allvarsz4)/np.array(allmeansz4)**2,lw=2,c='#006599')
ax[0].set_ylabel("$s$ ($\mu$m)",size=20)
ax[1].set_ylabel("$C_V^2(s)$",size=20)
ax[0].set_xlabel(r"$t/\tau$",size=20)
ax[1].set_xlabel(r"$t/\tau$",size=20)
ax[0].set_ylim([1,1.2*np.max(mnszarray)])
ax[1].set_ylim([0,1.2*np.max(cvszarray)])
for l in [0,1]:
ax[l].set_xlim([0,tmax/doubling_time])
taqui=np.arange(0,(tmax+1)/doubling_time,step=1)
ax[l].set_xticks(np.array(taqui))
ax[l].grid()
ax[l].tick_params(axis='x', labelsize=15)
ax[l].tick_params(axis='y', labelsize=15)
for axis in ['bottom','left']:
ax[l].spines[axis].set_linewidth(2)
ax[l].tick_params(axis='both', width=2,length=6)
for axis in ['top','right']:
ax[l].spines[axis].set_linewidth(0)
ax[l].tick_params(axis='both', width=0,length=6)
plt.subplots_adjust(hspace=0.3,wspace=0.3)
taqui=np.arange(0,0.15,step=0.02)
ax[1].set_yticks(np.array(taqui))
ax[0].legend(fontsize=15)
if not os.path.exists('./figures/SSA'):
os.makedirs('./figures/SSA')
plt.savefig('./figures/SSA/size_statistics.svg',bbox_inches='tight')
plt.savefig('./figures/SSA/size_statistics.png',bbox_inches='tight')_____no_output_____data=pd.DataFrame(np.transpose(np.array(rarray)))
ind=0
newcol=[]
for name in data.columns:
newcol.append("mom"+str(ind))
ind+=1
data.columns=newcol
mnrnaarray=[]
cvrnaarray=[]
errcv2rna=[]
errmnrna=[]
for m in range(len(data)):
rnas=data.loc[m, :].values.tolist()
mean_cntr, var_cntr, std_cntr = bayesest(rnas,alpha=0.95)
mnrnaarray.append(mean_cntr[0])
errmnrna.append(mean_cntr[1][1]-mean_cntr[0])
cvrnaarray.append(var_cntr[0]/mean_cntr[0]**2)
errv=(var_cntr[1][1]-var_cntr[0])/mean_cntr[0]**2+2*(mean_cntr[1][1]-mean_cntr[0])*var_cntr[0]/mean_cntr[0]**3
errcv2rna.append(errv)
data['time'] = tarray
data['Mean_RNA'] = mnrnaarray
data['Error_mean'] = errmnrna
data['RNA_CV2'] = cvrnaarray
data['Error_CV2'] = errcv2rna
if not os.path.exists('./data/SSA'):
os.makedirs('./data/SSA')
data.to_csv("./data/SSA/RNAsim.csv")_____no_output_____fig, ax = plt.subplots(1,2, figsize=(12,4))
ax[0].plot(np.array(tarray)/doubling_time,mnrnaarray,c="#BD0025")
ax[0].fill_between(np.array(tarray)/doubling_time,np.array(mnrnaarray)-np.array(errmnrna),np.array(mnrnaarray)+np.array(errmnrna),
alpha=1, edgecolor='#FF3333', facecolor='#FF3333',linewidth=0)
ax[1].plot(np.array(tarray)/doubling_time,cvrnaarray,c="#BD0025")
ax[1].fill_between(np.array(tarray)/doubling_time,np.array(cvrnaarray)-np.array(errcv2rna),np.array(cvrnaarray)+np.array(errcv2rna),
alpha=1, edgecolor='#FF3333', facecolor='#FF3333',linewidth=0)
ax[0].set_ylabel("RNA",size=20)
ax[1].set_ylabel("$C_V^2(r)$",size=20)
ax[0].set_xlabel(r"$t/\tau$",size=20)
ax[1].set_xlabel(r"$t/\tau$",size=20)
ax[0].set_ylim([0,1.2*np.max(mnrnaarray)])
ax[1].set_ylim([0,1.2*np.max(cvrnaarray)])
for l in [0,1]:
ax[l].set_xlim([0,tmax/doubling_time])
taqui=np.arange(0,(tmax+1)/doubling_time,step=1)
ax[l].set_xticks(np.array(taqui))
ax[l].grid()
ax[l].tick_params(axis='x', labelsize=15)
ax[l].tick_params(axis='y', labelsize=15)
for axis in ['bottom','left']:
ax[l].spines[axis].set_linewidth(2)
ax[l].tick_params(axis='both', width=2,length=6)
for axis in ['top','right']:
ax[l].spines[axis].set_linewidth(0)
ax[l].tick_params(axis='both', width=0,length=6)
plt.subplots_adjust(hspace=0.3,wspace=0.3)
taqui=np.arange(0,1.2*np.max(cvrnaarray),step=np.round(.2*np.max(cvrnaarray),2))
ax[1].set_yticks(np.array(taqui))
if not os.path.exists('./figures/SSA'):
os.makedirs('./figures/SSA')
plt.savefig('./figures/SSA/rna_statistics.svg',bbox_inches='tight')
plt.savefig('./figures/SSA/rna_statistics.png',bbox_inches='tight')_____no_output_____data=pd.DataFrame(np.transpose(np.array(parray)))
ind=0
newcol=[]
for name in data.columns:
newcol.append("mom"+str(ind))
ind+=1
data.columns=newcol
mnprotarray=[]
cvprotarray=[]
errcv2prot=[]
errmnprot=[]
for m in range(len(data)):
rnas=data.loc[m, :].values.tolist()
mean_cntr, var_cntr, std_cntr = bayesest(rnas,alpha=0.95)
mnprotarray.append(mean_cntr[0])
errmnprot.append(mean_cntr[1][1]-mean_cntr[0])
cvprotarray.append(var_cntr[0]/mean_cntr[0]**2)
errv=(var_cntr[1][1]-var_cntr[0])/mean_cntr[0]**2+2*(mean_cntr[1][1]-mean_cntr[0])*var_cntr[0]/mean_cntr[0]**3
errcv2prot.append(errv)
data['time'] = tarray
data['Mean_prot'] = mnprotarray
data['Error_mean'] = errmnprot
data['prot_CV2'] = cvprotarray
data['Error_CV2'] = errcv2prot
if not os.path.exists('./data/SSA'):
os.makedirs('./data/SSA')
data.to_csv("./data/SSA/protsim.csv")_____no_output_____fig, ax = plt.subplots(1,2, figsize=(12,4))
ax[0].plot(np.array(tarray)/doubling_time,mnprotarray,c="#3BB000")
ax[0].fill_between(np.array(tarray)/doubling_time,np.array(mnprotarray)-np.array(errmnprot),np.array(mnprotarray)+np.array(errmnprot),
alpha=1, edgecolor='#4BE000', facecolor='#4BE000',linewidth=0)
ax[1].plot(np.array(tarray)/doubling_time,cvprotarray,c="#3BB000")
ax[1].fill_between(np.array(tarray)/doubling_time,np.array(cvprotarray)-np.array(errcv2prot),np.array(cvprotarray)+np.array(errcv2prot),
alpha=1, edgecolor='#4BE000', facecolor='#4BE000',linewidth=0)
ax[0].set_ylabel("Protein",size=20)
ax[1].set_ylabel("$C_V^2(p)$",size=20)
ax[0].set_xlabel(r"$t/\tau$",size=20)
ax[1].set_xlabel(r"$t/\tau$",size=20)
ax[0].set_ylim([0,1.2*np.max(mnprotarray)])
ax[1].set_ylim([0,1.2*np.max(cvprotarray)])
for l in [0,1]:
ax[l].set_xlim([0,tmax/doubling_time])
taqui=np.arange(0,(tmax+1)/doubling_time,step=1)
ax[l].set_xticks(np.array(taqui))
ax[l].grid()
ax[l].tick_params(axis='x', labelsize=15)
ax[l].tick_params(axis='y', labelsize=15)
for axis in ['bottom','left']:
ax[l].spines[axis].set_linewidth(2)
ax[l].tick_params(axis='both', width=2,length=6)
for axis in ['top','right']:
ax[l].spines[axis].set_linewidth(0)
ax[l].tick_params(axis='both', width=0,length=6)
plt.subplots_adjust(hspace=0.3,wspace=0.5)
taqui=np.arange(0,1.2*np.max(cvprotarray),step=np.round(.2*np.max(cvprotarray),4))
ax[1].set_yticks(np.array(taqui))
if not os.path.exists('./figures'):
os.makedirs('./figures')
if not os.path.exists('./figures/SSA'):
os.makedirs('./figures/SSA')
plt.savefig('./figures/SSA/prot_statistics.svg',bbox_inches='tight')
plt.savefig('./figures/SSA/prot_statistics.png',bbox_inches='tight')_____no_output_____
</code>
|
{
"repository": "SystemsBiologyUniandes/PyEcoLib",
"path": ".ipynb_checkpoints/SSA_slow-checkpoint.ipynb",
"matched_keywords": [
"RNA"
],
"stars": 1,
"size": 155475,
"hexsha": "cb4474f34cc77b44fb283a0a4a557b3e149f4b73",
"max_line_length": 65384,
"avg_line_length": 280.6407942238,
"alphanum_fraction": 0.9058498151
}
|
# Notebook from waytobehigh/nlp_course
Path: week03_lm/homework.ipynb
### Homework: going neural (6 pts)
We've checked out statistical approaches to language models in the last notebook. Now let's go find out what deep learning has to offer.
<img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/expanding_mind_lm_kn_3.png' width=300px>
We're gonna use the same dataset as before, except this time we build a language model that's character-level, not word level._____no_output_____
<code>
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline_____no_output_____
</code>
Working on character level means that we don't need to deal with large vocabulary or missing words. Heck, we can even keep uppercase words in text! The downside, however, is that all our sequences just got a lot longer.
However, we still need special tokens:
* Begin Of Sequence (__BOS__) - this token is at the start of each sequence. We use it so that we always have non-empty input to our neural network. $P(x_1) = P(x_1 \mid BOS)$
* End Of Sequence (__EOS__) - you guess it... this token is at the end of each sequence. The catch is that it should __not__ occur anywhere else except at the very end. If our model produces this token, the sequence is over.
_____no_output_____
<code>
BOS, EOS = ' ', '\n'
data = pd.read_json("./arxivData.json")
lines = data.apply(lambda row: (row['title'] + ' ; ' + row['summary'])[:512], axis=1) \
.apply(lambda line: BOS + line.replace(EOS, ' ') + EOS) \
.tolist()
# if you missed the seminar, download data here - https://yadi.sk/d/_nGyU2IajjR9-w_____no_output_____
</code>
Our next step is __building char-level vocabulary__. Put simply, you need to assemble a list of all unique tokens in the dataset._____no_output_____
<code>
# get all unique characters from lines (including capital letters and symbols)
tokens = set(''.join(lines))
tokens = sorted(tokens)
n_tokens = len(tokens)
print ('n_tokens = ',n_tokens)
assert 100 < n_tokens < 150
assert BOS in tokens, EOS in tokensn_tokens = 136
</code>
We can now assign each character its index in the tokens list. This way we can encode a string into a TF-friendly integer vector._____no_output_____
<code>
# dictionary of character -> its identifier (index in tokens list)
token_to_id = {token: id for id, token in enumerate(tokens)}_____no_output_____assert len(tokens) == len(token_to_id), "dictionaries must have same size"
for i in range(n_tokens):
    assert token_to_id[tokens[i]] == i, "token identifier must be its position in tokens list"
print("Seems alright!")Seems alright!
</code>
Our final step is to assemble several strings into an integer matrix `[batch_size, text_length]`.
The only problem is that each sequence has a different length. We can work around that by padding short sequences with extra _EOS_ or cropping long sequences. Here's how it works:_____no_output_____
<code>
def to_matrix(lines, max_len=None, pad=token_to_id[EOS], dtype='int32'):
"""Casts a list of lines into tf-digestable matrix"""
max_len = max_len or max(map(len, lines))
lines_ix = np.zeros([len(lines), max_len], dtype) + pad
for i in range(len(lines)):
line_ix = list(map(token_to_id.get, lines[i][:max_len]))
lines_ix[i, :len(line_ix)] = line_ix
return lines_ix_____no_output_____#Example: cast 4 random names to matrices, pad with zeros
dummy_lines = [
' abc\n',
' abacaba\n',
' abc1234567890\n',
]
print(to_matrix(dummy_lines))
[[ 1 66 67 68 0 0 0 0 0 0 0 0 0 0 0]
[ 1 66 67 66 68 66 67 66 0 0 0 0 0 0 0]
[ 1 66 67 68 18 19 20 21 22 23 24 25 26 17 0]]
</code>
### Neural Language Model
Just like for N-gram LMs, we want to estimate probability of text as a joint probability of tokens (symbols this time).
$$P(X) = \prod_t P(x_t \mid x_0, \dots, x_{t-1}).$$
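For concreteness, with the BOS/EOS wrapping above, the product for a line of three characters $x_1 x_2 x_3$ unrolls as:
$$ P(X) = P(x_1 \mid BOS)\, P(x_2 \mid BOS, x_1)\, P(x_3 \mid BOS, x_1, x_2)\, P(EOS \mid BOS, x_1, x_2, x_3). $$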
Instead of counting all possible statistics, we want to train a neural network with parameters $\theta$ that estimates the conditional probabilities:
$$ P(x_t \mid x_0, \dots, x_{t-1}) \approx p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$
But before we optimize, we need to define our neural network. Let's start with a fixed-window (aka convolutional) architecture:
<img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/fixed_window_lm.jpg' width=400px>
_____no_output_____
<code>
import tensorflow as tf
import keras, keras.layers as L
sess = tf.InteractiveSession()Using TensorFlow backend.
class FixedWindowLanguageModel:
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=64):
"""
        A fixed window model that conditions on at least 5 previous symbols.
        Note: a fixed window LM is effectively performing a convolution over a sequence of words.
        This convolution only looks at the current and previous words.
Such convolution can be represented as a sequence of 2 operations:
- pad input vectors by {strides * (filter_size - 1)} zero vectors on the "left", do not pad right
- perform regular convolution with {filter_size} and {strides}
You can stack several convolutions at once
"""
#YOUR CODE - create layers/variables and any metadata you want, e.g. self.emb = L.Embedding(...)
self.emb = L.Embedding(input_dim=n_tokens, output_dim=emb_size)
self.conv1 = L.Convolution1D(filters=hid_size, kernel_size=5,
padding='causal', name='conv1')
self.conv2 = L.Convolution1D(filters=n_tokens, kernel_size=5,
padding='causal', name='conv2')
self.activation = L.Activation('relu')
#END OF YOUR CODE
self.prefix_ix = tf.placeholder('int32', [None, None])
self.next_token_probs = tf.nn.softmax(self(self.prefix_ix)[:, -1])
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
embedding = self.emb(input_ix)
conv1 = self.conv1(embedding)
conv1 = self.activation(conv1)
conv2 = self.conv2(conv1)
return conv2
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100, sess=sess):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
probs = sess.run(self.next_token_probs, {self.prefix_ix: to_matrix([prefix])})[0]
return dict(zip(tokens, probs))
_____no_output_____window_lm = FixedWindowLanguageModel()_____no_output_____dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_lm_out = window_lm(dummy_input_ix)
# note: tensorflow and keras layers only create variables after they're first applied (called)
sess.run(tf.global_variables_initializer())
dummy_logits = sess.run(dummy_lm_out)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"_____no_output_____# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_lm_out_2 = window_lm(dummy_input_ix_2)
dummy_logits_2 = sess.run(dummy_lm_out_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."_____no_output_____
</code>
We can now tune our network's parameters to minimize categorical crossentropy over training dataset $D$:
$$ L = \frac{1}{|D|} \sum_{X \in D} \sum_{x_t \in X} - \log p(x_t \mid x_1, \dots, x_{t-1}, \theta) $$
As usual with neural nets, this optimization is performed via stochastic gradient descent with backprop. One can also note that minimizing crossentropy is equivalent to minimizing model __perplexity__ or KL-divergence, or to maximizing log-likelihood._____no_output_____
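To make the perplexity connection explicit: if $L$ is the average negative log-likelihood per token (in nats), the corresponding per-token perplexity is simply
$$ \text{perplexity} = e^{L}, $$
so decreasing $L$ decreases perplexity monotonically.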
<code>
def compute_lengths(input_ix, eos_ix=token_to_id[EOS]):
""" compute length of each line in input ix (incl. first EOS), int32 vector of shape [batch_size] """
count_eos = tf.cumsum(tf.to_int32(tf.equal(input_ix, eos_ix)), axis=1, exclusive=True)
lengths = tf.reduce_sum(tf.to_int32(tf.equal(count_eos, 0)), axis=1)
return lengths
print('matrix:\n', dummy_input_ix.eval())
print('lengths:', compute_lengths(dummy_input_ix).eval())matrix:
[[ 1 66 67 68 0 0 0 0 0 0 0 0 0 0 0]
[ 1 66 67 66 68 66 67 66 0 0 0 0 0 0 0]
[ 1 66 67 68 18 19 20 21 22 23 24 25 26 17 0]]
lengths: [ 5 9 15]
input_ix = tf.placeholder('int32', [None, None])
logits = window_lm(input_ix[:, :-1])
reference_answers = input_ix[:, 1:]
# Your task: implement loss function as per formula above
# your loss should only be computed on actual tokens, excluding padding
# predicting actual tokens and first EOS do count. Subsequent EOS-es don't
# you will likely need to use compute_lengths and/or tf.sequence_mask to get it right.
lengths = compute_lengths(input_ix)
mask = tf.to_float(tf.sequence_mask(lengths, tf.shape(input_ix)[1])[:, 1:])
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=reference_answers, logits=logits)
loss = tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)
# operation to update network weights
train_step = tf.train.AdamOptimizer().minimize(loss)_____no_output_____loss_1 = sess.run(loss, {input_ix: to_matrix(dummy_lines, max_len=50)})
loss_2 = sess.run(loss, {input_ix: to_matrix(dummy_lines, max_len=100)})
assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar"
assert np.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. '\
'Hint: use tf.sequence_mask. Beware +/-1 errors. And be careful when averaging!'_____no_output_____
</code>
### Training loop
Now let's train our model on minibatches of data_____no_output_____
<code>
from sklearn.model_selection import train_test_split
train_lines, dev_lines = train_test_split(lines, test_size=0.25, random_state=42)
sess.run(tf.global_variables_initializer())
batch_size = 256
score_dev_every = 250
train_history, dev_history = [], []_____no_output_____def score_lines(dev_lines, batch_size):
""" computes average loss over the entire dataset """
dev_loss_num, dev_loss_len = 0., 0.
for i in range(0, len(dev_lines), batch_size):
batch_ix = to_matrix(dev_lines[i: i + batch_size])
dev_loss_num += sess.run(loss, {input_ix: batch_ix}) * len(batch_ix)
dev_loss_len += len(batch_ix)
return dev_loss_num / dev_loss_len
def generate(lm, prefix=BOS, temperature=1.0, max_len=100):
"""
Samples output sequence from probability distribution obtained by lm
:param temperature: samples proportionally to lm probabilities ^ temperature
if temperature == 0, always takes most likely token. Break ties arbitrarily.
"""
while True:
token_probs = lm.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
if temperature == 0:
next_token = tokens[np.argmax(probs)]
else:
probs = np.array([p ** (1. / temperature) for p in probs])
probs /= sum(probs)
next_token = np.random.choice(tokens, p=probs)
prefix += next_token
if next_token == EOS or len(prefix) > max_len: break
return prefix
if len(dev_history) == 0:
dev_history.append((0, score_lines(dev_lines, batch_size)))
print("Before training:", generate(window_lm, 'Bridging'))Before training: BridgingY"öŁfcμF}'[GÜàβáLWÜμ'ρ{1xZYεL(S}+8V#!ô|4`,ü."e(7;My.χ"èDÖlμaõ|00KÉó93/(n+A4ŁW<R>RS!4èFM%q:A°É
from IPython.display import clear_output
from random import sample
from tqdm import trange
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
loss_i, _ = sess.run([loss, train_step], {input_ix: batch})
train_history.append((i, loss_i))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for j in range(3):
print(generate(window_lm, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
_____no_output_____assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(window_lm, temperature=0.5))Final dev loss: 1.577635495837142
Search in the sements are of the model method space of a propose a novel method for stocally sentati
Acture and compution training a sumplitical structure of the are is a propose sets of the the the fu
The relater stoch as the method foun and maching cansumed to the problem with a noural networks wher
Evolution of a structure is the detical contration of the and as agence able of the annomation from
Application of semails classification and problems of the enterent deep neural network (CNN) and met
Datasion poselation of the segmentation for exploit the fields for distrate of a monsider be often g
Matrix Space of the context and event in the search to componition of the object to dead for constru
Set of shown go use of convolutional Networks ; Tho subleation ; The recormation problems a large fo
In this paper work of adgation of the conved of the problem of the Semband for state the explodica
A Botification and explorical sefference models of the sequence the problems for $n$ convertation wi
</code>
### RNN Language Models
Fixed-size architectures are reasonably good when capturing short-term dependencies, but their design prevents them from capturing any signal outside their window. We can mitigate this problem by using a __recurrent neural network__:
$$ h_0 = \vec 0 ; \quad h_{t+1} = RNN(x_t, h_t) $$
$$ p(x_t \mid x_0, \dots, x_{t-1}, \theta) = dense_{softmax}(h_t) $$
Such a model processes one token at a time, left to right, and maintains a hidden state vector between them. Theoretically, it can learn arbitrarily long temporal dependencies given a large enough hidden size.
<img src='https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/rnn_lm.jpg' width=480px>_____no_output_____
<code>
class RNNLanguageModel:
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=256):
"""
Build a recurrent language model.
You are free to choose anything you want, but the recommended architecture is
- token embeddings
- one or more LSTM/GRU layers with hid size
- linear layer to predict logits
"""
# YOUR CODE - create layers/variables/etc
self.emb = L.Embedding(n_tokens, emb_size)
self.lstm = L.LSTM(hid_size, return_sequences=True)
self.linear = L.Dense(n_tokens)
#END OF YOUR CODE
self.prefix_ix = tf.placeholder('int32', [None, None])
self.next_token_probs = tf.nn.softmax(self(self.prefix_ix)[:, -1])
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
embedding = self.emb(input_ix)
lstm = self.lstm(embedding)
linear = self.linear(lstm)
return linear
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100, sess=sess):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
probs = sess.run(self.next_token_probs, {self.prefix_ix: to_matrix([prefix])})[0]
return dict(zip(tokens, probs))
_____no_output_____rnn_lm = RNNLanguageModel()_____no_output_____dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_lm_out = rnn_lm(dummy_input_ix)
# note: tensorflow and keras layers only create variables after they're first applied (called)
sess.run(tf.global_variables_initializer())
dummy_logits = sess.run(dummy_lm_out)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"_____no_output_____# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_lm_out_2 = rnn_lm(dummy_input_ix_2)
dummy_logits_2 = sess.run(dummy_lm_out_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."_____no_output_____
</code>
### RNN training
Our RNN language model should optimize the same loss function as the fixed-window model. But there's a catch. Since an RNN recurrently multiplies gradients through many time-steps, gradient values may explode, [breaking](https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/nan.jpg) your model.
The common solution to that problem is to clip gradients either [individually](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/clip_by_value) or [globally](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/clip_by_global_norm).
Your task here is to prepare a tensorflow graph that would minimize the same loss function. If you encounter large loss fluctuations during training, please add gradient clipping using the URLs above.
_Note: gradient clipping is not exclusive to RNNs. Convolutional networks with enough depth often suffer from the same issue.______no_output_____
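For reference, here is a minimal sketch of the global-norm variant, written with the same TF1-style API used elsewhere in this notebook. It is only an illustration, not the graded solution: it assumes a scalar `loss` tensor like the one built in the next cell, and `clip_norm=5.0` is an arbitrary illustrative value.
<code>
# Sketch: clip gradients by their global norm before applying the Adam update.
# Assumes `loss` is a scalar tensor (e.g. the masked crossentropy defined below).
optimizer = tf.train.AdamOptimizer()
grads_and_vars = optimizer.compute_gradients(loss)               # list of (gradient, variable) pairs
grads, variables = zip(*grads_and_vars)
clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)  # rescale if the global norm exceeds 5
clipped_train_step = optimizer.apply_gradients(list(zip(clipped_grads, variables)))
</code>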
<code>
input_ix = tf.placeholder('int32', [None, None])
logits = rnn_lm(input_ix[:, :-1])
reference_answers = input_ix[:, 1:]
# Copy the loss function and train step from the fixed-window model training
lengths = compute_lengths(input_ix)
mask = tf.to_float(tf.sequence_mask(lengths, tf.shape(input_ix)[1])[:, 1:])
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=reference_answers, logits=logits)
loss = tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)
# and the train step
train_step = tf.train.AdamOptimizer().minimize(loss)_____no_output_____loss_1 = sess.run(loss, {input_ix: to_matrix(dummy_lines, max_len=50)})
loss_2 = sess.run(loss, {input_ix: to_matrix(dummy_lines, max_len=100)})
assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar"
assert np.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. Hint: use tf.sequence_mask. Be careful when averaging!'_____no_output_____
</code>
### RNN: Training loop_____no_output_____
<code>
sess.run(tf.global_variables_initializer())
batch_size = 128
score_dev_every = 250
train_history, dev_history = [], []
dev_history.append((0, score_lines(dev_lines, batch_size)))_____no_output_____for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
loss_i, _ = sess.run([loss, train_step], {input_ix: batch})
train_history.append((i, loss_i))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for j in range(3):
print(generate(rnn_lm, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
_____no_output_____assert np.mean(train_history[:10]) > np.mean(train_history[-10:]), "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(rnn_lm, temperature=0.5))Final dev loss: 1.1499693006189857
Fast Classification for Machine Learning ; This paper presents a set of a compositional information
Scalable Convolutional Neural Networks ; In this paper we present a novel approach that are introduc
Regression from a Generative Semantic Algorithm for Statistical Deep Networks ; The problem of discr
A Constraint Optimal Transformation of Scale Transition ; This paper presents on the distributions o
A Convergence and Classification of Action Resparse Selection ; Specific methods are a problems of a
A Diagnostic Approach for Automatic Convex Algorithms ; Computational accurate content of the proble
Semantics for Network for the adversarial networks ; Described from the first sentence of the weak m
Sward Computer Neural Networks ; We show that popular image segmentation from a simple methods combi
The Set Samples Based Model for Generation ; We present a new consistency set of sentence in the con
Appearance Systems ; In this paper we propose a novel approaches to the interest capturing systems a
</code>
### Bonus quest: Ultimate Language Model
So you've learned the building blocks of neural language models, you can now build the ultimate monster:
* Make it char-level, word level or maybe use sub-word units like [bpe](https://github.com/rsennrich/subword-nmt);
* Combine convolutions, recurrent cells, pre-trained embeddings and all the black magic deep learning has to offer;
* Use strides to get larger window size quickly. Here's a [scheme](https://storage.googleapis.com/deepmind-live-cms/documents/BlogPost-Fig2-Anim-160908-r01.gif) from google wavenet.
* Train on large data. Like... really large. Try [1 Billion Words](http://www.statmt.org/lm-benchmark/1-billion-word-language-modeling-benchmark-r13output.tar.gz) benchmark;
* Use training schedules to speed up training. Start with small length and increase over time; Take a look at [one cycle](https://medium.com/@nachiket.tanksale/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6) for learning rate;
_You are NOT required to submit this assignment. Please make sure you don't miss your deadline because of it :)______no_output_____
|
{
"repository": "waytobehigh/nlp_course",
"path": "week03_lm/homework.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 2,
"size": 77640,
"hexsha": "cb45642fe58f9499f9f83626ad7d526d5c6101e4",
"max_line_length": 22572,
"avg_line_length": 87.138047138,
"alphanum_fraction": 0.7989052035
}
|
# Notebook from marcellovictorino/DAND_4_Data_Wrangling
Path: 2) Rotten Tomatoes Movie Score/Rotten Tomatoes Movies - Roger Ebert Reviews.ipynb
<code>
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from bs4 import BeautifulSoup
import os
import unicodedata_____no_output_____
</code>
## Reviews from Roger Ebert_____no_output_____
<code>
# Reading Roger Ebert review from text files online
import requests
import glob
folder = 'ebert_reviews'
# Create folder if it doesn't already exists
if not os.path.exists(folder):
os.makedirs(folder)_____no_output_____ebert_review_urls = ['https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9900_1-the-wizard-of-oz-1939-film/1-the-wizard-of-oz-1939-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9901_2-citizen-kane/2-citizen-kane.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9901_3-the-third-man/3-the-third-man.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9902_4-get-out-film/4-get-out-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9902_5-mad-max-fury-road/5-mad-max-fury-road.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9902_6-the-cabinet-of-dr.-caligari/6-the-cabinet-of-dr.-caligari.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9903_7-all-about-eve/7-all-about-eve.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9903_8-inside-out-2015-film/8-inside-out-2015-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9903_9-the-godfather/9-the-godfather.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9904_10-metropolis-1927-film/10-metropolis-1927-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9904_11-e.t.-the-extra-terrestrial/11-e.t.-the-extra-terrestrial.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9904_12-modern-times-film/12-modern-times-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9904_14-singin-in-the-rain/14-singin-in-the-rain.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9905_15-boyhood-film/15-boyhood-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9905_16-casablanca-film/16-casablanca-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9905_17-moonlight-2016-film/17-moonlight-2016-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9906_18-psycho-1960-film/18-psycho-1960-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9906_19-laura-1944-film/19-laura-1944-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9906_20-nosferatu/20-nosferatu.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9907_21-snow-white-and-the-seven-dwarfs-1937-film/21-snow-white-and-the-seven-dwarfs-1937-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9907_22-a-hard-day27s-night-film/22-a-hard-day27s-night-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9907_23-la-grande-illusion/23-la-grande-illusion.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9908_25-the-battle-of-algiers/25-the-battle-of-algiers.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9908_26-dunkirk-2017-film/26-dunkirk-2017-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9908_27-the-maltese-falcon-1941-film/27-the-maltese-falcon-1941-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9909_29-12-years-a-slave-film/29-12-years-a-slave-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9909_30-gravity-2013-film/30-gravity-2013-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9909_31-sunset-boulevard-film/31-sunset-boulevard-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990a_32-king-kong-1933-film/32-king-kong-1933-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990a_33-spotlight-film/33-spotlight-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990a_34-the-adventures-of-robin-hood/34-the-adventures-of-robin-hood.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990b_35-rashomon/35-rashomon.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990b_36-rear-window/36-rear-window.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990b_37-selma-film/37-selma-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990c_38-taxi-driver/38-taxi-driver.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990c_39-toy-story-3/39-toy-story-3.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990c_40-argo-2012-film/40-argo-2012-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990d_41-toy-story-2/41-toy-story-2.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990d_42-the-big-sick/42-the-big-sick.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990d_43-bride-of-frankenstein/43-bride-of-frankenstein.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990d_44-zootopia/44-zootopia.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990e_45-m-1931-film/45-m-1931-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990e_46-wonder-woman-2017-film/46-wonder-woman-2017-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990e_48-alien-film/48-alien-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990f_49-bicycle-thieves/49-bicycle-thieves.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990f_50-seven-samurai/50-seven-samurai.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad990f_51-the-treasure-of-the-sierra-madre-film/51-the-treasure-of-the-sierra-madre-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9910_52-up-2009-film/52-up-2009-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9910_53-12-angry-men-1957-film/53-12-angry-men-1957-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9910_54-the-400-blows/54-the-400-blows.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9911_55-logan-film/55-logan-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9911_57-army-of-shadows/57-army-of-shadows.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9912_58-arrival-film/58-arrival-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9912_59-baby-driver/59-baby-driver.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9913_60-a-streetcar-named-desire-1951-film/60-a-streetcar-named-desire-1951-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9913_61-the-night-of-the-hunter-film/61-the-night-of-the-hunter-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9913_62-star-wars-the-force-awakens/62-star-wars-the-force-awakens.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9913_63-manchester-by-the-sea-film/63-manchester-by-the-sea-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9914_64-dr.-strangelove/64-dr.-strangelove.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9914_66-vertigo-film/66-vertigo-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9914_67-the-dark-knight-film/67-the-dark-knight-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9915_68-touch-of-evil/68-touch-of-evil.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9915_69-the-babadook/69-the-babadook.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9915_72-rosemary27s-baby-film/72-rosemary27s-baby-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9916_73-finding-nemo/73-finding-nemo.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9916_74-brooklyn-film/74-brooklyn-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9917_75-the-wrestler-2008-film/75-the-wrestler-2008-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9917_77-l.a.-confidential-film/77-l.a.-confidential-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9918_78-gone-with-the-wind-film/78-gone-with-the-wind-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9918_79-the-good-the-bad-and-the-ugly/79-the-good-the-bad-and-the-ugly.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9918_80-skyfall/80-skyfall.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9919_82-tokyo-story/82-tokyo-story.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9919_83-hell-or-high-water-film/83-hell-or-high-water-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9919_84-pinocchio-1940-film/84-pinocchio-1940-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad9919_85-the-jungle-book-2016-film/85-the-jungle-book-2016-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991a_86-la-la-land-film/86-la-la-land-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991b_87-star-trek-film/87-star-trek-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991b_89-apocalypse-now/89-apocalypse-now.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991c_90-on-the-waterfront/90-on-the-waterfront.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991c_91-the-wages-of-fear/91-the-wages-of-fear.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991c_92-the-last-picture-show/92-the-last-picture-show.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991d_93-harry-potter-and-the-deathly-hallows-part-2/93-harry-potter-and-the-deathly-hallows-part-2.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991d_94-the-grapes-of-wrath-film/94-the-grapes-of-wrath-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991d_96-man-on-wire/96-man-on-wire.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991e_97-jaws-film/97-jaws-film.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991e_98-toy-story/98-toy-story.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991e_99-the-godfather-part-ii/99-the-godfather-part-ii.txt',
'https://d17h27t6h515a5.cloudfront.net/topher/2017/September/59ad991e_100-battleship-potemkin/100-battleship-potemkin.txt']_____no_output_____# access url, read content and write to local file
for url in ebert_review_urls:
r = requests.get(url)
    with open(os.path.join(folder, url.split('/')[-1]), 'wb') as file:
file.write(r.content)_____no_output_____len(os.listdir(folder))_____no_output_____
</code>
Note that not all 100 movies have been reviewed by Roger Ebert; in fact, we only have reviews for 88 of the 100 movies on the best-movies list._____no_output_____
<code>
# Parsing each Review
review_list = []
# read all txt files in folder
for review in glob.glob(folder+'/*.txt'):
with open(review, encoding='utf-8') as file:
title = file.readline().strip()
review_url = file.readline().strip()
review_text = file.read().strip()
review_dict = {'title': title, 'review_url': review_url, 'review': review_text}
review_list.append(review_dict)_____no_output_____df_reviews = pd.DataFrame(review_list, columns=review_dict.keys())
df_reviews = df_reviews.sort_values('title').reset_index(drop=True)
df_reviews.head()_____no_output_____# Saving it locally
df_reviews.to_csv('movies_review_text.csv', index=False)_____no_output_____
</code>
|
{
"repository": "marcellovictorino/DAND_4_Data_Wrangling",
"path": "2) Rotten Tomatoes Movie Score/Rotten Tomatoes Movies - Roger Ebert Reviews.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 20756,
"hexsha": "cb47e543c7354e8c6cd0b2890e7d924fc8953f87",
"max_line_length": 199,
"avg_line_length": 57.6555555556,
"alphanum_fraction": 0.591443438
}
|
# Notebook from llondon6/koalas
Path: factory/.ipynb_checkpoints/gmvrfit_2d_example-checkpoint.ipynb
# GMVRFIT 2D Example
<center>Development for a fitting function (greedy+linear based on mvpolyfit and gmvpfit) that handles rational functions</center>_____no_output_____
<code>
# Low-level import
from numpy import *
from numpy.linalg import pinv,lstsq
# Setup ipython environment
%load_ext autoreload
%autoreload 2
%matplotlib inline
# Setup plotting backend
import matplotlib as mpl
mpl.rcParams['lines.linewidth'] = 0.8
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 12
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['axes.titlesize'] = 20
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.pyplot import *
#
from positive import *_____no_output_____
</code>
## Package Development (positive/learning.py)_____no_output_____### Setup test data_____no_output_____
<code>
################################################################################
h = 3
Q = 25
x = h*linspace(-1,1,Q)
y = h*linspace(-1,1,Q)
X,Y = meshgrid(x,y)
# X += np.random.random( X.shape )-0.5
# Y += np.random.random( X.shape )-0.5
zfun = lambda xx,yy: 50 + (1.0 + xx*yy ) / ( 0.8 + xx**2 + yy**2 )
numerator_symbols, denominator_symbols = ['01'], ['00','11']
np.random.seed(42)
ns = 0.1*(np.random.random( X.shape )-0.5)
Z = zfun(X,Y) + ns
domain,scalar_range = ndflatten( [X,Y], Z )
################################################################################_____no_output_____
</code>
### Initiate class object for fitting_____no_output_____
<code>
foo = mvrfit( domain, scalar_range, numerator_symbols, denominator_symbols, verbose=True )_____no_output_____
</code>
### Plot using class method_____no_output_____
<code>
foo.plot()/Library/Python/2.7/site-packages/matplotlib/lines.py:1106: UnicodeWarning: Unicode unequal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if self._markerfacecolor != fc:
</code>
### Generate python string for fit model_____no_output_____
<code>
print foo.__str_python__(precision=8)f = lambda x0,x1: 5.02241109e+01 + 3.85252449e-01 * ( -6.88184091e-01*(x0*x0) + 3.08902945e+00*(x0*x1) + -7.15211199e-01*(x1*x1) + 2.59342646e+00 ) / ( 1.0 + 1.17625631e+00*(x0*x0) + 1.17841567e+00*(x1*x1) )
</code>
### Use greedy algorithm_____no_output_____
<code>
star = gmvrfit( domain, scalar_range, verbose=True )([0;36mgmvrfit[0m)>> Now working deg = 1
&& The estimator has changed by -inf
&& Degree tempering will continue.
False
&& The current boundary is [('1', True)]
&& The current estimator value is 0.999998
([0;36mgmvrfit[0m)>> Now working deg = 2
&& The estimator has changed by -0.922429
&& Degree tempering will continue.
False
&& The current boundary is [('01', True), ('11', True), ('00', False), ('11', False), ('00', True)]
&& The current estimator value is 0.077569
([0;36mgmvrfit[0m)>> Now working deg = 3
&& The estimator has changed by 0.000000
&& Degree tempering will continue.
False
&& The current boundary is [('01', True), ('11', True), ('00', False), ('11', False), ('00', True)]
&& The current estimator value is 0.077569
([0;36mgmvrfit[0m)>> Now working deg = 4
&& The estimator has changed by 0.000000
&& Degree tempering has completed becuase the estimator hasnt changes since the last degree value. The results of the last iteration wil be kept.
True
&& The Final boundary is [('01', True), ('11', True), ('00', False), ('11', False), ('00', True)]
&& The Final estimator value is 0.077569
========================================
# Degree Tempered Positive Greedy Solution:
========================================
f = lambda x0,x1: 5.02241109e+01 + 3.85252449e-01 * ( -6.88184091e-01*(x0*x0) + 3.08902945e+00*(x0*x1) + -7.15211199e-01*(x1*x1) + 2.59342646e+00 ) / ( 1.0 + 1.17625631e+00*(x0*x0) + 1.17841567e+00*(x1*x1) )
############################################
# Applying a Negative Greedy Algorithm
############################################
Iteration #1 (Negative Greedy)
------------------------------------
>> min_estimator = 2.6900e-01
>> The current boundary = [('01', True), ('11', True), ('00', False), ('11', False), ('00', True)]
>> Exiting because |min_est-initial_estimator_value| = |0.269004-0.077569| = |0.191434| > 0.189520.
>> NOTE that the result of the previous iteration will be kept.
========================================
# Negative Greedy Solution:
========================================
f = lambda x0,x1: 5.02241109e+01 + 3.85252449e-01 * ( -6.88184091e-01*(x0*x0) + 3.08902945e+00*(x0*x1) + -7.15211199e-01*(x1*x1) + 2.59342646e+00 ) / ( 1.0 + 1.17625631e+00*(x0*x0) + 1.17841567e+00*(x1*x1) )
Fit Information:
----------------------------------------
f = lambda x0,x1: 5.02241109e+01 + 3.85252449e-01 * ( -6.88184091e-01*(x0*x0) + 3.08902945e+00*(x0*x1) + -7.15211199e-01*(x1*x1) + 2.59342646e+00 ) / ( 1.0 + 1.17625631e+00*(x0*x0) + 1.17841567e+00*(x1*x1) )
star.plot()
star.bin['pgreedy_result'].plot()
star.bin['ngreedy_result'].plot()_____no_output_____
</code>
|
{
"repository": "llondon6/koalas",
"path": "factory/.ipynb_checkpoints/gmvrfit_2d_example-checkpoint.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 775102,
"hexsha": "cb48840a98a357d0123f79013371587da90eb83a",
"max_line_length": 358416,
"avg_line_length": 2123.5671232877,
"alphanum_fraction": 0.9482029462
}
|
# Notebook from UIUC-iSchool-DataViz/spring2019online
Path: _site/week09/prep_notebook_week09.ipynb
# Activity #1: MarketMap
* another way to visualize mappable data_____no_output_____## 1.a : explore the dataset_____no_output_____
<code>
# our usual stuff
%matplotlib inline
import pandas as pd
import numpy as np_____no_output_____#!pip install xlrd # JPN, might have to run this
# note: this is querying from the web! How neat is that??
df = pd.read_excel('https://query.data.world/s/ivl45pdpubos6jpsii3djsjwm2pcjv', skiprows=5)
# the above might take a while to load all the data_____no_output_____# what is in this dataframe? lets take a look at the top
df.head()
# this dataset is called: "Surgery Charges Across the U.S."
# and its just showing us how much different procedures
# cost from different hospitals_____no_output_____# what kinds of data are we working with?
df.dtypes_____no_output_____# lets look at some summary data
# recall: this is like R's "summary" function
df.describe()
# so, things like the mean zipcode aren't
# meaningful, same thing with provider ID
# But certainly looking at the average
# total payments, discharges, might
# be useful_____no_output_____# lets look at how many seperate types of surgery are
# represented in this dataset:
df["DRG Definition"].unique().size_____no_output_____# what about how many provider (hospital) names?
df["Provider Name"].unique().size_____no_output_____# how many states are represented
df["Provider State"].unique().size_____no_output_____# what are the state codes?
df["Provider State"].unique()_____no_output_____# lets figure out what the most common surgeries are via how
# many many folks are discharged after each type of surgery
# (1)
most_common = df.groupby("DRG Definition")["Total Discharges"].sum()
most_common
# (2) but lets sort by the largest on top
most_common = df.groupby("DRG Definition")["Total Discharges"].sum().sort_values(ascending=False)
most_common
# (3) lets look at only the top 5, for fun
most_common[:5]
# (4) or we can only look at the names of the top 5:
most_common[:5].index.values_____no_output_____
</code>
## 1.b: formatting data for MarketMap
* here we are going to practice doing some fancy things to clean this data
* this will be good practice for when you run into other datasets "in the wild"_____no_output_____
<code>
# (1) lets create a little table of total discharges for
# each type of surgery & state
total_discharges = df.groupby(["DRG Definition", "Provider State"])["Total Discharges"].sum()
total_discharges
# (2) the above is not intuitive, let's prettify it
total_discharges = df.groupby(["DRG Definition", "Provider State"])["Total Discharges"].sum().unstack()
total_discharges_____no_output_____
</code>
### Aside: lets quick check out what are the most frequent surgeries_____no_output_____
<code>
# for our map, we are going to want to
# normalize the discharges of each surgery
# for each state by the total discharges across all
# states for a particular type of surgery
# let's add this to our total_discharges DF
total_discharges["Total"] = total_discharges.sum(axis = 1)
total_discharges["Total"].head() # just look at the first few_____no_output_____# finally, lets check out the most often
# performed surgery across all states
# we can do this by sorting our DF by this total we just
# calculated:
total_discharges.sort_values(by = "Total",
ascending=False,
inplace = True)
# now lets just look at the first few of our
# sorted array
total_discharges.head()
# so, from this we see that joint replacement
# or reattachment of a lower extremity is
# the most likely surgery (in number of discharges)
# followed by surgeries for sepsis and then heart failure_____no_output_____# neat. We won't need these for plotting, so we can remove our
# total column we just calculated
del total_discharges["Total"]
total_discharges.head()
# now we see that we are back to just states & surgeries
# *but* our sorting is still by the total that we
# previously calculated.
# spiffy!_____no_output_____
</code>
## 1.c: plot data with bqplot_____no_output_____
<code>
import bqplot
# by default bqplot does not import
# all packages, we have to
# explicitly import market_map
import bqplot.market_map # for access to market_map_____no_output_____# lets do our usual thing, but with a market map
# instead of a heat map
# scales:
x_sc, y_sc = bqplot.OrdinalScale(), bqplot.OrdinalScale() # note, just a different way to call things
c_sc = bqplot.ColorScale(scheme="Blues")
# just a color axes for now:
c_ax = bqplot.ColorAxis(scale = c_sc, orientation = 'vertical')
# lets make the market map:
# (1) what should we plot for our color? lets take a look:
total_discharges.iloc[0].values, total_discharges.columns.values
# this is the total discharges for the most
# popular surgical procedure
# the columns will be states
# (2) lets put this into a map
mmap = bqplot.market_map.MarketMap(color = total_discharges.iloc[0].values,
names = total_discharges.columns.values,
scales={'color':c_sc},
axes=[c_ax])
# (3) ok, but just clicking on things doesn't tell us too much
# lets add a little label to print out the total of the selected
import ipywidgets
label = ipywidgets.Label()
# link to market map
def get_data(change):
# (3.1)
#print(change['owner'].selected)
# (3.2) loop
v = 0.0 # to store total value
for s in change['owner'].selected:
v += total_discharges.iloc[0][total_discharges.iloc[0].index == s].values
if v > 0: # in case nothing is selected
# what are we printing?
l = 'Total discharges of ' + \
total_discharges.iloc[0].name + \
' = ' + str(v[0]) # note: v is by default an array
label.value = l
mmap.observe(get_data,'selected')
#mmap
# (3)
ipywidgets.VBox([label,mmap])_____no_output_____
</code>
## Discussion:
* think back to the map we had last week: we can certainly plot this information with a more geo-realistic map
* what are the pros & cons of each style of map? What do each highlight? How are each biased?_____no_output_____## IF we have time: Re-do with other mapping system:_____no_output_____
<code>
from us_state_abbrev import us_state_abbrev
sc_geo = bqplot.AlbersUSA()
state_data = bqplot.topo_load('map_data/USStatesMap.json')
#(1)
#states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo})
#(2)
# library from last time
from states_utils import get_ids_and_names
ids, state_names = get_ids_and_names(states_map)
# color maps
import matplotlib.cm as cm
cmap = cm.Blues
# most popular surgery
popSurg = total_discharges.iloc[0]
# here, we will go through the process of getting colors to plot
# each state with its similar color to the marketmap above:
#!pip install webcolors
from webcolors import rgb_to_hex
d = {} # empty dict to store colors
for s in states_map.map_data['objects']['subunits']['geometries']:
if s['properties'] is not None:
#print(s['properties']['name'], s['id'])
# match states to abbreviations
state_abbrev = us_state_abbrev[s['properties']['name']]
#print(state_abbrev)
v = popSurg[popSurg.index == state_abbrev].values[0]
# renorm v to colors and then number of states
v = (v - popSurg.values.min())/(popSurg.values.max()-popSurg.values.min())
#print(v, int(cmap(v)[0]), int(cmap(v)[1]), int(cmap(v)[2]))
# convert to from 0-1 to 0-255 rgbs
c = [int(cmap(v)[i]*255) for i in range(3)]
#d[s['id']] = rgb_to_hex([int(cmap(v)[0]*255), int(cmap(v)[1]*255), int(cmap(v)[2]*255)])
d[s['id']] = rgb_to_hex(c)
def_tt = bqplot.Tooltip(fields=['name'])
states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo}, colors = d, tooltip=def_tt)
# add interactions
states_map.interactions = {'click': 'select', 'hover': 'tooltip'}
# (3)
label = ipywidgets.Label()
# link to heat map
def get_data(change):
v = 0.0 # to store total value
if change['owner'].selected is not None:
for s in change['owner'].selected:
#print(s)
sn = state_names[s == ids][0]
state_abbrev = us_state_abbrev[sn]
v += popSurg[popSurg.index == state_abbrev].values[0]
if v > 0: # in case nothing is selected
# what are we printing?
l = 'Total discharges of ' + \
popSurg.name + \
' = ' + str(v) # note: v is by default an array
label.value = l
states_map.observe(get_data,'selected')
fig=bqplot.Figure(marks=[states_map],
title='US States Map Example',
fig_margin={'top': 0, 'bottom': 0, 'left': 0, 'right': 0}) # try w/o first and see
#fig
# (3)
ipywidgets.VBox([label,fig])_____no_output_____
</code>
# Activity #2: Real quick ipyleaflets
* since cartopy wasn't working for folks, we'll quickly look at another option: ipyleaflets_____no_output_____
<code>
#!pip install ipyleaflet
from ipyleaflet import *
# note: you might have to close and reopen you notebook
# to see the map
m = Map(center=(52, 10), zoom=8, basemap=basemaps.Hydda.Full)
#(2) street maps
strata_all = basemap_to_tiles(basemaps.Strava.All)
m.add_layer(strata_all)
m_____no_output_____
</code>
### Note: more examples available here - https://github.com/jupyter-widgets/ipyleaflet/tree/master/examples_____no_output_____# Activity #3: Networked data - Simple example
_____no_output_____
<code>
# lets start with some very basic node data
# **copy paste into chat **
node_data = [
{"label": "Luke Skywalker", "media": "Star Wars", "shape": "rect"},
{"label": "Jean-Luc Picard", "media": "Star Trek", "shape": "rect"},
{"label": "Doctor Who", "media": "Doctor Who", "shape": "rect"},
{"label": "Pikachu", "media": "Detective Pikachu", "shape": "circle"},
]
# we'll use bqplot.Graph to plot these
graph = bqplot.Graph(node_data=node_data,
colors = ["red", "red", "red", "red"])
fig = bqplot.Figure(marks = [graph])
fig
# you note I can pick them up and move them around, but they aren't connected in any way
# lets make some connections_____no_output_____node_data = [
{"label": "Luke Skywalker", "media": "Star Wars", "shape": "rect"},
{"label": "Jean-Luc Picard", "media": "Star Trek", "shape": "rect"},
{"label": "Doctor Who", "media": "Doctor Who", "shape": "rect"},
{"label": "Pikachu", "media": "Detective Pikachu", "shape": "circle"},
]
# lets link the 0th entry (luke skywalker) to both
# jean-luc picard (1th entry) and pikachu (3rd entry)
link_data = [{'source': 0, 'target': 1}, {'source': 0, 'target': 3}]
graph = bqplot.Graph(node_data=node_data, link_data=link_data,
colors = ["red", "red", "red", "red"])
#(2) we can also play with the springiness of our links:
graph.charge = -300 # setting it to positive makes them want to overlap and is, in general, a lot of fun
# -300 is default
# (3) we can also change the link type:
graph.link_type = 'line' # arc = default, line, slant_line
# (4) highlight link direction, or not
graph.directed = False
fig = bqplot.Figure(marks = [graph])
fig_____no_output_____# we can do all the same things we've done with
# our previous map plots:
# for example, we can add a tooltip:
#(1)
tooltip = bqplot.Tooltip(fields=["media"])
graph = bqplot.Graph(node_data=node_data, link_data=link_data,
colors = ["red", "red", "red", "red"],
tooltip=tooltip)
# we can also do interactive things with labels
label = ipywidgets.Label()
# note here that the calling sequence
# is a little different - instead
# of "change" we have "obj" and
# "element"
def printstuff(obj, element):
# (1.1)
#print(obj)
#print(element)
label.value = 'Media = ' + element['data']['media']
graph.on_element_click(printstuff)
fig = bqplot.Figure(marks = [graph])
ipywidgets.VBox([label,fig])_____no_output_____
</code>
# Activity #4: Network data - subset of facebook friends dataset
* from: https://snap.stanford.edu/data/egonets-Facebook.html
* dataset of friends lists
#### Info about this dataset:
* the original file you can read in has about 80,000 different connections
* it is ordered by the most connected person (person 0) at the top
* because this network would be computationally slow and just a hairball - we're going to be working with downsampled data
* for example, a file tagged "000090_000010" starts with the 10th most connected person, and only includes connections up to the 90th most connected person (see the sketch after this list)
* Its worth noting that this dataset (linked here and on the webpage) also includes feature data like gender, last name, school, etc - however it is too sparse to be of visualization use to us
Check out the other social network links at the SNAP data webpage!_____no_output_____
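For reference, here is a minimal sketch of how such a subset file could be produced from the full SNAP edge list. It assumes the two-column `facebook_combined.txt` file and the `ind1`/`ind2` column names used by the loading code below, and is simply a vectorized alternative to the loop-based prep cell at the end of this activity:

```python
import pandas as pd

# hypothetical local path to the full SNAP edge list
edges = pd.read_csv('facebook_combined.txt', sep=' ', names=['ind1', 'ind2'])

lo, hi = 10, 90  # keep only persons 10 through 90 (both endpoints of each edge)
subset = edges[edges['ind1'].between(lo, hi) & edges['ind2'].between(lo, hi)]

# write it out in the same space-separated, headerless format
subset.to_csv('facebook_combined_sm000090_000010.txt', sep=' ', index=False, header=False)
```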
<code>
# from 10 to 150 connections, a few large nodes
#filename = 'facebook_combined_sm000150_000010.txt'
# this might be too large: one large node, up to 100 connections
#filename='facebook_combined_sm000100.txt'
# start here
filename = 'facebook_combined_sm000090_000010.txt'
# then this one
#filename = 'facebook_combined_sm000030_000000.txt'
# note how different the topologies are
network = pd.read_csv('/Users/jillnaiman1/Downloads/'+filename,
sep=' ', names=['ind1', 'ind2'])
network_____no_output_____# build the network
node_data = []
link_data = []
color_data = [] # all same color
# add nodes
maxNet = max([network['ind1'].max(),network['ind2'].max()])
for i in range(maxNet+1):
node_data.append({"label": str(i), 'shape_attrs': {'r': 8} }) # small circles
# now, make links
for i in range(len(network)):
# we are linking the ith object to another jth object, but we
# gotta figure out with jth object it is
source_id = network.iloc[i]['ind1']
target_id = network.iloc[i]['ind2']
link_data.append({'source': source_id, 'target': target_id})
color_data.append('blue')
#link_data,node_data
#color_data_____no_output_____# plot
graph = bqplot.Graph(node_data=node_data,
link_data = link_data,
colors=color_data)
# play with these for different graphs
graph.charge = -100
graph.link_type = 'line'
graph.link_distance=50
# there is no direction to links
graph.directed = False
fig = bqplot.Figure(marks = [graph])
fig.layout.min_width='1000px'
fig.layout.min_height='900px'
# note: I think this has to be the layout for this to look right
fig
# in theory, we could color this network by what school folks are in, or some such
# but while the dataset does contain some of these features, the
# answer rate is too sparse for our subset here_____no_output_____
</code>
# Note: the below is just prep if you want to make your own subset datasets_____no_output_____
<code>
# prep fb data by downsampling
minCon = 0
maxCon = 30
G = pd.read_csv('/Users/jillnaiman1/Downloads/facebook_combined.txt',sep=' ', names=['ind1', 'ind2'])
Gnew = np.zeros([2],dtype='int')
# loop and append
Gnew = G.loc[G['ind1']==minCon].values[0]
for i in range(G.loc[G['ind1']==minCon].index[0],len(G)):  # range, not xrange: this notebook runs on Python 3
gl = G.loc[i].values
if (gl[0] <= maxCon) and (gl[1] <= maxCon) and (gl[0] >= minCon) and (gl[1] >= minCon):
Gnew = np.vstack((Gnew,gl))
np.savetxt('/Users/jillnaiman1/spring2019online/week09/data/facebook_combined_sm' + \
str(maxCon).zfill(6) + '_' + str(minCon).zfill(6) + '.txt', Gnew,fmt='%i')_____no_output_____graph.link_distance_____no_output_____
</code>
|
{
"repository": "UIUC-iSchool-DataViz/spring2019online",
"path": "_site/week09/prep_notebook_week09.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 146285,
"hexsha": "cb48e23f96f6bc35f8ab3e3a2c0c65ca2d02f868",
"max_line_length": 871,
"avg_line_length": 40.6008881488,
"alphanum_fraction": 0.3938339543
}
|
# Notebook from Draco666888/Stock_Recommendation_System
Path: Prim.ipynb
<code>
import csv
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline_____no_output_____priceData = pd.read_csv('SP_500_close_2015.csv',index_col = 0)
priceData.head()_____no_output_____firms = pd.read_csv("SP_500_firms.csv")
firms.head()_____no_output_____percent_change = priceData.pct_change()
percent_change = percent_change.drop(percent_change.index[0])
percent_change.head()
#Or equivalently without using Pandas' built-in
#percent change function.
percent_changeD = {}
for i in percent_change:
percent_changeD[i] = []
for j in range(1,(len(priceData))):
ret = (priceData[i][j]-priceData[i][j-1])/priceData[i][j-1]
percent_changeD[i].append(ret)
percent_change2 = pd.DataFrame(data = percent_changeD,index=priceData.index[1:])_____no_output_____def fullname(ts):
return firms[firms.Symbol == ts].Name.values[0]
currMax = 0
for i in percent_change2:
for j in percent_change2.index:
if percent_change2[i][j] > currMax:
currMax = percent_change2[i][j]
bestCo = i
bestDate = j
print (fullname(bestCo), bestDate, currMax)Freeport-McMoran Cp & Gld 2015-08-27 0.286616201466348
currMin = 1
for i in percent_change2:
for j in percent_change2.index:
if percent_change2[i][j] < currMin:
currMin = percent_change2[i][j]
worstCo = i
worstDate = j
print (fullname(worstCo), worstDate, currMin)Quanta Services Inc. 2015-10-16 -0.2850056957270392
AnnualReturn = {}
yearMax = -math.inf
for i in percent_change2:
AnnualReturn[i] = (priceData[i][-1]-priceData[i][0])/priceData[i][0]
if AnnualReturn[i] > yearMax:
yearMax = AnnualReturn[i]
maxCo = i
print (yearMax, maxCo, fullname(maxCo))1.2945491196819041 NFLX Netflix Inc.
AnnualReturn = {}
yearMin = math.inf
for i in percent_change2:
AnnualReturn[i] = (priceData[i][-1]-priceData[i][0])/priceData[i][0]
if AnnualReturn[i] < yearMin:
yearMin = AnnualReturn[i]
minCo = i
print (yearMin, minCo, fullname(minCo))
-0.7697847497642084 CHK Chesapeake Energy
def mean(x):
return float(sum(x)) / len(x)
def std(x):
stdev = 0.0
for value in x:
difference = value - mean(x)
stdev = stdev + (difference ** 2)
stdev = (stdev / len(x))**(1/2)
return stdev_____no_output_____Volatility = {}
volMax = -math.inf
for i in percent_change2:
Volatility[i] = std(percent_change2[i])
if Volatility[i] > volMax:
volMax = Volatility[i]
volMaxC = i
print (volMax, volMaxC, fullname(volMaxC))0.04398338070543049 FCX Freeport-McMoran Cp & Gld
Volatility = {}
volMin = math.inf
for i in percent_change2:
Volatility[i] = std(percent_change2[i])
if Volatility[i] < volMin:
volMin = Volatility[i]
volMinC = i
print (volMin, volMinC, fullname(volMinC))0.009044853669708705 KO The Coca Cola Company
def corr(x,y):
xy = sum([a*b for a,b in zip(x,y)])
x2 = sum([i**2 for i in x])
y2 = sum([i**2 for i in y])
n = len(x)
numer = (n*xy - sum(x)*sum(y))
denom = ((n*x2 - sum(x)**2)**(1/2) * (n*y2 - sum(y)**2)**(1/2))
correlation = numer/denom
return correlation
correlations = {}
for i in percent_change:
correlations[i] = {}
for i in correlations:
for j in percent_change:
correlations[i][j]=[]
for company1 in percent_change:
for company2 in percent_change:
if not correlations[company1][company2]:
x=percent_change[company1]
y=percent_change[company2]
if company1 == company2:
correlations[company1][company2] = 1
correlations[company2][company1] = 1
else:
correlations[company1][company2] = corr(x,y)
correlations[company2][company1] = corr(x,y)
def corr_print(company1, company2):
print ("The correlation coefficient between {} and {} is {}."
.format(fullname(company1), fullname(company2), correlations[company1][company2]))
corr_print("AAPL", "MMM")The correlation coefficient between Apple Inc. and 3M Company is 0.5157280000348696.
ticker_symbols = list(priceData.columns.values)
def top_bottomcorr(ts):
corr_tb = []
for ss in ticker_symbols:
if ss == ts:
continue
corr_co = correlations[ts][ss]
corr_tb.append((corr_co, ss))
corr_tb.sort()
print ("Most Correlated:", fullname(corr_tb[-1][1]), "(", corr_tb[-1][0],")")
print ("Least Correlated:", fullname(corr_tb[0][1]), "(", corr_tb[0][0],")")_____no_output_____top_bottomcorr("GOOG")Most Correlated: Alphabet Inc Class A ( 0.9893650403946361 )
Least Correlated: Stericycle Inc ( 0.01714894347853482 )
correlations = percent_change.corr()
correlations = correlations.where(np.triu(np.ones(correlations.shape)).astype(np.bool))
correlations = correlations.stack().reset_index()
correlations.columns = ['Company1', 'Company2', 'Correlation']
correlation_tuples = [tuple(x) for x in correlations.values]_____no_output_____def mergeSort(array):
if len(array) > 1:
mid = len(array) //2
left = array[:mid]
right = array [mid:]
mergeSort(left)
mergeSort(right)
i = 0
j = 0
k = 0
while i < len(left) and j < len(right):
if left[i][2] > right[j][2]:
array[k] = left[i]
i = i + 1
else:
array[k] = right[j]
j = j + 1
k = k+1
while i < len(left):
array[k] = left[i]
i += 1
k += 1
while j < len(right):
array[k] = right[j]
j += 1
k += 1
return(array)
sortedWeights = mergeSort(correlation_tuples)_____no_output_____class Digraph():
def __init__(self,filename = None):
self.edges = {}
self.numEdges = 0
def addNode(self,node):
self.edges[node] = set()
def add_Edge(self,src,dest,weight):
if not self.hasNode(src):
self.addNode(src)
self.edges[src] = {}
if not self.hasNode(dest):
self.addNode(dest)
self.edges[dest] = {}
if not self.hasEdge(src, dest):
self.numEdges += 1
self.edges[src][dest] = weight
def childrenOf(self, v):
# Returns a node's children
return self.edges[v].items()
def hasNode(self, v):
return v in self.edges
def hasEdge(self, v, w):
return w in self.edges[v]
def listEdges(self):
ll = []
for src,values in self.edges.items():
for dest,weight in values.items():
ll.append([src,dest,weight])
return ll
def __str__(self):
result = ''
for src in self.edges:
for dest,weight in self.edges[src].items():
result = result + src + '->'\
+ dest + ', length ' + str(weight) + '\n'
return result[:-1]
class Graph(Digraph):
def addEdge(self, src, dest, weight):
Digraph.addEdge(self, src, dest, weight)
Digraph.addEdge(self, dest, src, weight)_____no_output_____def init_graph(sortedWeights):
graph = Graph()
for x in sortedWeights:
graph.add_Edge(x[0],x[1],weight = x[2])
return graph
def init_nodePointers(graph):
nodePointers = {src:src for src in graph.edges}
return nodePointers
def init_nodeStarting(graph):
nodeStarting = {src:True for src in graph.edges}
return nodeStarting
def init_nodeBottom(graph):
nodeBottom = {src:True for src in graph.edges}
return nodeBottom
def findbottom(node, nodePointers):
source = node
destination = nodePointers[source]
while destination != source:
source = destination
destination = nodePointers[source]
return destination
def mergeSets(sortedWeights, k):
sortedWeights = [value for value in sortedWeights
if value[0] != value[1]]
graph = init_graph(sortedWeights)
nodePointers = init_nodePointers(graph)
nodeStarting = init_nodeStarting(graph)
nodeBottom = init_nodeBottom(graph)
counter = 0
for key in sortedWeights:
if counter < k:
bottom1 = findbottom(key[0], nodePointers)
bottom2 = findbottom(key[1], nodePointers)
if bottom1 != bottom2:
nodePointers[bottom2] = bottom1
nodeBottom[bottom2] = False
nodeStarting[bottom1] = False
counter += 1
return (nodePointers, nodeStarting, nodeBottom)
def recoverSets(nodePointers, nodeStarting, nodeBottom):
dict = {}
for b_key, b_value in nodeBottom.items():
if b_value:
dict.setdefault(b_key, set())
for s_key, s_value in nodeStarting.items():
if s_value and findbottom(s_key, nodePointers)== b_key:
bottom = findbottom(s_key, nodePointers)
current_node = s_key
while current_node != bottom:
dict[b_key].add(current_node)
current_node = nodePointers[current_node]
dict[b_key].add(b_key)
return list(dict.values())_____no_output_____nodePointers, nodeStarting, nodeBottom = mergeSets(sortedWeights, 100000)
print(recoverSets(nodePointers, nodeStarting, nodeBottom))
# print("For k = 100000, " + str(len(cluster_100000)) + " clusters are generated." + '\n')[{'ALL', 'CXO', 'GS', 'RIG', 'URI', 'PX', 'FOX', 'SEE', 'DGX', 'WFC', 'MA', 'FLS', 'HOLX', 'YUM', 'LEG', 'PXD', 'XEL', 'VRSK', 'ADM', 'CNP', 'SNA', 'HBAN', 'YHOO', 'PBCT', 'CVS', 'AVB', 'EIX', 'RTN', 'NLSN', 'ADP', 'NAVI', 'HRS', 'NBL', 'IFF', 'PRGO', 'EA', 'EMR', 'UNM', 'ADSK', 'MSI', 'SRCL', 'PKI', 'UNP', 'ORLY', 'FSLR', 'LVLT', 'CAT', 'GILD', 'BAX', 'SYF', 'ANTM', 'DVN', 'XEC', 'UDR', 'CTSH', 'AMT', 'WHR', 'TAP', 'CLX', 'CA', 'PEG', 'DAL', 'SPLS', 'F', 'ESS', 'DTE', 'AVY', 'ALLE', 'MCK', 'PM', 'PNR', 'LYB', 'CB', 'R', 'ZTS', 'ATVI', 'TSN', 'PFE', 'COP', 'CAH', 'MOS', 'NKE', 'HRB', 'HRL', 'CRM', 'NEM', 'APH', 'MO', 'EMC', 'DD', 'CMI', 'PGR', 'ACN', 'FBHS', 'TMO', 'NRG', 'VNO', 'BBT', 'HAS', 'JCI', 'HOT', 'FTI', 'SWK', 'BIIB', 'GPN', 'EXC', 'SYK', 'AET', 'HSY', 'VIAB', 'REGN', 'HCP', 'BDX', 'PFG', 'C', 'IPG', 'STT', 'SJM', 'LNC', 'PRU', 'AMGN', 'MJN', 'KMI', 'LOW', 'ZION', 'ENDP', 'UTX', 'NTRS', 'V', 'VRSN', 'NI', 'KSU', 'VZ', 'MAR', 'HP', 'JNPR', 'KLAC', 'LUK', 'FRT', 'ES', 'HUM', 'TWX', 'BSX', 'PEP', 'EXR', 'WDC', 'CBS', 'PSA', 'RAI', 'AN', 'HCN', 'BK', 'ICE', 'CERN', 'ABC', 'MAT', 'ALB', 'ABT', 'DPS', 'XYL', 'ETR', 'AEP', 'GWW', 'AGN', 'GOOG', 'GM', 'CTXS', 'EW', 'RL', 'RF', 'GPC', 'WYNN', 'PNW', 'AXP', 'JEC', 'BLK', 'ROK', 'ORCL', 'BWA', 'AMZN', 'GE', 'MET', 'MDT', 'MHK', 'DNB', 'FIS', 'FTR', 'NWS', 'COL', 'EBAY', 'AIV', 'WBA', 'JBHT', 'AMP', 'ADI', 'BHI', 'SE', 'WAT', 'LKQ', 'WMT', 'CCL', 'DOV', 'ESRX', 'CBG', 'XLNX', 'HAR', 'KMX', 'SPGI', 'MDLZ', 'PCG', 'HSIC', 'UAL', 'EQR', 'WM', 'XL', 'DIS', 'GT', 'RCL', 'FMC', 'CI', 'ETN', 'KEY', 'BRK-B', 'EL', 'IP', 'D', 'HIG', 'ABBV', 'CSX', 'NOV', 'DISCA', 'FCX', 'AAL', 'AYI', 'ROP', 'MTB', 'EMN', 'CSCO', 'COF', 'VMC', 'CHRW', 'EQT', 'TSS', 'BMY', 'KSS', 'LRCX', 'HCA', 'EXPD', 'TSCO', 'DHI', 'BA', 'AJG', 'MSFT', 'DVA', 'APC', 'IR', 'CTL', 'CFG', 'TYC', 'SWN', 'AMG', 'CMG', 'KO', 'SCG', 'SBUX', 'AKAM', 'MAC', 'JNJ', 'BCR', 'BXP', 'SHW', 'T', 'VLO', 'TSO', 'HST', 'PDCO', 'CHK', 'MLM', 'TIF', 'LLL', 'BLL', 'EXPE', 'APA', 'CNC', 'RRC', 'SNI', 'AZO', 'PCLN', 'QRVO', 'SRE', 'TRIP', 'HOG', 'MCD', 'XRAY', 'FOXA', 'NUE', 'VTR', 'MRO', 'LUV', 'AAPL', 'UPS', 'ED', 'FB', 'CINF', 'DE', 'CL', 'PSX', 'AFL', 'CVX', 'VRTX', 'LLY', 'JPM', 'DO', 'APD', 'ETFC', 'UNH', 'IRM', 'CTAS', 'AEE', 'PLD', 'UHS', 'INTC', 'SLG', 'WY', 'CMS', 'SPG', 'MMC', 'EFX', 'DUK', 'COH', 'RHI', 'TGNA', 'DLPH', 'VAR', 'TXT', 'AWK', 'CELG', 'MYL', 'GOOGL', 'ALXN', 'MS', 'FFIV', 'GRMN', 'WU', 'FE', 'MON', 'LB', 'IVZ', 'FL', 'SWKS', 'TDG', 'OMC', 'NWL', 'RHT', 'HPQ', 'O', 'HES', 'KORS', 'NFX', 'LMT', 'HD', 'GLW', 'CMA', 'NDAQ', 'AIG', 'CCI', 'RSG', 'NFLX', 'PH', 'ULTA', 'AES', 'MRK', 'SLB', 'FLR', 'STZ', 'FISV', 'DLR', 'BBBY', 'ISRG', 'HAL', 'PHM', 'AMAT', 'FITB', 'LM', 'BAC', 'DHR', 'CPB', 'HON', 'MKC', 'AON', 'WFM', 'DFS', 'BF-B', 'KR', 'OKE', 'SYY', 'DLTR', 'WMB', 'PVH', 'NOC', 'MUR', 'CAG', 'BBY', 'AIZ', 'TMK', 'UA', 'GIS', 'PWR', 'ROST', 'NVDA', 'URBN', 'ADBE', 'PG', 'XOM', 'L', 'MNST', 'DRI', 'MU', 'TJX', 'JWN', 'SYMC', 'MPC', 'ALK', 'ZBH', 'DOW', 'STJ', 'ILMN', 'LH', 'AAP', 'BEN', 'TEL', 'AVGO', 'CMCSA', 'LEN', 'OI', 'PNC', 'K', 'INTU', 'M', 'COST', 'SCHW', 'NSC', 'PBI', 'SO', 'MAS', 'AA', 'TRV', 'LLTC', 'TGT', 'STX', 'MCO', 'STI', 'PCAR', 'TDC', 'VFC', 'FLIR', 'FAST', 'IBM', 'OXY', 'CF', 'SIG', 'MMM', 'PPG', 'EQIX', 'FDX', 'KMB', 'ITW', 'DISCK', 'CHD', 'LNT', 'USB', 'HBI', 'COG', 'PPL', 'GD', 'ADS', 'WYN', 'A', 'XRX', 'TROW', 'CME', 'WEC', 'GGP', 'MNK', 'QCOM', 'TXN', 'EOG', 'ECL', 'AME', 
'GPS', 'NWSA', 'PAYX', 'MCHP', 'KIM', 'DG', 'NTAP'}]
nodePointers, nodeStarting, nodeBottom = mergeSets(sortedWeights, 2000)
cluster_2000 = recoverSets(nodePointers, nodeStarting, nodeBottom)
print(cluster_2000)
print("For k = 2000, " + str(len(cluster_2000)) + " clusters are generated." + '\n')[{'GOOG', 'GOOGL'}, {'NWSA', 'NWS'}, {'DISCA', 'DISCK', 'SNI'}, {'PHM', 'DHI', 'LEN'}, {'CCL', 'RCL'}, {'ANTM', 'AET', 'UNH', 'CI', 'CNC'}, {'HCA', 'UHS'}, {'ROST', 'TJX'}, {'VLO', 'TSO', 'MPC', 'PSX'}, {'MO', 'PM', 'RAI'}, {'VMC', 'MLM'}, {'RSG', 'WM'}, {'DGX', 'LH'}, {'DAL', 'ALK', 'LUV', 'AAL', 'UAL'}, {'WYN', 'MAR', 'HOT'}, {'EXPD', 'CHRW'}, {'AMAT', 'LRCX'}, {'AVGO', 'SWKS'}, {'GWW', 'FAST'}, {'AZO', 'ORLY'}, {'FOXA', 'CMCSA', 'DIS', 'FOX', 'TWX'}, {'GM', 'F'}, {'NSC', 'CSX', 'UNP', 'KSU'}, {'WDC', 'STX'}, {'HCN', 'DUK', 'ED', 'AIV', 'ESS', 'AWK', 'LNT', 'D', 'DTE', 'PPL', 'ETR', 'VNO', 'DLR', 'AEP', 'FE', 'NI', 'AEE', 'PLD', 'XEL', 'PNW', 'SRE', 'WEC', 'SCG', 'CNP', 'GGP', 'SLG', 'UDR', 'AMT', 'FRT', 'PCG', 'ES', 'BXP', 'CMS', 'SO', 'O', 'EXC', 'EQR', 'AVB', 'EXR', 'EIX', 'SPG', 'VTR', 'PSA', 'HCP', 'KIM', 'CCI', 'PEG'}, {'LNC', 'CXO', 'PH', 'GS', 'PRU', 'APC', 'AMGN', 'RIG', 'MRK', 'PX', 'SLB', 'FLR', 'KMI', 'LOW', 'ZION', 'FISV', 'UTX', 'CFG', 'TYC', 'NTRS', 'WFC', 'V', 'MA', 'FLS', 'HAL', 'FITB', 'PXD', 'LM', 'SWN', 'BAC', 'DHR', 'CPB', 'AMG', 'HON', 'HP', 'VZ', 'MKC', 'AON', 'KO', 'SNA', 'SBUX', 'DFS', 'HBAN', 'PBCT', 'JNJ', 'BCR', 'SHW', 'OKE', 'PEP', 'T', 'RTN', 'ADP', 'NBL', 'IFF', 'EMR', 'UNM', 'BK', 'NOC', 'MUR', 'ICE', 'LLL', 'ABT', 'TMK', 'APA', 'GIS', 'DPS', 'PKI', 'RRC', 'XYL', 'PG', 'AJG', 'XOM', 'L', 'CAT', 'GPC', 'RF', 'DVN', 'JEC', 'XEC', 'BLK', 'ROK', 'BWA', 'CTSH', 'MET', 'MDT', 'DNB', 'MRO', 'UPS', 'CLX', 'DOW', 'CA', 'COL', 'CINF', 'CL', 'AFL', 'CVX', 'BEN', 'TEL', 'AMP', 'AVY', 'JPM', 'ADI', 'BHI', 'SE', 'DO', 'PNC', 'WAT', 'ETFC', 'K', 'INTU', 'CTAS', 'PNR', 'LYB', 'CB', 'SCHW', 'DOV', 'XLNX', 'INTC', 'SPGI', 'HSIC', 'TRV', 'LLTC', 'COP', 'XL', 'MCO', 'STI', 'PCAR', 'CAH', 'IBM', 'OXY', 'ETN', 'KEY', 'MMC', 'BRK-B', 'APH', 'MMM', 'PPG', 'DLPH', 'CMI', 'PGR', 'VAR', 'ACN', 'KMB', 'FDX', 'ITW', 'TXT', 'CHD', 'CELG', 'USB', 'HIG', 'TMO', 'COG', 'GD', 'MS', 'IPG', 'NOV', 'A', 'TROW', 'CME', 'ROP', 'BBT', 'MTB', 'EMN', 'JCI', 'COF', 'IVZ', 'FTI', 'SWK', 'EQT', 'OMC', 'TSS', 'TXN', 'SYK', 'HES', 'EOG', 'NFX', 'ECL', 'LMT', 'HD', 'AME', 'TSCO', 'CMA', 'PAYX', 'MCHP', 'BA', 'PFG', 'C', 'AIG', 'STT', 'NDAQ', 'REGN', 'BDX'}, {'AIZ'}, {'CSCO'}, {'EFX'}, {'XRAY'}, {'IR'}, {'DD'}, {'MCD'}, {'MDLZ'}, {'ADS'}, {'CBG'}, {'ORCL'}, {'SRCL'}, {'VFC'}, {'CVS'}, {'WBA'}, {'ALL'}, {'FBHS'}, {'MAS'}, {'DG'}, {'DLTR'}, {'WMB'}, {'NLSN'}, {'PFE'}, {'FIS'}, {'URI'}, {'NWL'}, {'AN'}, {'APD'}, {'AXP'}, {'ALXN'}, {'MCK'}, {'RHT'}, {'HST'}, {'WY'}, {'BF-B'}, {'HAR'}, {'ABC'}, {'GILD'}, {'RHI'}, {'CBS'}, {'ALLE'}, {'PVH'}, {'VRTX'}, {'QRVO'}, {'R'}, {'NKE'}, {'PDCO'}, {'XRX'}, {'ZBH'}, {'LUK'}, {'STZ'}, {'ESRX'}, {'ULTA'}, {'VIAB'}, {'IP'}, {'JBHT'}, {'CHK'}, {'EW'}, {'LVLT'}, {'FCX'}, {'NVDA'}, {'HRL'}, {'JWN'}, {'EL'}, {'DE'}, {'TGT'}, {'COST'}, {'VRSN'}, {'FMC'}, {'BLL'}, {'TGNA'}, {'MHK'}, {'GT'}, {'OI'}, {'SEE'}, {'WU'}, {'LEG'}, {'NUE'}, {'KLAC'}, {'BAX'}, {'FL'}, {'ADBE'}, {'FB'}, {'BMY'}, {'KR'}, {'IRM'}, {'AGN'}, {'AKAM'}, {'GE'}, {'UA'}, {'HSY'}, {'DVA'}, {'FLIR'}, {'PBI'}, {'MAC'}, {'STJ'}, {'ENDP'}, {'MNK'}, {'EBAY'}, {'KMX'}, {'GLW'}, {'M'}, {'MSFT'}, {'BBBY'}, {'LB'}, {'ADM'}, {'HRS'}, {'AAPL'}, {'GPN'}, {'CTXS'}, {'EMC'}, {'CTL'}, {'MOS'}, {'EQIX'}, {'GPS'}, {'LKQ'}, {'RL'}, {'AMZN'}, {'PWR'}, {'HUM'}, {'BSX'}, {'SYMC'}, {'EA'}, {'TIF'}, {'ALB'}, {'MON'}, {'ABBV'}, {'CAG'}, {'JNPR'}, {'LLY'}, {'CERN'}, {'MJN'}, {'ISRG'}, {'BIIB'}, {'KSS'}, {'WHR'}, {'SJM'}, {'HRB'}, {'WMT'}, {'VRSK'}, {'AA'}, 
{'AYI'}, {'ATVI'}, {'HOLX'}, {'PCLN'}, {'TDG'}, {'COH'}, {'SIG'}, {'MU'}, {'EXPE'}, {'TSN'}, {'MSI'}, {'CRM'}, {'FFIV'}, {'NRG'}, {'ZTS'}, {'CF'}, {'HOG'}, {'FTR'}, {'FSLR'}, {'SYY'}, {'ADSK'}, {'ILMN'}, {'AES'}, {'HBI'}, {'GRMN'}, {'QCOM'}, {'HPQ'}, {'DRI'}, {'AAP'}, {'YHOO'}, {'NTAP'}, {'SPLS'}, {'YUM'}, {'NAVI'}, {'BBY'}, {'MNST'}, {'MYL'}, {'NEM'}, {'WYNN'}, {'URBN'}, {'TDC'}, {'MAT'}, {'TRIP'}, {'NFLX'}, {'HAS'}, {'KORS'}, {'SYF'}, {'PRGO'}, {'TAP'}, {'CMG'}, {'WFM'}]
For k = 2000, 218 clusters are generated.
percent_change[['DAL', 'AAL', 'LUV', 'UAL', 'ALK']].plot()_____no_output_____pricesScaled = priceData.divide(priceData.iloc[0])
pricesScaled[['MAR', 'HOT', 'WYN']].plot()_____no_output_____
</code>
|
{
"repository": "Draco666888/Stock_Recommendation_System",
"path": "Prim.ipynb",
"matched_keywords": [
"bwa"
],
"stars": null,
"size": 156421,
"hexsha": "cb490dce50dce89e9c8de26a00426b7ac956a4d0",
"max_line_length": 68032,
"avg_line_length": 166.5825346113,
"alphanum_fraction": 0.8416900544
}
|
# Notebook from simoneb1x/softpython-en
Path: tools/tools-sol.ipynb
<code>
# Remember to execute this cell with Shift+Enter
import sys
sys.path.append('../')
import jupman_____no_output_____
</code>
# Tools and scripts
## [Download exercises zip](../_static/generated/tools.zip)
[Browse files online](https://github.com/DavidLeoni/softpython-en/tree/master/tools)
<div class="alert alert-warning">
**REQUISITES:**
* **Having Python 3 and Jupyter installed:** if you haven't already, see [Installation](https://en.softpython.org/installation.html)
</div>_____no_output_____## Python interpreter
In these tutorials we will make extensive use of the notebook editor Jupyter, because it allows us to comfortably execute Python code, display charts and take notes. But if we only want to make calculations it is not mandatory at all!
The most immediate way (even if not very practical) to execute Python code is by using the _command line_ interpreter in the so-called _interactive mode,_ that is, having Python wait for commands which are manually inserted one by one. This usage _does not_ require Jupyter, you only need to have Python installed. Note that in Mac OS X and many Linux systems like Ubuntu, Python is already installed by default, although sometimes it might not be version 3. Let's try to understand which version we have on our system._____no_output_____
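Before opening a console, note that you can also check the version from inside Python itself (for instance from a Jupyter cell); a small sketch using the standard `sys` module:

```python
import sys

print(sys.version)              # full version string, e.g. '3.5.2 (default, ...)'
print(sys.version_info.major)   # should print 3 for this book
```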
### Let's open system console
Open a console (in Windows: system menu -> Anaconda Prompt, in Mac OS X: run the Terminal)
In the console you find the so-called _prompt_ of commands. In this _prompt_ you can directly insert commands for the operating system.
<div class="alert alert-warning">
**WARNING**: the commands you give in the prompt are commands in the language of the operating system you are using, **NOT** Python language !!!!!
</div>
In Windows you should see something like this:
```
C:\Users\David>
```
In Mac / Linux it could be something like this:
```bash
david@my-computer:~$
```_____no_output_____### Listing files and folders
In system console, try:
**Windows**: type the command `dir` and press Enter
**Mac or Linux**: type the command `ls` and press Enter.
A listing with all the files in the current folder should appear. In my case appears a list like this:
<div class="alert alert-warning">
**LET ME REPEAT**: in this context `dir` and `ls` are commands of _the operating system,_ **NOT** of Python !!
</div>
Windows:
```
C:\Users\David> dir
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegDocente.pdf
backupsys java1.log
BaseXData java_error_in_IDEA_14362.log
```
Mac / Linux:
```
david@david-computer:~$ ls
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegistroDocenteStandard(1).pdf
backupsys java1.log RegistroDocenteStandard.pdf
BaseXData java_error_in_IDEA_14362.log
```
_____no_output_____### Let's launch the Python interpreter
In the opened system console, simply type the command `python`:
<div class="alert alert-warning">
**WARNING**: If Python does not run, try typing `python3` with the `3` at the end of `python`
</div>
```
C:\Users\David> python
```_____no_output_____You should see appearing something like this (most probably won't be exactly the same). Note that Python version is contained in the first row. If it begins with `2.`, then you are not using the right one for this book - in that case try exiting the interpreter ([see how to exit](#Exiting-the-interpreter)) and then type `python3`
```
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on windows
Type "help", "copyright", "credits" or "license" for more information.
>>>
```_____no_output_____<div class="alert alert-warning">
**CAREFUL** about the triple greater-than `>>>` at the beginning!
The triple greater-than `>>>` at the start tells us that differently from before now the console is expecting commands _in Python language._ So, the system commands we used before (`cd`, `dir`, ...) will NOT work anymore, or will give different results!
</div>_____no_output_____Now the console is expecting Python commands, so try inserting `3 + 5` and press Enter:
<div class="alert alert-warning">
**WARNING** DO NOT type `>>>`, only type the command which appears afterwards!
</div>
```
>>> 3 + 5
```
The writing `8` should appear:
```
8
```
Beyond calculations, we might tell Python to print something with the function `print("ciao")`
```
>>> print("ciao")
ciao
```_____no_output_____### Exiting the interpreter
To get out from the Python interpreter and go back to system prompt (that is, the one which accepts `cd` and `dir`/`ls` commands), type the Python comand `exit()`
After you actually exited the Python interpreter, the triple `>>>` should be gone (you should see it at the start of the line)
In Windows, you should see something similar:
```
>>> exit()
C:\Users\David>
```
in Mac / Linux it could be like this:
```
>>> exit()
david@my-computer:~$
```_____no_output_____Now you might go back to execute commands for the operating system like `dir` and `cd`:
**Windows**:
```
C:\Users\David> dir
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegDocente.pdf
backupsys java1.log
BaseXData java_error_in_IDEA_14362.log
```
**Mac / Linux**:
```
david@david-computer:~$ ls
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegistroDocenteStandard(1).pdf
backupsys java1.log RegistroDocenteStandard.pdf
BaseXData java_error_in_IDEA_14362.log
```
_____no_output_____## Modules
Python Modules are simply text files which have the extension **.py** (for example `my_script.py`). When you write code in an editor, as a matter of fact you are implementing the corresponding module.
In Jupyter we use notebook files with the extension `.ipynb`, but to edit them you necessarily need Jupyter.
With `.py` files (alse said _script_ ) we can instead use any text editor, and we can then tell the interpreter to execute that file. Let's see how to do it.
### Simple text editor
1. With a text editor (_Notepad_ in Windows, or _TextEdit_ in Mac Os X) creates a text file, and put inside this code
```python
x = 3
y = 5
print(x + y)
```
2. Let's try to save it - it seems easy, but it is often definitely not, so read carefully!
<div class="alert alert-warning">
**WARNING**: when you are saving the file, **make sure the file have the extension** `.py` **!!**
</div>
Let's suppose to create the file `my_script.py` inside a folder called `MYFOLDER`:
* **WINDOWS**: if you use _Notepad_, in the save window you have to to set _Save as_ to _All files_ (otherwise the file will be wrongly saved like `my_script.py.txt` !)
* **MAC**: if you use _TextEdit,_ before saving click _Format_ and then _Convert to format Only text:_ **if you forget this passage, TextEdit in the save window will not allow you to save in the right format and you will probably end up with a** `.rtf` **file which we're not interested in**_____no_output_____3. Open a console (in Windows: system menu -> Anaconda Prompt, in Mac OS X: run the Terminal)
the console opens the so-called _commands prompt_. In this _prompt_ you can directly enter commands for the operating system (see [previous paragraph](#Python-interpreter)
<div class="alert alert-warning">
**WARNING**: the commands you give in the prompt are commands in the language of the operating system you are using, **NOT** Python language !!!!!
</div>
In Windows you should see something like this:
```
C:\Users\David>
```
In Mac / Linux it could be something like this:
```bash
david@my-computer:~$
```_____no_output_____Try for example to type the command `dir` (or `ls` for Mac / Linux) which shows all the files in the current folder. In my case a list like this appears:
<div class="alert alert-warning">
**LET ME REPEAT**: in this context `dir` / `ls` are commands of the _operating system,_ **NOT** Python.
</div>
```
C:\Users\David> dir
Arduino gotysc program.wav
a.txt index.html Public
MYFOLDER java0.log RegDocente.pdf
backupsys java1.log
BaseXData java_error_in_IDEA_14362.log
```
If you notice, in the list there is the name MYFOLDER, where I put `my_script.py`. To _enter_ the folder in the _prompt,_ you must first use the operating system command `cd` like this:_____no_output_____4. To enter a folder called MYFOLDER, type `cd MYFOLDER`:
```
C:\Users\David> cd MYFOLDER
C:\Users\David\MYFOLDER>
```
**What if I get into the wrong folder?**
If by chance you enter the wrong folder, like `DUMBTHINGS`, to go back of one folder, type `cd ..` (NOTE: `cd` is followed by one space and TWO dots `..` _one after the other_ )
```
C:\Users\David\DUMBTHINGS> cd ..
C:\Users\David\>
```_____no_output_____5. Make sure to be in the folder which contains `my_script.py`. If you aren't there, use commands `cd` and `cd ..` like above to navigate the folders.
Let's see what present in MYFOLDER with the system command `dir` (or `ls` if in Mac/Linux):
<div class="alert alert-warning">
**LET ME REPEAT**: in this context `dir` (or `ls`) is a command of the _operating system,_ **NOT** Python.
</div>
```
C:\Users\David\MYFOLDER> dir
my_script.py
```
`dir` is telling us that inside `MYFOLDER` there is our file `my_script.py`_____no_output_____
6. From within `MYFOLDER`, type `python my_script.py`
```
C:\Users\David\MYFOLDER>python my_script.py
```
<div class="alert alert-warning">
**WARNING**: if Python does not run, try typing `python3 my_script.py` with `3` at the end of `python`
</div>
If everything went fine, you should see
```
8
C:\Users\David\MYFOLDER>
```
<div class="alert alert-warning">
**WARNING**: After executing a script this way, the console is awaiting new _system_ commands, **NOT** Python commands (so, there shouldn't be any triple greater-than `>>>`)
</div>_____no_output_____### IDE
In these tutorials we work on Jupyter notebooks with extension `.ipynb`, but to edit long `.py` files it's more convenient to use more traditional editors, also called IDE _(Integrated Development Environment)._ For Python we can use [Spyder](https://www.spyder-ide.org/), [Visual Studio Code](https://code.visualstudio.com/Download) or [PyCharm Community Edition](https://www.jetbrains.com/pycharm/download/).
Differently from Jupyter, these editors allow more easily code _debugging_ and _testing._ _____no_output_____Let's try Spyder, which is the easiest - if you have Anaconda, you find it available inside Anaconda Navigator.
<div class="alert alert-info">
**INFO**: Whenever you run Spyder, it might ask you to perform an upgrade, in these cases you can just click No.
</div>
In the upper-left corner of the editor there is the code of the file `.py` you are editing. Such files are also said _script._ In the lower-right corner there is the console with the IPython interpreter (which is the same at the heart of Jupyter, here in textual form). When you execute the script, it's like inserting commands in that interpreter.
- To execute the whole script: press `F5`
- To execute only the current line or the selection: press `F9`
- To clear memory: after many executions the variables in the memory of the interpreter might get values you don't expect. To clear the memory, click on the gear to the right of the console, and select _Restart kernel______no_output_____**EXERCISE**: do some test, taking the file `my_script.py` we created before:
```python
x = 3
y = 5
print(x + y)
```
- once the code is in the script, hit `F5`
- select only `print(x+y)` and hit F9
- select only `x=3` and hit F9
- click on the gear to the right of the console panel, and select _Restart kernel,_ then select only `print(x+y)` and hit F9. What happens?
Remember that if the memory of the interpreter has been cleared with _Restart kernel,_ and then you try executing a code row with variables defined in lines which were not exectued before, Python will not know which variables you are referring to and will show a `NameError`. _____no_output__________no_output_____## Jupyter
Jupyter is an editor that allows to work on so called _notebooks,_ which are files ending with the extension `.ipynb`. They are documents divided in cells where in each cell you can insert commands and immediately see the respective output. Let's try opening this._____no_output_____1. Unzip [exercises zip](../_static/generated/tools.zip) in a folder, you should obtain something like this:
```
tools
tools-sol.ipynb
tools.ipynb
jupman.py
```
<div class="alert alert-warning">
**WARNING: To correctly visualize the notebook, it MUST be in the unzipped folder.**
</div>
_____no_output_____
2. open Jupyter Notebook. Two things should appear, first a console and then a browser. In the browser navigate the files to reach the unzipped folder, and open the notebook `tools.ipynb`
<div class="alert alert-warning">
**WARNING: DO NOT click Upload button in Jupyer**
Just navigate until you reach the file.
</div>
<div class="alert alert-warning">
**WARNING: open the notebook WITHOUT the** `-sol` **at the end!**
Seeing now the solutions is too easy ;-)
</div>_____no_output_____3. Go on reading the exercises file, sometimes you will find paragraphs marked **Exercises** which will ask to write Python commands in the following cells.Exercises are graded by difficulty, from one star ✪ to four ✪✪✪✪
<div class="alert alert-warning">
**WARNING: In this book we use ONLY PYTHON 3** <br/>
If by chance you obtain weird behaviours, check you are using Python 3 and not 2. If by chance by typing `python` your operating system runs python 2, try executing the third by typing the command `python3`
</div>
<div class="alert alert-info">
**If you don't find Jupyter / something doesn't work:** have a look at [installation](https://en.softpython.org/installation.html#Jupyter-Notebook)
</div>
_____no_output_____Useful shortcuts:
* to execute Python code inside a Jupyter cell, press `Control + Enter`
* to execute Python code inside a Jupyter cell AND select next cell, press `Shift + Enter`
* to execute Python code inside a Jupyter cell AND a create a new cell aftwerwards, press `Alt + Enter`
* when something seem wrong in computations, try to clean memory by running `Kernel->Restart and Run all`_____no_output_____**EXERCISE**: Let's try inserting a Python command: type in the cell below here `3 + 5`, then while in that cell press special keys `Control+Enter`. As a result, the number `8` should appear_____no_output_____**EXERCISE**: with Python we can write comments by starting a row with a sharp `#`. Like before, type in the next cell `3 + 5` but this time type it in the row under the writing `# write here`:_____no_output_____
<code>
# write here
_____no_output_____
</code>
**EXERCISE**: In every cell Jupyter only shows the result of last executed row. Try inserting this code in the cell below and execute by pressing `Control+Enter`. Which result do you see?
```python
3 + 5
1 + 1
```_____no_output_____
<code>
# write here
_____no_output_____
</code>
**EXERCISE**: Let's try now to create a new cell.
* While the cursor is in the cell, press `Alt+Enter`. A new cell should be created after the current one.
* In the cell just created, insert `2 + 3` and press `Shift+Enter`. What happens to the cursor? Try the difference swith `Control+Enter`. If you don't understand the difference, try pressing many times `Shift+Enter` and see what happens._____no_output_____### Printing an expression_____no_output_____Let's try to assign an expression to a variable:_____no_output_____
<code>
coins = 3 + 2_____no_output_____
</code>
Note the assignment by itself does not produce any output in the Jupyter cell. We can ask Jupyter the value of the variable by simply typing again the name in a cell:_____no_output_____
<code>
coins_____no_output_____
</code>
The effect is (almost always) the same we would obtain by explictly calling the function `print`:_____no_output_____
<code>
print(coins)5
</code>
What's the difference? For our convenience Jupyter will directly show the result of the last executed expression in the cell, but only the last one:_____no_output_____
<code>
coins = 4
2 + 5
coins_____no_output_____
</code>
If we want to be sure to print both, we need to use the function `print`:_____no_output_____
<code>
coins = 4
print(2 + 5)
print(coins)7
4
</code>
Furthermore, the result of last expression is shown only in Jupyter notebooks, if you are writig a normal `.py` script and you want to see results you must in any case use `print`._____no_output_____If we want to print more expressions in one row, we can pass them as different parameters to `print` by separating them with a comma:_____no_output_____
<code>
coins = 4
print(2+5, coins)7 4
</code>
To `print` we can pass as many expressions as we want:_____no_output_____
<code>
coins = 4
print(2 + 5, coins, coins*3)7 4 12
</code>
If we also want to show some text, we can write it by creating so-called _strings_ between double quotes (we will see strings much more in detail in next chapters):_____no_output_____
<code>
coins = 4
print("We have", coins, "golden coins, but we would like to have double:", coins * 2) We have 4 golden coins, but we would like to have double: 8
</code>
**QUESTION**: Have a look at following expressions, and for each one of them try to guess the result it produces. Try verifying your guesses both in Jupyter and another editor of files `.py` like Spyder:
1. ```python
x = 1
x
x
```
1. ```python
x = 1
x = 2
print(x)
```
1. ```python
x = 1
x = 2
x
```
1. ```python
x = 1
print(x)
x = 2
print(x)
```
1. ```python
print(zam)
print(zam)
zam = 1
zam = 2
```
1. ```python
x = 5
print(x,x)
```
1. ```python
x = 5
print(x)
print(x)
```
1. ```python
carpets = 8
length = 5
print("If I have", carpets, "carpets in sequence I walk for",
carpets * length, "meters.")
```
1. ```python
carpets = 8
length = 5
print("If", "I","have", carpets, "carpets","in", "sequence",
"I", "walk", "for", carpets * length, "meters.")
``` _____no_output_____### Exercise - Castles in the air
Given two variables
```python
castles = 7
dirigibles = 4
```
write some code to print:
```
I've built 7 castles in the air
I have 4 steam dirigibles
I want a dirigible parked at each castle
So I will buy other 3 at the Steam Market
```
- **DO NOT** put numerical constants in your code like `7`, `4` or `3`! Write generic code which only uses the provided variables._____no_output_____
<code>
#jupman-purge-output
castles = 7
dirigibles = 4
# write here
print("I've built",castles, "castles in the air")
print("I have", dirigibles, "steam dirigibles")
print("I want a dirigible parked at each castle")
print("So I will buy other", castles - dirigibles, "at the Steam Market")I've built 7 castles in the air
I have 4 steam dirigibles
I want a dirigible parked at each castle
So I will buy other 3 at the Steam Market
</code>
## Visualizing the execution with Python Tutor
We have seen some of the main data types. Before going further, let's see the right tools to understand what happens when we execute the code.
[Python tutor](http://pythontutor.com/) is a very good website to visualize online Python code execution, allowing to step forth and _back_ in code flow. Exploit it as much as you can, it should work with many of the examples we shall see in the book. Let's now try an example.
**Python tutor 1/4**
Go to [pythontutor.com](http://pythontutor.com/) and select _Python 3______no_output__________no_output_____**Python tutor 2/4**
Make sure at least Python 3.6 is selected:
_____no_output_____**Python tutor 3/4**
**Try inserting:**
```python
x = 5
y = 7
z = x + y
```
_____no_output_____**Python tutor 4/4**
**By clicking on Next, you will see the changes in Python memory**
_____no_output_____### Debugging code in Jupyter
Python Tutor is fantastic, but when you execute code in Jupyter and it doesn't work, what can you do? To inspect the execution, the editor usually makes available a tool called _debugger,_ which allows to execute instructions one by one. At present (August 2018), the Jupyter debugger is called [pdb](https://davidhamann.de/2017/04/22/debugging-jupyter-notebooks/) and it is extremely limited. To overcome its limitations, in this book we invented a custom solution which exploits Python Tutor.
If you insert Python code in a cell, and then **at the cell end** you write the instruction `jupman.pytut()`, the preceding code will be visualized inside Jupyter notebook with Python Tutor, as if by magic._____no_output_____<div class="alert alert-warning">
**WARNING**: `jupman` is a collection of support functions we created just for this book.
Whenever you see commands which start with `jupman`, to make them work you need first to execute the cell at the beginning of the document. For convenience we report here that cell. If you already didn't, execute it now.
</div>
_____no_output_____
<code>
# Remember to execute this cell with Control+Enter
# These commands tell Python where to find the file jupman.py
import sys;
sys.path.append('../');
import jupman;_____no_output_____
</code>
Now we are ready yo try Python Tutor with the magic function `jupman.pytut()`: _____no_output_____
<code>
x = 5
y = 7
z = x + y
jupman.pytut()_____no_output_____
</code>
#### Python Tutor : Limitation 1
Python Tutor is handy, but there are important limitations:
_____no_output_____<div class="alert alert-warning">
**ATTENTION**: Python Tutor only looks inside one cell!
Whenever you use Python Tutor inside Jupyter, the only code Python tutors considers is the one inside the cell containing the command `jupman.pytut()`
</div>
So for example in the two following cells, only `print(w)` will appear inside Python tutor without the `w = 3`. If you try clicking _Forward_ in Python tutor, you will we warned that `w` was not defined._____no_output_____
<code>
w = 3_____no_output_____print(w)
jupman.pytut()3
</code>
To have it work in Python Tutor you must put ALL the code in the SAME cell:_____no_output_____
<code>
w = 3
print(w)
jupman.pytut()3
</code>
#### Python Tutor : Limitation 2
<div class="alert alert-warning">
**WARNING: Python Tutor only uses functions from the standard Python distribution**
Python Tutor is good for inspecting simple algorithms with basic Python functions; if you use third-party libraries it will not work.
</div>
If you use some library like `numpy`, you can try **only online** to select `Python 3.6 with Anaconda` :

_____no_output_____### Exercise - tavern
Given the variables
```python
pirates = 10
each_wants = 5 # mugs of grog
kegs = 4
keg_capacity = 20 # mugs of grog
```
Try writing some code which prints:
```
In the tavern there are 10 pirates, each wants 5 mugs of grog
We have 4 kegs full of grog
From each keg we can take 20 mugs
Tonight the pirates will drink 50 mugs, and 30 will remain for tomorrow
```
- **DO NOT** use numerical constants in your code, instead try using the proposed variables
- To keep track of remaining kegs, make a variable `remaining_mugs`
- if you are using Jupyter, try using `jupman.pytut()` at the cell end to visualize execution_____no_output_____
<code>
pirates = 10
each_wants = 5 # mugs of grog
kegs = 4
keg_capacity = 20 # mugs of grog
# write here
print("In the tavern there are", pirates, "pirates, each wants", each_wants, "mugs of grog")
print("We have", kegs, "kegs full of grog")
print("From each keg we can take", keg_capacity,"mugs")
remaining_mugs = kegs*keg_capacity - pirates*each_wants
print("Tonight the pirates will drink", pirates * each_wants, "mugs, and", remaining_mugs, "will remain for tomorrow")
#jupman.pytut()In the tavern there are 10 pirates, each wants 5 mugs of grog
We have 4 kegs full of grog
From each keg we can take 20 mugs
Tonight the pirates will drink 50 mugs, and 30 will remain for tomorrow
</code>
## Python Architecture
While not strictly fundamental to understand the book, the following part is useful to understand what happens under the hood when you execute commands.
Let's go back to Jupyter: the notebook editor Jupyter is a very powerful tool and flexible, allows to execute Python code, not only that, also code written in other programming languages (R, Bash, etc) and formatting languages (HTML, Markdown, Latex, etc).
Se must keep in mind that the Python code we insert in cells of Jupyter notebooks (the files with extension `.ipynb`) is not certainly magically understood by your computer. Under the hood, a lot of transformations are performed so to allow you computer processor to understaned the instructions to be executed. We report here the main transformations which happen, from Jupyter to the processor (CPU):_____no_output_____### Python is a high level language
Let's try to understand well what happens when you execute a cell:
1. **source code**: First Jupyter checks if you wrote some Python _source code_ in the cell (it could also be other programming languages like R, Bash, or formatting like Markdown ...). By default Jupyter assumes your code is Python. Let's suppose there is the following code:
```python
x = 3
y = 5
print(x + y)
```
**EXERCISE**: Without going into code details, try copy/pasting it into the cell below. Making sure to have the cursor in the cell, execute it with `Control + Enter`. When you execute it an `8` should appear as calculation result. The `# write down here` as all rows beginning with a sharp `#` is only a comment which will be ignored by Python_____no_output_____
<code>
# write down here
_____no_output_____
</code>
If you managed to execute the code, you can congratulate Python! It allowed you to execute a program written in a quite comprehensible language _independently_ of your operating system (Windows, Mac OS X, Linux ...) and of your computer's processor (x86, ARM, ...)! Not only that, the notebook editor Jupyter also placed the result in your browser._____no_output_____
In detail, what happened? Let's see:
2. **bytecode**: When you requested the execution, Jupyter took the text written in the cell and sent it to the so-called _Python compiler_, which transformed it into _bytecode_. The _bytecode_ is a longer sequence of instructions which is less intelligible to us humans (**this is only an example, there is no need to understand it !!**):
```
2 0 LOAD_CONST 1 (3)
3 STORE_FAST 0 (x)
3 6 LOAD_CONST 2 (5)
9 STORE_FAST 1 (y)
4 12 LOAD_GLOBAL 0 (print)
15 LOAD_FAST 0 (x)
18 LOAD_FAST 1 (y)
21 BINARY_ADD
22 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
25 POP_TOP
26 LOAD_CONST 0 (None)
29 RETURN_VALUE
```_____no_output_____3. **machine code**: The _Python interpreter_ took the _bytecode_ above one instruction at a time, and converted it into _machine code_ which can actually be understood by the processor (CPU) of your computer. To us the _machine code_ may look even longer and uglier than the _bytecode_, but the processor is happy and by reading it produces the program results. Example of _machine code_ (**it is just an example, you do not need to understand it !!**):
```
mult:
push rbp
mov rbp, rsp
mov eax, 0
mult_loop:
cmp edi, 0
je mult_end
add eax, esi
sub edi, 1
jmp mult_loop
mult_end:
pop rbp
ret
```_____no_output_____We report in a table what we said above. In the table we explicitly write the file extensions in which the various code formats can be written:
- The ones interesting for us are Jupyter notebooks `.ipynb` and Python source code files `.py`
- `.pyc` files may be generated by the compiler when reading `.py` files, but they are not interesting to us; we will never need to edit them
- `.asm` machine code also doesn't matter for us
| Tool | Language| File extension | Example|
|-----|-----------|---------|---|
| Jupyter Notebook| Python| .ipynb||
| Python Compiler | Python source code | .py |`x = 3`<br> `y = 5`<br> `print(x + y)`|
| Python Interpreter | Python bytecode | .pyc| `0 LOAD_CONST 1 (3)`<br>`3 STORE_FAST 0 (x)`|
| Processor (CPU) | Machine code| .asm |`cmp edi, 0`<br>`je mult_end`|
Now that we have an idea of what happens, we can perhaps better understand the statement _Python is a high level language,_ that is, it's positioned high in the above table: when we write Python code, we are not interested in the generated _bytecode_ or _machine code,_ we can **just focus on the program logic**.
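If you are curious, here is a minimal sketch (not part of the original notebook) showing how the standard `dis` module lets you peek at the bytecode the compiler produces for a small function:

```python
import dis   # standard library module that disassembles Python bytecode

def add(x, y):
    return x + y

dis.dis(add)   # prints instructions such as LOAD_FAST (exact opcode names vary across Python versions)
```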
Besides, the Python code we write is **independent of the PC architecture**: if we have a Python interpreter installed on a computer, it will take care of converting the high-level code into the machine code of that particular architecture, which includes the operating system (Windows / Mac OS X / Linux) and processor (x86, ARM, PowerPC, etc)._____no_output_____### Performance
Everything has a price. If we want to write programs focusing only on the _high level logic_ without entering into the details of how they get interpreted by the processor, we typically need to give up on _performance._ Since Python is an _interpreted_ language, it has the downside of being slow. What if we really need efficiency? Luckily, Python can be extended with code written in the _C language_, which typically is much more performant. Actually, even if you won't notice it, many Python functions are under the hood written directly in the fast C language. If you really need performance (not in this book!) it might be worth first writing a prototype in Python and, once it is established that it works, compiling it into _C language_ by using the [Cython compiler](http://cython.org/) and manually optimizing the generated code._____no_output_____
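As a small illustrative sketch (not part of the original notebook), the standard `timeit` module lets you measure this difference yourself, comparing a pure-Python loop against the built-in `sum`, which is implemented in C:

```python
import timeit   # standard library module for small benchmarks

def python_sum(n):
    """Sum 0..n-1 with an explicit Python loop (runs as interpreted bytecode)."""
    total = 0
    for i in range(n):
        total += i
    return total

# The built-in sum() is implemented in C, so it is usually noticeably faster.
print(timeit.timeit(lambda: python_sum(10000), number=1000))
print(timeit.timeit(lambda: sum(range(10000)), number=1000))
```

On most machines the C-backed built-in wins by a wide margin, which is exactly the kind of gap tools like Cython try to close._____no_output_____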
|
{
"repository": "simoneb1x/softpython-en",
"path": "tools/tools-sol.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 55873,
"hexsha": "cb495d0d17ac365d44a5482033790dadf8d4f7a8",
"max_line_length": 846,
"avg_line_length": 34.964330413,
"alphanum_fraction": 0.5314731624
}
|
# Notebook from ShepherdCode/Soars2021
Path: Notebooks/Jas_307_GenCode_MLP.ipynb
# MLP ORF to GenCode
Use GenCode 38 and length-restricted data.
Use model pre-trained on Simulated ORF. _____no_output_____
<code>
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()2021-08-18 09:58:48 EDT
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
from keras.models import load_model2021-08-18 09:58:49.316801: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/GenCodeTools.py')
with open('GenCodeTools.py', 'w') as f:
f.write(r.text)
from GenCodeTools import GenCodeLoader
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/DataPrep.py')
with open('DataPrep.py', 'w') as f:
f.write(r.text)
from DataPrep import DataPrep
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter
from SimTools.GenCodeTools import GenCodeLoader
from SimTools.KmerTools import KmerTools
from SimTools.DataPrep import DataPrep
BESTMODELPATH=DATAPATH+"BestModel-304"
LASTMODELPATH=DATAPATH+"LastModel" CoLab not working. On my PC, use relative paths.
</code>
## Data Load_____no_output_____
<code>
PC_TRAINS=1000
NC_TRAINS=1000
PC_TESTS=40000
NC_TESTS=40000
PC_LENS=(200,4000)
NC_LENS=(200,4000) # Wen used 3500 for hyperparameter, 3000 for train
PC_FILENAME='gencode.v38.pc_transcripts.fa.gz'
NC_FILENAME='gencode.v38.lncRNA_transcripts.fa.gz'
PC_FULLPATH=DATAPATH+PC_FILENAME
NC_FULLPATH=DATAPATH+NC_FILENAME
MAX_K = 3
INPUT_SHAPE=(None,84) # 4^3 + 4^2 + 4^1
NEURONS=32
DROP_RATE=0.30
EPOCHS=200
SPLITS=3
FOLDS=3
show_time()2021-08-18 09:58:49 EDT
loader=GenCodeLoader()
loader.set_label(1)
loader.set_check_utr(False) # not ORF-restricted
loader.set_check_size(*PC_LENS) # length-restricted
pcdf=loader.load_file(PC_FULLPATH)
print("PC seqs loaded:",len(pcdf))
loader.set_label(0)
loader.set_check_utr(False)
loader.set_check_size(*NC_LENS) # length-restricted
ncdf=loader.load_file(NC_FULLPATH)
print("NC seqs loaded:",len(ncdf))
show_time()PC seqs loaded: 88964
NC seqs loaded: 46919
2021-08-18 09:58:51 EDT
def dataframe_extract_sequence(df):
return df['sequence'].tolist()
pc_all = dataframe_extract_sequence(pcdf)
nc_all = dataframe_extract_sequence(ncdf)
pcdf=None
ncdf=None
show_time()
print("PC seqs pass filter:",len(pc_all),type(pc_all))
print("NC seqs pass filter:",len(nc_all),type(nc_all))
#PC seqs pass filter: 55381
#NC seqs pass filter: 469192021-08-18 09:58:51 EDT
PC seqs pass filter: 88964 <class 'list'>
NC seqs pass filter: 46919 <class 'list'>
print("Simulated sequence characteristics:")
oc = ORF_counter()
print("PC seqs")
oc.describe_sequences(pc_all)
print("NC seqs")
oc.describe_sequences(nc_all)
oc=None
show_time()Simulated sequence characteristics:
PC seqs
Average RNA length: 1546.957241131244
Average ORF length: 785.6919203273234
NC seqs
Average RNA length: 1179.5947483961722
Average ORF length: 203.12651591039878
2021-08-18 09:59:09 EDT
</code>
## Data Prep_____no_output_____
<code>
dp = DataPrep()
Xseq,y=dp.combine_pos_and_neg(pc_all,nc_all)
nc_all=None
pc_all=None
print("The first few shuffled labels:")
print(y[:30])
show_time()The first few shuffled labels:
[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
2021-08-18 09:59:09 EDT
Xfrq=KmerTools.seqs_to_kmer_freqs(Xseq,MAX_K)
Xseq = None
y=np.asarray(y)
show_time()2021-08-18 09:59:45 EDT
# Assume X and y were shuffled.
train_size=PC_TRAINS+NC_TRAINS
X_train=Xfrq[:train_size]
X_test=Xfrq[train_size:]
y_train=y[:train_size]
y_test=y[train_size:]
print("Training set size=",len(X_train),"=",len(y_train))
print("Reserved test set size=",len(X_test),"=",len(y_test))
Xfrq=None
y=None
show_time()Training set size= 2000 = 2000
Reserved test set size= 133883 = 133883
2021-08-18 09:59:45 EDT
</code>
## Load a trained neural network_____no_output_____
<code>
show_time()
model = load_model(BESTMODELPATH)
print(model.summary())2021-08-18 09:59:45 EDT
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_4 (Dense) (None, 32) 2720
_________________________________________________________________
dropout_3 (Dropout) (None, 32) 0
_________________________________________________________________
dense_5 (Dense) (None, 32) 1056
_________________________________________________________________
dropout_4 (Dropout) (None, 32) 0
_________________________________________________________________
dense_6 (Dense) (None, 32) 1056
_________________________________________________________________
dropout_5 (Dropout) (None, 32) 0
_________________________________________________________________
dense_7 (Dense) (None, 1) 33
=================================================================
Total params: 4,865
Trainable params: 4,865
Non-trainable params: 0
_________________________________________________________________
None
</code>
## Test the neural network_____no_output_____
<code>
def show_test_AUC(model,X,y):
ns_probs = [0 for _ in range(len(y))]
bm_probs = model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
def show_test_accuracy(model,X,y):
scores = model.evaluate(X, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
_____no_output_____print("Accuracy on test data.")
show_time()
show_test_AUC(model,X_test,y_test)
show_test_accuracy(model,X_test,y_test)
show_time()Accuracy on test data.
2021-08-18 09:59:46 EDT
</code>
|
{
"repository": "ShepherdCode/Soars2021",
"path": "Notebooks/Jas_307_GenCode_MLP.ipynb",
"matched_keywords": [
"RNA"
],
"stars": 1,
"size": 41452,
"hexsha": "cb49ccd1f368939981395303d6ce454efee8c1fe",
"max_line_length": 19872,
"avg_line_length": 68.2899505766,
"alphanum_fraction": 0.7765608415
}
|
# Notebook from DiffEqML/diffeqml_research
Path: hypersolver/image_classification/hypereuler_mnist.ipynb
# HyperEuler on MNIST-trained Neural ODEs_____no_output_____
<code>
import sys ; sys.path.append('..')
from torchdyn.models import *; from torchdyn import *
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning.metrics.functional import accuracy
from tqdm import tqdm_notebook as tqdm
from src.custom_fixed_explicit import ButcherTableau, GenericExplicitButcher
from src.hypersolver import *_____no_output_____device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")_____no_output_____# smaller batch_size; only needed for visualization. The classification model
# will not be retrained
batch_size=16
size=28
path_to_data='../../data/mnist_data'
all_transforms = transforms.Compose([
transforms.RandomRotation(20),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
])
test_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
])
train_data = datasets.MNIST(path_to_data, train=True, download=True,
transform=all_transforms)
test_data = datasets.MNIST(path_to_data, train=False,
transform=test_transforms)
trainloader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
testloader = DataLoader(test_data, batch_size=batch_size, shuffle=True)_____no_output_____
</code>
## Loading the pretrained Neural ODE _____no_output_____
<code>
func = nn.Sequential(nn.Conv2d(32, 46, 3, padding=1),
nn.Softplus(),
nn.Conv2d(46, 46, 3, padding=1),
nn.Softplus(),
nn.Conv2d(46, 32, 3, padding=1)
).to(device)
ndes = []
for i in range(1):
ndes.append(NeuralDE(func,
solver='dopri5',
sensitivity='adjoint',
atol=1e-4,
rtol=1e-4,
s_span=torch.linspace(0, 1, 2)).to(device))
#ndes.append(nn.Conv2d(32, 32, 3, padding=1)))
model = nn.Sequential(nn.BatchNorm2d(1),
Augmenter(augment_func=nn.Conv2d(1, 31, 3, padding=1)),
*ndes,
nn.AvgPool2d(28),
#nn.Conv2d(32, 1, 3, padding=1),
nn.Flatten(),
nn.Linear(32, 10)).to(device)
_____no_output_____state_dict = torch.load('../pretrained_models/nde_mnist')
# remove state_dict keys for `torchdyn`'s Adjoint nn.Module (not used here)
copy_dict = state_dict.copy()
for key in copy_dict.keys():
if 'adjoint' in key: state_dict.pop(key)
model.load_state_dict(state_dict)_____no_output_____
</code>
### Visualizing pretrained flows_____no_output_____
<code>
x, y = next(iter(trainloader)); x = x.to(device)
for layer in model[:2]: x = layer(x)
model[2].nfe = 0
traj = model[2].trajectory(x, torch.linspace(0, 1, 50)).detach().cpu()
model[2].nfe /home/jyp/michael_dev/testenv/lib/python3.7/site-packages/torchdiffeq/_impl/misc.py:237: UserWarning: t is not on the same device as y0. Coercing to y0.device.
warnings.warn("t is not on the same device as y0. Coercing to y0.device.")
</code>
Pixel-flows of the Neural ODE, solved with `dopri5`_____no_output_____
<code>
fig, axes = plt.subplots(nrows=5, ncols=10, figsize=(22, 10))
K = 4
for i in range(5):
for j in range(10):
im = axes[i][j].imshow(traj[i*5+j, K, 0], cmap='inferno')
fig.tight_layout(w_pad=0)_____no_output_____
</code>
### Defining the HyperSolver class (-- HyperEuler version --)_____no_output_____
<code>
tableau = ButcherTableau([[0]], [1], [0], [])
euler_solver = GenericExplicitButcher(tableau)
hypersolv_net = nn.Sequential(
nn.Conv2d(32+32+1, 32, 3, stride=1, padding=1),
nn.PReLU(),
nn.Conv2d(32, 32, 3, padding=1),
nn.PReLU(),
nn.Conv2d(32, 32, 3, padding=1)).to(device)
#for p in hypersolv_net.parameters(): torch.nn.init.zeros_(p)
hs = HyperEuler(f=model[2].defunc, g=hypersolv_net)
x0 = torch.zeros(12, 32, 6, 6).to(device)
span = torch.linspace(0, 2, 10).to(device)
traj = model[2].trajectory(x0, span)
res_traj = hs.base_residuals(traj, span)
hyp_res_traj = hs.hypersolver_residuals(traj, span)
hyp_traj = hs.odeint(x0, span)_____no_output_____hyp_traj = hs.odeint(x0, span, use_residual=False).detach().cpu()
etraj = odeint(model[2].defunc, x0, span, method='euler').detach().cpu()_____no_output_____(hyp_traj - etraj).max()_____no_output_____
</code>
### Training the Hypersolver_____no_output_____
<code>
PHASE1_ITERS = 10 # num iters without swapping of the ODE initial condition (new sample)
ITERS = 15000
s_span = torch.linspace(0, 1, 10).to(device)
run_loss = 0.
# using test data for hypersolver training does not cause issues
# or task information leakage; the labels are not utilized in any way
it = iter(trainloader)
X0, Y = next(it)
Y = Y.to(device)
X0 = model[:2](X0.to(device))
model[2].solver = 'dopri5'
traj = model[2].trajectory(X0, s_span)
etraj = odeint(model[2].defunc, X0, s_span, method='euler')
opt = torch.optim.AdamW(hypersolv_net.parameters(), 1e-3, weight_decay=1e-8)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=ITERS, eta_min=5e-4)
for i in tqdm(range(ITERS)):
ds = s_span[1] - s_span[0]
base_traj = model[2].trajectory(X0, s_span)
residuals = hs.base_residuals(base_traj, s_span).detach()
# Let the model generalize to other ICs after PHASE1_ITERS
if i > PHASE1_ITERS:
if i % 10 == 0: # swapping IC
try:
X0, _ = next(it)
except:
it = iter(trainloader)
X0, _ = next(it)
X0 = model[:2](X0.to(device))
model[2].solver = 'dopri5'
base_traj = model[2].trajectory(X0, s_span)
residuals = hs.base_residuals(base_traj.detach(), s_span).detach()
corrections = hs.hypersolver_residuals(base_traj.detach(), s_span)
loss = torch.norm(corrections - residuals.detach(), p='fro', dim=(3, 4)).mean() * ds**2
loss.backward()
torch.nn.utils.clip_grad_norm_(hypersolv_net.parameters(), 1)
if i % 10 == 0: print(f'\rLoss: {loss}', end='')
opt.step()
sched.step()
opt.zero_grad()/home/jyp/michael_dev/testenv/lib/python3.7/site-packages/ipykernel_launcher.py:20: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0
Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`
it = iter(testloader)
X0, _ = next(it)
X0 = model[:2](X0.to(device))
steps = 10
s_span = torch.linspace(0, 1, steps)
# dopri traj
model[2].solver = 'dopri5'
traj = model[2].trajectory(X0, s_span).detach().cpu()
# euler traj
model[2].solver = 'euler'
etraj = model[2].trajectory(X0, s_span).detach().cpu()
#etraj = hs.odeint(X0, s_span, use_residual=False).detach().cpu()
straj = hs.odeint(X0, s_span, use_residual=True).detach().cpu()/home/jyp/michael_dev/testenv/lib/python3.7/site-packages/torchdiffeq/_impl/misc.py:237: UserWarning: t is not on the same device as y0. Coercing to y0.device.
warnings.warn("t is not on the same device as y0. Coercing to y0.device.")
</code>
Evolution of absolute error: [Above] HyperEuler, [Below] Euler_____no_output_____
<code>
fig, axes = plt.subplots(nrows=2, ncols=steps-1, figsize=(10, 4))
K = 1
vmin = min(torch.abs(straj[steps-1,:]-traj[steps-1,:]).mean(1)[K].min(),
torch.abs(etraj[steps-1,:]-traj[steps-1,:]).mean(1)[K].min())
vmax = max(torch.abs(straj[steps-1,:]-traj[steps-1,:]).mean(1)[K].max(),
torch.abs(etraj[steps-1,:]-traj[steps-1,:]).mean(1)[K].max())
for i in range(steps-1):
im = axes[0][i].imshow(torch.abs(straj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno', vmin=vmin, vmax=vmax)
for i in range(steps-1):
im = axes[1][i].imshow(torch.abs(etraj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno', vmin=vmin, vmax=vmax)
fig.colorbar(im, ax=axes.ravel().tolist(), orientation='horizontal')
#tikz.save('MNIST_interpolation_AE_plot.tex')_____no_output_____
</code>
Evolution of absolute error: HyperEuler (alone). Greater detail_____no_output_____
<code>
fig, axes = plt.subplots(nrows=1, ncols=steps-1, figsize=(10, 4))
for i in range(steps-1):
im = axes[i].imshow(torch.abs(straj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno')
fig.colorbar(im, ax=axes.ravel().tolist(), orientation='horizontal')_____no_output_____
</code>
### Evaluating ODE solution error_____no_output_____
<code>
x = []
# NOTE: high GPU mem usage for generating data below for plot (on GPU)
# consider using less batches (and iterating) or performing everything on CPU
for i in range(5):
x_b, _ = next(it)
x += [model[:2](x_b.to(device))]
x = torch.cat(x); x.shape_____no_output_____STEPS = range(8, 50)
euler_avg_error, euler_std_error = [], []
hyper_avg_error, hyper_std_error = [], []
midpoint_avg_error, midpoint_std_error = [], []
rk4_avg_error, rk4_std_error = [], []
for step in tqdm(STEPS):
s_span = torch.linspace(0, 1, step)
# dopri traj
model[2].solver = 'dopri5'
traj = model[2].trajectory(x, s_span).detach().cpu()
# euler traj
model[2].solver = 'euler'
etraj = model[2].trajectory(x, s_span).detach().cpu()
# hypersolver
s_span = torch.linspace(0, 1, step)
straj = hs.odeint(x, s_span, use_residual=True).detach().cpu()
#midpoint
model[2].solver = 'midpoint'
s_span = torch.linspace(0, 1, step//2)
mtraj = model[2].trajectory(x, s_span).detach().cpu()
    #rk4
model[2].solver = 'rk4'
s_span = torch.linspace(0, 1, step//4)
rtraj = model[2].trajectory(x, s_span).detach().cpu()
# errors
euler_error = torch.abs((etraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
hyper_error = torch.abs((straj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
midpoint_error = torch.abs((mtraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
rk4_error = torch.abs((rtraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
# mean, stdev
euler_avg_error += [euler_error.mean().item()] ; euler_std_error += [euler_error.mean(dim=1).mean(dim=1).std(0).item()]
hyper_avg_error += [hyper_error.mean().item()] ; hyper_std_error += [hyper_error.mean(dim=1).mean(dim=1).std(0).item()]
midpoint_avg_error += [midpoint_error.mean().item()] ; midpoint_std_error += [midpoint_error.mean(dim=1).mean(dim=1).std(0).item()]
rk4_avg_error += [rk4_error.mean().item()] ; rk4_std_error += [rk4_error.mean(dim=1).mean(dim=1).std(0).item()]_____no_output_____euler_avg_error, euler_std_error = np.array(euler_avg_error), np.array(euler_std_error)
hyper_avg_error, hyper_std_error = np.array(hyper_avg_error), np.array(hyper_std_error)
midpoint_avg_error, midpoint_std_error = np.array(midpoint_avg_error), np.array(midpoint_std_error)
rk4_avg_error, rk4_std_error = np.array(rk4_avg_error), np.array(rk4_std_error)
range_steps = range(8, 50, 1)
fig, ax = plt.subplots(1, 1); fig.set_size_inches(8, 3)
ax.plot(range_steps, euler_avg_error, color='red', linewidth=3, alpha=0.5)
ax.fill_between(range_steps, euler_avg_error-euler_std_error, euler_avg_error+euler_std_error, alpha=0.05, color='red')
ax.plot(range_steps, hyper_avg_error, c='black', linewidth=3, alpha=0.5)
ax.fill_between(range_steps, hyper_avg_error+hyper_std_error, hyper_avg_error-hyper_std_error, alpha=0.05, color='black')
# start from 10 steps, balance the steps
mid_range_steps = range(8, 50, 2)
ax.plot(mid_range_steps, midpoint_avg_error[::2], color='green', linewidth=3, alpha=0.5)
ax.fill_between(mid_range_steps, midpoint_avg_error[::2]-midpoint_std_error[::2], midpoint_avg_error[::2]+midpoint_std_error[::2], alpha=0.1, color='green')
# start from 10 steps, balance the steps
mid_range_steps = range(8, 50, 4)
ax.plot(mid_range_steps, rk4_avg_error[::4], color='gray', linewidth=3, alpha=0.5)
ax.fill_between(mid_range_steps, rk4_avg_error[::4]-rk4_std_error[::4], rk4_avg_error[::4]+rk4_std_error[::4], alpha=0.05, color='gray')
ax.set_ylim(0, 200)
ax.set_xlim(8, 40)
ax.legend(['Euler', 'HyperEuler', 'Midpoint', 'RK4'])
ax.set_xlabel('NFEs')
ax.set_ylabel('Terminal error (MAPE)')_____no_output_____
</code>
|
{
"repository": "DiffEqML/diffeqml_research",
"path": "hypersolver/image_classification/hypereuler_mnist.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 49,
"size": 323900,
"hexsha": "cb4a494869b6bf04408c43def1112a205cf07d17",
"max_line_length": 146160,
"avg_line_length": 475.624082232,
"alphanum_fraction": 0.9322445199
}
|
# Notebook from shubhamchouksey/Movie-Recommendation
Path: KNNRecommendation.ipynb
## Nearest Neighbor item based Collaborative Filtering

Source: https://towardsdatascience.com_____no_output_____
<code>
##Dataset url: https://grouplens.org/datasets/movielens/latest/
import pandas as pd
import numpy as np_____no_output_____r_cols = ['user_id','movie_id','rating']
movies_df = pd.read_csv('u.item.csv', names=['movieId','title'],sep='|',usecols=range(2))
m_cols = ['movie_id','title']
rating_df=pd.read_csv('u.data.csv', names=['userId', 'movieId', 'rating'],usecols=range(3))_____no_output_____movies_df.head()_____no_output_____rating_df.head()_____no_output_____df = pd.merge(rating_df,movies_df,on='movieId')
df.head()_____no_output_____combine_movie_rating = df.dropna(axis = 0, subset = ['title'])
# combine_movie_rating.shape
movie_ratingCount = (combine_movie_rating.
groupby(by = ['title'])['rating'].
count().
reset_index().
rename(columns = {'rating': 'totalRatingCount'})
[['title', 'totalRatingCount']]
)
movie_ratingCount.head()
_____no_output_____rating_with_totalRatingCount = combine_movie_rating.merge(movie_ratingCount, left_on = 'title', right_on = 'title', how = 'left')
rating_with_totalRatingCount.head()_____no_output_____pd.set_option('display.float_format', lambda x: '%.3f' % x)
print(movie_ratingCount['totalRatingCount'].describe())count 1664.000
mean 60.098
std 80.963
min 1.000
25% 7.000
50% 27.000
75% 80.250
max 584.000
Name: totalRatingCount, dtype: float64
popularity_threshold = 50
rating_popular_movie= rating_with_totalRatingCount.query('totalRatingCount >= @popularity_threshold')
rating_popular_movie.head()_____no_output_____rating_popular_movie.shape_____no_output_____## First lets create a Pivot matrix
movie_features_df=rating_popular_movie.pivot_table(index='title',columns='userId',values='rating').fillna(0)
movie_features_df.head()_____no_output_____from scipy.sparse import csr_matrix
movie_features_df_matrix = csr_matrix(movie_features_df.values)
# print(movie_features_df_matrix)
from sklearn.neighbors import NearestNeighbors
model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute')
model_knn.fit(movie_features_df_matrix)_____no_output_____movie_features_df.shape_____no_output_____# query_index = np.random.choice(movie_features_df.shape[0])
# print(query_index)
query_index = movie_features_df.index.get_loc('Star Wars (1977)')
distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6)
_____no_output_____movie_features_df.head()_____no_output_____distances_____no_output_____indices_____no_output_____for i in range(0, len(distances.flatten())):
if i == 0:
print('Recommendations for {0}:\n'.format(movie_features_df.index[query_index]))
else:
print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i]))Recommendations for Star Wars (1977):
1: Return of the Jedi (1983), with distance of 0.11648183086402542:
2: Raiders of the Lost Ark (1981), with distance of 0.2359429772070084:
3: Empire Strikes Back, The (1980), with distance of 0.24955008270687218:
4: Toy Story (1995), with distance of 0.26622322826178724:
5: Godfather, The (1972), with distance of 0.3034231233589749:
</code>
## Cosine Similarity

_____no_output_____
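As a small sketch (not from the original notebook), the cosine similarity pictured above can be computed for two rating vectors directly with NumPy; `NearestNeighbors` with `metric='cosine'` reports the corresponding distance, `1 - similarity`, which is why smaller distances in the output above mean more similar movies. The vectors below are hypothetical:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two rating vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# hypothetical rating vectors for two movies over the same five users
a = np.array([5.0, 3.0, 0.0, 4.0, 0.0])
b = np.array([4.0, 0.0, 0.0, 5.0, 1.0])

sim = cosine_similarity(a, b)
print(sim, 1 - sim)   # similarity and the corresponding cosine distance
```
_____no_output_____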
<code>
my_ratings = movie_features_df[0]
my_ratings = my_ratings.loc[my_ratings!=0]
my_ratings_____no_output_____simCandidates = pd.Series()
for i in range(0,len(my_ratings.index)):
print("Adding sims for ",my_ratings.index[i],"...")
query_index = movie_features_df.index.get_loc(my_ratings.index[i])
# print(query_index)
distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6)
distances = (1/(1+distances)) * my_ratings[i]
# print(distances)
sims = pd.Series(distances.flatten(),
name="ratings", index=movie_features_df.index[indices.flatten()])
# sims = distances.map(lambda x: (1/x)*myRatings[i])
print(sims)
simCandidates = simCandidates.append(sims)
print('\nsorting..\n')
simCandidates.sort_values(inplace=True,ascending=False)
print(simCandidates.head(20))Adding sims for Empire Strikes Back, The (1980) ...
title
Empire Strikes Back, The (1980) 5.000
Raiders of the Lost Ark (1981) 4.247
Indiana Jones and the Last Crusade (1989) 4.090
Back to the Future (1985) 4.011
Star Wars (1977) 4.001
Terminator, The (1984) 3.976
Name: ratings, dtype: float64
Adding sims for Gone with the Wind (1939) ...
title
Gone with the Wind (1939) 1.000
Wizard of Oz, The (1939) 0.746
Sound of Music, The (1965) 0.704
Casablanca (1942) 0.704
It's a Wonderful Life (1946) 0.702
Back to the Future (1985) 0.693
Name: ratings, dtype: float64
Adding sims for Star Wars (1977) ...
title
Star Wars (1977) 5.000
Return of the Jedi (1983) 4.478
Raiders of the Lost Ark (1981) 4.045
Empire Strikes Back, The (1980) 4.001
Toy Story (1995) 3.949
Godfather, The (1972) 3.836
Name: ratings, dtype: float64
sorting..
Empire Strikes Back, The (1980) 5.000
Star Wars (1977) 5.000
Return of the Jedi (1983) 4.478
Raiders of the Lost Ark (1981) 4.247
Indiana Jones and the Last Crusade (1989) 4.090
Raiders of the Lost Ark (1981) 4.045
Back to the Future (1985) 4.011
Empire Strikes Back, The (1980) 4.001
Star Wars (1977) 4.001
Terminator, The (1984) 3.976
Toy Story (1995) 3.949
Godfather, The (1972) 3.836
Gone with the Wind (1939) 1.000
Wizard of Oz, The (1939) 0.746
Sound of Music, The (1965) 0.704
Casablanca (1942) 0.704
It's a Wonderful Life (1946) 0.702
Back to the Future (1985) 0.693
dtype: float64
simCandidates = simCandidates.groupby(simCandidates.index).sum()
simCandidates.sort_values(inplace=True,ascending=False)
simCandidates.head(10)_____no_output_____filteredSims = simCandidates.drop(my_ratings.index)
filteredSims.head(10)_____no_output_____
</code>
This is the final recommendation of movies similar to the ones I rated highly earlier, such as `Empire Strikes Back, The (1980)`, `Gone with the Wind (1939)`, and `Star Wars (1977)`. _____no_output_____
|
{
"repository": "shubhamchouksey/Movie-Recommendation",
"path": "KNNRecommendation.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 315699,
"hexsha": "cb4b7f67a2e591a27f9e93cfff1a61d97bd8a0f5",
"max_line_length": 162724,
"avg_line_length": 218.1748445059,
"alphanum_fraction": 0.88224543
}
|
# Notebook from jfear/larval_gonad_ovary
Path: notebook/2018-08-14_get_zscores_for_sharvani.ipynb
# Zscores for Sharvani_____no_output_____Sharvani was looking at the initial run that I did, but she could not see some common genes in the biomarkers list. I want to put together the zscores show she can easily look there and see how the gene behaves._____no_output_____
<code>
import os
import sys
from pathlib import Path
from IPython.display import display, HTML, Markdown
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# Project level imports
from larval_gonad_ovary.notebook import Nb
from larval_gonad_ovary.plotting import make_figs
from larval_gonad_ovary.config import memory_____no_output_____# Setup notebook
nbconfig = Nb.setup_notebook()last updated: 2018-08-14
Git hash: eb7e3486aa1ed6cc3c23658afd54dacdb200f517
genes = pd.Series(nbconfig.fbgn2symbol, name='gene_symbol')_____no_output_____zscores = pd.read_parquet('../output/scrnaseq-wf/zscore_tpm.res.0.4.parquet')
zscores.index.name = 'FBgn'
dat = zscores.join(genes).set_index('gene_symbol', append=True)
dat.sort_index(level='gene_symbol', inplace=True, )
dat.to_csv('../output/2018-08-14_zscores.tsv', sep='\t')_____no_output_____raw = pd.read_parquet('../output/scrnaseq-wf/raw.res.0.4.parquet')
dat = raw.join(genes).set_index('gene_symbol', append=True)
dat.sort_index(level='gene_symbol', inplace=True, )
dat.to_csv('../output/2018-08-14_raw.tsv', sep='\t')_____no_output_____
</code>
|
{
"repository": "jfear/larval_gonad_ovary",
"path": "notebook/2018-08-14_get_zscores_for_sharvani.ipynb",
"matched_keywords": [
"biomarkers"
],
"stars": null,
"size": 35528,
"hexsha": "cb4b9e2d5120411e9092084d5f7e707e58365dbb",
"max_line_length": 217,
"avg_line_length": 31.9209344115,
"alphanum_fraction": 0.2806518802
}
|
# Notebook from alexis-thual/PySyft
Path: examples/tutorials/Part 4 - Federated Learning via Trusted Aggregator.ipynb
# Part 4: Federated Learning with Model Averaging
**Recap**: In Part 2 of this tutorial, we trained a model using a very simple version of Federated Learning. This required each data owner to trust the model owner to be able to see their gradients.
**Description:** In this tutorial, we'll show how to use the advanced aggregation tools from Part 3 to allow the weights to be aggregated by a trusted "secure worker" before the final resulting model is sent back to the model owner (us).
In this way, only the secure worker can see whose weights came from whom. We might be able to tell which parts of the model changed, but we do NOT know which worker (bob or alice) made which change, which creates a layer of privacy.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
- Jason Mancuso - Twitter: [@jvmancuso](https://twitter.com/jvmancuso)_____no_output_____
<code>
import torch
import syft as sy
import copy
hook = sy.TorchHook(torch)
from torch import nn
from syft import optim_____no_output_____
</code>
# Step 1: Create Data Owners
First, we're going to create two data owners (Bob and Alice) each with a small amount of data. We're also going to initialize a secure machine called "secure_worker". In practice this could be secure hardware (such as Intel's SGX) or simply a trusted intermediary. _____no_output_____
<code>
# create a couple workers
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
secure_worker = sy.VirtualWorker(hook, id="secure_worker")
bob.add_workers([alice, secure_worker])
alice.add_workers([bob, secure_worker])
secure_worker.add_workers([alice, bob])
# A Toy Dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True)
target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True)
# get pointers to training data on each worker by
# sending some training data to bob and alice
bobs_data = data[0:2].send(bob)
bobs_target = target[0:2].send(bob)
alices_data = data[2:].send(alice)
alices_target = target[2:].send(alice)_____no_output_____
</code>
# Step 2: Create Our Model
For this example, we're going to train with a simple Linear model. We can initialize it normally using PyTorch's nn.Linear constructor._____no_output_____
<code>
# Initialize A Toy Model
model = nn.Linear(2,1)_____no_output_____
</code>
# Step 3: Send a Copy of the Model to Alice and Bob
Next, we need to send a copy of the current model to Alice and Bob so that they can perform steps of learning on their own datasets._____no_output_____
<code>
bobs_model = model.copy().send(bob)
alices_model = model.copy().send(alice)
bobs_opt = optim.SGD(params=bobs_model.parameters(),lr=0.1)
alices_opt = optim.SGD(params=alices_model.parameters(),lr=0.1)_____no_output_____
</code>
# Step 4: Train Bob's and Alice's Models (in parallel)
As is conventional with Federated Learning via Secure Averaging, each data owner first trains their model for several iterations locally before the models are averaged together._____no_output_____
<code>
for i in range(10):
# Train Bob's Model
bobs_opt.zero_grad()
bobs_pred = bobs_model(bobs_data)
bobs_loss = ((bobs_pred - bobs_target)**2).sum()
bobs_loss.backward()
bobs_opt.step(bobs_data.shape[0])
bobs_loss = bobs_loss.get().data
# Train Alice's Model
alices_opt.zero_grad()
alices_pred = alices_model(alices_data)
alices_loss = ((alices_pred - alices_target)**2).sum()
alices_loss.backward()
alices_opt.step(alices_data.shape[0])
alices_loss = alices_loss.get().data
print("Bob:" + str(bobs_loss) + " Alice:" + str(alices_loss))Bob:tensor(0.4355) Alice:tensor(1.9072)
Bob:tensor(0.2525) Alice:tensor(0.5729)
Bob:tensor(0.1516) Alice:tensor(0.1775)
Bob:tensor(0.0956) Alice:tensor(0.0598)
Bob:tensor(0.0641) Alice:tensor(0.0244)
Bob:tensor(0.0460) Alice:tensor(0.0133)
Bob:tensor(0.0354) Alice:tensor(0.0096)
Bob:tensor(0.0288) Alice:tensor(0.0079)
Bob:tensor(0.0245) Alice:tensor(0.0070)
Bob:tensor(0.0215) Alice:tensor(0.0064)
</code>
# Step 5: Send Both Updated Models to a Secure Worker
Now that each data owner has a partially trained model, it's time to average them together in a secure way. We achieve this by instructing Alice and Bob to send their model to the secure (trusted) server.
Note that this use of our API means that each model is sent DIRECTLY to the secure_worker. We never see it._____no_output_____
<code>
alices_model.move(secure_worker)_____no_output_____bobs_model.move(secure_worker)_____no_output_____
</code>
# Step 6: Average the Models_____no_output_____Finally, the last step for this training epoch is to average Bob and Alice's trained models together and then use this to set the values for our global "model". _____no_output_____
<code>
model.weight.data.set_(((alices_model.weight.data + bobs_model.weight.data) / 2).get())
model.bias.data.set_(((alices_model.bias.data + bobs_model.bias.data) / 2).get())
""_____no_output_____
</code>
# Rinse and Repeat
And now we just need to iterate this multiple times!_____no_output_____
<code>
iterations = 10
worker_iters = 5
for a_iter in range(iterations):
bobs_model = model.copy().send(bob)
alices_model = model.copy().send(alice)
bobs_opt = optim.SGD(params=bobs_model.parameters(),lr=0.1)
alices_opt = optim.SGD(params=alices_model.parameters(),lr=0.1)
for wi in range(worker_iters):
# Train Bob's Model
bobs_opt.zero_grad()
bobs_pred = bobs_model(bobs_data)
bobs_loss = ((bobs_pred - bobs_target)**2).sum()
bobs_loss.backward()
bobs_opt.step(bobs_data.shape[0])
bobs_loss = bobs_loss.get().data
# Train Alice's Model
alices_opt.zero_grad()
alices_pred = alices_model(alices_data)
alices_loss = ((alices_pred - alices_target)**2).sum()
alices_loss.backward()
alices_opt.step(alices_data.shape[0])
alices_loss = alices_loss.get().data
alices_model.move(secure_worker)
bobs_model.move(secure_worker)
model.weight.data.set_(((alices_model.weight.data + bobs_model.weight.data) / 2).get())
model.bias.data.set_(((alices_model.bias.data + bobs_model.bias.data) / 2).get())
print("Bob:" + str(bobs_loss) + " Alice:" + str(alices_loss))Bob:tensor(0.0712) Alice:tensor(0.0141)
Bob:tensor(0.0684) Alice:tensor(0.0087)
Bob:tensor(0.0596) Alice:tensor(0.0056)
Bob:tensor(0.0506) Alice:tensor(0.0038)
Bob:tensor(0.0425) Alice:tensor(0.0027)
Bob:tensor(0.0356) Alice:tensor(0.0019)
Bob:tensor(0.0298) Alice:tensor(0.0015)
Bob:tensor(0.0248) Alice:tensor(0.0011)
Bob:tensor(0.0206) Alice:tensor(0.0009)
Bob:tensor(0.0171) Alice:tensor(0.0008)
</code>
Lastly, we want to make sure that our resulting model learned correctly, so we'll evaluate it on a test dataset. In this toy problem, we'll use the original data, but in practice we'll want to use new data to understand how well the model generalizes to unseen examples._____no_output_____
<code>
preds = model(data)
loss = ((preds - target) ** 2).sum()_____no_output_____print(preds)
print(target)
print(loss.data)tensor([[0.2274],
[0.1693],
[0.8352],
[0.7771]], grad_fn=<AddmmBackward>)
tensor([[0.],
[0.],
[1.],
[1.]], requires_grad=True)
tensor(0.1572)
</code>
In this toy example, the averaged model underfits relative to how a plaintext model trained locally would behave; however, we were able to train it without exposing each worker's training data. We were also able to aggregate the updated models from each worker on a trusted aggregator to prevent data leakage to the model owner.
In a future tutorial, we'll aim to do our trusted aggregation directly with the gradients, so that we can update the model with better gradient estimates and arrive at a stronger model._____no_output_____# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)_____no_output_____
|
{
"repository": "alexis-thual/PySyft",
"path": "examples/tutorials/Part 4 - Federated Learning via Trusted Aggregator.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 14157,
"hexsha": "cb4c3af53f889c4d1a39dd1bd64f1e18bb94c69c",
"max_line_length": 453,
"avg_line_length": 33.1545667447,
"alphanum_fraction": 0.5869887688
}
|
# Notebook from ekvall93/kthLife
Path: python/Post-processing the final result for the presentation..ipynb
<code>
from utils import *
import gensim
from sklearn.mixture import BayesianGaussianMixture
import json_____no_output_____df = pd.read_csv("assets/finalproduct/finalproductDf")
df.drop(["Unnamed: 0"],axis=1, inplace=True)
id_to_auth = pickle_o.load("assets/dictionaries/id_to_all_auths_2004")
auth_to_id = pickle_o.load("assets/dictionaries/auths_to_all_id_2004")_____no_output_____Name = list(df.Author.values)
kth_id = [auth_to_id[a] for a in Name]
df_only_auth = pd.DataFrame(data={"Name":Name, "ID":kth_id})
df_only_auth.to_csv("assets/finalproduct/onlyAuthors.csv")_____no_output_____df_abs = pd.read_csv("assets/dataframes/all_authors_df_2004")
df_abs.drop(["Unnamed: 0"],axis=1, inplace=True)_____no_output_____df_abs.head()_____no_output_____df.head()_____no_output_____list_of_dict = list()
for a, d in zip(df.Author.values, df.Doc_id.values):
new_d = dict()
new_d["name"] = str(a)
abstracts = list()
all_d = d.split(":")
new_d["docid"] = all_d
for ad in all_d:
abst = df_abs[df_abs.Doc_id == int(ad)].Abstracts.values[0]
abstracts.append(abst)
new_d["abstracts"] = abstracts
list_of_dict.append(new_d)
_____no_output_____list_of_dict[0]_____no_output_____with open('assets/finalproduct/auth_to_abs.json', 'w') as fp:
json.dump(list_of_dict, fp)_____no_output_____y = json.dumps(auth_to_abs)
# the result is a Python dictionary:
print(y){"Lundqvist, Mikael": ["Spontaneous oscillations measured by local field potentials, electroencephalograms and magnetoencephalograms exhibit a pronounced peak in the alpha band (8-12 Hz) in humans and primates. Both instantaneous power and phase of these ongoing oscillations have commonly been observed to correlate with psychophysical performance in stimulus detection tasks. We use a novel model-based approach to study the effect of prestimulus oscillations on detection rate. A previously developed biophysically detailed attractor network exhibits spontaneous oscillations in the alpha range before a stimulus is presented and transiently switches to gamma-like oscillations on successful detection. We demonstrate that both phase and power of the ongoing alpha oscillations modulate the probability of such state transitions. The power can either positively or negatively correlate with the detection rate, in agreement with experimental findings, depending on the underlying neural mechanism modulating the oscillatory power. Furthermore, the spatially distributed alpha oscillators of the network can be synchronized by global nonspecific weak excitatory signals. These synchronization events lead to transient increases in alpha-band power and render the network sensitive to the exact timing of target stimuli, making the alpha cycle function as a temporal mask in line with recent experimental observations. Our results are relevant to several studies that attribute a modulatory role to prestimulus alpha dynamics.", "Attractor neural networks are thought to underlie working memory functions in the cerebral cortex. Several such models have been proposed that successfully reproduce firing properties of neurons recorded from monkeys performing working memory tasks. However, the regular temporal structure of spike trains in these models is often incompatible with experimental data. Here, we show that the in vivo observations of bistable activity with irregular firing at the single cell level can be achieved in a large-scale network model with a modular structure in terms of several connected hypercolumns. Despite high irregularity of individual spike trains, the model shows population oscillations in the beta and gamma band in ground and active states, respectively. Irregular firing typically emerges in a high-conductance regime of balanced excitation and inhibition. Population oscillations can produce such a regime, but in previous models only a non-coding ground state was oscillatory. Due to the modular structure of our network, the oscillatory and irregular firing was maintained also in the active state without fine-tuning. Our model provides a novel mechanistic view of how irregular firing emerges in cortical populations as they go from beta to gamma oscillations during memory retrieval.", "Changes in oscillatory brain activity are strongly correlated with performance in cognitive tasks and modulations in specific frequency bands are associated with working memory tasks. Mesoscale network models allow the study of oscillations as an emergent feature of neuronal activity. Here we extend a previously developed attractor network model, shown to faithfully reproduce single-cell activity during retention and memory recall, with synaptic augmentation. This enables the network to function as a multi-item working memory by cyclic reactivation of up to six items. 
The reactivation happens at theta frequency, consistently with recent experimental findings, with increasing theta power for each additional item loaded in the network's memory. Furthermore, each memory reactivation is associated with gamma oscillations. Thus, single-cell spike trains as well as gamma oscillations in local groups are nested in the theta cycle. The network also exhibits an idling rhythm in the alpha/beta band associated with a noncoding global attractor. Put together, the resulting effect is increasing theta and gamma power and decreasing alpha/beta power with growing working memory load, rendering the network mechanisms involved a plausible explanation for this often reported behavior."], "Herman, Pawel Andrzej": ["Spontaneous oscillations measured by local field potentials, electroencephalograms and magnetoencephalograms exhibit a pronounced peak in the alpha band (8-12 Hz) in humans and primates. Both instantaneous power and phase of these ongoing oscillations have commonly been observed to correlate with psychophysical performance in stimulus detection tasks. We use a novel model-based approach to study the effect of prestimulus oscillations on detection rate. A previously developed biophysically detailed attractor network exhibits spontaneous oscillations in the alpha range before a stimulus is presented and transiently switches to gamma-like oscillations on successful detection. We demonstrate that both phase and power of the ongoing alpha oscillations modulate the probability of such state transitions. The power can either positively or negatively correlate with the detection rate, in agreement with experimental findings, depending on the underlying neural mechanism modulating the oscillatory power. Furthermore, the spatially distributed alpha oscillators of the network can be synchronized by global nonspecific weak excitatory signals. These synchronization events lead to transient increases in alpha-band power and render the network sensitive to the exact timing of target stimuli, making the alpha cycle function as a temporal mask in line with recent experimental observations. Our results are relevant to several studies that attribute a modulatory role to prestimulus alpha dynamics.", "Quantifying neural and non-neural contributions to increased joint resistance in spasticity is essential for a better understanding of its pathophysiological mechanisms and evaluating different intervention strategies. However, direct measurement of spasticity-related manifestations, e.g., motoneuron and biophysical properties in humans, is extremely challenging. In this vein, we developed a forward neuromusculoskeletal model that accounts for dynamics of muscle spindles, motoneuron pools, muscle activation and musculotendon of wrist flexors and relies on the joint angle and resistant torque as the only input measurement variables. By modeling the stretch reflex pathway, neural and non-neural related properties of the spastic wrist flexors were estimated during the wrist extension test. Joint angle and resistant torque were collected from 17 persons with chronic stroke and healthy controls using NeuroFlexor, a motorized force measurement device during the passive wrist extension test. The model was optimized by tuning the passive and stretch reflex-related parameters to fit the measured torque in each participant. We found that persons with moderate and severe spasticity had significantly higher stiffness than controls. 
Among subgroups of stroke survivors, the increased neural component was mainly due to a lower muscle spindle rate at 50% of the motoneuron recruitment. The motoneuron pool threshold was highly correlated to the motoneuron pool gain in all subgroups. The model can describe the overall resistant behavior of the wrist joint during the test. Compared to controls, increased resistance was predominantly due to higher elasticity and neural components. We concluded that in combination with the NeuroFlexor measurement, the proposed neuromusculoskeletal model and optimization scheme served as suitable tools for investigating potential parameter changes along the stretch-reflex pathway in persons with spasticity.", "Quantifying neural and non-neural contributions to the joint resistance in spasticity is essential for a better evaluation of different intervention strategies such as botulinum toxin A (BoTN-A). However, direct measurement of muscle mechanical properties and spasticity-related parameters in humans is extremely challenging. The aim of this study was to use a previously developed musculoskeletal model and optimization scheme to evaluate the changes of neural and non-neural related properties of the spastic wrist flexors during passive wrist extension after BoTN-A injection. Data of joint angle and resistant torque were collected from 21 chronic stroke patients before, and 4 and 12 weeks post BoTN-A injection using NeuroFlexor, which is a motorized force measurement device to passively stretch wrist flexors. The model was optimized by tuning the passive and stretch-related parameters to fit the measured torque in each participant. It was found that stroke survivors exhibited decreased neural components at 4 weeks post BoNT-A injection, which returned to baseline levels after 12 weeks. The decreased neural component was mainly due to the increased motoneuron pool threshold, which is interpreted as a net excitatory and inhibitory inputs to the motoneuron pool. Though the linear stiffness and viscosity properties of wrist flexors were similar before and after treatment, increased exponential stiffness was observed over time which may indicate a decreased range of motion of the wrist joint. Using a combination of modeling and experimental measurement, valuable insights into the treatment responses, i.e., transmission of motoneurons, are provided by investigating potential parameter changes along the stretch reflex pathway in persons with chronic stroke.", "The olfactory sense is a particularly challenging domain for cognitive science investigations of perception, memory, and language. Although many studies show that odors often are difficult to describe verbally, little is known about the associations between olfactory percepts and the words that describe them. Quantitative models of how odor experiences are described in natural language are therefore needed to understand how odors are perceived and communicated. In this study, we develop a computational method to characterize the olfaction-related semantic content of words in a large text corpus of internet sites in English. We introduce two new metrics: olfactory association index (OAI, how strongly a word is associated with olfaction) and olfactory specificity index (OSI, how specific a word is in its description of odors). We validate the OAI and OSI metrics using psychophysical datasets by showing that terms with high OAI have high ratings of perceived olfactory association and are used to describe highly familiar odors. 
In contrast, terms with high OSI have high inter-individual consistency in how they are applied to odors. Finally, we analyze Dravnieks's (1985) dataset of odor ratings in terms of OAI and OSI. This analysis reveals that terms that are used broadly (applied often but with moderate ratings) tend to be olfaction-unrelated and abstract (e.g., \u201cheavy-\u009d or \u201clight-\u009d; low OAI and low OSI) while descriptors that are used selectively (applied seldom but with high ratings) tend to be olfaction-related (e.g., \u201cvanilla-\u009d or \u201clicorice-\u009d; high OAI). Thus, OAI and OSI provide behaviorally meaningful information about olfactory language. These statistical tools are useful for future studies of olfactory perception and cognition, and might help integrate research on odor perception, neuroimaging, and corpus-based linguistic models of semantic organization.", "Working memory is thought to result from sustained neuron spiking. However, computational models suggest complex dynamics with discrete oscillatory bursts. We analyzed local field potential (LFP) and spiking from the prefrontal cortex (PFC) of monkeys performing a working memory task. There were brief bursts of narrow-band gamma oscillations (45-100 Hz), varied in time and frequency, accompanying encoding and re-activation of sensory information. They appeared at a minority of recording sites associated with spiking reflecting the to-be-remembered items. Beta oscillations (20-35 Hz) also occurred in brief, variable bursts but reflected a default state interrupted by encoding and decoding. Only activity of neurons reflecting encoding/decoding correlated with changes in gamma burst rate. Thus, gamma bursts could gate access to, and prevent sensory interference with, working memory. This supports the hypothesis that working memory is manifested by discrete oscillatory dynamics and spiking, not sustained activity.", "One of the urgent challenges in the automated analysis and interpretation of electrical brain activity is the effective handling of uncertainties associated with the complexity and variability of brain dynamics, reflected in the nonstationary nature of brain signals such as electroencephalogram (EEG). This poses a severe problem for existing approaches to the classification task within brain-computer interface (BCI) systems. Recently emerged type-2 fuzzy logic (T2FL) methodology has shown a remarkable potential in dealing with uncertain information given limited insight into the nature of the data-generating mechanism. The objective of this work is, thus, to examine the applicability of the T2FL approach to the problem of EEG pattern recognition. In particular, the focus is two-fold: 1) the design methodology for the interval T2FL system (IT2FLS) that can robustly deal with inter-session as well as within-session manifestations of nonstationary spectral EEG correlates of motor imagery, and 2) the comprehensive examination of the proposed fuzzy classifier in both off-line and on-line EEG classification case studies. The on-line evaluation of the IT2FLS-controlled real-time neurofeedback over multiple recording sessions holds special importance for EEG-based BCI technology. 
In addition, a retrospective comparative analysis accounting for other popular BCI classifiers such as linear discriminant analysis, kernel Fisher discriminant, and support vector machines as well as a conventional type-1 FLS, simulated off-line on the recorded EEGs, has demonstrated the enhanced potential of the proposed IT2FLS approach to robustly handle uncertainty effects in BCI classification.", "Working memory (WM) activity is not as stationary or sustained as previously thought. There are brief bursts of gamma (similar to 50-120 Hz) and beta (similar to 20-35 Hz) oscillations, the former linked to stimulus information in spiking. We examined these dynamics in relation to readout and control mechanisms of WM. Monkeys held sequences of two objects in WM to match to subsequent sequences. Changes in beta and gamma bursting suggested their distinct roles. In anticipation of having to use an object for the match decision, there was an increase in gamma and spiking information about that object and reduced beta bursting. This readout signal was only seen before relevant test objects, and was related to premotor activity. When the objects were no longer needed, beta increased and gamma decreased together with object spiking information. Deviations from these dynamics predicted behavioral errors. Thus, beta could regulate gamma and the information in WM.", "Persistent spiking has been thought to underlie working memory (WM). However, virtually all of the evidence for this comes from studies that averaged spiking across time and across trials, which masks the details. On single trials, activity often occurs in sparse transient bursts. This has important computational and functional advantages. In addition, examination of more complex tasks reveals neural coding in WM is dynamic over the course of a trial. All this suggests that spiking is important for WM, but that its role is more complex than simply persistent spiking."]}
author = df.Author.values
list_of_author= list()
for i, a in enumerate(author):
a_dict = dict()
a_dict["id"]= i
a_dict["name"]= a
list_of_author.append(a_dict)
_____no_output_____with open('assets/finalproduct/list_of_author.json', 'w') as fp:
json.dump(list_of_author, fp)_____no_output_____nan_ix = [isinstance(i,float) for i in df.Department.values]
df.loc[nan_ix, "Department"] = "NaN"  # replace missing departments (read back as float NaN) with a placeholder string
department = list(set(df.Department.values))_____no_output_____department = [make_name_noAscii(d) for d in department]_____no_output_____department_to_auth= list()
for i, d in enumerate(department):
author = list(df[df.Department == d].Author.values)
a_dict = dict()
a_dict["department"]= d
a_dict["name"]= author
department_to_auth.append(a_dict)_____no_output_____with open('assets/finalproduct/department_to_auth.json', 'w') as fp:
json.dump(department_to_auth, fp)_____no_output_____dep_list = list()
for i, d in enumerate(department):
a_dict = dict()
a_dict["id"]= i
a_dict["department"]= d
dep_list.append(a_dict)_____no_output_____with open('assets/finalproduct/departments.json', 'w') as fp:
json.dump(dep_list, fp)_____no_output_____kth_school_s = pd.Series(np.array(df.Department)).value_counts().sort_values(ascending=False)
plt.figure(figsize=(35,23))
ax = sns.barplot(kth_school_s.index,kth_school_s.values)
ax.set_xticklabels(ax.get_xticklabels(), rotation=50, ha="right",fontsize=30)
ax.set_title("KTH authors distribution(departments)",fontsize=50)
ax.set_ylabel("Counts",fontsize=30)
sns.set(font_scale=3)
plt.gcf().subplots_adjust(bottom=0.40)
#plt.show()
plt.savefig("assets/figures/articleDepartmentFinal")_____no_output_____len(kth_school_s.index)_____no_output_____39 - 5_____no_output_____kth_school_s.values.sum()_____no_output_____1744 - 884_____no_output_____
</code>
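A minimal follow-up sketch, not a cell from the original notebook: reload the exported JSON files and check that every author listed under a department also appears in the flat author list. The paths and key names mirror the `assets/finalproduct/` cells above.

<code>
import json

# Hypothetical sanity check on the exported files; paths and keys follow the cells above.
with open('assets/finalproduct/list_of_author.json') as fp:
    authors = json.load(fp)
with open('assets/finalproduct/department_to_auth.json') as fp:
    department_to_auth = json.load(fp)

known_names = {a["name"] for a in authors}
for entry in department_to_auth:
    missing = [n for n in entry["name"] if n not in known_names]
    if missing:
        print(entry["department"], "has authors not in list_of_author.json:", missing)
</code>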
|
{
"repository": "ekvall93/kthLife",
"path": "python/Post-processing the final result for the presentation..ipynb",
"matched_keywords": [
"single-cell"
],
"stars": null,
"size": 872362,
"hexsha": "cb4ce17bf7c58f04ffc03b56fd501f75d3fc3384",
"max_line_length": 836160,
"avg_line_length": 1504.0724137931,
"alphanum_fraction": 0.9549854304
}
|
# Notebook from PhilHarnish/forge
Path: src/puzzle/examples/msph/2018/the major.ipynb
<code>
tiles = """
ALS
ANK
APP
ATS
BUR
CAR
CDR
CIE
DEV
DIN
EES
ELS
ERS
FIE
FLY
FMA
GHI
GHM
HLD
HON
HOU
ILS
ING
ING
ING
IRR
KIY
LAN
LAS
LEM
LEY
LLC
LYA
MID
NCH
NDS
OCK
OND
PUD
RED
RIC
SBA
SCR
SOX
SPR
SQU
TRI
TYD
UST
VAL
""".lower().split()_____no_output_____import forge
from data import warehouse
from puzzle.puzzlepedia import prod_config
prod_config.init()
trie = warehouse.get('/words/unigram/trie')_____no_output_____import re
from data.seek_sets import chain_seek_set_____no_output_____def walk(seek_set, acc, targets, pos=0):
  # Recursively build phrases: one word per target length in `targets`.
  if pos >= len(targets):
    yield ' '.join(acc)  # every slot filled; emit the phrase
    return
  if targets:
    target = targets[pos]
    seek_set.set_length(target)  # constrain the next word to `target` letters
    for result, weight in trie.walk(seek_set, exact_match=False):
      if weight < 5e4:
        break  # stop once word weights fall below the frequency cutoff
      acc.append(result)
      # recurse on whatever remains of the seek set after consuming `result`
      yield from walk(seek_set[result:], acc, targets, pos+1)
      acc.pop()
def process(tiles, targets):
  # Chain the tiles into a single pool sized for the total of the target lengths.
  seek_set = chain_seek_set.ChainSeekSet(tiles, sum(targets))
  for result in walk(seek_set, [], targets):
    print(result)
def parse(s):
  # Pull the integers out of a string, ignoring stray punctuation.
  parts = s.split(' ')
  result = []
  for p in parts:
    p = p.strip('’,;.‘^!-*')
    if p:
      result.append(int(p))
  return result_____no_output_____digits = parse("9")
print(digits)
process(tiles, digits)[9]
cardinals
squirrels
springing
scrapping
scrappers
springers
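# Illustration only (an added sketch, not from the original notebook): the same
# recursive pattern as walk() above, but over a plain word list and a letter
# pool instead of the forge trie and ChainSeekSet, so the control flow is
# easier to follow. The word list and letters below are made-up examples.
def walk_plain(letters, words, targets, acc=None, pos=0):
  acc = acc if acc is not None else []
  if pos >= len(targets):
    yield ' '.join(acc)
    return
  for word in words:
    if len(word) != targets[pos]:
      continue  # wrong length for this slot
    remaining = list(letters)
    try:
      for ch in word:
        remaining.remove(ch)  # consume letters from the pool
    except ValueError:
      continue  # word needs a letter the pool no longer has
    acc.append(word)
    yield from walk_plain(remaining, words, targets, acc, pos + 1)
    acc.pop()

list(walk_plain(list('ratsstar'), ['rats', 'star', 'tsar'], [4, 4]))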
given = """
AK AKE ARH AYI BE DA DO EA EI ES ETA ETH FUS GR HEW
HME IN LES LI MOL NB NEO NGO NIN OLE PA PAN
PRA RC RIN RMY RNO SED STA TAR TYP USO UYT WIM WIT
""".lower().split()_____no_output_____process(given, [11, 14])a king
a kind
a kinda
a rhino
a klingon
a keane
a rhesus
a rhine
a kline
a keine
a khmer
a kelis
a kedar
a rhein
a kling
a kepada
a kernow
in a
in with
in be
in us
in do
in he
in list
in data
in best
in great
in line
in type
in less
in east
in star
in past
in dog
in bed
in ring
in bear
in eat
in nine
in dot
in pan
in gray
in earn
in types
in dose
in bet
in dad
in pat
in ear
in ease
in lie
in pad
in dam
in bee
in lip
in leslie
in pasta
in lid
in pale
in lining
in pant
in grease
in espanol
in beg
in staring
in daring
in starch
in stalin
in benin
in espana
in ealing
in panty
in panda
in prada
in bering
in lesben
in rinse
in greased
in mollie
in witty
in beaker
in rinsed
in pandas
in bestality
in panning
in lipase
in pastas
in earch
in dorint
in paring
in liberi
in lista
in typeset
in doesn
in stang
in stale
in lingo
in esearch
in fuses
in beeing
in libel
in bening
in darin
in belies
in greta
in eased
in dales
in seddon
in prado
in molar
in moles
in does
in eidos
in sedaka
in beaked
in molest
in praline
in stata
in estar
in darcs
in graying
in bestar
in panes
in dahmer
in listas
in lidar
in liber
in dopant
in witless
in pango
in padang
in earing
in espada
in lesbe
in dangos
in listado
in mollis
in paling
in bedale
in pandan
in fussed
in listar
in lipari
in doling
in palin
in parco
in typeid
in parche
in nines
in done
with me
with meet
with medal
with metart
with merino
with medalist
with ewing
with melia
with meakin
with metar
with merch
with merci
with meine
with menino
with menina
with melita
with meines
i no
i net
i near
i naked
i nest
i neat
i nearing
i neale
i nline
be a
be in
be with
be i
be us
be do
be he
be list
be data
be great
be line
be type
be less
be east
be star
be past
be dog
be ring
be pain
be eat
be nine
be dot
be paint
be pan
be gray
be earn
be doing
be types
be stainless
be dose
be dad
be pat
be ear
be ease
be lie
be pad
be dam
be inline
be lip
be typing
be leslie
be pasta
be lid
be pale
be lining
be staind
be pant
be grease
be espanol
be staring
be inning
be grin
be stains
be daring
be starch
be stalin
be espana
be ealing
be painless
be panty
be stain
be panda
be molina
be moline
be pains
be grind
be ingress
be prada
be rinse
be greased
be mollie
be dainty
be witty
be fusing
be rinsed
be pandas
be panning
be lipase
be staines
be pastas
be earch
be dorint
be paring
be panini
be ingres
be infuse
be lista
be typeset
be doesn
be paine
be stang
be stale
be lingo
be esearch
be fuses
be instal
be inlining
be darin
be greta
be inest
be eased
be indole
be molino
be dales
be seddon
be prado
be molar
be moles
be does
be instar
be eidos
be sedaka
be molest
be praline
be dainese
be stata
be estar
be darcs
be graying
be panes
be dahmer
be grins
be infuses
be listas
be lidar
be dopant
be witless
be pango
be padang
be earing
be espada
be insta
be dangos
be listado
be mollis
be paling
be pandan
be indoles
be fussed
be tarina
be listar
be lipari
be doling
be palin
be parco
be typeid
be daines
be parche
be nines
be done
us of
us on
us or
us one
us oh
us oak
us ongoing
us oakdale
us oprah
us obese
us orinda
us oakes
us oline
do a
do in
do with
do i
do be
do us
do he
do list
do data
do best
do great
do line
do type
do being
do less
do east
do star
do past
do bed
do ring
do pain
do bear
do eat
do nine
do paint
do pan
do gray
do earn
do types
do stainless
do bet
do pat
do ear
do ease
do lie
do pad
do dam
do bee
do inline
do lip
do typing
do leslie
do pasta
do lid
do pale
do lining
do staind
do pant
do grease
do espanol
do beg
do staring
do inning
do grin
do stains
do daring
do starch
do stalin
do benin
do espana
do ealing
do painless
do panty
do stain
do panda
do molina
do moline
do pains
do grind
do ingress
do prada
do bering
do lesben
do rinse
do greased
do mollie
do dainty
do witty
do fusing
do beaker
do rinsed
do pandas
do bestality
do panning
do lipase
do staines
do pastas
do earch
do paring
do panini
do liberi
do ingres
do infuse
do lista
do typeset
do paine
do stang
do stale
do lingo
do esearch
do fuses
do beeing
do instal
do libel
do bening
do inlining
do darin
do belies
do greta
do inest
do eased
do molino
do dales
do molar
do moles
do instar
do sedaka
do beaked
do molest
do praline
do dainese
do stata
do estar
do darcs
do graying
do bestar
do panes
do dahmer
do grins
do infuses
do listas
do lidar
do liber
do witless
do pango
do padang
do earing
do espada
do lesbe
do insta
do dangos
do mollis
do paling
do bedale
do pandan
do fussed
do tarina
do listar
do lipari
do palin
do parco
do typeid
do daines
do parche
do nines
he we
he way
he west
he war
he window
he win
he wine
he wind
he winning
he wear
he wet
he wing
he wake
he wearing
he waking
he wines
he warhol
he weaning
he warhead
he weariness
he wakes
he winstar
he windom
he winless
he weiber
he wearch
he winline
he westar
he wearin
he weibel
he wakeling
he wakelin
he waker
list a
list at
list as
list am
list ad
list army
list arm
list ah
list adobe
list atari
list ahmed
list aarhus
list adoring
list alesse
list adorno
list arcing
list adorn
list aearch
list adamo
list agreing
list ahmet
list arche
list alesina
data re
data read
data research
data real
data role
data ring
data rest
data reality
data rear
data retain
data realise
data roles
data realised
data reset
data retains
data resins
data resin
data rearing
data realist
data rinse
data reseal
data rinsed
data raking
data reining
data rakesh
data reales
data reise
data rakes
data reseau
data resear
data researc
data reine
data raked
data reale
best a
best at
best as
best am
best ad
best army
best arm
best ah
best atari
best ahmed
best aarhus
best adoring
best alesse
best adorno
best adaline
best arcing
best adorn
best aearch
best aline
best adamo
best agreing
best ahmet
best arche
best alesina
great a
great art
great arm
great aretha
great arpanet
great arles
great arline
line of
line on
line or
line oh
line oak
line ongoing
line oakdale
line oprah
line obese
line orinda
line orcinus
line oakes
type the
type a
type in
type i
type it
type at
type as
type if
type search
type their
type so
type am
type these
type she
type set
type say
type star
type thing
type ad
type sea
type thus
type ie
type army
type saying
type tag
type seat
type arm
type sin
type seal
type thin
type stars
type ah
type sole
type spain
type slip
type sing
type starring
type tale
type tap
type swim
type adobe
type sake
type tales
type staring
type spanning
type sealing
type stare
type sparc
type theta
type atari
type thinning
type starr
type sprang
type tango
type ahmed
type aarhus
type stardom
type starling
type sinus
type sling
type spans
type sparing
type soledad
type searing
type sparco
type seabed
type thine
type sinead
type sprains
type seine
type seibel
type sprain
type adoring
type soleus
type slingo
type alesse
type soles
type adorno
type irina
type sinning
type tapas
type adaline
type seadoo
type arcing
type tatarstan
type taliesin
type sayin
type spandau
type sinless
type adorn
type sparcs
type theist
type sealine
type searcg
type swims
type shewing
type searcn
type searcb
type sdarch
type searcu
type taint
type searct
type searcm
type starline
type taber
type talib
type tatars
type tatar
type idabel
type tarina
type spaeth
type tangos
type alist
type aline
type swiming
type seale
type searc
type seastar
type adamo
type astar
type ingot
type talese
type agreing
type sakes
type theanine
type talia
type ahmet
type thinline
type arche
type spanne
type sakar
type alesina
being re
being read
being research
being real
being role
being rest
being reality
being rear
being restart
being realise
being roles
being realised
being reset
being realist
being reseal
being rakesh
being reales
being reise
being rakes
being realidad
being reseau
being resear
being researc
being roleta
being reine
being raked
being reale
less tag
less tap
less tango
less tapas
less edina
less taliesin
less taint
less taber
less talib
less tatars
less tatar
less tarina
less tangos
less talia
less tainty
east a
east at
east as
east am
east ad
east army
east arm
east ah
east adobe
east atari
east ahmed
east aarhus
east adoring
east alesse
east adorno
east adaline
east arcing
east adorn
east aline
east adamo
east agreing
east ahmet
east arche
east alesina
star in
star i
star not
star my
star no
star now
star car
star none
star nor
star clip
star cake
star nose
star ceiling
star inline
star norm
star cease
star ceased
star nod
star inning
star chew
star chewing
star cakes
star cling
star nobel
star ingress
star ingres
star infuse
star norco
star cesar
star cline
star coles
star inlining
star chews
star cdata
star inest
star indole
star myles
star nosed
star infusing
star nolita
star nodal
star coleus
star myrinet
star infuses
star noline
star indain
star noakes
star mydata
star ctype
star indoles
star cinese
past a
past at
past as
past am
past ad
past army
past arm
past ah
past adobe
past atari
past ahmed
past aarhus
past adoring
past alesse
past adorno
past adaline
past arcing
past adorn
past aearch
past aline
past adamo
past agreing
past ahmet
past arche
past alesina
dog re
dog read
dog research
dog real
dog role
dog rest
dog reality
dog rear
dog retain
dog restart
dog realise
dog roles
dog realised
dog reset
dog retains
dog resins
dog resin
dog realist
dog rinse
dog reseal
dog rinsed
dog rakesh
dog reales
dog reise
dog rakes
dog reseau
dog resear
dog researc
dog roleta
dog reine
dog raked
dog reale
bed of
bed a
bed on
bed or
bed at
bed as
bed one
bed am
bed oh
bed ad
bed army
bed arm
bed oak
bed ongoing
bed ah
bed atari
bed oakdale
bed ahmed
bed oprah
bed aarhus
bed orinda
bed orcinus
bed adoring
bed alesse
bed adorno
bed arcing
bed adorn
bed oakes
bed aearch
bed alist
bed aline
bed astar
bed agreing
bed ahmet
bed arche
bed oline
bed alesina
ring re
ring read
ring research
ring real
ring role
ring rest
ring reality
ring rear
ring retain
ring restart
ring realise
ring roles
ring realised
ring reset
ring retains
ring resins
ring resin
ring realist
ring rinse
ring reseal
ring rinsed
ring rakesh
ring reales
ring reise
ring rakes
ring realidad
ring reseau
ring resear
ring researc
ring roleta
ring reine
ring raked
ring reale
pain a
pain with
pain be
pain us
pain do
pain he
pain list
pain data
pain best
pain great
pain line
pain type
pain less
pain east
pain star
pain dog
pain bed
pain ring
pain bear
pain eat
pain nine
pain dot
pain pan
pain gray
pain earn
pain types
pain dose
pain bet
pain dad
pain ear
pain ease
pain lie
pain dam
pain bee
pain lip
pain leslie
pain lid
pain lining
pain pant
pain grease
pain espanol
pain beg
pain staring
pain daring
pain starch
pain stalin
pain benin
pain espana
pain ealing
pain panty
pain panda
pain prada
pain bering
pain lesben
pain rinse
pain greased
pain mollie
pain witty
pain beaker
pain rinsed
pain pandas
pain bestality
pain panning
pain earch
pain dorint
pain liberi
pain lista
pain typeset
pain doesn
pain stang
pain stale
pain lingo
pain esearch
pain fuses
pain beeing
pain libel
pain bening
pain darin
pain belies
pain greta
pain eased
pain dales
pain seddon
pain prado
pain molar
pain moles
pain does
pain eidos
pain sedaka
pain beaked
pain molest
pain praline
pain stata
pain estar
pain darcs
pain graying
pain bestar
pain panes
pain dahmer
pain listas
pain lidar
pain liber
pain dopant
pain witless
pain earing
pain lesbe
pain dangos
pain listado
pain mollis
pain bedale
pain pandan
pain fussed
pain listar
pain doling
pain typeid
pain nines
pain done
bear he
bear head
bear hi
bear hear
bear heat
bear hearing
bear hole
bear ha
bear healing
bear hint
bear hay
bear heal
bear holes
bear hines
bear hesse
bear hearn
bear heist
bear heshe
bear heine
bear heilig
bear holed
bear heise
eat a
eat art
eat arm
eat aretha
eat arpanet
eat arles
eat arline
nine the
nine a
nine in
nine i
nine it
nine at
nine as
nine if
nine search
nine their
nine so
nine am
nine these
nine she
nine set
nine start
nine say
nine star
nine thing
nine ad
nine sea
nine thus
nine ie
nine army
nine saying
nine tag
nine seat
nine arm
nine sin
nine seal
nine thin
nine stars
nine ah
nine sole
nine spain
nine slip
nine sing
nine starring
nine tale
nine tap
nine swim
nine adobe
nine sake
nine tales
nine staring
nine sealing
nine stare
nine sparc
nine theta
nine atari
nine starr
nine sprang
nine tango
nine ahmed
nine aarhus
nine stardom
nine starling
nine sinus
nine sling
nine spans
nine sparing
nine soledad
nine searing
nine sparco
nine seabed
nine thine
nine sinead
nine sprains
nine seine
nine seibel
nine sprain
nine adoring
nine soleus
nine slingo
nine alesse
nine soles
nine adorno
nine irina
nine tapas
nine adaline
nine seadoo
nine arcing
nine tatarstan
nine taliesin
nine sayin
nine spandau
nine sinless
nine adorn
nine sparcs
nine theist
nine sealine
nine searcg
nine swims
nine shewing
nine searcn
nine searcb
nine sdarch
nine searcu
nine taint
nine searct
nine searcm
nine starline
nine taber
nine talib
nine tatars
nine tatar
nine idabel
nine tarina
nine spaeth
nine tangos
nine alist
nine aline
nine swiming
nine seale
nine searc
nine seastar
nine adamo
nine astar
nine ingot
nine talese
nine starlit
nine agreing
nine sakes
nine talia
nine tainty
nine ahmet
nine thinline
nine arche
nine spanne
nine sakar
nine alesina
dot a
dot area
dot art
dot areas
dot arm
dot aretha
dot arpanet
dot arles
dot arline
dot areal
paint a
paint area
paint art
paint areas
paint arm
paint aretha
paint arpanet
paint arles
paint arline
paint areal
pan a
pan in
pan with
pan i
pan be
pan us
pan do
pan he
pan list
pan go
pan buy
pan data
pan best
pan great
pan line
pan type
pan being
pan less
pan east
pan star
pan god
pan past
pan bay
pan bar
pan dog
pan bed
pan bring
pan ring
pan pain
pan bus
pan going
pan bear
pan eat
pan beat
pan nine
pan dot
pan paint
pan pan
pan gray
pan earn
pan doing
pan types
pan stainless
pan dose
pan bet
pan dad
pan pat
pan ear
pan beast
pan ease
pan lie
pan pad
pan beam
pan bearing
pan dam
pan bean
pan bee
pan inline
pan lip
pan bind
pan typing
pan leslie
pan blessed
pan pasta
pan lid
pan baking
pan pale
pan bless
pan beastality
pan lining
pan staind
pan bake
pan baker
pan grease
pan beg
pan staring
pan bethesda
pan inning
pan grin
pan stains
pan daring
pan starch
pan stalin
pan benin
pan baked
pan ealing
pan painless
pan stain
pan goethe
pan molina
pan moline
pan pains
pan grind
pan ingress
pan prada
pan bering
pan lesben
pan rinse
pan greased
pan mollie
pan dainty
pan witty
pan fusing
pan beaker
pan rinsed
pan bling
pan bestality
pan lipase
pan staines
pan pastas
pan earch
pan dorint
pan paring
pan panini
pan liberi
pan ingres
pan infuse
pan lista
pan typeset
pan doesn
pan paine
pan stang
pan binning
pan stale
pan lingo
pan esearch
pan fuses
pan beeing
pan instal
pan libel
pan bening
pan brine
pan inlining
pan darin
pan belies
pan greta
pan inest
pan eased
pan indole
pan goring
pan molino
pan dales
pan seddon
pan betaine
pan prado
pan molar
pan moles
pan does
pan infusing
pan beale
pan instar
pan eidos
pan sedaka
pan beaked
pan molest
pan praline
pan dainese
pan stata
pan estar
pan darcs
pan graying
pan bestar
pan dahmer
pan grins
pan infuses
pan listas
pan lidar
pan liber
pan witless
pan brining
pan pango
pan padang
pan beset
pan earing
pan espada
pan indain
pan lesbe
pan insta
pan dangos
pan listado
pan mollis
pan paling
pan betas
pan brinda
pan bedale
pan indoles
pan fussed
pan tarina
pan listar
pan lipari
pan bethea
pan boles
pan doling
pan goole
pan palin
pan beane
pan parco
pan brines
pan typeid
pan daines
pan baying
pan beilin
pan blingo
pan parche
pan beaty
pan nines
pan busoni
pan done
gray in
gray i
gray it
gray if
gray ie
gray irina
gray idabel
gray ingot
earn of
earn on
earn or
earn one
earn oh
earn oak
earn ongoing
earn oakdale
earn oprah
earn obese
earn orinda
earn orcinus
earn oakes
earn oline
doing re
doing read
doing research
doing real
doing role
doing rest
doing reality
doing rear
doing restart
doing realise
doing roles
doing realised
doing reset
doing realist
doing reseal
doing rakesh
doing reales
doing reise
doing rakes
doing reseau
doing resear
doing researc
doing roleta
doing reine
doing raked
doing reale
types a
types in
types with
types i
types be
types us
types do
types he
types list
types data
types best
types great
types line
types being
types less
types east
types star
types past
types dog
types bed
types ring
types pain
types bear
types eat
types nine
types dot
types paint
types pan
types gray
types earn
types doing
types stainless
types dose
types bet
types dad
types pat
types ear
types ease
types lie
types pad
types dam
types bee
types inline
types lip
types leslie
types pasta
types lid
types pale
types lining
types staind
types pant
types grease
types beg
types staring
types inning
types grin
types stains
types daring
types starch
types stalin
types benin
types ealing
types painless
types stain
types panda
types molina
types moline
types pains
types grind
types prada
types bering
types lesben
types rinse
types greased
types mollie
types fusing
types beaker
types rinsed
types pandas
types panning
types lipase
types pastas
types earch
types dorint
types paring
types panini
types liberi
types infuse
types lista
types paine
types stang
types stale
types lingo
types beeing
types instal
types libel
types bening
types inlining
types darin
types greta
types eased
types indole
types molino
types dales
types seddon
types prado
types molar
types instar
types eidos
types sedaka
types beaked
types praline
types stata
types darcs
types graying
types bestar
types dahmer
types grins
types listas
types lidar
types liber
types dopant
types witless
types pango
types padang
types earing
types lesbe
types insta
types dangos
types listado
types mollis
types paling
types bedale
types pandan
types indoles
types fussed
types tarina
types listar
types lipari
types doling
types palin
types parco
types parche
types done
dose do
dose day
dose deal
dose dead
dose dear
dose dealing
dose dinar
dose dakine
dose detain
dose dinning
dose dinesh
dose deity
dose dakar
dose dearing
dose deane
dose dearch
dose desing
dose detalii
dose detains
dose deine
dose dlese
bet a
bet area
bet art
bet areas
bet arm
bet aretha
bet arpanet
bet arles
bet arline
bet areal
dad of
dad on
dad or
dad one
dad oh
dad oak
dad ongoing
dad oprah
dad obese
dad orcinus
dad oakes
dad oline
pat a
pat area
pat art
pat areas
pat arm
pat aretha
pat arpanet
pat arles
pat arline
pat areal
ear in
ear i
ear not
ear my
ear no
ear now
ear car
ear none
ear nor
ear clip
ear cake
ear nose
ear ceiling
ear inline
ear norm
ear nod
ear inning
ear chew
ear chewing
ear cakes
ear cling
ear nobel
ear ingress
ear mylist
ear ingres
ear infuse
ear norco
ear cesar
ear cline
ear coles
ear instal
ear inlining
ear chews
ear cdata
ear inest
ear indole
ear myles
ear nosed
ear infusing
ear instar
ear nolita
ear nodal
ear coleus
ear myrinet
ear infuses
ear noline
ear indain
ear insta
ear noakes
ear mydata
ear ctype
ear indoles
ear cinese
ease do
ease day
ease dinar
ease dakine
ease detain
ease dinning
ease dinesh
ease deity
ease dakar
ease desing
ease detalii
ease dlidos
ease detains
ease deine
ease dlese
lie the
lie a
lie in
lie i
lie it
lie at
lie as
lie if
lie search
lie their
lie so
lie am
lie these
lie she
lie set
lie start
lie say
lie star
lie thing
lie ad
lie sea
lie thus
lie ie
lie army
lie saying
lie tag
lie seat
lie arm
lie sin
lie seal
lie thin
lie stars
lie ah
lie sole
lie spain
lie sing
lie starring
lie tale
lie tap
lie swim
lie adobe
lie sake
lie tales
lie staring
lie spanning
lie stare
lie sparc
lie theta
lie atari
lie thinning
lie starr
lie sprang
lie tango
lie ahmed
lie aarhus
lie stardom
lie sinus
lie spans
lie sparing
lie soledad
lie searing
lie sparco
lie seabed
lie thine
lie sinead
lie sprains
lie seine
lie seibel
lie sprain
lie adoring
lie soleus
lie alesse
lie soles
lie adorno
lie irina
lie sinning
lie tapas
lie seadoo
lie arcing
lie tatarstan
lie sayin
lie spandau
lie sinless
lie adorn
lie sparcs
lie theist
lie searcg
lie swims
lie shewing
lie searcn
lie searcb
lie sdarch
lie searcu
lie taint
lie searct
lie searcm
lie taber
lie tatars
lie tatar
lie idabel
lie tarina
lie spaeth
lie tangos
lie swiming
lie seale
lie searc
lie seastar
lie adamo
lie astar
lie ingot
lie talese
lie agreing
lie sakes
lie theanine
lie tainty
lie ahmet
lie arche
lie spanne
lie sakar
lie alesina
pad of
pad a
pad on
pad or
pad at
pad as
pad one
pad am
pad oh
pad ad
pad army
pad arm
pad oak
pad ongoing
pad ah
pad adobe
pad atari
pad oakdale
pad ahmed
pad oprah
pad aarhus
pad obese
pad orinda
pad orcinus
pad adoring
pad alesse
pad adorno
pad arcing
pad adorn
pad oakes
pad aearch
pad alist
pad aline
pad astar
pad agreing
pad ahmet
pad arche
pad oline
pad alesina
dam old
dam ollie
dam olnine
dam olean
dam oleari
dam oline
bee the
bee a
bee in
bee i
bee it
bee at
bee as
bee if
bee search
bee their
bee so
bee am
bee these
bee she
bee set
bee start
bee say
bee star
bee thing
bee ad
bee sea
bee thus
bee ie
bee army
bee saying
bee tag
bee seat
bee arm
bee sin
bee seal
bee thin
bee stars
bee ah
bee sole
bee spain
bee slip
bee sing
bee starring
bee tale
bee tap
bee swim
bee sake
bee tales
bee staring
bee spanning
bee sealing
bee stare
bee sparc
bee theta
bee atari
bee thinning
bee starr
bee sprang
bee tango
bee ahmed
bee aarhus
bee stardom
bee starling
bee sinus
bee sling
bee spans
bee sparing
bee soledad
bee searing
bee sparco
bee thine
bee sinead
bee sprains
bee seine
bee sprain
bee adoring
bee soleus
bee slingo
bee alesse
bee soles
bee adorno
bee irina
bee sinning
bee tapas
bee adaline
bee seadoo
bee arcing
bee tatarstan
bee taliesin
bee sayin
bee spandau
bee sinless
bee adorn
bee sparcs
bee theist
bee sealine
bee searcg
bee swims
bee shewing
bee searcn
bee sdarch
bee searcu
bee taint
bee searct
bee searcm
bee starline
bee tatars
bee tatar
bee tarina
bee spaeth
bee tangos
bee alist
bee aline
bee swiming
bee seale
bee searc
bee seastar
bee adamo
bee astar
bee ingot
bee talese
bee starlit
bee agreing
bee sakes
bee theanine
bee talia
bee tainty
bee ahmet
bee thinline
bee arche
bee spanne
bee sakar
bee alesina
inline of
inline on
inline or
inline oh
inline oak
inline oakdale
inline oprah
inline obese
inline orinda
inline oakes
lip and
lip a
lip at
lip as
lip am
lip ad
lip army
lip rain
lip raw
lip arm
lip radar
lip ah
lip rat
lip andale
lip adobe
lip atari
lip anakin
lip ahmed
lip aarhus
lip rains
lip raines
lip antares
lip radon
lip adoring
lip alesse
lip adorno
lip raine
lip arcing
lip rasta
lip adorn
lip antara
lip aearch
lip antari
lip rafuse
lip rahmen
lip raring
lip adamo
lip astar
lip agreing
lip ahmet
lip arche
lip alesina
typing re
typing read
typing research
typing real
typing role
typing rest
typing rear
typing realise
typing roles
typing realised
typing reset
typing realist
typing reseal
typing rakesh
typing reales
typing reise
typing rakes
typing realidad
typing reseau
typing resear
typing researc
typing roleta
typing reine
typing raked
typing reale
leslie the
leslie a
leslie in
leslie i
leslie it
leslie at
leslie as
leslie if
leslie search
leslie their
leslie so
leslie am
leslie these
leslie she
leslie set
leslie start
leslie say
leslie star
leslie thing
leslie ad
leslie sea
leslie thus
leslie ie
leslie army
leslie saying
leslie tag
leslie seat
leslie arm
leslie sin
leslie thin
leslie stars
leslie ah
leslie sole
leslie spain
leslie sing
leslie starring
leslie tap
leslie swim
leslie adobe
leslie sake
leslie staring
leslie spanning
leslie stare
leslie sparc
leslie theta
leslie atari
leslie thinning
leslie starr
leslie sprang
leslie tango
leslie ahmed
leslie aarhus
leslie stardom
leslie sinus
leslie spans
leslie sparing
leslie soledad
leslie searing
leslie sparco
leslie seabed
leslie thine
leslie sinead
leslie sprains
leslie seine
leslie sprain
leslie adoring
leslie soleus
leslie soles
leslie adorno
leslie irina
leslie sinning
leslie tapas
leslie seadoo
leslie arcing
leslie tatarstan
leslie sayin
leslie spandau
leslie adorn
leslie sparcs
leslie theist
leslie searcg
leslie swims
leslie shewing
leslie searcn
leslie searcb
leslie sdarch
leslie searcu
leslie taint
leslie searct
leslie searcm
leslie taber
leslie tatars
leslie tatar
leslie tarina
leslie spaeth
leslie tangos
leslie swiming
leslie searc
leslie seastar
leslie adamo
leslie astar
leslie ingot
leslie agreing
leslie sakes
leslie theanine
leslie tainty
leslie ahmet
leslie arche
leslie spanne
leslie sakar
pasta a
pasta in
pasta with
pasta i
pasta be
pasta us
pasta do
pasta he
pasta data
pasta great
pasta line
pasta type
pasta being
pasta less
pasta dog
pasta bed
pasta ring
pasta bear
pasta eat
pasta nine
pasta dot
pasta pan
pasta gray
pasta earn
pasta doing
pasta types
pasta dose
pasta bet
pasta dad
pasta ear
pasta ease
pasta lie
pasta dam
pasta bee
pasta inline
pasta lip
pasta typing
pasta leslie
pasta lid
pasta lining
pasta pant
pasta grease
pasta espanol
pasta beg
pasta inning
pasta grin
pasta daring
pasta benin
pasta espana
pasta ealing
pasta panty
pasta panda
pasta molina
pasta moline
pasta grind
pasta ingress
pasta prada
pasta bering
pasta lesben
pasta rinse
pasta greased
pasta mollie
pasta dainty
pasta witty
pasta fusing
pasta beaker
pasta rinsed
pasta pandas
pasta panning
pasta earch
pasta dorint
pasta liberi
pasta ingres
pasta infuse
pasta typeset
pasta doesn
pasta lingo
pasta esearch
pasta fuses
pasta beeing
pasta libel
pasta bening
pasta inlining
pasta darin
pasta belies
pasta greta
pasta inest
pasta eased
pasta indole
pasta molino
pasta dales
pasta seddon
pasta prado
pasta molar
pasta moles
pasta does
pasta eidos
pasta sedaka
pasta beaked
pasta molest
pasta praline
pasta dainese
pasta estar
pasta darcs
pasta graying
pasta panes
pasta dahmer
pasta grins
pasta infuses
pasta lidar
pasta liber
pasta dopant
pasta witless
pasta earing
pasta lesbe
pasta dangos
pasta mollis
pasta bedale
pasta pandan
pasta indoles
pasta fussed
pasta tarina
pasta doling
pasta typeid
pasta daines
pasta nines
pasta done
lid of
lid a
lid on
lid or
lid at
lid as
lid one
lid am
lid oh
lid ad
lid army
lid arm
lid oak
lid ongoing
lid ah
lid adobe
lid atari
lid oakdale
lid ahmed
lid oprah
lid aarhus
lid obese
lid orinda
lid orcinus
lid adoring
lid alesse
lid adorno
lid arcing
lid adorn
lid oakes
lid aearch
lid astar
lid agreing
lid ahmet
lid arche
lid alesina
pale search
pale so
pale she
pale set
pale start
pale say
pale star
pale sea
pale saying
pale seat
pale sin
pale seal
pale stars
pale sole
pale slip
pale sing
pale starring
pale swim
pale sake
pale staring
pale spanning
pale sealing
pale stare
pale starr
pale sprang
pale stares
pale stardom
pale starling
pale sinus
pale sling
pale spans
pale soledad
pale searing
pale seabed
pale sinead
pale sprains
pale seine
pale seibel
pale sprain
pale soleus
pale slingo
pale soles
pale sinning
pale seadoo
pale sayin
pale spandau
pale sealine
pale searcg
pale swims
pale sesrch
pale shewing
pale searcn
pale searcb
pale sdarch
pale searcu
pale searct
pale searcm
pale starline
pale swiming
pale searc
pale seastar
pale starlit
pale sakes
pale spanne
pale sakar
lining re
lining read
lining research
lining real
lining role
lining rest
lining rear
lining retain
lining restart
lining roles
lining reset
lining retains
lining resins
lining resin
lining rinse
lining reseal
lining rinsed
lining rakesh
lining reales
lining reise
lining rakes
lining reseau
lining resear
lining researc
lining roleta
lining reine
lining raked
lining reale
staind of
staind a
staind on
staind or
staind at
staind as
staind one
staind am
staind oh
staind ad
staind army
staind arm
staind oak
staind ah
staind adobe
staind oakdale
staind ahmed
staind oprah
staind aarhus
staind obese
staind orinda
staind adoring
staind alesse
staind adorno
staind adorn
staind oakes
staind aearch
staind aline
staind agreing
staind ahmet
staind arche
staind oline
pant a
pant area
pant art
pant areas
pant arm
pant aretha
pant arles
pant arline
pant areal
grease do
grease day
grease dinar
grease dakine
grease detain
grease dinesh
grease deity
grease dakar
grease detalii
grease dlidos
grease detains
grease deine
grease dlese
espanol ear
espanol estado
espanol estados
espanol elise
espanol estab
espanol elist
espanol estas
espanol estar
espanol eakins
espanol egret
espanol eearch
espanol ebeling
espanol erindale
beg re
beg read
beg research
beg real
beg role
beg rest
beg reality
beg rear
beg retain
beg restart
beg realise
beg roles
beg realised
beg reset
beg retains
beg resins
beg resin
beg realist
beg rinse
beg reseal
beg rinsed
beg rakesh
beg reales
beg reise
beg rakes
beg realidad
beg reseau
beg resear
beg researc
beg roleta
beg reine
beg raked
beg reale
staring re
staring read
staring research
staring real
staring role
staring rest
staring reality
staring rear
staring retain
staring restart
staring realise
staring roles
staring realised
staring reset
staring retains
staring resins
staring resin
staring rinse
staring reseal
staring rinsed
staring rakesh
staring reales
staring reise
staring rakes
staring realidad
staring reseau
staring resear
staring researc
staring roleta
staring reine
staring raked
staring reale
inning re
inning read
inning research
inning real
inning role
inning rest
inning reality
inning rear
inning restart
inning realise
inning roles
inning realised
inning reset
inning realist
inning reseal
inning rakesh
inning reales
inning reise
inning rakes
inning realidad
inning reseau
inning resear
inning researc
inning roleta
inning reine
inning raked
inning reale
grin a
grin with
grin be
grin us
grin do
grin he
grin list
grin data
grin best
grin line
grin type
grin less
grin east
grin star
grin past
grin bed
grin bear
grin eat
grin nine
grin dot
grin pan
grin earn
grin types
grin dose
grin bet
grin dad
grin pat
grin ear
grin ease
grin lie
grin pad
grin dam
grin bee
grin lip
grin leslie
grin pasta
grin lid
grin pale
grin pant
grin espanol
grin starch
grin stalin
grin benin
grin espana
grin ealing
grin panty
grin panda
grin prada
grin lesben
grin rinse
grin mollie
grin witty
grin beaker
grin rinsed
grin pandas
grin bestality
grin lipase
grin pastas
grin earch
grin dorint
grin liberi
grin lista
grin typeset
grin doesn
grin stang
grin stale
grin lingo
grin esearch
grin fuses
grin beeing
grin libel
grin darin
grin belies
grin eased
grin dales
grin seddon
grin prado
grin molar
grin moles
grin does
grin eidos
grin sedaka
grin beaked
grin molest
grin praline
grin stata
grin estar
grin darcs
grin bestar
grin panes
grin dahmer
grin listas
grin lidar
grin liber
grin dopant
grin witless
grin pango
grin padang
grin espada
grin lesbe
grin dangos
grin listado
grin mollis
grin paling
grin bedale
grin pandan
grin fussed
grin listar
grin lipari
grin doling
grin palin
grin parco
grin typeid
grin parche
grin nines
grin done
daring re
daring read
daring research
daring real
daring role
daring rest
daring reality
daring rear
daring retain
daring restart
daring realise
daring roles
daring realised
daring reset
daring retains
daring resins
daring resin
daring realist
daring rinse
daring reseal
daring rinsed
daring rakesh
daring reales
daring reise
daring rakes
daring reseau
daring resear
daring researc
daring roleta
daring reine
daring raked
daring reale
starch me
starch meet
starch medal
starch metart
starch merino
starch ewing
starch melia
starch meakin
starch metar
starch meine
starch menino
starch menina
starch melita
starch meines
stalin in
stalin i
stalin be
stalin go
stalin buy
stalin best
stalin being
stalin god
stalin bay
stalin bar
stalin bring
stalin bus
stalin going
stalin bear
stalin beat
stalin bet
stalin beam
stalin bearing
stalin bean
stalin bind
stalin blessed
stalin baking
stalin bless
stalin bake
stalin baker
stalin bethesda
stalin baked
stalin goethe
stalin ingress
stalin ingres
stalin infuse
stalin binning
stalin brine
stalin inest
stalin indole
stalin goring
stalin betaine
stalin infusing
stalin beale
stalin bestar
stalin infuses
stalin brining
stalin beset
stalin indain
stalin gopal
stalin betas
stalin brinda
stalin indoles
stalin bethea
stalin boles
stalin goole
stalin beane
stalin brines
stalin baying
stalin beaty
stalin busoni
benin a
benin in
benin with
benin i
benin us
benin do
benin he
benin list
benin data
benin great
benin line
benin type
benin less
benin east
benin star
benin past
benin dog
benin ring
benin pain
benin eat
benin dot
benin paint
benin pan
benin gray
benin earn
benin doing
benin types
benin stainless
benin dose
benin dad
benin pat
benin ear
benin ease
benin lie
benin pad
benin dam
benin inline
benin lip
benin typing
benin leslie
benin pasta
benin lid
benin pale
benin staind
benin pant
benin grease
benin espanol
benin staring
benin grin
benin stains
benin daring
benin starch
benin stalin
benin espana
benin ealing
benin painless
benin panty
benin stain
benin panda
benin molina
benin moline
benin pains
benin grind
benin ingress
benin prada
benin rinse
benin greased
benin mollie
benin dainty
benin witty
benin fusing
benin rinsed
benin pandas
benin lipase
benin staines
benin pastas
benin earch
benin dorint
benin paring
benin ingres
benin infuse
benin lista
benin typeset
benin doesn
benin paine
benin stang
benin stale
benin lingo
benin esearch
benin fuses
benin instal
benin darin
benin greta
benin inest
benin eased
benin indole
benin molino
benin dales
benin seddon
benin prado
benin molar
benin moles
benin does
benin instar
benin eidos
benin sedaka
benin molest
benin praline
benin dainese
benin stata
benin estar
benin darcs
benin graying
benin panes
benin dahmer
benin grins
benin infuses
benin listas
benin lidar
benin dopant
benin witless
benin pango
benin padang
benin earing
benin espada
benin insta
benin dangos
benin listado
benin mollis
benin paling
benin pandan
benin indoles
benin fussed
benin tarina
benin listar
benin lipari
benin doling
benin palin
benin parco
benin typeid
benin daines
benin parche
benin done
espana king
espana kind
espana kinda
espana rhino
espana klingon
espana keane
espana rhine
espana kline
espana keine
espana khmer
espana kelis
espana kedar
espana rhein
espana kling
espana kepada
espana kernow
ealing of
ealing on
ealing or
ealing one
ealing oh
ealing oak
ealing oakdale
ealing oprah
ealing obese
ealing orinda
ealing orcinus
ealing oakes
painless tag
painless tap
painless tango
painless taber
painless talib
painless tatars
painless tatar
painless tarina
painless tangos
painless talia
panty pm
panty print
panty pay
panty pet
panty pin
panty paying
panty pine
panty pole
panty pearce
panty poles
panty pines
panty pinning
panty peirce
panty peseta
panty pineal
panty petal
panty petaling
panty pinole
panty pease
panty plist
panty pinus
panty pesetas
panty polen
panty pinless
panty pindar
panty prine
panty polestar
panty prins
panty pineau
panty plies
stain a
stain with
stain be
stain us
stain do
stain he
stain data
stain great
stain line
stain type
stain less
stain dog
stain bed
stain ring
stain bear
stain eat
stain nine
stain dot
stain pan
stain gray
stain earn
stain types
stain dose
stain bet
stain dad
stain pat
stain ear
stain ease
stain lie
stain pad
stain dam
stain bee
stain lip
stain leslie
stain lid
stain pale
stain lining
stain pant
stain grease
stain espanol
stain beg
stain daring
stain benin
stain espana
stain ealing
stain panty
stain panda
stain prada
stain bering
stain lesben
stain rinse
stain greased
stain mollie
stain witty
stain beaker
stain rinsed
stain pandas
stain panning
stain lipase
stain earch
stain dorint
stain paring
stain liberi
stain typeset
stain doesn
stain lingo
stain esearch
stain fuses
stain beeing
stain libel
stain bening
stain darin
stain belies
stain greta
stain eased
stain dales
stain seddon
stain prado
stain molar
stain moles
stain does
stain eidos
stain sedaka
stain beaked
stain molest
stain praline
stain estar
stain darcs
stain graying
stain panes
stain dahmer
stain lidar
stain liber
stain dopant
stain witless
stain pango
stain padang
stain earing
stain espada
stain lesbe
stain dangos
stain mollis
stain paling
stain bedale
stain pandan
stain fussed
stain lipari
stain doling
stain palin
stain parco
stain typeid
stain parche
stain nines
stain done
panda a
panda in
panda with
panda i
panda be
panda us
panda do
panda he
panda list
panda best
panda great
panda line
panda type
panda being
panda less
panda east
panda star
panda past
panda dog
panda bed
panda ring
panda pain
panda bear
panda eat
panda nine
panda dot
panda paint
panda pan
panda gray
panda earn
panda doing
panda types
panda stainless
panda dose
panda bet
panda pat
panda ear
panda ease
panda lie
panda pad
panda bee
panda inline
panda lip
panda typing
panda leslie
panda pasta
panda lid
panda pale
panda lining
panda staind
panda grease
panda beg
panda staring
panda inning
panda grin
panda stains
panda starch
panda stalin
panda benin
panda ealing
panda painless
panda stain
panda molina
panda moline
panda pains
panda grind
panda ingress
panda bering
panda lesben
panda rinse
panda greased
panda mollie
panda witty
panda fusing
panda beaker
panda rinsed
panda bestality
panda lipase
panda staines
panda pastas
panda earch
panda dorint
panda paring
panda panini
panda liberi
panda ingres
panda infuse
panda lista
panda typeset
panda doesn
panda paine
panda stang
panda stale
panda lingo
panda esearch
panda fuses
panda beeing
panda instal
panda libel
panda bening
panda inlining
panda belies
panda greta
panda inest
panda eased
panda indole
panda molino
panda seddon
panda prado
panda molar
panda moles
panda does
panda instar
panda eidos
panda sedaka
panda beaked
panda molest
panda praline
panda stata
panda estar
panda graying
panda bestar
panda grins
panda infuses
panda listas
panda liber
panda witless
panda pango
panda earing
panda lesbe
panda insta
panda listado
panda mollis
panda paling
panda indoles
panda fussed
panda tarina
panda listar
panda lipari
panda doling
panda palin
panda parco
panda typeid
panda parche
panda nines
panda done
molina klingon
molina keane
molina rhesus
molina kline
molina keine
molina khmer
molina kelis
molina kedar
molina rhein
molina kling
molina kepada
molina kernow
moline the
moline a
moline in
moline i
moline it
moline at
moline as
moline if
moline search
moline their
moline so
moline these
moline she
moline set
moline start
moline say
moline star
moline ad
moline sea
moline thus
moline ie
moline army
moline saying
moline tag
moline seat
moline arm
moline seal
moline stars
moline ah
moline sole
moline slip
moline starring
moline tale
moline tap
moline swim
moline adobe
moline sake
moline tales
moline spanning
moline sealing
moline stare
moline sparc
moline theta
moline starr
moline sprang
moline tango
moline ahmed
moline aarhus
moline starling
moline sling
moline spans
moline sparing
moline soledad
moline searing
moline sparco
moline seabed
moline seine
moline seibel
moline adoring
moline soleus
moline slingo
moline alesse
moline soles
moline adorno
moline irina
moline tapas
moline adaline
moline seadoo
moline tatarstan
moline sayin
moline spandau
moline adorn
moline sparcs
moline theist
moline sealine
moline searcg
moline swims
moline searcn
moline searcb
moline sdarch
moline searcu
moline searct
moline starline
moline taber
moline talib
moline tatars
moline tatar
moline idabel
moline tarina
moline spaeth
moline tangos
moline alist
moline aline
moline seale
moline searc
moline seastar
moline astar
moline ingot
moline talese
moline starlit
moline agreing
moline sakes
moline theanine
moline talia
moline ahmet
moline arche
moline spanne
moline sakar
pains tag
pains tale
pains tap
pains tales
pains tango
pains taber
pains talib
pains tatars
pains tatar
pains tarina
pains tangos
pains talese
pains talia
grind of
grind a
grind on
grind or
grind at
grind as
grind one
grind am
grind oh
grind ad
grind army
grind arm
grind oak
grind ah
grind adobe
grind oakdale
grind ahmed
grind oprah
grind aarhus
grind obese
grind orinda
grind alesse
grind adorno
grind adorn
grind oakes
grind aearch
grind alist
grind aline
grind astar
grind ahmet
grind arche
grind oline
ingress tale
ingress tap
ingress tales
ingress tango
ingress tapas
ingress taber
ingress talib
ingress tatars
ingress tatar
ingress tarina
ingress tangos
ingress talese
ingress talia
prada a
prada in
prada with
prada i
prada be
prada us
prada do
prada he
prada list
prada best
prada great
prada line
prada type
prada being
prada less
prada east
prada star
prada past
prada dog
prada bed
prada ring
prada pain
prada bear
prada eat
prada nine
prada dot
prada paint
prada pan
prada gray
prada earn
prada doing
prada types
prada stainless
prada dose
prada bet
prada pat
prada ear
prada ease
prada lie
prada pad
prada bee
prada inline
prada lip
prada typing
prada leslie
prada pasta
prada lid
prada pale
prada lining
prada staind
prada pant
prada grease
prada espanol
prada beg
prada staring
prada inning
prada grin
prada stains
prada starch
prada stalin
prada benin
prada espana
prada ealing
prada painless
prada panty
prada stain
prada molina
prada moline
prada pains
prada grind
prada ingress
prada bering
prada lesben
prada rinse
prada greased
prada mollie
prada witty
prada fusing
prada beaker
prada rinsed
prada bestality
prada panning
prada lipase
prada staines
prada pastas
prada earch
prada dorint
prada paring
prada panini
prada liberi
prada ingres
prada infuse
prada lista
prada typeset
prada doesn
prada paine
prada stang
prada stale
prada lingo
prada esearch
prada fuses
prada beeing
prada instal
prada libel
prada bening
prada inlining
prada belies
prada greta
prada inest
prada eased
prada indole
prada molino
prada seddon
prada molar
prada moles
prada does
prada instar
prada eidos
prada sedaka
prada beaked
prada molest
prada stata
prada estar
prada graying
prada bestar
prada panes
prada grins
prada infuses
prada listas
prada liber
prada dopant
prada witless
prada pango
prada earing
prada lesbe
prada insta
prada listado
prada mollis
prada paling
prada indoles
prada fussed
prada tarina
prada listar
prada lipari
prada doling
prada palin
prada parco
prada typeid
prada parche
prada nines
prada done
bering re
bering read
bering research
bering real
bering role
bering rest
bering reality
bering rear
bering retain
bering restart
bering realise
bering roles
bering realised
bering reset
bering retains
bering resins
bering resin
bering realist
bering rinse
bering reseal
bering rinsed
bering rakesh
bering reales
bering reise
bering rakes
bering realidad
bering reseau
bering resear
bering researc
bering roleta
bering reine
bering raked
bering reale
lesben in
lesben i
lesben be
lesben go
lesben buy
lesben best
lesben being
lesben god
lesben bay
lesben bar
lesben bring
lesben bus
lesben going
lesben bear
lesben beat
lesben bet
lesben beast
lesben beam
lesben bearing
lesben bean
lesben inline
lesben bind
lesben baking
lesben beastality
lesben bake
lesben baker
lesben bethesda
lesben baked
lesben goethe
lesben ingress
lesben bling
lesben ingres
lesben infuse
lesben binning
lesben instal
lesben brine
lesben inest
lesben goring
lesben betaine
lesben infusing
lesben instar
lesben bestar
lesben infuses
lesben brining
lesben beset
lesben indain
lesben insta
lesben gopal
lesben betas
lesben brinda
lesben bethea
lesben boles
lesben goole
lesben beane
lesben brines
lesben baying
lesben beilin
lesben blingo
lesben beaty
lesben busoni
rinse do
rinse day
rinse deal
rinse dead
rinse dear
rinse dealing
rinse dinar
rinse dakine
rinse detain
rinse dinning
rinse dinesh
rinse deity
rinse dakar
rinse deane
rinse dearch
rinse desing
rinse detalii
rinse dlidos
rinse detains
rinse deine
rinse dlese
greased a
greased in
greased with
greased i
greased be
greased us
greased do
greased he
greased list
greased data
greased best
greased line
greased type
greased less
greased star
greased past
greased bed
greased pain
greased bear
greased nine
greased dot
greased paint
greased pan
greased types
greased bet
greased dad
greased pat
greased lie
greased pad
greased dam
greased bee
greased inline
greased lip
greased leslie
greased pasta
greased lid
greased pale
greased staind
greased pant
greased espanol
greased starch
greased stalin
greased benin
greased espana
greased painless
greased panty
greased stain
greased panda
greased molina
greased moline
greased pains
greased prada
greased lesben
greased mollie
greased dainty
greased witty
greased beaker
greased pandas
greased bestality
greased staines
greased dorint
greased panini
greased liberi
greased infuse
greased lista
greased typeset
greased doesn
greased paine
greased stang
greased stale
greased lingo
greased fuses
greased beeing
greased instal
greased libel
greased darin
greased belies
greased inest
greased indole
greased molino
greased dales
greased prado
greased molar
greased moles
greased does
greased instar
greased eidos
greased beaked
greased molest
greased praline
greased dainese
greased stata
greased estar
greased darcs
greased bestar
greased panes
greased dahmer
greased infuses
greased lidar
greased liber
greased dopant
greased witless
greased pango
greased padang
greased espada
greased lesbe
greased insta
greased dangos
greased listado
greased mollis
greased paling
greased bedale
greased pandan
greased indoles
greased tarina
greased listar
greased lipari
greased doling
greased palin
greased parco
greased typeid
greased daines
greased parche
greased nines
greased done
mollie the
mollie a
mollie in
mollie i
mollie it
mollie at
mollie as
mollie if
mollie search
mollie their
mollie so
mollie these
mollie she
mollie set
mollie start
mollie say
mollie star
mollie thing
mollie ad
mollie sea
mollie thus
mollie ie
mollie army
mollie saying
mollie tag
mollie seat
mollie arm
mollie sin
mollie seal
mollie thin
mollie stars
mollie ah
mollie sole
mollie spain
mollie sing
mollie starring
mollie tale
mollie tap
mollie swim
mollie adobe
mollie sake
mollie tales
mollie staring
mollie spanning
mollie stare
mollie sparc
mollie theta
mollie atari
mollie thinning
mollie starr
mollie sprang
mollie tango
mollie ahmed
mollie aarhus
mollie sinus
mollie spans
mollie sparing
mollie soledad
mollie searing
mollie sparco
mollie seabed
mollie thine
mollie sinead
mollie sprains
mollie seine
mollie seibel
mollie sprain
mollie adoring
mollie soleus
mollie alesse
mollie soles
mollie adorno
mollie irina
mollie sinning
mollie tapas
mollie seadoo
mollie arcing
mollie tatarstan
mollie sayin
mollie spandau
mollie sinless
mollie adorn
mollie sparcs
mollie theist
mollie searcg
mollie swims
mollie shewing
mollie searcn
mollie searcb
mollie sdarch
mollie searcu
mollie taint
mollie searct
mollie taber
mollie tatars
mollie tatar
mollie idabel
mollie tarina
mollie spaeth
mollie tangos
mollie swiming
mollie seale
mollie searc
mollie seastar
mollie astar
mollie ingot
mollie talese
mollie agreing
mollie sakes
mollie theanine
mollie tainty
mollie ahmet
mollie arche
mollie spanne
mollie sakar
mollie alesina
dainty pm
dainty print
dainty pay
dainty pet
dainty paying
dainty pole
dainty pearce
dainty poles
dainty peirce
dainty peseta
dainty petal
dainty petaling
dainty pease
dainty plist
dainty pesetas
dainty polen
dainty prine
dainty polestar
dainty prins
dainty plies
witty pm
witty print
witty pay
witty pet
witty pin
witty paying
witty pine
witty pole
witty pearce
witty poles
witty pines
witty pinning
witty peirce
witty peseta
witty pineal
witty petal
witty petaling
witty pinole
witty pease
witty plist
witty pinus
witty pesetas
witty polen
witty pinless
witty pindar
witty prine
witty polestar
witty prins
witty pineau
witty plies
fusing re
fusing read
fusing research
fusing real
fusing role
fusing rest
fusing reality
fusing rear
fusing restart
fusing realise
fusing roles
fusing realised
fusing reset
fusing realist
fusing reseal
fusing rakesh
fusing reales
fusing reise
fusing rakes
fusing realidad
fusing reseau
fusing resear
fusing researc
fusing roleta
fusing reine
fusing raked
fusing reale
beaker in
beaker i
beaker not
beaker my
beaker no
beaker now
beaker car
beaker none
beaker nor
beaker clip
beaker cake
beaker nose
beaker ceiling
beaker inline
beaker norm
beaker cease
beaker ceased
beaker nod
beaker inning
beaker chew
beaker chewing
beaker cakes
beaker cling
beaker ingress
beaker mylist
nines stalin
nines ealing
nines painless
nines panty
nines stain
nines panda
nines molina
nines moline
nines pains
nines grind
nines prada
nines bering
nines lesben
nines rinse
nines greased
nines mollie
nines dainty
nines witty
nines fusing
nines beaker
nines rinsed
nines pandas
nines bestality
nines lipase
nines pastas
nines earch
nines dorint
nines paring
nines liberi
nines infuse
nines lista
nines paine
nines stang
nines stale
nines lingo
nines beeing
nines instal
nines libel
nines darin
nines greta
nines eased
nines indole
nines molino
nines dales
nines seddon
nines prado
nines molar
nines instar
nines eidos
nines sedaka
nines beaked
nines praline
nines stata
nines darcs
nines graying
nines bestar
nines dahmer
nines grins
nines listas
nines lidar
nines liber
nines dopant
nines witless
nines pango
nines padang
nines earing
nines lesbe
nines insta
nines dangos
nines listado
nines mollis
nines paling
nines bedale
nines pandan
nines indoles
nines fussed
nines tarina
nines listar
nines lipari
nines doling
nines palin
nines parco
nines typeid
nines parche
nines done
done of
done on
done or
done oh
done oak
done ongoing
done oakdale
done oprah
done obese
done orinda
done orcinus
done oakes
print("""
AK AKE ARH AYI BE DA DO EA EES EI ES ETA ETH EYB FUS GAR GR HEW
HME HON IN KAN KEB LES LI MOL NB NEO NGO NIN OLE OOS PA PAN PLA
PRA RAT RC RIN RMY RNO SED SNA STA TAR TLE TYP USO UYT WIM WIT
YER
""".strip().replace(' ', '\t'))AK AKE ARH AYI BE DA DO EA EES EI ES ETA ETH EYB FUS GAR GR HEW
HME HON IN KAN KEB LES LI MOL NB NEO NGO NIN OLE OOS PA PAN PLA
PRA RAT RC RIN RMY RNO SED SNA STA TAR TLE TYP USO UYT WIM WIT
YER
</code>
|
{
"repository": "PhilHarnish/forge",
"path": "src/puzzle/examples/msph/2018/the major.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 2,
"size": 220957,
"hexsha": "cb4d21793336c6600f0ea79f71b3f638598b8bb4",
"max_line_length": 91,
"avg_line_length": 22.9852283366,
"alphanum_fraction": 0.4797087216
}
|
# Notebook from max-de-rooij/8dm50-machine-learning
Path: practicals/week_3.ipynb
# Preliminaries
The `pandas` library provides several data structures for different data manipulation tasks:
1. Data storage through its `Series` and `DataFrame` data structures.
2. Data filtering using multiple methods from the package.
3. Reading data from many different file formats such as `csv`, `txt`, `xlsx`, ...
Below we provide a brief overview of the `pandas` functionalities needed for these exercises. The complete documentation can be found on the [`pandas` website](https://pandas.pydata.org/).
## Pandas data structures
### Series
The Pandas Series data structure is similar to a one-dimensional array. It can store any type of data. The values are mutable but the size is not.
To create `Series`, we call the `pd.Series()` method and pass an array. A `Series` may also be created from a numpy array._____no_output_____
<code>
import pandas as pd
import numpy as np
first_series = pd.Series([1,10,100,1000])
print(first_series)
teams = np.array(['PSV','Ajax','Feyenoord','Twente'])
second_series = pd.Series(teams)
print('\n')
print(second_series)0 1
1 10
2 100
3 1000
dtype: int64
0 PSV
1 Ajax
2 Feyenoord
3 Twente
dtype: object
</code>
### DataFrame
One can think of a `DataFrame` as a table with rows and columns (2D structure). The columns can be of a different type (as opposed to `numpy` arrays) and the size of the `DataFrame` is mutable.
To create `DataFrame`, we call the `pd.DataFrame()` method and we can create it from scratch or we can convert a numpy array or a list into a `DataFrame`._____no_output_____
<code>
# DataFrame from scratch
first_dataframe = pd.DataFrame({
"Position": [1, 2, 3, 4],
"Team": ['PSV','Ajax','Feyenoord','Twente'],
"GF": [80, 75, 75, 70],
"GA": [30, 25, 40, 60],
"Points": [79, 78, 70, 66]
})
print("From scratch: \n {} \n".format(first_dataframe))
# DataFrame from a list
data = [[1, 2, 3, 4], ['PSV','Ajax','Feyenoord','Twente'],
[80, 75, 75, 70], [30, 25, 40, 60], [79, 78, 70, 66]]
columns = ["Position", "Team", "GF", "GA", "Points"]
second_dataframe = pd.DataFrame(data, index=columns)
print("From list: \n {} \n".format(second_dataframe.T)) # the '.T' operator is explained later on
# DataFrame from numpy array
data = np.array([[1, 2, 3, 4], ['PSV','Ajax','Feyenoord','Twente'],
[80, 75, 75, 70], [30, 25, 40, 60], [79, 78, 70, 66]])
columns = ["Position", "Team", "GF", "GA", "Points"]
third_dataframe = pd.DataFrame(data.T, columns=columns)
print("From numpy array: \n {} \n".format(third_dataframe))From scratch:
Position Team GF GA Points
0 1 PSV 80 30 79
1 2 Ajax 75 25 78
2 3 Feyenoord 75 40 70
3 4 Twente 70 60 66
From list:
Position Team GF GA Points
0 1 PSV 80 30 79
1 2 Ajax 75 25 78
2 3 Feyenoord 75 40 70
3 4 Twente 70 60 66
From numpy array:
Position Team GF GA Points
0 1 PSV 80 30 79
1 2 Ajax 75 25 78
2 3 Feyenoord 75 40 70
3 4 Twente 70 60 66
</code>
### DataFrame attributes
This section gives a quick overview of some of the `pandas.DataFrame` attributes such as `T`, `index`, `columns`, `iloc`, `loc`, `shape` and `values`._____no_output_____
<code>
# transpose the index and columns
print(third_dataframe.T) 0 1 2 3
Position 1 2 3 4
Team PSV Ajax Feyenoord Twente
GF 80 75 75 70
GA 30 25 40 60
Points 79 78 70 66
# index makes reference to the row labels
print(third_dataframe.index)RangeIndex(start=0, stop=4, step=1)
# columns makes reference to the column labels
print(third_dataframe.columns)Index(['Position', 'Team', 'GF', 'GA', 'Points'], dtype='object')
# iloc allows access to entries by integer location (e.g. all team names, which are in the second column)
print(third_dataframe.iloc[:,1])0 PSV
1 Ajax
2 Feyenoord
3 Twente
Name: Team, dtype: object
# loc allows access to entries by label (e.g. the team name in the first row of the "Team" column)
print(third_dataframe.loc[0, 'Team'])PSV
# shape returns a tuple with the DataFrame dimensions, similar to numpy
print(third_dataframe.shape)(4, 5)
# values returns a NumPy representation of the DataFrame data
print(third_dataframe.values)[['1' 'PSV' '80' '30' '79']
['2' 'Ajax' '75' '25' '78']
['3' 'Feyenoord' '75' '40' '70']
['4' 'Twente' '70' '60' '66']]
</code>
### DataFrame methods
This section gives a quick overview of some of the `pandas.DataFrame` methods such as `head`, `describe`, `concat`, `groupby`,`rename`, `filter`, `drop` and `isna`. To import data from CSV or MS Excel files, we can make use of `read_csv` and `read_excel`, respectively._____no_output_____
<code>
# print the first few rows in your dataset with head()
print(third_dataframe.head()) # In this case, it is not very useful because we don't have thousands of rows Position Team GF GA Points
0 1 PSV 80 30 79
1 2 Ajax 75 25 78
2 3 Feyenoord 75 40 70
3 4 Twente 70 60 66
# get the summary statistics of the DataFrame with describe()
print(third_dataframe.describe()) Position Team GF GA Points
count 4 4 4 4 4
unique 4 4 3 4 4
top 2 Ajax 75 40 70
freq 1 1 2 1 1
# concatenate (join) DataFrame objects using concat()
# first, we will split the above DataFrame in two different ones
df_a = third_dataframe.loc[[0,1],:]
df_b = third_dataframe.loc[[2,3],:]
print(df_a)
print('\n')
print(df_b)
print('\n')
# now, we concatenate both datasets
df = pd.concat([df_a, df_b])
print(df) Position Team GF GA Points
0 1 PSV 80 30 79
1 2 Ajax 75 25 78
Position Team GF GA Points
2 3 Feyenoord 75 40 70
3 4 Twente 70 60 66
Position Team GF GA Points
0 1 PSV 80 30 79
1 2 Ajax 75 25 78
2 3 Feyenoord 75 40 70
3 4 Twente 70 60 66
# group the data by a certain variable via groupby()
# here, we group by goals for (GF) and then retrieve the group where GF equals 75
group = df.groupby('GF')
print(group.get_group('75')) Position Team GF GA Points
1 2 Ajax 75 25 78
2 3 Feyenoord 75 40 70
# rename() helps you change the column or index names
print(df.rename(columns={'Position':'Pos','Team':'Club'})) Pos Club GF GA Points
0 1 PSV 80 30 79
1 2 Ajax 75 25 78
2 3 Feyenoord 75 40 70
3 4 Twente 70 60 66
# build a subset of rows or columns of your dataset according to labels via filter()
# here, items refer to the variable names: 'Team' and 'Points'; to select columns, we specify axis=1
print(df.filter(items=['Team', 'Points'], axis=1)) Team Points
0 PSV 79
1 Ajax 78
2 Feyenoord 70
3 Twente 66
# dropping some labels
print(df.drop(columns=['GF', 'GA'])) Position Team Points
0 1 PSV 79
1 2 Ajax 78
2 3 Feyenoord 70
3 4 Twente 66
# search for NA (not available) entries in the DataFrame
print(df.isna()) # No NA values
print('\n')
# create a pandas Series with a NA value
# name the Series W (winning matches)
tmp = pd.Series([np.NaN, 25, 24, 19], name="W")
# concatenate the Series with the DataFrame
df = pd.concat([df,tmp], axis = 1)
print(df)
print('\n')
# again, check for NA entries
print(df.isna()) Position Team GF GA Points
0 False False False False False
1 False False False False False
2 False False False False False
3 False False False False False
Position Team GF GA Points W
0 1 PSV 80 30 79 NaN
1 2 Ajax 75 25 78 25.0
2 3 Feyenoord 75 40 70 24.0
3 4 Twente 70 60 66 19.0
Position Team GF GA Points W
0 False False False False False True
1 False False False False False False
2 False False False False False False
3 False False False False False False
</code>
## Dataset
For this week's exercises we will use a dataset from the Genomics of Drug Sensitivity in Cancer (GDSC) project (https://www.cancerrxgene.org/). In this study (['Iorio et al., Cell, 2016']()), 265 compounds were tested on 1001 cancer cell lines for which different types of -omics data (RNA expression, DNA methylation, Copy Number Alteration, DNA sequencing) are available. This is a valuable resource to look for biomarkers of drug sensitivity in order to try to understand why cancer patients respond very differently to cancer drugs and find ways to assign the optimal treatment to each patient.
For this exercise we will use a subset of the data, focusing on the response to the drug YM155 (Sepantronium bromide) in four cancer types, for a total of 148 cancer cell lines.
| ID | Cancer type |
|-------------|----------------------------------|
| COAD/READ | Colorectal adenocarcinoma |
| NB | Neuroblastoma |
| KIRC | Kidney renal clear cell carcinoma|
| BRCA | Breast carcinoma |
We will use the RNA expression data (RMA normalised). Only genes with high variability across cell lines (variance > 5, resulting in 238 genes) have been kept.
Drugs have been tested at different concentrations, measuring each time the viability of the cells. Drug sensitivity is measured using the natural log of the fitted IC50 metric, which is defined as the half maximal inhibitory concentration. A lower IC50 corresponds to a more sensitive cell line because a lower amount of drug is sufficient to have a strong response, while a higher IC50 corresponds to a more resistant cell line because more drug is needed for killing the cells.
Based on the IC50 metric, cells can be classified as sensitive or resistant. The classification is done by computing the $z$-score across all cell lines in the GDSC for each drug, and considering as sensitive the ones with $z$-score < 0 and resistant the ones with $z$-score > 0.
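As a small illustration of that labelling rule, a sketch like the following could reproduce it from a vector of log(IC50) values. The numbers below are made up for illustration; in GDSC the $z$-scores are computed across all cell lines for each drug.
<code>
import pandas as pd

# made-up log(IC50) values for a handful of cell lines (illustration only)
log_ic50 = pd.Series([2.1, -0.5, 1.3, -1.8, 0.4],
                     index=['cell_1', 'cell_2', 'cell_3', 'cell_4', 'cell_5'])

# z-score across cell lines, then label: z < 0 -> sensitive, z > 0 -> resistant
z = (log_ic50 - log_ic50.mean()) / log_ic50.std()
labels = z.apply(lambda v: 'sensitive' if v < 0 else 'resistant')
print(pd.concat([log_ic50.rename('log_IC50'), z.rename('z_score'), labels.rename('label')], axis=1))
</code>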
The dataset is originally provided as 3 files ([original source](https://www.sciencedirect.com/science/article/pii/S0092867416307462?via%3Dihub)) :
`GDSC_RNA_expression.csv`: gene expression matrix with the cell lines in the rows (148) and the genes in the columns (238).
`GDSC_drug_response.csv`: vector with the cell lines response to the drug YM155 in terms of log(IC50) and as classification in sensitive or resistant.
`GDSC_metadata.csv`: metadata for the 148 cell lines including name, COSMIC ID and tumor type (using the classification from ['The Cancer Genome Atlas TCGA'](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga))
For convenience, we provide the data already curated.
`RNA_expression_curated.csv`: [148 cell lines , 238 genes]
`drug_response_curated.csv`: [148 cell lines , YM155 drug]
The curated data can be read as `pandas` `DataFrame`s in the following way:_____no_output_____
<code>
import pandas as pd
gene_expression = pd.read_csv("./data/RNA_expression_curated.csv", sep=',', header=0, index_col=0)
drug_response = pd.read_csv("./data/drug_response_curated.csv", sep=',', header=0, index_col=0)_____no_output_____
</code>
You can use the `DataFrame`s directly as inputs to the `sklearn` models. The advantage over using `numpy` arrays is that the variables are annotated, i.e. each input and output has a name._____no_output_____## Tools
The `scikit-learn` library provides the required tools for linear regression/classification and shrinkage, as well as for logistic regression._____no_output_____
<code>
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import LogisticRegression_____no_output_____
</code>
Note that the notation used for the hyperparameters in the `scikit-learn` library is different from the one used in the lecture. More specifically, in the lecture $\alpha$ is the tunable parameter that selects the compromise between Ridge and Lasso, whereas the `scikit-learn` library refers to the tunable parameter $\lambda$ as `alpha`. Please check the documentation for more details._____no_output_____# Exercises
## Selection of the hyperparameter
Implement cross-validation (using `sklearn.model_selection.GridSearchCV`) to select the `alpha` hyperparameter of `sklearn.linear_model.Lasso`.
## Feature selection
Look at the features selected using the hyperparameter which corresponds to the minimum cross-validation error.
<p><font color='#770a0a'>Is the partition in training and validation sets playing a role in the selection of the hyperparameter? How will this affect the selection of the relevant features?</font></p>
**Answer**: The partition in itself has no direct relation to the selection of the hyperparameter (see the graph with selection frequency), as these partitions are averaged in the hyperparameter selection. Nevertheless, the selected features may be sensitive to this partition. Therefore, it is useful to repeat cross-validation multiple times (using bootstrap).
<p><font color='#770a0a'>Should the value of the intercept also be shrunk to zero with Lasso and Ridge regression? Motivate your answer.</font></p>
**Answer**: No, this should not be done, because then the optimization procedure would become dependent on the origin chosen for the output variable $\mathbf{y}$. For example, adding a constant value to your training $\mathbf{y}$, would not result in an addition of this constant value for the predictions. This would be the case for a non-penalized intercept.
## Bias-variance
Show the effect of the regularization on the parameter estimates in terms of bias and variance. For this you can repeat the optimization 100 times using bootstrap and visualise the profile of the Lasso regression coefficient over a grid of the hyperparameter, optionally including the variability as error bars.
<p><font color='#770a0a'>Based on the visual analysis of the plot, what are your observation on bias and variance in relation to model complexity? Motivate your answer.</font></p>
**Answer**: For a low $\alpha$, many parameters are included, leading to a complex model with high variance. As $\alpha$ increases, the number of nonzero coefficients and their magnitudes decrease, leading to a less complex model. A less and less complex model increases the bias, but decreases the variance.
## Logistic regression
<p><font color='#770a0a'>Write the expression of the objective function for the penalized logistic regression with $L_1$ and $L_2$ regularisation (as in Elastic net).</font></p>
**Logistic Regression with Elastic net**
$$\max_{\beta_0, \beta} \left\{ \sum^{N}_{i=1} \left[y_i\left(\beta_0 + \beta^T x_i\right) - \log{\left(1+e^{\beta_0 + \beta^T x_i}\right)}\right]-\left[\lambda_1 \sum_{j=1}^{p} |\beta_j | + \lambda_2 \sum_{j=1}^{p} \beta_j^2 \right] \right\}$$
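As a side note, a penalised logistic regression of this kind can be fitted in `scikit-learn` with the elastic-net penalty. A minimal sketch follows; the hyperparameter values are arbitrary, and `y` would have to be the binary sensitive/resistant labels, which are not loaded in this notebook.
<code>
from sklearn.linear_model import LogisticRegression

# elastic-net penalised logistic regression:
# C is the inverse of the overall regularisation strength,
# l1_ratio mixes the L1 and L2 penalties (1.0 = pure Lasso, 0.0 = pure Ridge)
elastic_logreg = LogisticRegression(penalty='elasticnet', solver='saga',
                                    C=1.0, l1_ratio=0.5, max_iter=5000)
# elastic_logreg.fit(gene_expression, y)  # y: binary sensitive/resistant labels (not loaded here)
</code>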
_____no_output_____**Selection of the Hyperparameter $\alpha$**_____no_output_____
<code>
import sys
sys.path.append('code/')
from week_3_utils import *
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
X_train, X_test, y_train, y_test = train_test_split(gene_expression, drug_response, test_size=0.2, random_state=40)
alpha_range = np.linspace(10e-4,1,num=100)
model = cv_lasso(alpha_range,folds=5)
model.fit(X_train, y_train)
print(model.best_estimator_)Pipeline(steps=[('normalize', StandardScaler()),
('lasso', Lasso(alpha=0.4752727272727273))])
</code>
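The helpers `cv_lasso` (used above) and `lasso_estimator` (used in the bias-variance section below) come from the course's `week_3_utils` module, which is not included here. Judging from the printed pipeline and from how they are called, they might look roughly like this sketch (an assumption, not the actual course code):
<code>
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def lasso_estimator(alpha=1.0):
    # standardise the features, then fit a Lasso regression
    return Pipeline([('normalize', StandardScaler()),
                     ('lasso', Lasso(alpha=alpha))])

def cv_lasso(alpha_range, folds=5):
    # grid-search the Lasso alpha with k-fold cross-validation
    # (the scoring metric used by the course helper is an assumption)
    return GridSearchCV(lasso_estimator(),
                        param_grid={'lasso__alpha': alpha_range},
                        cv=folds,
                        scoring='neg_mean_squared_error')
</code>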
**Feature Selection**_____no_output_____
<code>
features = gene_expression.columns
counter = np.zeros((1,len(features)))
amt_of_rep = 5
for ix in range(amt_of_rep):
alpha_range = np.linspace(10e-4,1,num=50)
model = cv_lasso(alpha_range,folds=5)
model.fit(X_train, y_train)
coefficients = model.best_estimator_.named_steps['lasso'].coef_
nonzero_coef = np.array((coefficients != 0.)).astype(int)
counter = counter+nonzero_coef
print(f'{ix} of {amt_of_rep-1}')
0 of 4
1 of 4
2 of 4
3 of 4
4 of 4
counter = counter.ravel()
features_in_plot = features[counter != 0]
counters_in_plot = counter[counter != 0]
#print(features_in_plot)
plt.bar(list(range(0,4*len(features_in_plot),4)), counters_in_plot/amt_of_rep, tick_label=features_in_plot)
plt.xticks(rotation=30)
plt.ylabel('Fraction of Selection')
plt.show()
Index(['PRSS3', 'GAL', 'CDH17', 'ABCB1', 'CYR61', 'FABP1'], dtype='object')
</code>
**Bias-Variance**_____no_output_____
<code>
from sklearn.utils import resample
from week_3_utils import lasso_estimator
n_bootstrap = 100
samplesize = 80
alpha_range = np.linspace(0,3,num=100)
coef = np.zeros((len(alpha_range),n_bootstrap,len(gene_expression.columns)))
for j in range(n_bootstrap):
x_bs, y_bs = resample(X_train, y_train, replace=True, n_samples=samplesize)
for i,alpha in enumerate(alpha_range):
model_bs = lasso_estimator(alpha=alpha)
model_bs.fit(x_bs, y_bs)
coef[i,j,:] = model_bs.named_steps['lasso'].coef_
average_coef = np.mean(coef, axis=1)
std_coef = np.std(coef, axis=1)
for k in range(len(gene_expression.columns)):
plt.plot(alpha_range, average_coef[:,k],linewidth=0.5)
plt.xlabel('alpha')
plt.ylabel('Coefficients')
plt.show()_____no_output_____
</code>
|
{
"repository": "max-de-rooij/8dm50-machine-learning",
"path": "practicals/week_3.ipynb",
"matched_keywords": [
"gene expression",
"RNA",
"genomics",
"biomarkers"
],
"stars": null,
"size": 824676,
"hexsha": "cb4e570f0b0feacc4e0f5a9e4f8488e4b734c633",
"max_line_length": 729161,
"avg_line_length": 1120.4836956522,
"alphanum_fraction": 0.7365644205
}
|
# Notebook from qiringji/python-causality-handbook
Path: causal-inference-for-the-brave-and-true/03-Stats-Review-The-Most-Dangerous-Equation.ipynb
# 03 - Stats Review: The Most Dangerous Equation
In his famous article of 2007, Howard Wainer writes about very dangerous equations:
"Some equations are dangerous if you know them, and others are dangerous if you do not. The first category may pose danger because the secrets within its bounds open doors behind which lies terrible peril. The obvious winner in this is Einstein’s ionic equation \\(E = MC^2\\), for it provides a measure of the enormous energy hidden within ordinary matter. \[...\] Instead I am interested in equations that unleash their danger not when we know about them, but rather when we do not. Kept close at hand, these equations allow us to understand things clearly, but their absence leaves us dangerously ignorant."
The equation he talks about is Moivre’s equation:
$
SE = \dfrac{\sigma}{\sqrt{n}}
$
where \\(SE\\) is the standard error of the mean, \\(\sigma\\) is the standard deviation and \\(n\\) is the sample size. Sounds like a piece of math the brave and true should master, so let's get to it.
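Before turning to the education data, a quick simulation shows the equation at work: as the sample size grows, the spread of sample means shrinks like $\sigma/\sqrt{n}$. This is a minimal sketch; the distribution and the number of simulated studies are arbitrary choices.
<code>
import numpy as np

np.random.seed(0)
sigma = 10  # true standard deviation (arbitrary choice)
for n in [10, 100, 1000, 10000]:
    # 2000 simulated studies, each with n observations
    sample_means = np.random.normal(0, sigma, size=(2000, n)).mean(axis=1)
    print(f"n={n:>6}  empirical SE={sample_means.std():.3f}  "
          f"Moivre SE={sigma/np.sqrt(n):.3f}")
</code>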
To see why not knowing this equation is very dangerous, let's take a look at some education data. I've compiled data on ENEM scores (Brazilian standardised high school scores, similar to SAT) from different schools for a period of 3 years. I also did some cleaning on the data to keep only the information relevant to us. The original data can be downloaded in the [Inep website](http://portal.inep.gov.br/web/guest/microdados#).
If we look at the top performing schools, something catches the eye: those schools have a fairly small number of students. _____no_output_____
<code>
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from scipy import stats
import seaborn as sns
from matplotlib import pyplot as plt
from matplotlib import style
style.use("fivethirtyeight")
df = pd.read_csv("./data/enem_scores.csv")
df.sort_values(by="avg_score", ascending=False).head(10)_____no_output_____
</code>
Looking at it from another angle, we can separate only the 1% top schools and study them. What are they like? Perhaps we can learn something from the best and replicate it elsewhere. And sure enough, if we look at the top 1% schools, we figure out they have, on average, fewer students._____no_output_____
<code>
plot_data = (df
.assign(top_school = df["avg_score"] >= np.quantile(df["avg_score"], .99))
[["top_school", "number_of_students"]]
.query(f"number_of_students<{np.quantile(df['number_of_students'], .98)}")) # remove outliers
plt.figure(figsize=(6,6))
sns.boxplot(x="top_school", y="number_of_students", data=plot_data)
plt.title("Number of Students of 1% Top Schools (Right)");_____no_output_____
</code>
One natural conclusion that follows is that small schools lead to higher academic performance. This makes intuitive sense, since we believe that less students per teacher allows the teacher to give focused attention to each student. But what does this have to do with Moivre’s equation? And why is it dangerous?
Well, it becomes dangerous once people start to make important and expensive decisions based on this information. In his article, Howard continues:
"In the 1990s, it became popular to champion reductions in the size of schools. Numerous philanthropic organisations and government agencies funded the division of larger schools based on the fact that students at small schools are over represented in groups with high test scores."
What people forgot to do was to look also at the bottom 1% of schools. If we do that, lo and behold! They also have very few students!_____no_output_____
<code>
q_99 = np.quantile(df["avg_score"], .99)
q_01 = np.quantile(df["avg_score"], .01)
plot_data = (df
.sample(10000)
.assign(Group = lambda d: np.select([d["avg_score"] > q_99, d["avg_score"] < q_01],
["Top", "Bottom"], "Middle")))
plt.figure(figsize=(10,5))
sns.scatterplot(y="avg_score", x="number_of_students", hue="Group", data=plot_data)
plt.title("ENEM Score by Number of Students in the School");_____no_output_____
</code>
What we are seeing above is exactly what is expected according to Moivre’s equation. As the number of students grows, the average score becomes more and more precise. Schools with very few samples can have very high and very low scores simply due to chance. This is less likely to occur with large schools. Moivre’s equation talks about a fundamental fact about the reality of information and records in the form of data: it is always imprecise. The question then becomes how imprecise.
Statistics is the science that deals with these imprecisions so they don't catch us off-guard. As Taleb puts it in his book, Fooled by Randomness:
> Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance.
One way to quantify our uncertainty is the **variance of our estimates**. Variance tells us how much observations deviate from their central and most probable value. As indicated by Moivre’s equation, this uncertainty shrinks as the amount of data we observe increases. This makes sense, right? If we see lots and lots of students performing excellently at a school, we can be more confident that this is indeed a good school. However, if we see a school with only 10 students and 8 of them perform well, we need to be more suspicious. It could be that, by chance, that school got some above average students.
The beautiful triangular plot we see above tells exactly this story. It shows us how our estimates of the school performance have a huge variance when the sample sizes are small. It also shows that variance shrinks as the sample size increases. This is true for the average score in a school, but it is also true about any summary statistics that we have, including the ATE we so often want to estimate.
## The Standard Error of Our Estimates
Since this is just a review on statistics, I'll take the liberty to go a bit faster now. If you are not familiar with distributions, variance and standard errors, please, do read on, but keep in mind that you might need some additional resources. I suggest you google any MIT course on introduction to statistics. They are usually quite good.
In the previous section, we estimated the average treatment effect \\(E[Y_1-Y_0]\\) as the difference in the means between the treated and the untreated \\(E[Y|T=1]-E[Y|T=0]\\). As our motivating example, we figured out the \\(ATE\\) for online classes. We also saw that it was a negative impact, that is, online classes made students perform about 5 points worse than the students with face to face classes. Now, we get to see if this impact is statistically significant.
To do so, we need to estimate the \\(SE\\). We already have \\(n\\), our sample size. To get the estimate for the standard deviation we can do the following
$
\hat{\sigma}=\sqrt{\frac{1}{N-1}\sum_{i=1}^N (x_i-\bar{x})^2}
$
where \\(\bar{x}\\) is the mean of \\(x\\). Fortunately for us, most programming software already implements this. In Pandas, we can use the method [std](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.std.html)._____no_output_____
<code>
data = pd.read_csv("./data/online_classroom.csv")
online = data.query("format_ol==1")["falsexam"]
face_to_face = data.query("format_ol==0 & format_blended==0")["falsexam"]
def se(y: pd.Series):
return y.std() / np.sqrt(len(y))
print("SE for Online:", se(online))
print("SE for Face to Face:", se(face_to_face))SE for Online: 1.5371593973041635
SE for Face to Face: 0.8723511456319106
</code>
## Confidence Intervals
The standard error of our estimate is a measure of confidence. To understand exactly what it means, we need to go into turbulent and polemic statistical waters. For one view of statistics, the frequentist view, we would say that the data we have is nothing more than a manifestation of a true data generating process. This process is abstract and ideal. It is governed by true parameters that are unchanging but also unknown to us. In the context of the students test, if we could run multiple experiments and collect multiple datasets, all would resemble the true underlying data generating process, but wouldn't be exactly like it. This is very much like Plato's writing on the Forms:
> Each [of the essential forms] manifests itself in a great variety of combinations, with actions, with material things, and with one another, and each seems to be many
To better grasp this, let's suppose we have a true abstract distribution of students' test scores. This is a normal distribution with true mean of 74 and true standard deviation of 2. From this distribution, we can run 10000 experiments. On each one, we collect 500 samples. Some experiment data will have a mean lower than the true one, some will be higher. If we plot them in a histogram, we can see that the means of the experiments are distributed around the true mean._____no_output_____
<code>
true_std = 2
true_mean = 74
n = 500
def run_experiment():
return np.random.normal(true_mean,true_std, 500)
np.random.seed(42)
plt.figure(figsize=(8,5))
freq, bins, img = plt.hist([run_experiment().mean() for _ in range(10000)], bins=40, label="Experiment Means")
plt.vlines(true_mean, ymin=0, ymax=freq.max(), linestyles="dashed", label="True Mean", color="orange")
plt.legend();
_____no_output_____
</code>
Notice that we are talking about the mean of means here. So, by chance, we could have an experiment where the mean is somewhat below or above the true mean. This is to say that we can never be sure that the mean of our experiment matches the true platonic and ideal mean. However, **with the standard error, we can create an interval that will contain the true mean 95% of the time**.
In real life, we don't have the luxury of simulating the same experiment with multiple datasets. We often only have one. But we can draw on the intuition above to construct what we call **confidence intervals**. Confidence intervals come with a probability attached to them. The most common one is 95%. This probability tells us how many of the hypothetical confidence intervals we would build from different studies contain the true mean. For example, the 95% confidence intervals computed from many similar studies would contain the true mean 95% of the time.
To calculate the confidence interval, we use what is called the **central limit theorem**. This theorem states that **means of experiments are normally distributed**. From statistical theory, we know that 95% of the mass of a normal distribution is between 2 standard deviations above and below the mean. Technically, 1.96, but 2 is close enough.

The Standard Error of the mean serves as our estimate of the distribution of the experiment means. So, if we multiply it by 2 and add and subtract it from the mean of one of our experiments, we will construct a 95% confidence interval for the true mean._____no_output_____
<code>
np.random.seed(321)
exp_data = run_experiment()
exp_se = exp_data.std() / np.sqrt(len(exp_data))
exp_mu = exp_data.mean()
ci = (exp_mu - 2 * exp_se, exp_mu + 2 * exp_se)
print(ci)(73.82718114045632, 74.17341543460314)
x = np.linspace(exp_mu - 4*exp_se, exp_mu + 4*exp_se, 100)
y = stats.norm.pdf(x, exp_mu, exp_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=1)
plt.vlines(ci[0], ymin=0, ymax=1, label="95% CI")
plt.legend()
plt.show()_____no_output_____
</code>
Of course, we don't need to restrict ourselves to the 95% confidence interval. We could generate the 99% interval by finding what we need to multiply the standard deviation by so the interval contains 99% of the mass of a normal distribution.
The function `ppf` in python gives us the inverse of the CDF. So, `ppf(0.5)` will return 0.0, saying that 50% of the mass of the standard normal distribution is below 0.0. By the same token, if we plug 99.5%, we will have the value `z`, such that 99.5% of the distribution mass falls below this value. In other words, 0.5% of the mass falls above this value. Instead of multiplying the standard error by 2 like we did to find the 95% CI, we will multiply it by `z`, which will result in the 99% CI._____no_output_____
<code>
from scipy import stats
z = stats.norm.ppf(.995)
print(z)
ci = (exp_mu - z * exp_se, exp_mu + z * exp_se)
ci2.5758293035489004
x = np.linspace(exp_mu - 4*exp_se, exp_mu + 4*exp_se, 100)
y = stats.norm.pdf(x, exp_mu, exp_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=1)
plt.vlines(ci[0], ymin=0, ymax=1, label="99% CI")
plt.legend()
plt.show()_____no_output_____
</code>
Back to our classroom experiment, we can construct the confidence interval for the mean exam score for both the online and face to face students' groups_____no_output_____
<code>
def ci(y: pd.Series):
return (y.mean() - 2 * se(y), y.mean() + 2 * se(y))
print("95% CI for Online:", ci(online))
print("95% for Face to Face:", ci(face_to_face))95% CI for Online: (70.56094429049804, 76.7095818797147)
95% for Face to Face: (76.80278229206951, 80.29218687459715)
</code>
What we can see is that the 95% CIs of the two groups don't overlap. The lower end of the CI for Face to Face class is above the upper end of the CI for online classes. This is evidence that our result is not by chance, and that the true mean for students in face to face classes is higher than the true mean for students in online classes. In other words, there is a significant causal decrease in academic performance when switching from face to face to online classes.
As a recap, confidence intervals are a way to place uncertainty around our estimates. The smaller the sample size, the larger the standard error and the wider the confidence interval. Finally, you should always be suspicious of measurements without any uncertainty metric attached to them. Since they are super easy to compute, lack of confidence intervals signals either some bad intentions or simply lack of knowledge, which is equally concerning.

One final word of caution here. Confidence intervals are trickier to interpret than at first glance. For instance, I **shouldn't** say that this particular 95% confidence interval contains the true population mean with 95% chance. That's because in frequentist statistics, the one that uses confidence intervals, the population mean is regarded as a true population constant. So it either is or isn't in our particular confidence interval. In other words, our particular confidence interval either contains or doesn't contain the true mean. If it does, the chance of containing it would be 100%, not 95%. If it doesn't, the chance would be 0%. Rather, in confidence intervals, the 95% refers to the frequency that such confidence intervals, computed in many many studies, contain the true mean. 95% is our confidence in the algorithm used to compute the 95% CI, not on the particular interval itself.
Now, having said that, as an Economist (statisticians, please look away now), I think this purism is not very useful. In practice, you will see people saying that the particular confidence interval contains the true mean 95% of the time. Although wrong, this is not very harmful, as it still places a precise degree of uncertainty in our estimates. Moreover, if we switch to Bayesian statistics and use probable intervals instead of confidence intervals, we would be able to say that the interval contains the distribution mean 95% of the time. Also, from what I've seen in practice, with decent sample sizes, bayesian probability intervals are more similar to confidence intervals than both bayesian and frequentists would like to admit. So, if my word counts for anything, feel free to say whatever you want about your confidence interval. I don't care if you say they contain the true mean 95% of the time. Just, please, never forget to place them around your estimates, otherwise you will look silly.
## Hypothesis Testing
Another way to incorporate uncertainty is to state a hypothesis test: is the difference in means statistically different from zero (or any other value)? To do so, we will recall that the sum or difference of 2 normal distributions is also a normal distribution. The resulting mean will be the sum or difference between the two distributions, while the variance will always be the sum of the variance:
$
N(\mu_1, \sigma_1^2) - N(\mu_2, \sigma_2^2) = N(\mu_1 - \mu_2, \sigma_1^2 + \sigma_2^2)
$
$
N(\mu_1, \sigma_1^2) + N(\mu_2, \sigma_2^2) = N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)
$
If you don't recall, its OK. We can always use code and simulated data to check:_____no_output_____
<code>
np.random.seed(123)
n1 = np.random.normal(4, 3, 30000)
n2 = np.random.normal(1, 4, 30000)
n_diff = n2 - n1
sns.distplot(n1, hist=False, label="N(4,3)")
sns.distplot(n2, hist=False, label="N(1,4)")
sns.distplot(n_diff, hist=False, label=f"N(4,3) - N(1,4) = N(-1, 5)")
plt.show()_____no_output_____
</code>
If we take the distribution of the means of our 2 groups and subtract one from the other, we will have a third distribution. The mean of this final distribution will be the difference in the means, and the standard deviation of this distribution will be the square root of the sum of the variances of the two mean distributions (that is, the squared standard errors).
$
\mu_{diff} = \mu_1 - \mu_2
$
$
SE_{diff} = \sqrt{SE_1^2 + SE_2^2} = \sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}
$
Let's return to our classroom example. We will construct this distribution of the difference. Of course, once we have it, building the 95% CI is very easy._____no_output_____
<code>
diff_mu = online.mean() - face_to_face.mean()
diff_se = np.sqrt(face_to_face.var()/len(face_to_face) + online.var()/len(online))
ci = (diff_mu - 1.96*diff_se, diff_mu + 1.96*diff_se)
print(ci)(-8.376410208363385, -1.4480327880905248)
x = np.linspace(diff_mu - 4*diff_se, diff_mu + 4*diff_se, 100)
y = stats.norm.pdf(x, diff_mu, diff_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=.05)
plt.vlines(ci[0], ymin=0, ymax=.05, label="95% CI")
plt.legend()
plt.show()_____no_output_____
</code>
With this at hand, we can say that we are 95% confident that the true difference between the online and face to face groups falls between -8.37 and -1.44. We can also construct a **z statistic** by dividing the difference in means by the \\(SE\\) of the difference.
$
z = \dfrac{\mu_{diff} - H_{0}}{SE_{diff}} = \dfrac{(\mu_1 - \mu_2) - H_{0}}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}
$
Where \\(H_0\\) is the value which we want to test our difference against.
The z statistic is a measure of how extreme the observed difference is. To test our hypothesis that the difference in the means is statistically different from zero, we will use contradiction. We will assume that the opposite is true, that is, we will assume that the difference is zero. This is called a null hypothesis, or \\(H_0\\). Then, we will ask ourselves "is it likely that we would observe such a difference if the true difference were indeed zero?" In statistical math terms, we can translate this question to checking how far from zero is our z statistic.
Under \\(H_0\\), the z statistic follows a standard normal distribution. So, if the difference is indeed zero, we would see the z statistic within 2 standard deviations of the mean 95% of the time. The direct consequence of this is that if z falls above or below 2 standard deviations, we can reject the null hypothesis with 95% confidence.
Let's see how this looks like in our classroom example._____no_output_____
<code>
z = diff_mu / diff_se
print(z)-2.7792810791031224
x = np.linspace(-4,4,100)
y = stats.norm.pdf(x, 0, 1)
plt.plot(x, y, label="Standard Normal")
plt.vlines(z, ymin=0, ymax=.05, label="Z statistic", color="C1")
plt.legend()
plt.show()_____no_output_____
</code>
This looks like a pretty extreme value. Indeed, its absolute value is above 2, which means there is less than a 5% chance that we would see such an extreme value if there were no difference in the groups. This again leads us to conclude that switching from face to face to online classes causes a statistically significant drop in academic performance.
One final interesting thing about hypothesis tests is that it is less conservative than checking whether the 95% CIs from the treated and untreated groups overlap. In other words, even if the confidence intervals of the two groups overlap, it can still be the case that the result is statistically significant. For example, let's pretend that the face-to-face group has an average score of 74 and standard error of 7 and the online group has an average score of 71 with a standard error of 1. _____no_output_____
<code>
cont_mu, cont_se = (71, 1)
test_mu, test_se = (74, 7)
diff_mu = test_mu - cont_mu
diff_se = np.sqrt(cont_se + cont_se)
print("Control 95% CI:", (cont_mu-1.96*cont_se, cont_mu+1.96*cont_se))
print("Test 95% CI:", (test_mu-1.96*test_se, test_mu+1.96*test_se))
print("Diff 95% CI:", (diff_mu-1.96*diff_se, diff_mu+1.96*diff_se))Control 95% CI: (69.04, 72.96)
Test 95% CI: (60.28, 87.72)
Diff 95% CI: (0.22814141774873375, 5.771858582251266)
</code>
If we construct the confidence intervals for these groups, they overlap. The upper bound for the 95% CI of the online group is 72.96 and the lower bound for the face-to-face group is 60.28. However, once we compute the 95% confidence interval for the difference between the groups, we can see that it does not contain zero. In summary, even though the individual confidence intervals overlap, the difference can still be statistically different from zero.
## P-values
I've said previously that there is less than a 5% chance that we would observe such an extreme value if the difference between online and face to face groups were actually zero. But can we estimate exactly what that chance is? How likely are we to observe such an extreme value? Enter p-values!
Just like with confidence intervals (and most frequentist statistics, as a matter of fact) the true definition of p-values can be very confusing. So, to not take any risks, I'll copy the definition from Wikipedia: "the p-value is the probability of obtaining test results at least as extreme as the results actually observed during the test, assuming that the null hypothesis is correct".
To put it more succinctly, the p-value is the probability of seeing such data, given that the null-hypothesis is true. It measures how unlikely it is that you are seeing a measurement if the null-hypothesis is true. Naturally, this often gets confused with the probability of the null-hypothesis being true. Note the difference here. The p-value is NOT \\(P(H_0|data)\\), but rather \\(P(data|H_0)\\).
But don't let this complexity fool you. In practical terms, they are pretty straightforward to use.

To get the p-value, we need to compute the area under the standard normal distribution before or after the z statistic. Fortunately, we have a computer to do this calculation for us. We can simply plug the z statistic in the CDF of the standard normal distribution._____no_output_____
<code>
print("P-value:", stats.norm.cdf(z))P-value: 0.0027239680835563383
</code>
This means that there is only a 0.2% chance of observing this extreme z statistic if the difference was zero. Notice how the p-value is interesting because it avoids us having to specify a confidence level, like 95% or 99%. But, if we wish to report one, from the p-value, we know exactly at which confidence our test will pass or fail. For instance, with a p-value of 0.0027, we know that we have significance up to the 0.2% level. So, while the 95% CI and the 99% CI for the difference will neither contain zero, the 99.9% CI will._____no_output_____
<code>
diff_mu = online.mean() - face_to_face.mean()
diff_se = np.sqrt(face_to_face.var()/len(face_to_face) + online.var()/len(online))
print("95% CI:", (diff_mu - stats.norm.ppf(.975)*diff_se, diff_mu + stats.norm.ppf(.975)*diff_se))
print("99% CI:", (diff_mu - stats.norm.ppf(.995)*diff_se, diff_mu + stats.norm.ppf(.995)*diff_se))
print("99.9% CI:", (diff_mu - stats.norm.ppf(.9995)*diff_se, diff_mu + stats.norm.ppf(.9995)*diff_se))95% CI: (-8.376346553082909, -1.4480964433710017)
99% CI: (-9.46485353526404, -0.3595894611898709)
99.9% CI: (-10.728040658245558, 0.9035976617916459)
</code>
## Key Ideas
We've seen how important it is to know Moivre’s equation and we used it to place a degree of certainty around our estimates. Namely, we figured out that the online classes cause a decrease in academic performance compared to face to face classes. We also saw that this was a statistically significant result. We did it by comparing the Confidence Intervals of the means for the 2 groups, by looking at the confidence interval for the difference, by doing a hypothesis test and by looking at the p-value. Let's wrap everything up in a single function that does an A/B test comparison like the one we did above_____no_output_____
<code>
def AB_test(test: pd.Series, control: pd.Series, confidence=0.95, h0=0):
mu1, mu2 = test.mean(), control.mean()
se1, se2 = test.std() / np.sqrt(len(test)), control.std() / np.sqrt(len(control))
diff = mu1 - mu2
se_diff = np.sqrt(test.var()/len(test) + control.var()/len(control))
z_stats = (diff-h0)/se_diff
p_value = stats.norm.cdf(z_stats)
def critial(se): return -se*stats.norm.ppf((1 - confidence)/2)
print(f"Test {confidence*100}% CI: {mu1} +- {critial(se1)}")
print(f"Control {confidence*100}% CI: {mu2} +- {critial(se2)}")
print(f"Test-Control {confidence*100}% CI: {diff} +- {critial(se_diff)}")
print(f"Z Statistic {z_stats}")
print(f"P-Value {p_value}")
AB_test(online, face_to_face)Test 95.0% CI: 73.63526308510637 +- 3.0127770572134565
Control 95.0% CI: 78.54748458333333 +- 1.7097768273108005
Test-Control 95.0% CI: -4.912221498226955 +- 3.4641250548559537
Z Statistic -2.7792810791031224
P-Value 0.0027239680835563383
</code>
Since our function is generic enough, we can test other null hypotheses. For instance, we can try to reject the hypothesis that the difference between online and face to face class performance is -1. With the results we get, we can say with 95% confidence that the true difference is below -1 (that is, the drop is larger than one point), but we can't say it with 99% confidence:_____no_output_____
<code>
AB_test(online, face_to_face, h0=-1)Test 95.0% CI: 73.63526308510637 +- 3.0127770572134565
Control 95.0% CI: 78.54748458333333 +- 1.7097768273108005
Test-Control 95.0% CI: -4.912221498226955 +- 3.4641250548559537
Z Statistic -2.2134920404560883
P-Value 0.013431870694630114
</code>
## References
I like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'll also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
My final reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
In this particular section, I've also referenced The [Most Dangerous Equation](https://www.researchgate.net/publication/255612702_The_Most_Dangerous_Equation), by Howard Wainer.
Finally, if you are curious about the correct interpretation of the statistical concepts we've discussed here, I recommend reading the paper by Greenland et al, 2016: [Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations](https://link.springer.com/content/pdf/10.1007/s10654-016-0149-3.pdf).

## Contribute
Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based in Python. Its goal is to be accessible monetarily and intellectually.
If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers)._____no_output_____
|
{
"repository": "qiringji/python-causality-handbook",
"path": "causal-inference-for-the-brave-and-true/03-Stats-Review-The-Most-Dangerous-Equation.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 356812,
"hexsha": "cb4e71efff440725ba72bec3c5d88c7a103ad09e",
"max_line_length": 153780,
"avg_line_length": 388.6840958606,
"alphanum_fraction": 0.9266588568
}
|
# Notebook from gaby-chu/info3350-s22
Path: lectures/lec-07-vectors-distances-regression.ipynb
# INFO 3350/6350
## Lecture 07: Vectorization, distance metrics, and regression
## To do
* Read HDA ch. 5 and Grimmer and Stewart for Monday (a lot of reading)
* HW3 (gender and sentiment; dictionary methods) due by Thursday night at 11:59.
* Extra credit for good, consistent answers on Ed
* Study groups are great for homeworks.
* Questions?
## Definitions
* What is a **vector**?
* An ordered collection of numbers that locate a point in space relative to a shared reference point (called the *origin*).
* We can also think of vectors as representing the quantified *features* of an object.
* Vectors are usually written as *row matrices*, or just as lists: $vec = [1.0, 0.5, 3.0, 1.2]$
* Vectors have as many *dimensions* as there are features of the object to represent.
* The number of features to represent is a choice made by the researcher. There is no correct choice, though some choices are better than others for a given purpose.
* What is **vectorization**?
* The process of transforming an object into its vector representation, typically by measuring some of the object's properties.
## Why would we want to do this?
One goal of humanistic inquiry and of scientific research is to compare objects, so that we can gather them into types and compare any one object to others that we observe. Think of biological species or literary genres or historical eras. But how can we measure the difference or similarity between objects that are, after all, always necessarily individual and unique?
* Measuring the *properties* of objects lets us compare those objects to one another.
* But ... *which* properties?
* Example: We counted words by type to compare gender and sentiment in novels.
* Establishing a vector representation allows us to define a **distance metric** between objects that aren't straightforwardly spatial.
* "Distance" is a metaphor. Ditto "similarity."
* Nothing is, in itself, like or unlike anything else.
* We sometimes seek to assert that objects are similar by erasing aspects of their particularity.
* Measuring similarity and difference are (always and only) interpretive interventions.
## A spatial example
Consider this map of central campus:

**How far apart are Gates Hall (purple star) and the clock tower (orange star)?**
What do we need to know or define in order to answer this question?
* Where is each building in physical space.
* Latitude/longitude; meters north/south and east/west of the book store; etc.
* How do we want to measure the distance between them (walking, driving, flying, tunneling, ...). Minutes or miles?
Normal, boring answer: about 0.4 miles on foot via Campus Rd and Ho Plaza, or a bit less if you cut some corners, or less than 0.3 miles if you can fly.
| Clock tower | Gates Hall |
| --- | --- |
|  |  |
More interesting version: How far apart are these buildings conceptually? Architecturally? Historically?
* What are the features and metrics you would use to answer this question?
* This is a lot more like the problem of comparing texts.
## A textual example_____no_output_____
<code>
text = '''\
My cat likes water.
The dog eats food.
The dog and the cat play together.
A dog and a cat meet another dog and cat.
The end.'''
# Print with sentence numbers
for line in enumerate(text.split('\n')):
print(line)(0, 'My cat likes water.')
(1, 'The dog eats food.')
(2, 'The dog and the cat play together.')
(3, 'A dog and a cat meet another dog and cat.')
(4, 'The end.')
</code>
Let us stipulate that we want to compare these five sentences according to their "*dogness*" and "*catness*." We care about those two aspects alone, nothing else.
Let's develop some intuitions here:
* Sentences 0 and 1 are as far apart as can be: 0 is about cats, 1 is about dogs.
* Sentence 2 lies between 0 and 1. It contains a mix of dogness and catness.
* Sentence 3 is kind of like sentence 2, but it has twice as much of both dogness and catness.
* How different are sentences 2 and 3? (There's no objectively correct answer.)
* Sentence 4 is a zero point. It has no dogness or catness.
### Count relevant words
||**cat**|**dog**|
|---|---|---|
|**sent**| | |
|0|1|0|
|1|0|1|
|2|1|1|
|3|2|2|
|4|0|0|
The **vector representation** of sentence 0 is `[1, 0]`. The vector representation of sentence 3 is `[2, 2]`. And so on ...
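A minimal sketch of how these count vectors could be built from the `text` variable defined in the cell above (only the two words we care about are counted):
<code>
# build the [catness, dogness] vector for each sentence in `text` (defined above)
for i, sentence in enumerate(text.split('\n')):
    tokens = [w.strip('.').lower() for w in sentence.split()]
    print(i, [tokens.count('cat'), tokens.count('dog')])
</code>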
### Visualize (scatter plot)
Sketch this by hand ...
### Distance measures
How far apart are sentences 0 and 1 (and all the rest)?
#### Manhattan distance
* Also called "city block" distance.
* Not much used, but easy to understand and to compute (which matters for very large data sets).
* Sum of the absolute difference in each dimension.
For **sentences 0 and 1**, the Manhattan distance = |1| + |-1| = 2.
#### Euclidean distance
* Straight-line or "as the crow flies" distance.
* Widely used in data science, but not always the best choice for textual data.
Recall the Pythagorean theorem for the hypotenuse of a right triangle: $a^2 = b^2 + c^2$ or $a = \sqrt{b^2 +c^2}$.
For **sentences 0 and 1**, the Euclidean distance = $\sqrt{1^2 + 1^2} = \sqrt{2} = 1.414$.
OK, but what about the Euclidean distance between **sentence 0 and sentence 3**? Well, that distance = $\sqrt{1^2 + 2^2} = \sqrt{5} = 2.24$.
And between **sentences 2 and 3** (both balanced 50:50 between dogs and cats)? That's 1.4 again, the same as the distance between sentences 0 and 1 (which, recall, are totally divergent in dog/cat content).
An obvious improvement in this case would be to **normalize word counts by document length**.
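The same kind of check for Euclidean distance (a sketch; note that sentences 2 and 3 come out exactly as far apart as sentences 0 and 1, which is what motivates the length normalization just mentioned):

```python
import math

# Count vectors: [cat, dog]
sents = {0: [1, 0], 1: [0, 1], 2: [1, 1], 3: [2, 2]}

def euclidean(u, v):
    # square root of the sum of squared differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

print(round(euclidean(sents[0], sents[1]), 3))  # 1.414
print(round(euclidean(sents[0], sents[3]), 3))  # 2.236
print(round(euclidean(sents[2], sents[3]), 3))  # 1.414, same as sentences 0 vs. 1
```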
#### Cosine distance
Maybe instead of distance, we could measure the difference in **direction** from the origin between points.
* **Sentences 0 and 1** are 90 degrees apart.
* **Sentences 2 and 3** are 0 degrees apart.
* **Sentences 0 and 1** are each 45 degrees away from **sentences 2 and 3**.
Now, recall the values of the **cosine** of an angle between 0 and 90 degrees. (Sketch by hand)
So, the cosines of the angles between sentences are:
sentences|angle|cosine
---|---|---
0 and 1|90|0
2 and 3|0|1
0 and 2|45|0.707
0 and 3|45|0.707
1 and 2|45|0.707
We could then transform these cosine **similarities** into **distances** by subtracting them from 1, so that the most *dissimilar* sentences (like 0 and 1) have the greatest distance between them.
The big advantage here is that we don't need to worry about getting length normalization right. Cosine distance is often a good choice for text similarity tasks.
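A hand-rolled version of the same calculation (a sketch; sentence 4 is left out because the zero vector has no direction, so its cosine is undefined):

```python
import math

# Count vectors: [cat, dog]
sents = {0: [1, 0], 1: [0, 1], 2: [1, 1], 3: [2, 2]}

def cosine_sim(u, v):
    # dot product divided by the product of the vector lengths
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

for i, j in [(0, 1), (2, 3), (0, 2)]:
    sim = cosine_sim(sents[i], sents[j])
    print(i, j, round(sim, 3), "-> distance:", round(1 - sim, 3))
```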
#### Higher dimensions
All of these metrics can be calculated in arbitrarily many dimensions. Which is good, because textual data is often very high-dimensional. Imagine counting the occurrences of each word type in a large corpus of novels or historical documents. That can easily mean tens of thousands of dimensions.
## In the real world
* There's nothing wrong with any of these vectorizations and distance metrics, exactly, but they're not state of the art.
* If you've done some recent NLP work, you'll know that, at the very least, you'd want to use static word embeddings in place of raw tokens.
* This allows you to capture the similarity of meaning between, e.g., "cat" and "kitten."
* If you were especially ambitious, you'd be looking at something like BERT or ELMo or GPT-2/3, etc.
    * These transformer-based methods allow for *contextual* embeddings, that is, they represent a word token differently depending on the context in which it appears, so that the representation of "bank" in "my money is in the bank" is different from the representation of "bank" in "we walked along the bank of the river."
* We'll touch on contextual embeddings near the end of the semester.
* And then you might want features that correspond to aspects of a text other than the specific words it contains.
* When was it written?
* By *whom* was it written?
* How long is it?
* In what style is it written?
* Who read it?
* How much did it cost?
* How many people read or reviewed it?
* What else did its readers also read?
* And so on ...
Here, though, we're trying to grasp the *idea* behind document similarity, on which all of these methods depend: transform text into a numeric representation of its features (often, a representation of its content or meaning), then quantify the difference or similarity between those numeric representations.
## In the problem set world
We'll dig into how, as a practical matter, we can vectorize texts and calculate distance metrics in this week's problem set.
We'll use `scikit-learn` to implement vectorization and distance metrics. The `scikit-learn` API almost always involves *three* steps:
1. Instantiate a learning object (such as a vectorizer, regressor, classifier, etc.). This is the object that will hold the parameters of your fitted model.
1. Call the instantiated learning object's `.fit()` method, passing in your data. This allows the model to learn the optimal parameters from your data.
1. Call the fitted model's `.transform()` or `.predict()` method, passing in either the same data from the `fit` step or new data. This step uses the fitted model to generate outputs given the input data you supply.
For example:_____no_output_____
<code>
from sklearn.feature_extraction.text import CountVectorizer
# get example text as one doc per line
docs = [sent for sent in text.split('\n')]
# instantiate vectorizer object
# note setup options
vectorizer = CountVectorizer(
vocabulary=['cat', 'dog']
)
# fit to data
vectorizer.fit(docs)
# transform docs to features
features = vectorizer.transform(docs)
# print output feature matrix
print(vectorizer.get_feature_names_out())
print(features.toarray())_____no_output_____# calculate distances
from sklearn.metrics.pairwise import euclidean_distances, cosine_distances, cosine_similarity
import numpy as np
print("Euclidean distances")
print(np.round(euclidean_distances(features),2))
print("\nCosine distances")
print(np.round(cosine_distances(features),2))
print("\nCosine **similarities**")
print(np.round(cosine_similarity(features),2))_____no_output_____# FYI, a heatmap vis
import seaborn as sns
print("Euclidean distances")
sns.heatmap(
euclidean_distances(features),
annot=True,
square=True
);_____no_output_____
</code>
## Regression
We are often interested in the relationships between measured properties of texts, or between a textual property and some other variable (year of publication, number of sales, and so on).
Maybe the most basic way to measure the relationship between two variables is to use **linear regression**. The idea is to calculate a straight line through your data such that the average distance between the observed data points and the line is as small as possible.
(Sketch what this looks like)
You can then calculate the **coefficient of determination**, written $r^2$ ("r squared"), which measures the fraction of the variation in the dependent (y) variable that is predictable from the independent (x) variable.
$r^2$ = 1 - (sum of squared residuals)/(total sum of squares), where the total sum of squares is the sum of squared deviations of the observed y values from their mean.
An $r^2$ value of 1 indicates perfect correlation between the variables; zero means no correlation.
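A minimal sketch of computing a line of best fit and $r^2$ with numpy (the data here are invented purely for illustration):

```python
import numpy as np

# toy data: y is roughly 2x plus a little noise
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.8, 8.2, 9.9])

# least-squares line of best fit: y ≈ slope * x + intercept
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

# r^2 = 1 - (sum of squared residuals) / (total sum of squares)
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(slope, 2), round(intercept, 2), round(r_squared, 3))
```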
* There's a *lot* more to this. We'll spend some time on it later in the semester.
* For now, focus on the fact that regression is a way to calculate a line of best fit through a data set.
* Notice that we could also try to find something like a "line of *worst* fit," which we could think of as the dividing line between two regions of feature space. This would be something like the line on which we are least likely to encounter any actual data points.
* Think about what use-value such a dividing line might have ..._____no_output_____
# Notebook from michele1783/ADM-HW2
Path: main.ipynb
<code>
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
import seaborn as sns
from datetime import datetime
from functools import reduce
from collections import Counter
import functions
from scipy.stats import ks_2samp
from scipy.stats import pearsonr
import statsmodels.api as sm
import statsmodels.formula.api as smf
pd.options.mode.chained_assignment = None_____no_output_____
</code>
# Load the dataset
We load the dataset and use the **parsedate** function to parse the timestamp columns into datetime format._____no_output_____
<code>
dataset = pd.read_csv('steam_reviews.csv',
index_col=0,
parse_dates=['timestamp_created', 'timestamp_updated', 'author.last_played'],
date_parser=functions.parsedate)C:\Users\Clara\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\lib\arraysetops.py:580: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
mask |= (ar1 == a)
dataset.head(20)_____no_output_____dataset.columns_____no_output_____dataset.shape_____no_output_____dataset.info()<class 'pandas.core.frame.DataFrame'>
Int64Index: 21747371 entries, 0 to 21747375
Data columns (total 22 columns):
# Column Dtype
--- ------ -----
0 app_id int64
1 app_name object
2 review_id int64
3 language object
4 review object
5 timestamp_created datetime64[ns]
6 timestamp_updated datetime64[ns]
7 recommended bool
8 votes_helpful int64
9 votes_funny int64
10 weighted_vote_score float64
11 comment_count int64
12 steam_purchase bool
13 received_for_free bool
14 written_during_early_access bool
15 author.steamid int64
16 author.num_games_owned int64
17 author.num_reviews int64
18 author.playtime_forever float64
19 author.playtime_last_two_weeks float64
20 author.playtime_at_review float64
21 author.last_played datetime64[ns]
dtypes: bool(4), datetime64[ns](3), float64(4), int64(8), object(3)
memory usage: 3.2+ GB
</code>
# RQ1_____no_output_____### Exploratory Data Analysis (EDA)
To better understand the dataset, we produce a set of plots and tables that summarize the reviews received by the applications on Steam._____no_output_____
<code>
dataset.describe()_____no_output_____
</code>
#### Most-reviewed applications:
To start our analysis we make a pie chart of the most-reviewed applications. We pick the thirty most-reviewed games and look at how the number of reviews is split among them. The percentages in the slices therefore refer not to the total number of reviews but to the sum of reviews written for these thirty most popular games. We chose thirty to keep the plot readable and because we are interested only in the most popular, most talked-about games._____no_output_____
<code>
a = pd.Series(dataset.groupby("app_name").app_id.count().sort_values(ascending=False).head(30))
plt.rcParams['figure.figsize'] = (10, 10)
plt.pie(a,
labels = a.index,
explode = [0.1 for value in range(0, a.index.nunique())],
shadow = True, autopct = '%.1f%%')
plt.title('Application name', fontsize = 20)
plt.axis('off')
plt.show()_____no_output_____
</code>
#### Correlation matrix:
Next we build a correlation matrix to check whether any variables are correlated with one another. _____no_output_____
<code>
fig, ax = plt.subplots(figsize=(13,13))
sns.heatmap(dataset.corr(), cbar=True, annot = True, cmap='BrBG', linewidths=.3,fmt='.1g')_____no_output_____
</code>
We notice no particular correlation between columns, except among the ones related to the time played by the author, so we examine those correlations in more detail to get clearer information about them. _____no_output_____
<code>
df = pd.DataFrame(dataset,columns=['author.playtime_forever','author.playtime_last_two_weeks',\
'author.playtime_at_review'])
corrMatrix = df.corr()
sns.heatmap(corrMatrix, annot=True)
plt.show()
_____no_output_____
</code>
#### Time and Language:
At this point we want to extract some information about the language of the reviews and the time of day when they were written. We divide the day into three parts: morning (8am-2pm), afternoon (2pm-10pm) and night (10pm-8am).
For each part of the day we group the reviews by language, count them, and keep the ten most popular languages.
In the final barplot, each popular language therefore shows the number of reviews written in each part of the day. We also build a table to present the numbers more clearly. _____no_output_____
<code>
arr_1 = dataset['timestamp_created'].dt.time_____no_output_____time_1 = [datetime.strptime('08:00:00', '%H:%M:%S').time(),
datetime.strptime('13:59:59', '%H:%M:%S').time()]
index_1 = [x for x in arr_1.index if (time_1[0] <= arr_1[x] <= time_1[1])]_____no_output_____time_2 = [datetime.strptime('14:00:00', '%H:%M:%S').time(),
datetime.strptime('21:59:59', '%H:%M:%S').time()]
index_2 = [x for x in arr_1.index if (time_2[0] <= arr_1[x] <= time_2[1])]_____no_output_____time_3 = [datetime.strptime('22:00:00', '%H:%M:%S').time(),
datetime.strptime('23:59:59', '%H:%M:%S').time(),
datetime.strptime('00:00:00', '%H:%M:%S').time(),
datetime.strptime('07:59:59', '%H:%M:%S').time()]
index_3 = [x for x in arr_1.index
if ((time_3[0] <= arr_1[x] <= time_3[1]) or
(time_3[2] <= arr_1[x] <= time_3[3]))]_____no_output_____# counting occurrences in the languages
mat1 = Counter((dataset['language'][index_1]).tolist())
pom1 = Counter((dataset['language'][index_2]).tolist())
not1 = Counter((dataset['language'][index_3]).tolist())_____no_output_____# sorting the occurrences
mat2 = {k: v for k, v in sorted(mat1.items(), key=lambda item: item[1], reverse=True)}
pom2 = {k: v for k, v in sorted(pom1.items(), key=lambda item: item[1], reverse=True)}
not2 = {k: v for k, v in sorted(not1.items(), key=lambda item: item[1], reverse=True)}_____no_output_____# taking only the first 10 languages, that happens to be the same for every time slot
mattina = list(mat2.items())[:10]
pomeriggio = list(pom2.items())[:10]
notte = list(not2.items())[:10]_____no_output_____# creating an empty dataframe with timeslots as cols and languages as indexes
df = pd.DataFrame(index=list(mat2.keys())[:10], columns=['8am-2pm', '2pm-10pm', '10pm-8am'])_____no_output_____# adding the values in the dataframe
for (couple1, couple2, couple3) in zip(mattina, pomeriggio, notte):
df['8am-2pm'][couple1[0]] = couple1[1]
df['2pm-10pm'][couple2[0]] = couple2[1]
df['10pm-8am'][couple3[0]] = couple3[1]_____no_output_____df.index.name = 'language'
df_____no_output_____ax = df.plot(y=["8am-2pm", "2pm-10pm", "10pm-8am"], kind="bar")
ax.set_yscale('log')
ax.set_xlabel('languages')
ax.set_ylabel("number reviews")_____no_output_____
</code>
In this grouped barplot we can see that the majority of the reviews are written during the afternoon, while far fewer people write on Steam during the night. As expected, the most used language is English._____no_output_____#### Viral Comments:
In this table we look at the ten reviews that received the most comments; it is interesting to inspect them to understand which kinds of comments become popular on Steam. _____no_output_____
<code>
dataset_7 = dataset.sort_values(by=['comment_count'], ascending = False)
dataset_7 = dataset_7.reset_index()_____no_output_____dataset_7[["author.steamid", "language", "app_name", "review", "comment_count"]].head(10)_____no_output_____
</code>
Unfortunately, the majority of them are not written in English!_____no_output_____#### Most-played games:
Our dataset has a column storing the time each author has played a given game. We use it to explore which games are most played in terms of hours, picking the top 20 as a good trade-off between a readable plot and a meaningful number of games. _____no_output_____
<code>
#dataset_8 = dataset_8[["author.steamid", "author.playtime_forever","app_name"]]
dataset_8 = pd.Series(dataset.groupby("app_name")["author.playtime_forever"].sum().sort_values(ascending=False))
ore_di_gioco = dataset_8.values
giochi = dataset_8.index_____no_output_____plt.figure(figsize = ((15, 8)))
sns.barplot(x = ore_di_gioco[:20],
y = giochi[:20], orient = 'h')
plt.title('TOP 20 games more played in terms of hours', size = 20)
plt.ylabel('Games', size = 14, style = 'italic')
plt.xlabel('Number of hours', size = 14, style = 'italic')
#plt.xscale('log')
plt.xticks(np.arange(1000000000,60000000000,2000000000))
plt.show()_____no_output_____
</code>
This barplot confirms our expectation: the most-played games are often also the most-reviewed games that appeared in the pie chart._____no_output_____#### Active players:
To conclude this first analysis, we look at which players are most useful to Steam: we select the ten authors who have written the largest number of helpful and funny reviews. _____no_output_____
<code>
dataset_9 = pd.Series(dataset[(dataset.votes_helpful > 0)].groupby("author.steamid").votes_helpful.count().sort_values(ascending=False))
dataset_10 = pd.Series(dataset[(dataset.votes_funny > 0)].groupby("author.steamid").votes_funny.count().sort_values(ascending=False))_____no_output_____pd.concat([dataset_9[:11], dataset_10[:11]], axis=1).reset_index().fillna(0).sort_values(by=['votes_helpful'],ascending=False).reset_index(drop = True)_____no_output_____
</code>
It's interesting to see that the authors who have written some funny reviews have also written helpful reviews. _____no_output_____#### Languages and subplots_____no_output_____
<code>
print("The total number of languages used to write reviews is ",'\033[1m' +str(len(dataset["language"].unique())) +'\033[0m')The total number of languages used to write reviews is [1m28[0m
</code>
Using subplots we can visualize all the languages present in the dataset and count the number of reviews in each. Note that the two subplots have different y-scales!_____no_output_____
<code>
fig=plt.figure(figsize=(25,18))
ax1=fig.add_subplot(2,1,1)
dataset['language'].value_counts().head(10).plot.bar(figsize = (18, 10),title='Top 10 Languages',xlabel='Language',ylabel='Number of Reviews', ax = ax1,rot=0, logy = True, color = "orange")
ax2=fig.add_subplot(2,1,2)
dataset['language'].value_counts().iloc[-18:].plot.bar(figsize = (18, 10),title='Other 18 Languages',xlabel='Language',ylabel='Number of Reviews', ax = ax2,rot=0, color = "orchid")
fig.tight_layout();
#dataset['language'].value_counts().plot.bar(figsize = (18, 7),title='Top Languages',xlabel='Language',ylabel='Number of Reviews', ax = ax1)_____no_output_____
</code>
# RQ2_____no_output_____### Plot the number of reviews for each application in descending order._____no_output_____We make a barplot counting the number of reviews for the first 50 applications. We chose 50 because it seemed a good trade-off between a clean representation and covering the most-reviewed games._____no_output_____
<code>
number_review = dataset.groupby("app_name").review_id.count().sort_values(ascending=False)
number_review[0:50].plot.bar(figsize = (18, 7), title=' Number of review', xlabel='Name of application',
ylabel='Number of review', color = "coral", logy = True)
plt.show()
_____no_output_____# for a visual table to have an idea of how many reviews for the first 50 apps
number_review.reset_index().head(50)_____no_output_____
</code>
### What applications have the best Weighted Vote Score?_____no_output_____Each review has a **Weighted Vote Score** that represents its helpfulness. To obtain a weighted vote score for each game we compute the mean of the scores of all its reviews. This gives an idea of which applications received the most helpful reviews. We then keep only average scores above 0.3, which we consider a reasonable threshold for the best scores. _____no_output_____
<code>
medie = pd.DataFrame(dataset.groupby("app_name").weighted_vote_score.mean().sort_values(ascending=False))
medie = medie[medie.values > 0.3]
medie_____no_output_____
</code>
### Which applications have the most and the least recommendations_____no_output_____For this point we consider the percentage values to be the relevant measure: an app counts as most recommended if it has the highest percentage of positive (recommended) reviews._____no_output_____
<code>
#Most
# recommended. group_by app_name. count all recommended,
# count True recommended and False recommended in separate cols, and percentage of these.
# taking only the useful cols
new_data = dataset[['app_name', 'recommended']]
# count_rec col counts all recommended respectively False and True of an application
new_data['count_rec'] = new_data.groupby(['app_name', 'recommended'], sort=False)['recommended'].transform('count')_____no_output_____# all_rec col counts all recommedations, False and True together
new_data['all_rec'] = new_data.groupby("app_name", sort=False)['count_rec'].transform('count')_____no_output_____# final dataframe which contains only the True recommendations
# this means that we can calculate the most and the least recommended apps
final = new_data[(new_data['recommended']==True)].drop_duplicates()_____no_output_____# perc_rec calculates the percentage recommendation
final['perc_rec'] = (final['count_rec']/final['all_rec'])*100
# drop not useful cols
final.drop(['recommended', 'count_rec'], axis=1, inplace=True)_____no_output_____# most recommended, first 50
final.sort_values(by='perc_rec', ascending=False).reset_index(drop=True).head(50)_____no_output_____
</code>
We can see that the most recommended apps are not the ones with the most reviews._____no_output_____
<code>
# least recommended, first 50
final.sort_values(by='perc_rec', ascending=True).reset_index(drop=True).head(50)_____no_output_____
</code>
### How many of these applications were purchased, and how many were given for free?_____no_output__________no_output_____
<code>
# steam_purchase
# taking only the useful cols
new_data1 = dataset[['app_name', 'steam_purchase']]_____no_output_____# same modus operandi of counting recommendation
new_data1['count_pur'] = new_data1.groupby(['app_name', 'steam_purchase'], sort=False)['steam_purchase'].transform('count')_____no_output_____# taking only the ones purchased
final1 = new_data1[(new_data1['steam_purchase']==True)].drop_duplicates()_____no_output_____# drop not useful col
final1.drop(['steam_purchase'], axis=1, inplace=True)_____no_output_____# received_for_free
# taking only the useful cols
new_data2 = dataset[['app_name', 'received_for_free']]_____no_output_____# same modus operandi
new_data2['count_free'] = new_data2.groupby(['app_name', 'received_for_free'], sort=False)['received_for_free'].transform('count')_____no_output_____# take only the ones received_for_free
final2 = new_data2[(new_data2['received_for_free']==True)].drop_duplicates()_____no_output_____# drop not useful col
final2.drop(['received_for_free'], axis=1, inplace=True)_____no_output_____# now it's time to calculate the final result, by doing a merge of the final dataframes
dfs = [final, final1, final2]
final_df = reduce(lambda left,right: pd.merge(left,right,on=['app_name'],
how='outer'), dfs)_____no_output_____# taking the first 40 apps that are most recommended and displaying how many times were
# purchased and how many times were received for free
final_df.sort_values(by='perc_rec', ascending=False).head(40)_____no_output_____# least recommended
final_df.sort_values(by='perc_rec').head(40)_____no_output_____
</code>
# RQ 3_____no_output_____### What is the most common time that authors review an application? For example, authors usually write a review at 17:44._____no_output_____First of all, we take only the `timestamp_created` col and we convert in `string` the time values. Next, with a simple dictionary and a `for` cycle, we count the occurrences of every single time (HH:MM) and at the end we return only the most common time._____no_output_____
<code>
# first point
# taking only the timestamp_created col
timestamp_col = np.array(dataset["timestamp_created"].dt.time.astype('str'))_____no_output_____dict_time = {}
for time in timestamp_col:
# taking only hour and minute
new_time = time[:5]
if new_time not in list(dict_time.keys()):
dict_time[new_time] = 1
else:
dict_time[new_time] += 1_____no_output_____# sorting the dictionary in descending order
dict_time_sorted = {k: v for k, v in sorted(dict_time.items(), key=lambda item: item[1], reverse=True)}_____no_output_____# returning the most common time (without seconds)
next(iter(dict_time_sorted))_____no_output_____
</code>
### Create a function that receives as a parameter a list of time intervals and returns the plot the number of reviews for each of the intervals.
Using the function **orario**, given a list of time intervals we can obtain the number of reviews written in each interval.
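The actual implementation of **orario** lives in the accompanying `functions` module, which is not included in this notebook (it is called below with just the list of intervals, so it presumably accesses the dataset internally). A minimal sketch of what such a function might look like, with the dataframe passed in explicitly (names and details are assumptions):

```python
import matplotlib.pyplot as plt

def orario(intervals, df):
    """Sketch: for each consecutive (start, end) pair of 'HH:MM:SS' strings,
    count the reviews created in that time window and plot the counts."""
    times = df['timestamp_created'].dt.strftime('%H:%M:%S')
    labels, counts = [], []
    for start, end in zip(intervals[0::2], intervals[1::2]):
        labels.append(start)
        counts.append(int(((times >= start) & (times <= end)).sum()))
    plt.bar(labels, counts)
    plt.xlabel('interval start')
    plt.ylabel('number of reviews')
    plt.show()
```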
_____no_output_____### Use the function that you created in the previous literal to plot the number of reviews between the following time intervals:_____no_output_____
<code>
intervalli = ['06:00:00', '10:59:59', '11:00:00', '13:59:59', '14:00:00', '16:59:59',
'17:00:00', '19:59:59', '20:00:00', '23:59:59', '00:00:00', '02:59:59', '03:00:00',
'05:59:59']_____no_output_____functions.orario(intervalli)C:\Users\Clara\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\lib\arraysetops.py:580: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
mask |= (ar1 == a)
</code>
On the x-axis, each bar is labelled with the starting point of its time interval. We observe that fewer people write reviews during the night, while most reviews are written in the first hours of the morning and around dinner time._____no_output_____# RQ4_____no_output_____### What are the top 3 languages used to review applications?_____no_output_____
<code>
top_languages = pd.DataFrame(dataset.groupby("language").review_id.count().sort_values(ascending=False).head(3))
top_languages_____no_output_____
</code>
As expected, the majority of the reviews are written in English, Chinese and Russian!_____no_output_____
<code>
top_languages = list(top_languages.index)
top_languages_____no_output_____
</code>
### Create a function that receives as parameters both the name of a data set and a list of languages’ names and returns a data frame filtered only with the reviews written in the provided languages._____no_output_____Here we use the function **get_reviews_by_languages** to obtain a dataframe containing only the reviews written in the top 3 languages._____no_output_____
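The implementation of **get_reviews_by_languages** also sits in the external `functions` module; a plausible sketch of what it does (an assumption, not the authors' exact code):

```python
def get_reviews_by_languages(df, languages):
    # keep only the rows whose language is in the provided list
    return df[df['language'].isin(languages)]
```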
<code>
dataset_filter = functions.get_reviews_by_languages(dataset, top_languages)_____no_output_____
</code>
### Use the function created in the previous literal to find what percentage of these reviews (associated with the top 3 languages) were voted as funny?_____no_output_____For this request we use the filtered dataset: for each language we select the reviews that received at least one funny vote and compute the ratio between them and all the reviews written in that language.
To compute this percentage we use **dataset_filter**, the new dataframe obtained with the previous function **get_reviews_by_languages**._____no_output_____
<code>
numeratore_1 = []
denominatore_1 = []
rapporto_1 = []
for i in range(len(top_languages)):
numeratore_1.append(dataset_filter.loc[(dataset_filter.votes_funny != 0) & (dataset_filter.language == top_languages[i])].votes_funny.count())
denominatore_1.append(dataset_filter[dataset_filter.language == top_languages[i]].votes_funny.count())
rapporto_1.append(round((numeratore_1[i]/denominatore_1[i])*100, 2))
print("The percentage of reviews written in " + '\033[1m' + top_languages[i] +'\033[0m' +
" that has received at least a funny vote is " +
'\033[1m' + str(rapporto_1[i]) + "%" + '\033[0m')
The percentage of reviews written in [1menglish[0m that has received at least a funny vote is [1m11.27%[0m
The percentage of reviews written in [1mschinese[0m that has received at least a funny vote is [1m11.82%[0m
The percentage of reviews written in [1mrussian[0m that has received at least a funny vote is [1m16.68%[0m
</code>
We also compute the percentage of reviews that received at least one funny vote across all three languages combined. _____no_output_____
<code>
# same as above
print("The percentage of reviews written in one of the top 3 language that has received at "
"least a funny vote is " + '\033[1m' + str(round((sum(numeratore_1)/sum(denominatore_1))*100, 2)) + "%" + '\033[0m')The percentage of reviews written in one of the top 3 language that has received at least a funny vote is [1m12.21%[0m
</code>
### Use the function created in the literal “a” to find what percentage of these reviews (associated with the top 3 languages) were voted as helpful?_____no_output_____For this request we use the filtered dataset: for each language we select the reviews that received at least one helpful vote and compute the ratio between them and all the reviews written in that language.
To compute this percentage we use **dataset_filter**, the new dataframe obtained with the previous function **get_reviews_by_languages**._____no_output_____
<code>
numeratore_2 = []
denominatore_2 = []
rapporto_2 = []
for i in range(len(top_languages)):
numeratore_2.append(dataset_filter.loc[(dataset_filter.votes_helpful != 0) & (dataset_filter.language == top_languages[i])].votes_helpful.count())
denominatore_2.append(dataset_filter[dataset_filter.language == top_languages[i]].votes_helpful.count())
rapporto_2.append(round((numeratore_2[i]/denominatore_2[i])*100, 2))
print("The percentage of reviews written in " + '\033[1m' + top_languages[i] + '\033[0m' +
" that has received at least a helpful vote is " +
'\033[1m' + str(rapporto_2[i]) + "%" + '\033[0m')The percentage of reviews written in [1menglish[0m that has received at least a helpful vote is [1m29.2%[0m
The percentage of reviews written in [1mschinese[0m that has received at least a helpful vote is [1m25.1%[0m
The percentage of reviews written in [1mrussian[0m that has received at least a helpful vote is [1m35.5%[0m
</code>
We also compute the percentage of reviews that received at least one helpful vote across all three languages combined._____no_output_____
<code>
# same as above
print("The percentage of reviews written in one of the top 3 language that has received at "
"least a helpful vote is " + '\033[1m' + str(round((sum(numeratore_2)/sum(denominatore_2))*100, 2)) + "%" + '\033[0m')The percentage of reviews written in one of the top 3 language that has received at least a helpful vote is [1m29.16%[0m
</code>
# RQ5_____no_output_____### Plot the top 10 most popular reviewers and the number of reviews._____no_output_____
<code>
num_reviewers = dataset['author.steamid'].value_counts().head(10)_____no_output_____num_reviewers.plot(kind='bar',
xlabel='TOP 10 reviewers',
ylabel='number of reviews')_____no_output_____
</code>
### What applications did the most popular author review?
_____no_output_____First, we take the most popular author found above, keep only the rows of the reviews written by him/her, and then list all the applications reviewed by this author._____no_output_____
<code>
num_rev = pd.DataFrame({'reviewers':num_reviewers.index, 'num_reviews':num_reviewers.values})_____no_output_____pop_auth = num_rev['reviewers'][0]_____no_output_____apps_rev = dataset[dataset['author.steamid'] == pop_auth].app_name_____no_output_____app_name_rev = list(apps_rev.values)_____no_output_____app_name_rev = [el for el, count in Counter(app_name_rev).items()]_____no_output_____print(app_name_rev)['Half-Life', 'Counter-Strike: Source', 'Half-Life 2: Episode Two', 'Portal 2', "Garry's Mod", "Sid Meier's Civilization V", 'Dead by Daylight', "Sid Meier's Civilization VI", 'Subnautica', 'Human: Fall Flat', 'Banished', 'Celeste', 'Getting Over It with Bennett Foddy', 'A Hat in Time', 'The Forest', 'Axiom Verge', 'The Binding of Isaac: Rebirth', 'To the Moon', 'Cave Story+', 'Titan Souls', 'Super Meat Boy', "Don't Escape: 4 Days to Survive", 'Volgarr the Viking', 'Enter the Gungeon', 'Salt and Sanctuary', 'Hollow Knight', 'The End Is Nigh', 'Factorio', 'RimWorld', 'Insurgency: Sandstorm', 'Euro Truck Simulator 2', 'Foundation', 'Kenshi', 'Into the Breach', 'Warhammer: Vermintide 2', 'DOOM Eternal', 'Age of Empires: Definitive Edition', 'Void Bastards', 'Stardew Valley', 'Among Us', 'Blackwake', 'Little Nightmares', 'Bomber Crew', 'Rust', 'HITMAN™ 2', 'Phasmophobia', 'Mount & Blade: Warband', 'Resident Evil 2', 'Slime Rancher', 'Hotline Miami', 'Tomb Raider', 'BattleBlock Theater', 'Dishonored', 'South Park™: The Stick of Truth™', 'Undertale', "Don't Starve", 'Rocket League', 'Dead Cells', 'Broforce', 'The Wolf Among Us', 'The Walking Dead', 'One Finger Death Punch', 'Oxygen Not Included', 'Cuphead', 'ULTRAKILL', 'Castle Crashers', 'Townscaper', 'Papers, Please', 'GRIS', 'DUSK', 'Outlast', 'FTL: Faster Than Light', 'Dying Light', 'American Truck Simulator', 'Saints Row: The Third', 'STAR WARS™ Empire at War: Gold Pack', 'Age of Empires II (2013)', 'Super Hexagon', 'BioShock Infinite', 'DOOM', 'Black Mesa', 'Finding Paradise', 'Keep Talking and Nobody Explodes', 'Duck Game', 'Mark of the Ninja', 'Phoenix Wright: Ace Attorney Trilogy', 'Gunpoint', "PLAYERUNKNOWN'S BATTLEGROUNDS", 'Monster Hunter: World', 'The Elder Scrolls Online', 'Total War: WARHAMMER II', 'Cities: Skylines', 'Stellaris', 'Black Desert Online', 'Kingdom Come: Deliverance', 'Jurassic World Evolution', 'ARK: Survival Evolved', "No Man's Sky", 'Frostpunk', 'Fallout 4', 'DARK SOULS™ III', 'Rise of the Tomb Raider', 'Middle-earth™: Shadow of War™', 'Hearts of Iron IV', 'They Are Billions', 'Total War Saga: Thrones of Britannia', 'Total War: ROME II - Emperor Edition', 'Terraria', 'PAYDAY 2', 'XCOM 2', 'Deep Rock Galactic', 'Hunt: Showdown', 'Conan Exiles', 'Two Point Hospital', 'Total War: WARHAMMER', 'The Elder Scrolls V: Skyrim Special Edition', 'NieR:Automata™', 'House Flipper', 'Surviving Mars', 'Ni no Kuni™ II: Revenant Kingdom', 'Railway Empire', 'Rise of Industry', 'Devil May Cry HD Collection', 'Heroes of Hammerwatch', 'Ghost of a Tale', 'Ancestors Legacy', 'FAR: Lone Sails', 'Totally Accurate Battlegrounds', 'Vampyr', 'Yakuza 0', 'Thief Simulator', 'Darksiders III', 'Mutant Year Zero: Road to Eden', 'Just Cause 4', 'Planet Coaster', 'Nioh: Complete Edition', 'Europa Universalis IV', 'Just Cause 3', 'Resident Evil 7 Biohazard', 'Urban Empire', 'Youtubers Life', 'Night in the Woods', 'Northgard', 'Sniper Elite 4', 'Day of Infamy', 'SimAirport', 'Dead Rising 4', 'Styx: Shards of Darkness']
</code>
### How many applications did he/she purchase, and how many did he/she get as free? Provide the number (count) and the percentage._____no_output_____
<code>
# taking only the steam_purchase and received_for_free apps of the author
app_count = dataset[dataset['author.steamid'] == pop_auth][['steam_purchase', 'received_for_free']]_____no_output_____# how many app did the author reviewed
tot_app_rev = len(app_count.index)
_____no_output_____purchased = dict(Counter(app_count['steam_purchase']))
free_apps = dict(Counter(app_count['received_for_free']))_____no_output_____purchased[True] = [purchased[True], "{:.2%}".format(purchased[True]/tot_app_rev)]
purchased[False] = [purchased[False], "{:.2%}".format(purchased[False]/tot_app_rev)]
free_apps[True] = [free_apps[True], "{:.2%}".format(free_apps[True]/tot_app_rev)]
free_apps[False] = [free_apps[False], "{:.2%}".format(free_apps[False]/tot_app_rev)]_____no_output_____purch_df = pd.DataFrame(purchased, index=['count', 'Percentage']).T
free_df = pd.DataFrame(free_apps, index=['count', 'Percentage']).T_____no_output_____purch_df.index.name = 'App Purchased'
free_df.index.name = 'App given Free'_____no_output_____purch_df_____no_output_____
</code>
`True` means that the apps were purchased on Steam, `False` means they were not._____no_output_____
<code>
free_df_____no_output_____
</code>
`True` means that the apps were given for free, `False` means they were not._____no_output_____There is a clear difference between the purchased and the free apps: most of the reviewed apps were purchased on Steam, while only 4 were given for free. This also means that not every app the author reviewed was purchased on Steam: assuming that all the Steam-purchased apps are also counted among the "not given for free" ones, 35 apps were purchased somewhere else, and adding the 4 apps given for free we get 39 apps that were not purchased on Steam._____no_output_____### How many of the applications he/she purchased reviewed positively, and how many negatively? How about the applications he received for free?_____no_output_____
<code>
# have to use the recommended col
app_recomm = dataset.loc[(dataset['author.steamid'] == pop_auth) & (dataset['recommended'] == True)][['steam_purchase', 'received_for_free']]_____no_output_____purchased_rec = dict(Counter(app_recomm['steam_purchase']))
free_apps_rec = dict(Counter(app_recomm['received_for_free']))
tot_app_rec = len(app_recomm.index)_____no_output_____print('{} applications purchased were reviewed positively, and {} were reviewed negatively'
.format(purchased_rec[True], purchased_rec[False]))
print('{} applications given for free were reviewed positively, and {} were reviewed negatively'
.format(free_apps_rec[True], free_apps_rec[False]))108 applications purchased were reviewed positively, and 38 were reviewed negatively
4 applications given for free were reviewed positively, and 142 were reviewed negatively
</code>
Comparing these results with the ones in the previous question, we can see that 3 apps were reviewed neither positively nor negatively; under the same assumption as in the previous answer, 2 of them were purchased on Steam and 1 elsewhere. We can also see that all the apps given for free were reviewed positively, which suggests the author liked playing them (and, we assume, also liked the fact that they were free)._____no_output_____# RQ6
_____no_output_____### What is the average time (days and minutes) a user lets pass before he updates a review?_____no_output_____To start, we compute the difference between the time when a review was created and the time when it was updated, and convert this difference into days._____no_output_____
<code>
dataset['difference_days'] = (dataset['timestamp_updated'] - dataset['timestamp_created'])
dataset['difference_days'] = dataset['difference_days']/np.timedelta64(1,'D')_____no_output_____
</code>
After that we drop the reviews that were never updated, since it is meaningless to include them. We then compute the mean of the differences in days: the integer part of this number is the average number of days an author waits before updating a review. To convert the decimal part into minutes we multiply it by 1440, since there are 1440 minutes in a day (a simple proportion: *1 : 1440 = x : (decimal part of our number)*)._____no_output_____
<code>
dataset_1 = dataset[dataset.difference_days != 0]
average = dataset_1.difference_days.mean()
minutes = round((average % 1) * 1440, 0)
days = average // 1
print("The average time a user lets pass before he updates a review is "+
'\033[1m' + str(days) + '\033[0m' + " days and " + '\033[1m' + str(minutes) + '\033[0m' + " minutes")The average time a user lets pass before he updates a review is [1m321.0[0m days and [1m46.0[0m minutes
</code>
On average, an author updates their review almost a year after writing it! _____no_output_____### Plot the top 3 authors that usually update their reviews._____no_output_____We use the dataframe **dataset_1**, which contains only the reviews that have been updated. We do not use the full dataset because we want the authors who usually update their reviews, i.e. those who have updated the largest number of reviews over time._____no_output_____
<code>
a = pd.Series(dataset_1.groupby('author.steamid').review_id.count().sort_values(ascending=False).head(3))
a_____no_output_____#bar plot
plt.figure(figsize=(12, 8))
ax = a.plot(kind="bar", color = ["orchid", "orange", "green"], alpha=0.75, rot=0)
ax.set_title("TOP 3 authors that have updated more reviews")
ax.set_xlabel("Steam ID")
ax.set_ylabel("Number of reviews updated")
#needed to put values on top of the bar
for i, v in enumerate(a.values):
ax.text(i, v+1, str(v), color='black', fontweight='bold')_____no_output_____
</code>
We put the number of reviews on top of the bars because the second and the third author have updated almost the same number of reviews._____no_output_____# RQ7_____no_output_____### What’s the probability that a review has a Weighted Vote Score equal to or bigger than 0.5?_____no_output_____We use the classical definition of probability: we count the number of reviews with a Weighted Vote Score equal to or bigger than 0.5, which represents the favourable cases (stored in **casi_fav**), while the total number of cases is the number of rows of our dataset (stored in **casi_tot**). The probability is the ratio between them. _____no_output_____
<code>
#filter the dataset picking only weighted_vote_score >= 0.5
#and count the rows of filter dataset
casi_fav = dataset[dataset.weighted_vote_score >= 0.5].weighted_vote_score.count()_____no_output_____#number of rows of initial dataset
casi_tot = dataset.weighted_vote_score.count()_____no_output_____result_1 = round(casi_fav/casi_tot, 2)
print("The probability is of a review has a Weighted Vote Score equal to or bigger than 0.5 is "+ '\033[1m' +str(result_1)+'\033[0m')The probability is of a review has a Weighted Vote Score equal to or bigger than 0.5 is [1m0.22[0m
</code>
### What’s the probability that a review has at least one vote as funny given that the Weighted Vote Score is bigger than 0.5?_____no_output_____We want to compute the conditional probability P(B|A), where B is the event *a review has at least one vote as funny* and A is the conditioning event on the Weighted Vote Score. The sample space is reduced: we filter the dataset so that we look for reviews with at least one funny vote only among reviews whose Weighted Vote Score is bigger than 0.5._____no_output_____
<code>
#new sample space: filter dataset like before
# A
dataset_prob = dataset[dataset.weighted_vote_score > 0.5]_____no_output_____#count the reviews with at least a funny vote in the new filter dataset
#B intersect A
casi_fav_2 = dataset_prob[dataset_prob.votes_funny != 0].votes_funny.count()_____no_output_____#A
casi_tot2 = dataset_prob.weighted_vote_score.count()
#P(B|A)
result_2 = round(casi_fav_2/casi_tot2, 2)
print("The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is bigger than 0.5 is ",'\033[1m' +str(result_2)+'\033[0m')The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is bigger than 0.5 is [1m0.25[0m
</code>
### Is the probability that “a review has at least one vote as funny” independent of the “probability that a review has a Weighted Vote Score equal or bigger than 0.5"?_____no_output_____For the two events to be independent, the probability of event B (*a review has at least one vote as funny*) must equal the conditional probability that a review has at least one funny vote given that the Weighted Vote Score is equal to or bigger than 0.5, i.e. P(B|A) = P(B): if this holds, conditioning on A has no effect._____no_output_____
<code>
#P(B|A)
casi_fav_ba = dataset[(dataset.weighted_vote_score >= 0.5) & (dataset.votes_funny != 0)].votes_funny.count()
result_3a = round(casi_fav_ba/casi_fav, 2)
print("The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is equal or bigger than 0.5 is ",'\033[1m' +str(result_3a)+'\033[0m')The conditional probability that a review has at least one vote as funny given that the Weighted Vote Score is equal or bigger than 0.5 is [1m0.25[0m
#count the reviews with at least a funny vote in the starting dataset
#B
casi_fav_3 = dataset[dataset.votes_funny != 0].votes_funny.count()_____no_output_____#P(B)
result_3 = round(casi_fav_3/casi_tot,2)
print("The probability of a review has at least one vote as funny is "+ '\033[1m' +str(result_3)+'\033[0m')The probability of a review has at least one vote as funny is [1m0.12[0m
</code>
0.12 is different from 0.25 so these two events are **dependent!**_____no_output_____# RQ8_____no_output_____### Is there a significant difference in the Weighted Vote Score of reviews made in Chinese vs the ones made in Russian? Use an appropriate statistical test or technique and support your choice._____no_output_____We'll use a non-parametric (Kolmogorov-Smirnov) test to check whether the two distributions are the same (i.e. come from the same population), since the two distributions are not normally distributed._____no_output_____
<code>
data_lang = functions.get_reviews_by_languages(dataset,["schinese","russian"])_____no_output_____
</code>
First of all, we compare the Chinese and Russian weighted vote score distributions using histograms. At first glance there does not seem to be any visible difference between the two distributions: from this plot they appear to be distributed similarly._____no_output_____
<code>
plt.figure(figsize = (10,8))
data_lang[data_lang.language == "schinese"].weighted_vote_score.plot(kind = "hist", label = "Chinese",alpha = 0.3)
data_lang[data_lang.language == "russian"].weighted_vote_score.plot(kind = "hist", label = "Russian", color = "orange",alpha = 0.3)
plt.legend()_____no_output_____
</code>
We can support this impression with a statistical test. Let's check with the KS test._____no_output_____
<code>
k_smir_test = ks_2samp(data_lang[data_lang.language == "schinese"].weighted_vote_score,
data_lang[data_lang.language == "russian"].weighted_vote_score)
if k_smir_test.pvalue <= 0.1:
    # small p-value: reject the null hypothesis that the two samples come from the same distribution
    print("the two distributions are significantly different.")
else:
    print(f"no significant difference detected (pvalue = {k_smir_test.pvalue})")the two distributions are significantly different.
</code>
The Kolmogorov-Smirnov test is a non-parametric test that compares the shapes of sample distributions. It can be used to compare two samples and does not in itself require any assumption about the underlying distribution, which suits our case. Under the null hypothesis H0 the two samples come from the same population; since the p-value here falls below the 0.1 threshold, we reject H0 and conclude that the two weighted vote score distributions differ (with samples this large, even small differences become statistically detectable)._____no_output_____### Can you find any significant relationship between the time that a user lets pass before he updates the review and the Weighted Vote Score? Use an appropriate statistical test or technique and support your choice._____no_output_____We'll check whether there is a relationship in three steps:
* plot
* pearson correlations
* Linear Regression_____no_output_____
<code>
# step 1: plot
plt.figure(figsize = (10,8))
plt.scatter(dataset.difference_days, dataset.weighted_vote_score)
print("no relationship visible")no relationship visible
# step 2: pearson correlation
print(pearsonr(dataset.difference_days, dataset.weighted_vote_score))
print("no relations detected ")(0.07204700562113138, 0.0)
no relations detected
X = dataset[["difference_days"]]
X = sm.add_constant(X).values
model = sm.OLS(dataset.weighted_vote_score, X)
res = model.fit()_____no_output_____res.summary()_____no_output_____
</code>
Using simple linear regression (a single X variable) is equivalent to using pearsonr because
$R^{2} = (\mathrm{pearsonr})^2$_____no_output_____
<code>
p = pearsonr(dataset.difference_days, dataset.weighted_vote_score)
print(f"pearsonr {p[0]}\npearsonr^2 = {p[0]**2} -> same as R-squared detected above")pearsonr 0.07204700562113138
pearsonr^2 = 0.005190771018971337 -> same as R-squared detected above
</code>
The second test is linear regression: also in this case there is no evidence of any meaningful correlation between the two variables._____no_output_____### Is there any change in the relationship of the variables mentioned in the previous literal if you include whether an application is recommended or not in the review? Use an appropriate statistical test or technique and support your choice._____no_output_____We simply add the extra variable to the linear regression._____no_output_____
<code>
X = dataset[["difference_days","recommended","weighted_vote_score"]].astype({"recommended":int})
model = smf.ols("weighted_vote_score ~ difference_days + C(recommended)", data=X)
res = model.fit()
res.summary()_____no_output_____
</code>
No meaningful change in the relationship emerges._____no_output_____### What are histograms, bar plots, scatterplots and pie charts used for?_____no_output_____Histogram: This type of visualization supports univariate analysis. Simply put, it shows where data points are dense and where they are sparse along one dimension. Instead of comparing categories, it breaks a numeric variable into interval bins and shows the frequency of the data falling into each bin. A histogram is good at revealing the shape of a distribution over a numeric range.
Bar Chart: A bar chart compares a measure across the levels of a categorical dimension. It is very similar to a histogram; the fundamental difference is that the x-axis of a bar chart is a categorical attribute rather than numeric intervals. A bar chart is also not limited to a single categorical variable: an extension, the clustered (or grouped) bar chart, compares two categorical attributes.
Scatterplot: It plots one numeric attribute against another and visualizes the relationship between the two axes. Scatter plots are commonly used to identify regression-type relationships (linear regression, logistic regression, etc.) and to get a first visual sense of how strong a correlation is. Roughly, the linear relationship is stronger when the points lie close to a sloped line, and weaker when the point cloud is flat or diffuse.
Piechart: It is used to represent the percentage or weight of the components of one categorical attribute. The size of each slice is proportional to its percentage, so it intuitively depicts how much each component contributes to the whole._____no_output_____### What insights can you extract from a Box Plot?_____no_output_____A boxplot shows the distribution of the data in a compact form. From a box plot we can "extract" information such as outliers, the minimum and maximum, the first quartile (Q1), the third quartile (Q3), the interquartile range (IQR), and the median. It also gives information about the skewness of the data, how tightly clustered it is, and its overall spread._____no_output_____# TQ1
## Question 1
As known, given a random variable $X$, the Quantile function *Q($\cdot$)* with support $\{ p | p \in [0,1] \}$ is the function that computes:
\begin{equation}
Q(p)=s \hspace{0.2 cm} |\hspace{0.2 cm} \mathcal{P}(X<=s) = p
\end{equation}
Denoting with $A_i$ the i-th element of the vector $A$ of length $n$ and given $k \in [0,n]$, it is possible to see that our algorithm computes:<br>
\begin{equation}
alg(A,k)=s \hspace{0.2 cm} |\hspace{0.2 cm} \#\{A_i<=s\} = k
\end{equation}
It is then possible to perform some transformations on our algorithm's parameters in order to make the similarity with the quantile function explicit, i.e.:
1. A shrinkage over our algorithm support space (i.e. $k'=k/n$);
2. A shrinkage over our cardinality measure (i.e. $\#\{A_i<=s \}'=\frac{\#\{A_i<=s \}}{n}$);
Substituting into our $alg(A,k)$ it becomes:
\begin{equation}
alg(A,k')=s\hspace{0.2 cm} |\hspace{0.2 cm} \frac{\#\{A_i<=s\}}{n} = k'
\end{equation}
In a frequentist approach (denoting by $A_r$ a random sample from the vector $A$) we can identify $\frac{\#\{A_i<=s\}}{n}= \mathcal{P}(A_r <= s)$. In words, our algorithm computes the value $s$ such that the number of elements in the array $A$ smaller than or equal to $s$ equals $k$: we can thus loosely describe our algorithm as a "quantile function over a non-normalized support".
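The algorithm itself is not reproduced in this notebook; a sketch consistent with the description above and with the analysis in the next two questions (pick a random pivot $s$, split $A$ into $L$ and $R$, recurse on one side) could look like this:

```python
import random

def alg(A, k):
    # pick a random element s and split A by comparison with s
    s = random.choice(A)
    L = [x for x in A if x <= s]
    R = [x for x in A if x > s]
    if len(L) == k:      # exactly k elements are <= s: s is the answer
        return s
    elif len(L) > k:     # too many elements <= s: recurse on the left part
        return alg(L, k)
    else:                # too few: recurse on the right part, adjusting k
        return alg(R, k - len(L))

print(alg([7, 1, 5, 3, 9], 3))  # 5: exactly three elements are <= 5
```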
## Question 2
We initially note that the subdivision of the array $A$ (over which we are calling $alg()$) into $L$ and $R$ requires scanning the whole vector $A$ (i.e. it requires $n=len(A)$ operations). Let us consider the worst-case scenario: imagine that $k=n$ and that at each iteration the random sample $s$ is always equal to $A_1$. This means that the $s$ satisfying the condition over $k$ is selected only at the $(n-1)$-th call of $alg()$ (the iteration at which the vector $A$ over which we call $alg()$ has length 2). At each call of $alg()$ we therefore remove a single element, i.e. the smallest element in $A$, so the number of operations needed to scan the vector $A$ decreases by one at each iteration of $alg()$. So we have that:
$$
T(n)=n+(n-1)+(n-2)+(n-3)+...+2 = \sum_{i=0}^{n-2}(n-i)=\frac{(n+2)(n-1)}{2}
$$
(We recall that the sum runs over the $n-1$ calls of $alg()$ needed to reach the right $s$.) We can therefore assume an asymptotic complexity in the worst-case scenario (dropping constant terms and factors) equal to $\mathcal{O}(n^2)$.
## Question 3
In the best-case scenario, the right $s$ is picked at the first iteration: we only need $n$ = len($A$) operations to scan $A$ and divide it into $L$ and $R$, so the asymptotic complexity is $\mathcal{O}(n)$._____no_output_____# TQ2
## Question 1
Let's dive into the complexity of the given recursive algorithm. Given a particular $n$ (and for every $l$), and denoting by $T(n)$ the time needed to complete the algorithm called with parameter $n$:
\begin{equation}
T(n) = T\left(\frac{n}{2}\right)\cdot 2 + \left(\frac{n}{2}+1\right)\cdot 3
\end{equation}
Indeed, calling **splitSwap(a,l,n)** we have to solve **splitSwap** twice on inputs of size **n/2**, plus execute 3 operations for each of the $\left(\frac{n}{2}+1\right)$ iterations of the for loop inside **swapList(a,l,n)**. Let's compute the running time by unrolling the expression of $T(n)$:
\begin{equation}
T\left(\frac{n}{2}\right) = T\left(\frac{n}{2^2}\right)\cdot 2 + \left(\frac{n}{2^2}+1\right)\cdot 3
\end{equation}
\begin{equation}
T(n) = T\left(\frac{n}{2^2}\right)\cdot 2^2 + \left(\frac{n}{2}+2\right)\cdot 3 + \left(\frac{n}{2}+1\right)\cdot 3
\end{equation}
In general, the $j$-th level of the recursion contributes $2^j \cdot 3\left(\frac{n}{2^{j+1}}+1\right) = 3\left(\frac{n}{2}+2^j\right)$ operations, so after $k$ unrolling steps:
\begin{equation}
T(n) = T\left(\frac{n}{2^k}\right)\cdot 2^k + 3\sum_{j=0}^{k-1}\left(\frac{n}{2}+2^j\right) = T\left(\frac{n}{2^k}\right)\cdot 2^k + \frac{3kn}{2} + 3\left(2^k-1\right)
\end{equation}
Setting $2^k=n \Leftrightarrow k =log_2(n)$ we obtain:
\begin{equation}
T(n) = T(1)\cdot n + \frac{3n}{2}\, log_2(n) + 3(n-1) \simeq n\cdot log_2(n)
\end{equation}
In the last step we dropped constant factors and lower-order terms and kept only the term with the fastest growth rate w.r.t. $n$. We can then say that the asymptotic complexity of the algorithm is $\mathcal{O}(n\cdot log_2(n))$.
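Before turning to what the algorithm actually does (Question 2), it may help to have the algorithm in front of us. It is not reproduced in the notebook, so the following is a reconstruction from the description above (a sketch, not the assignment's exact code):

```python
def swapList(a, l, n):
    # exchange the first half of a[l:l+n] with the second half
    for i in range(n // 2):
        tmp = a[l + i]
        a[l + i] = a[l + n // 2 + i]
        a[l + n // 2 + i] = tmp

def splitSwap(a, l, n):
    if n <= 1:
        return
    splitSwap(a, l, n // 2)           # recurse on the left half
    splitSwap(a, l + n // 2, n // 2)  # recurse on the right half
    swapList(a, l, n)                 # then swap the two halves

a = list(range(8))
splitSwap(a, 0, 8)
print(a)  # [7, 6, 5, 4, 3, 2, 1, 0]
```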
## Question 2
Given an array **a**, an index **l** and a number **n** (considering the scenario where both **len(a)** and **n** are powers of 2), the algorithm outputs the array **a'** built as follows:
\begin{equation}
a'[i]=a[i] \hspace{1cm}\forall i \in [0,1,...,l-1]\hspace{1cm}\mbox{if}\hspace{1cm} l \geq 1
\end{equation}
\begin{equation}
a'[l+i]=a[l+n-1-i] \hspace{1cm}\forall i \in [0,1,...,n-1]
\end{equation}
In words, starting from an index **l** of the original array **a**, the algorithm reverses the order of the next **n** elements of the array. Because of this, it is of course required that **l+n** $\leq$ **len(a)**, otherwise the subroutine **swapList()** will raise an error because of the out-of-range indices it loops over. Let's describe the algorithm's mechanism. Looking at the code, the only part that actually changes the position of the array's elements is the subroutine **swapList()**. Given a triplet **(a,l,n)**, once **splitSwap()** is called, it recursively calls itself with an **n** halved call by call (i.e. **n**$^{(1)}$ =**n/2**, **n**$^{(2)}$ =**n**$^{(1)}/2$, **n**$^{(3)}$ =**n**$^{(2)}/2$ and so on). As we can see in Fig.1, after $\text{log}_2(n)-1$ steps, the function **splitSwap(a,l,2)** is called: in its execution both **splitSwap(a,l,1)** and **splitSwap(a,l+1,1)** simply **return** (being **n**=1), finally allowing the execution of **swapList(a,l,2)** (which we will call a **final-node-subroutine**, $\forall l$) that exchanges the array elements **a[l]** and **a[l+1]**. Once **splitSwap(a,l,2)** is completed, **splitSwap(a,l+2,2)** is called. Similarly, at the end of its execution its **final-node-subroutine** exchanges the array elements **a[l+2]** and **a[l+3]**. Basically the **final-node-subroutines** view the array (starting from the element $a[l]$) as a sequence of $\frac{n}{2}$ couples of elements and, in each couple, exchange the 1st element with the 2nd one.
Recalling that **splitSwap(a,l,2)** and **splitSwap(a,l+2,2)** were called inside **splitSwap(a,l,4)**, **swapList(a,l,4)** (which we will call a **semi-final-node-subroutine**) is finally executed, exchanging the array elements **a[l]** with **a[l+2]** and **a[l+1]** with **a[l+3]**. The role of the **semi-final-node-subroutines** is therefore to view the array (starting from the element $a[l]$) as a sequence of $\frac{n}{4}$ couples of couples and to exchange the 1st element of the 1st couple with the 1st element of the 2nd couple, and the 2nd element of the 1st couple with the 2nd element of the 2nd couple. In other words, after the execution of all the **final-node-subroutines** and the **semi-final-node-subroutines**, the first group of 4 elements of the original array is reversed, and the same holds for the 2nd group of 4 elements and so on. We can thus climb the recursion tree from the **final-node-subroutines** up to the top-level **first-final-node-subroutine**, i.e. **swapList(a,l,n)**. The effect of each level of **subroutine** on a test array can be seen in the two examples in Fig. 2 and Fig. 3, recalling that the output of the **first-final-node-subroutine** equals the algorithm's output.
Having assessed that the algorithm's complexity is $\mathcal{O}(n\cdot \log_2(n))$, we can confirm that the algorithm is not optimal: in fact, it is easy to write pseudo-code with a lower complexity than the given algorithm:
```python
def reverse(a, l, n):
    reversed_array = a.copy()   # copy, so the original values remain readable while we overwrite
    for i in range(n):
        reversed_array[l + i] = a[l + n - 1 - i]
    return reversed_array
```
We can easily see that the **reverse()** algorithm's complexity is now (removing constant terms and factors) $\mathcal{O}(n)$, proving that the **splitSwap()** algorithm was not optimal._____no_output_____In order:<br>
Fig.1 :Reaching the first final-node-subroutine<br>
Fig.2 :Test over a with len(a)=n=16, l=0<br>
Fig.3 :Test over a with len(a)=16, n=8, l=7<br>_____no_output_____
<figcaption align="center"> Fig.1 :Reaching the first final-node-subroutine</figcaption>_____no_output_____
<figcaption align="center"> Fig.2 :Test over a with len(a)=n=16, l=0</figcaption>_____no_output_____
<figcaption align="center"> Fig.3 :Test over a with len(a)=16, n=8, l=7</figcaption>_____no_output_____# TQ3: Knapsack
In this theoretical question we deal with an NP-complete problem: the Knapsack problem. To solve it we generally have to use heuristic solutions, but in some cases they fail to provide the optimal solution.
* The first heuristic solution is a greedy algorithm in which we order the objects in increasing order of weight and then visit them sequentially, adding them to the solution as long as the budget is not exceeded. This algorithm does not provide the optimal solution in every situation; indeed, in the following counterexample it fails: we fix the budget **W** = 10 and we have three objects.
|i |w_i| v_i|
|-----|---|----|
|1 |4 |3 |
|2 |6 |5 |
|3 |10 |9 |
We visit the objects sequentially, so we pick the first two objects, but we cannot pick the third one because it would exceed the budget. This choice is not optimal: it would be better to pick only the third object, because its value (9) is greater than the sum of the values of the first two (3 + 5 = 8).
* In the second heuristic solution we order the objects in decreasing order of value, and then visit them sequentially, adding them to the solution if the budget is not exceeded. This algorithm does not provide the optimal solution in every situation; indeed, in the following counterexample it fails: we keep the same budget **W** = 10 and the same number of objects as in the previous counterexample.
|i |w_i| v_i|
|-----|---|----|
|1 |9 |9 |
|2 |7 |7 |
|3 |3 |3 |
We visit the objects sequentially, so we pick the first object, but we cannot pick the last two because they would exceed the budget. This choice is not optimal: it would be better to pick the second and the third objects, because the sum of their values (7 + 3 = 10) is greater than the value of the first object (9).
* In the third heuristic solution we order the objects in decreasing order of relative value ($v_i / w_i$), and then visit them sequentially, adding them to the solution if the budget is not exceeded.
This algorithm does not provide the optimal solution in every situation; indeed, in the following counterexample it fails: we keep the same budget **W** = 10 and the same number of objects as in the two previous counterexamples.
|i |w_i| v_i|
|-----|---|----|
|1 |7 |9 |
|2 |6 |6 |
|3 |4 |4 |
We visit the objects sequentially, so we pick the first object, whose relative value is 9/7 ≈ 1.29 while that of the other two objects is 1. We then cannot pick the last two because they would exceed the budget. This choice is not optimal: it would be better to pick the second and the third objects, because the sum of their values (6 + 4 = 10) is greater than the value of the first object (9)._____no_output_____
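To make the three heuristics concrete, the sketch below (function and variable names are illustrative and not taken from the assignment) runs each greedy rule on the three counterexamples above; in each case it returns the suboptimal value computed by hand.
```python
def greedy_knapsack(items, W, key):
    """items: list of (weight, value) pairs; W: budget; key: sort key for the greedy order."""
    chosen, total_weight, total_value = [], 0, 0
    for w, v in sorted(items, key=key):
        if total_weight + w <= W:          # add the object only if the budget is not exceeded
            chosen.append((w, v))
            total_weight += w
            total_value += v
    return chosen, total_value

W = 10
# Counterexample 1: increasing weight -> value 8, while the optimum is 9
print(greedy_knapsack([(4, 3), (6, 5), (10, 9)], W, key=lambda x: x[0]))
# Counterexample 2: decreasing value -> value 9, while the optimum is 10
print(greedy_knapsack([(9, 9), (7, 7), (3, 3)], W, key=lambda x: -x[1]))
# Counterexample 3: decreasing relative value -> value 9, while the optimum is 10
print(greedy_knapsack([(7, 9), (6, 6), (4, 4)], W, key=lambda x: -x[1] / x[0]))
```
Running it prints total values of 8, 9 and 9 respectively, while the optima are 9, 10 and 10, confirming that none of the three greedy rules is guaranteed to find the optimal solution.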
|
{
"repository": "michele1783/ADM-HW2",
"path": "main.ipynb",
"matched_keywords": [
"STAR",
"evolution"
],
"stars": null,
"size": 893783,
"hexsha": "cb51081df4c2740c5aee626da09f61c33f8e64b6",
"max_line_length": 219005,
"avg_line_length": 266.0860375112,
"alphanum_fraction": 0.8472459199
}
|
# Notebook from fedelopezar/nrpytutorial
Path: in_progress/Tutorial-GiRaFFE_NRPy-Source_Terms.ipynb
# `GiRaFFE_NRPy`: Source Terms
## Author: Patrick Nelson
<a id='intro'></a>
**Notebook Status:** <font color=green><b> Validated </b></font>
**Validation Notes:** This code produces the expected results for generated functions.
## This module presents the functionality of [GiRaFFE_NRPy_Source_Terms.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py).
## Introduction:
This writes and documents the C code that `GiRaFFE_NRPy` uses to compute the source terms for the right-hand sides of the evolution equations for the unstaggered prescription.
The equations themselves are already coded up in other functions; however, for the $\tilde{S}_i$ source term, we will need derivatives of the metric. It will be most efficient and accurate to take them using the interpolated metric values that we will have calculated anyway; however, we will need to write our derivatives in a nonstandard way within NRPy+ in order to take advantage of this, writing our own code for memory access._____no_output_____<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#stilde_source): The $\tilde{S}_i$ source term
1. [Step 2](#code_validation): Code Validation against original C code
1. [Step 3](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file_____no_output_____
<code>
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import cmdline_helper as cmd
outdir = os.path.join("GiRaFFE_NRPy","GiRaFFE_Ccode_validation","RHSs")
cmd.mkdir(outdir)_____no_output_____
</code>
<a id='stilde_source'></a>
## Step 1: The $\tilde{S}_i$ source term \[Back to [top](#toc)\]
$$\label{stilde_source}$$
We start in the usual way - import the modules we need. We will also import the Levi-Civita symbol from `indexedexp.py` and use it to set the Levi-Civita tensor $\epsilon^{ijk} = [ijk]/\sqrt{\gamma}$._____no_output_____
<code>
# Step 1: The StildeD RHS *source* term
from outputC import outputC, outCfunction # NRPy+: Core C code output module
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import GRHD.equations as GRHD # NRPy+: Generate general relativistic hydrodynamics equations
import GRFFE.equations as GRFFE # NRPy+: Generate general relativistic force-free electrodynamics equations
thismodule = "GiRaFFE_NRPy_Source_Terms"
def generate_memory_access_code(gammaDD,betaU,alpha):
# There are several pieces of C code that we will write ourselves because we need to do things
# a little bit outside of what NRPy+ is built for.
# First, we will write general memory access. We will read in values from memory at a given point
# for each quantity we care about.
global general_access
general_access = ""
for var in ["GAMMADD00", "GAMMADD01", "GAMMADD02",
"GAMMADD11", "GAMMADD12", "GAMMADD22",
"BETAU0", "BETAU1", "BETAU2","ALPHA",
"BU0","BU1","BU2",
"VALENCIAVU0","VALENCIAVU1","VALENCIAVU2"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U").replace("bU","BU").replace("valencia","Valencia")
# e.g.,
# const REAL gammaDD00dD0 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0,i1,i2)];
general_access += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+var+"GF,i0,i1,i2)];\n"
# This quick function returns a nearby point for memory access. We need this because derivatives are not local operations.
def idxp1(dirn):
if dirn==0:
return "i0+1,i1,i2"
if dirn==1:
return "i0,i1+1,i2"
if dirn==2:
return "i0,i1,i2+1"
# Next we evaluate needed derivatives of the metric, based on their values at cell faces
global metric_deriv_access
metric_deriv_access = []
# for dirn in range(3):
# metric_deriv_access.append("")
# for var in ["GAMMA_FACEDDdD00", "GAMMA_FACEDDdD01", "GAMMA_FACEDDdD02",
# "GAMMA_FACEDDdD11", "GAMMA_FACEDDdD12", "GAMMA_FACEDDdD22",
# "BETA_FACEUdD0", "BETA_FACEUdD1", "BETA_FACEUdD2","ALPHA_FACEdD"]:
# lhsvar = var.lower().replace("dddd","DDdD").replace("udd","UdD").replace("dd","dD").replace("u","U").replace("_face","")
# rhsvar = var.replace("dD","")
# # e.g.,
# # const REAL gammaDDdD000 = (auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0+1,i1,i2)]-auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0,i1,i2)])/dxx0;
# metric_deriv_access[dirn] += "const REAL "+lhsvar+str(dirn)+" = (auxevol_gfs[IDX4S("+rhsvar+"GF,"+idxp1(dirn)+")]-auxevol_gfs[IDX4S("+rhsvar+"GF,i0,i1,i2)])/dxx"+str(dirn)+";\n"
# metric_deriv_access[dirn] += "REAL Stilde_rhsD"+str(dirn)+";\n"
# For this workaround, instead of taking the derivative of the metric components and then building the
# four-metric, we build the four-metric and then take derivatives. Do this at i and i+1
for dirn in range(3):
metric_deriv_access.append("")
for var in ["GAMMA_FACEDD00", "GAMMA_FACEDD01", "GAMMA_FACEDD02",
"GAMMA_FACEDD11", "GAMMA_FACEDD12", "GAMMA_FACEDD22",
"BETA_FACEU0", "BETA_FACEU1", "BETA_FACEU2","ALPHA_FACE"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U")
rhsvar = var
# e.g.,
# const REAL gammaDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0,i1,i2)];
metric_deriv_access[dirn] += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+rhsvar+"GF,i0,i1,i2)];\n"
# Read in at the next grid point
for var in ["GAMMA_FACEDD00", "GAMMA_FACEDD01", "GAMMA_FACEDD02",
"GAMMA_FACEDD11", "GAMMA_FACEDD12", "GAMMA_FACEDD22",
"BETA_FACEU0", "BETA_FACEU1", "BETA_FACEU2","ALPHA_FACE"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U").replace("_face","_facep1")
rhsvar = var
# e.g.,
# const REAL gammaDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0+1,i1,i2)];
metric_deriv_access[dirn] += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+rhsvar+"GF,"+idxp1(dirn)+")];\n"
metric_deriv_access[dirn] += "REAL Stilde_rhsD"+str(dirn)+";\n"
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
four_metric_vars = [
AB4m.g4DD[0][0],
AB4m.g4DD[0][1],
AB4m.g4DD[0][2],
AB4m.g4DD[0][3],
AB4m.g4DD[1][1],
AB4m.g4DD[1][2],
AB4m.g4DD[1][3],
AB4m.g4DD[2][2],
AB4m.g4DD[2][3],
AB4m.g4DD[3][3]
]
four_metric_names = [
"g4DD00",
"g4DD01",
"g4DD02",
"g4DD03",
"g4DD11",
"g4DD12",
"g4DD13",
"g4DD22",
"g4DD23",
"g4DD33"
]
global four_metric_C, four_metric_Cp1
four_metric_C = outputC(four_metric_vars,four_metric_names,"returnstring",params="outCverbose=False,CSE_sorting=none")
for ii in range(len(four_metric_names)):
four_metric_names[ii] += "p1"
four_metric_Cp1 = outputC(four_metric_vars,four_metric_names,"returnstring",params="outCverbose=False,CSE_sorting=none")
four_metric_C = four_metric_C.replace("gamma","gamma_face").replace("beta","beta_face").replace("alpha","alpha_face").replace("{","").replace("}","").replace("g4","const REAL g4").replace("tmp_","tmp_deriv")
four_metric_Cp1 = four_metric_Cp1.replace("gamma","gamma_facep1").replace("beta","beta_facep1").replace("alpha","alpha_facep1").replace("{","").replace("}","").replace("g4","const REAL g4").replace("tmp_","tmp_derivp")
global four_metric_deriv
four_metric_deriv = []
for dirn in range(3):
four_metric_deriv.append("")
for var in ["g4DDdD00", "g4DDdD01", "g4DDdD02", "g4DDdD03", "g4DDdD11",
"g4DDdD12", "g4DDdD13", "g4DDdD22", "g4DDdD23", "g4DDdD33"]:
lhsvar = var + str(dirn+1)
rhsvar = var.replace("dD","")
rhsvarp1 = rhsvar + "p1"
# e.g.,
# const REAL g44DDdD000 = (g4DD00p1 - g4DD00)/dxx0;
four_metric_deriv[dirn] += "const REAL "+lhsvar+" = ("+rhsvarp1+" - "+rhsvar+")/dxx"+str(dirn)+";\n"
# This creates the C code that writes to the Stilde_rhs direction specified.
global write_final_quantity
write_final_quantity = []
for dirn in range(3):
write_final_quantity.append("")
write_final_quantity[dirn] += "rhs_gfs[IDX4S(STILDED"+str(dirn)+"GF,i0,i1,i2)] += Stilde_rhsD"+str(dirn)+";"
def write_out_functions_for_StildeD_source_term(outdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi):
generate_memory_access_code(gammaDD,betaU,alpha)
# First, we declare some dummy tensors that we will use for the codegen.
gammaDDdD = ixp.declarerank3("gammaDDdD","sym01",DIM=3)
betaUdD = ixp.declarerank2("betaUdD","nosym",DIM=3)
alphadD = ixp.declarerank1("alphadD",DIM=3)
g4DDdD = ixp.declarerank3("g4DDdD","sym01",DIM=4)
# We need to rerun a few of these functions with the reset lists to make sure these functions
# don't cheat by using analytic expressions
GRHD.compute_sqrtgammaDET(gammaDD)
GRHD.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, ValenciavU)
GRFFE.compute_smallb4U(gammaDD, betaU, alpha, GRHD.u4U_ito_ValenciavU, BU, sqrt4pi)
GRFFE.compute_smallbsquared(gammaDD, betaU, alpha, GRFFE.smallb4U)
GRFFE.compute_TEM4UU(gammaDD,betaU,alpha, GRFFE.smallb4U, GRFFE.smallbsquared,GRHD.u4U_ito_ValenciavU)
# GRHD.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDDdD,betaUdD,alphadD)
GRHD.compute_S_tilde_source_termD(alpha, GRHD.sqrtgammaDET,g4DDdD, GRFFE.TEM4UU)
for i in range(3):
desc = "Adds the source term to StildeD"+str(i)+"."
name = "calculate_StildeD"+str(i)+"_source_term"
outCfunction(
outfile = os.path.join(outdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,const REAL *auxevol_gfs, REAL *rhs_gfs",
body = general_access \
+metric_deriv_access[i]\
+four_metric_C\
+four_metric_Cp1\
+four_metric_deriv[i]\
+outputC(GRHD.S_tilde_source_termD[i],"Stilde_rhsD"+str(i),"returnstring",params=outCparams).replace("IDX4","IDX4S")\
+write_final_quantity[i],
loopopts ="InteriorPoints",
rel_path_to_Cparams=os.path.join("../"))
_____no_output_____
</code>
<a id='code_validation'></a>
# Step 2: Code Validation against original C code \[Back to [top](#toc)\]
$$\label{code_validation}$$
To validate the code in this tutorial we check for agreement between the files
1. that were written in this tutorial and
1. those that are stored in `GiRaFFE_NRPy/GiRaFFE_Ccode_library` or generated by `GiRaFFE_NRPy_Source_Terms.py`
_____no_output_____
<code>
# Declare gridfunctions necessary to generate the C code:
# (These NRPy+ imports are needed for gri.register_gridfunctions and par.Cparameters below;
#  they may already be in scope if an earlier cell imported them.)
import grid as gri               # NRPy+: numerical grid interface
import NRPy_param_funcs as par   # NRPy+: parameter interface
gammaDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gammaDD","sym01",DIM=3)
betaU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","betaU",DIM=3)
alpha = gri.register_gridfunctions("AUXEVOL","alpha",DIM=3)
BU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","BU",DIM=3)
ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","ValenciavU",DIM=3)
StildeD = ixp.register_gridfunctions_for_single_rank1("EVOL","StildeD",DIM=3)
# Declare this symbol:
sqrt4pi = par.Cparameters("REAL",thismodule,"sqrt4pi","sqrt(4.0*M_PI)")
# First, we generate the file using the functions written in this notebook:
outCparams = "outCverbose=False"
write_out_functions_for_StildeD_source_term(outdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi)
# Define the directory that we wish to validate against:
valdir = os.path.join("GiRaFFE_NRPy","GiRaFFE_Ccode_library","RHSs")
cmd.mkdir(valdir)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Source_Terms as source
source.write_out_functions_for_StildeD_source_term(valdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi)
import difflib
import sys
print("Printing difference between original C code and this code...")
# Open the files to compare
files = ["calculate_StildeD0_source_term.h","calculate_StildeD1_source_term.h","calculate_StildeD2_source_term.h"]
for file in files:
print("Checking file " + file)
with open(os.path.join(valdir,file)) as file1, open(os.path.join(outdir,file)) as file2:
# Read the lines of each file
file1_lines = file1.readlines()
file2_lines = file2.readlines()
num_diffs = 0
for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(valdir+file), tofile=os.path.join(outdir+file)):
sys.stdout.writelines(line)
num_diffs = num_diffs + 1
if num_diffs == 0:
print("No difference. TEST PASSED!")
else:
print("ERROR: Disagreement found with .py file. See differences above.")
sys.exit(1)Output C function calculate_StildeD0_source_term() to file GiRaFFE_NRPy\GiRaFFE_Ccode_validation\RHSs\calculate_StildeD0_source_term.h
Output C function calculate_StildeD1_source_term() to file GiRaFFE_NRPy\GiRaFFE_Ccode_validation\RHSs\calculate_StildeD1_source_term.h
Output C function calculate_StildeD2_source_term() to file GiRaFFE_NRPy\GiRaFFE_Ccode_validation\RHSs\calculate_StildeD2_source_term.h
Output C function calculate_StildeD0_source_term() to file GiRaFFE_NRPy\GiRaFFE_Ccode_library\RHSs\calculate_StildeD0_source_term.h
Output C function calculate_StildeD1_source_term() to file GiRaFFE_NRPy\GiRaFFE_Ccode_library\RHSs\calculate_StildeD1_source_term.h
Output C function calculate_StildeD2_source_term() to file GiRaFFE_NRPy\GiRaFFE_Ccode_library\RHSs\calculate_StildeD2_source_term.h
Printing difference between original C code and this code...
Checking file calculate_StildeD0_source_term.h
No difference. TEST PASSED!
Checking file calculate_StildeD1_source_term.h
No difference. TEST PASSED!
Checking file calculate_StildeD2_source_term.h
No difference. TEST PASSED!
</code>
<a id='latex_pdf_output'></a>
# Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GiRaFFE_NRPy_C_code_library-Source_Terms](Tutorial-GiRaFFE_NRPy_C_code_library-Source_Terms.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
<code>
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFE_NRPy-Source_Terms",location_of_template_file=os.path.join(".."))Notebook output to PDF is only supported on Linux systems, with pdflatex installed.
</code>
|
{
"repository": "fedelopezar/nrpytutorial",
"path": "in_progress/Tutorial-GiRaFFE_NRPy-Source_Terms.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 1,
"size": 20239,
"hexsha": "cb513ceaddc0ca4cd8e08c18ba3900e05c80aad3",
"max_line_length": 439,
"avg_line_length": 52.432642487,
"alphanum_fraction": 0.5756707347
}
|
# Notebook from srmnitc/pyscal-webpage
Path: pyscal/part3/05_distinguishing_solid_liquid.ipynb
## Distinction of solid liquid atoms and clustering _____no_output_____In this example, we will take one snapshot from a molecular dynamics simulation which has a solid cluster in liquid. The task is to identify solid atoms and cluster them. More details about the method can be found [here](https://pyscal.readthedocs.io/en/latest/solidliquid.html).
The first step is, of course, importing all the necessary modules. For visualisation, we will use [Ovito](https://www.ovito.org/)._____no_output__________no_output_____The above image shows a visualisation of the system using Ovito. Importing modules,_____no_output_____
<code>
import pyscal.core as pc_____no_output_____
</code>
Now we will set up a System with this input file, and calculate neighbors. Here we will use a cutoff method to find neighbors. More details about finding neighbors can be found [here](https://pyscal.readthedocs.io/en/latest/nearestneighbormethods.html#)._____no_output_____
<code>
sys = pc.System()
sys.read_inputfile('cluster.dump')
sys.find_neighbors(method='cutoff', cutoff=3.63)_____no_output_____
</code>
Once we compute the neighbors, the next step is to find solid atoms. This can be done using [System.find_solids](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.find_solids) method. There are few parameters that can be set, which can be found in detail [here](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.find_solids)._____no_output_____
<code>
sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=False)_____no_output_____
</code>
The above statement found all the solid atoms. Solid atoms can be identified by the value of the `solid` attribute. For that we first get the atom objects and select those with `solid` value as True._____no_output_____
<code>
atoms = sys.atoms
solids = [atom for atom in atoms if atom.solid]
len(solids)_____no_output_____
</code>
There are 202 solid atoms in the system. In order to visualise it in Ovito, we first need to write it out to a trajectory file. This can be done with the help of the [to_file](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.to_file) method of System. This method can save any attribute of the atoms or any Steinhardt parameter value. _____no_output_____
<code>
sys.to_file('sys.solid.dat', custom = ['solid'])_____no_output_____
</code>
We can now visualise this file in Ovito. After opening the file in Ovito, the modifier [compute property](https://ovito.org/manual/particles.modifiers.compute_property.html) can be selected. The `Output property` should be `selection`, and in the expression field `solid==0` can be entered to select all the non-solid atoms. Then the [delete selected particles](https://ovito.org/manual/particles.modifiers.delete_selected_particles.html) modifier can be applied to delete all the non-solid particles. The system after removing all the liquid atoms is shown below._____no_output__________no_output_____### Clustering algorithm
You can see that there is a cluster of atoms. The clustering functions that pyscal offers help in this regard. If `find_solids` is called with `cluster=True`, the clustering is carried out. Since we used `cluster=False` above, we will rerun the function_____no_output_____
<code>
sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=True)_____no_output_____
</code>
You can see that the above function call returned the number of atoms belonging to the largest cluster as an output. In order to extract atoms that belong to the largest cluster, we can use the `largest_cluster` attribute of the atom._____no_output_____
<code>
atoms = sys.atoms
largest_cluster = [atom for atom in atoms if atom.largest_cluster]
len(largest_cluster)_____no_output_____
</code>
The value matches that given by the function. Once again we will save this information to a file and visualise it in Ovito. _____no_output_____
<code>
sys.to_file('sys.cluster.dat', custom = ['solid', 'largest_cluster'])_____no_output_____
</code>
The system visualised in Ovito following similar steps as above is shown below._____no_output__________no_output_____It is clear from the image that the largest cluster of solid atoms was successfully identified. Clustering can be done over any property. The following example with the same system will illustrate this._____no_output_____## Clustering based on a custom property_____no_output_____In pyscal, clustering can be done based on any property. The following example illustrates this. To find the clusters based on a custom property, the [System.cluster_atoms](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.cluster_atoms) method has to be used. The simulation box shown above has its centre roughly at (25, 25, 25). For the custom clustering, we will cluster all atoms within a distance of 10 from the rough centre of the box at (25, 25, 25). Let us define a function that checks the above condition._____no_output_____
<code>
def check_distance(atom):
#get position of atom
pos = atom.pos
#calculate distance from (25, 25, 25)
dist = ((pos[0]-25)**2 + (pos[1]-25)**2 + (pos[2]-25)**2)**0.5
#check if dist < 10
return (dist <= 10)_____no_output_____
</code>
The above function returns True or False depending on a condition and takes an Atom as its argument. These are the two requirements a custom clustering function has to satisfy. Now we can pass this function to the clustering method. First, set up the system and find the neighbors. _____no_output_____
<code>
sys = pc.System()
sys.read_inputfile('cluster.dump')
sys.find_neighbors(method='cutoff', cutoff=3.63)_____no_output_____
</code>
Now cluster_____no_output_____
<code>
sys.cluster_atoms(check_distance)_____no_output_____
</code>
There are 242 atoms in the cluster! Once again we can check this, save it to a file and visualise it in Ovito._____no_output_____
<code>
atoms = sys.atoms
largest_cluster = [atom for atom in atoms if atom.largest_cluster]
len(largest_cluster)_____no_output_____sys.to_file('sys.dist.dat', custom = ['solid', 'largest_cluster'])_____no_output_____
</code>
_____no_output_____This example illustrates that any property can be used to cluster the atoms!_____no_output_____
|
{
"repository": "srmnitc/pyscal-webpage",
"path": "pyscal/part3/05_distinguishing_solid_liquid.ipynb",
"matched_keywords": [
"molecular dynamics"
],
"stars": 2,
"size": 10485,
"hexsha": "cb537d5cb604295c4153ebc3252d472be9a37e22",
"max_line_length": 573,
"avg_line_length": 27.6649076517,
"alphanum_fraction": 0.5980925131
}
|
# Notebook from Wabinab/NLP_GroupProject_DG
Path: Week_9/cleaning data by re.ipynb
<code>
import os
import sys
import pandas as pd
import re
# pd.set_option('display.max_colwidth', -1)<ipython-input-365-74384648d893>:5: FutureWarning: Passing a negative integer is deprecated in version 1.0 and will not be supported in future version. Instead, use None to not limit the column width.
pd.set_option('display.max_colwidth', -1)
</code>
Read data and split into texts and labels_____no_output_____
<code>
BASE_DIR = ''
GLOVE_DIR = os.path.join(BASE_DIR, 'glove.6B')
TEXT_DATA_DIR = os.path.join(BASE_DIR, '20_newsgroups')_____no_output_____# This code is from https://www.kaggle.com/mansijharia with some edits
# May take some time
texts = []
labels_index = {}
labels = []
for name in sorted(os.listdir((BASE_DIR+'20_newsgroups'))):
path = os.path.join(BASE_DIR,'20_newsgroups', name)
if os.path.isdir(path):
label_id = len(labels_index)
labels_index[name] = label_id
for fname in sorted(os.listdir(path)):
if fname.isdigit():
fpath = os.path.join(path, fname)
args = {} if sys.version_info < (3,) else {'encoding': 'latin-1'}
with open(fpath, **args) as f:
t = f.read()
# Skip the metadata in the 1st paragraph.
i = t.find('\n\n')
if 0 < i:
t = t[i:]
texts.append(t)
labels.append(label_id)_____no_output_____
</code>
print('Found %s texts.' % len(texts))
print('Found %s labels.' % len(labels))_____no_output_____dict(labels_index.items())_____no_output_____- First remove the metadata by removing the first paragraph
- Some metadata contain two paragraphs; we will handle those with regular expressions
_____no_output_____
<code>
# for i in range(0,len(texts)):
# match = re.search(r'([\w\.-]+)@([\w\.-]+)', texts[i])
# if texts[i]!= None:
# print(match)# just to show result
# #so long output
#no need _____no_output_____for i in range(0,len(texts)):
texts[i]= texts[i].strip() #To remove spaces from the beginning and the end of a string_____no_output_____
</code>
# All these steps will be ordered and put into the cleaning method in the process_data.py file._____no_output_____
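As a sketch of what such a cleaning method could look like (the actual method in `process_data.py` may differ; the helper name `clean_text` is illustrative), the regex steps demonstrated in this notebook can be collected into a single function:
```python
import re

def clean_text(text):
    """Illustrative combination of the cleaning steps shown in this notebook."""
    text = text.strip()                 # remove leading/trailing whitespace
    text = re.sub(r'=+', '', text)      # remove runs of '=' characters
    text = re.sub(r'\|+', '', text)     # remove runs of '|' characters
    text = re.sub(r'\(\)+', '', text)   # remove empty parentheses
    text = re.sub(r'\[\]+', '', text)   # remove empty square brackets
    return text

# texts = [clean_text(t) for t in texts]   # assumes the `texts` list loaded above
```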
<code>
#just one sample to show the work
# you can change the value of i
i=0
print('before:',texts[i])
print('the length is',len(texts[i]))before: Archive-name: atheism/resources
Alt-atheism-archive-name: resources
Last-modified: 11 December 1992
Version: 1.0
Atheist Resources
Addresses of Atheist Organizations
USA
FREEDOM FROM RELIGION FOUNDATION
Darwin fish bumper stickers and assorted other atheist paraphernalia are
available from the Freedom From Religion Foundation in the US.
Write to: FFRF, P.O. Box 750, Madison, WI 53701.
Telephone: (608) 256-8900
EVOLUTION DESIGNS
Evolution Designs sell the "Darwin fish". It's a fish symbol, like the ones
Christians stick on their cars, but with feet and the word "Darwin" written
inside. The deluxe moulded 3D plastic fish is $4.95 postpaid in the US.
Write to: Evolution Designs, 7119 Laurel Canyon #4, North Hollywood,
CA 91605.
People in the San Francisco Bay area can get Darwin Fish from Lynn Gold --
try mailing <[email protected]>. For net people who go to Lynn directly, the
price is $4.95 per fish.
AMERICAN ATHEIST PRESS
AAP publish various atheist books -- critiques of the Bible, lists of
Biblical contradictions, and so on. One such book is:
"The Bible Handbook" by W.P. Ball and G.W. Foote. American Atheist Press.
372 pp. ISBN 0-910309-26-4, 2nd edition, 1986. Bible contradictions,
absurdities, atrocities, immoralities... contains Ball, Foote: "The Bible
Contradicts Itself", AAP. Based on the King James version of the Bible.
Write to: American Atheist Press, P.O. Box 140195, Austin, TX 78714-0195.
or: 7215 Cameron Road, Austin, TX 78752-2973.
Telephone: (512) 458-1244
Fax: (512) 467-9525
PROMETHEUS BOOKS
Sell books including Haught's "Holy Horrors" (see below).
Write to: 700 East Amherst Street, Buffalo, New York 14215.
Telephone: (716) 837-2475.
An alternate address (which may be newer or older) is:
Prometheus Books, 59 Glenn Drive, Buffalo, NY 14228-2197.
AFRICAN-AMERICANS FOR HUMANISM
An organization promoting black secular humanism and uncovering the history of
black freethought. They publish a quarterly newsletter, AAH EXAMINER.
Write to: Norm R. Allen, Jr., African Americans for Humanism, P.O. Box 664,
Buffalo, NY 14226.
United Kingdom
Rationalist Press Association National Secular Society
88 Islington High Street 702 Holloway Road
London N1 8EW London N19 3NL
071 226 7251 071 272 1266
British Humanist Association South Place Ethical Society
14 Lamb's Conduit Passage Conway Hall
London WC1R 4RH Red Lion Square
071 430 0908 London WC1R 4RL
fax 071 430 1271 071 831 7723
The National Secular Society publish "The Freethinker", a monthly magazine
founded in 1881.
Germany
IBKA e.V.
Internationaler Bund der Konfessionslosen und Atheisten
Postfach 880, D-1000 Berlin 41. Germany.
IBKA publish a journal:
MIZ. (Materialien und Informationen zur Zeit. Politisches
Journal der Konfessionslosesn und Atheisten. Hrsg. IBKA e.V.)
MIZ-Vertrieb, Postfach 880, D-1000 Berlin 41. Germany.
For atheist books, write to:
IBDK, Internationaler B"ucherdienst der Konfessionslosen
Postfach 3005, D-3000 Hannover 1. Germany.
Telephone: 0511/211216
Books -- Fiction
THOMAS M. DISCH
"The Santa Claus Compromise"
Short story. The ultimate proof that Santa exists. All characters and
events are fictitious. Any similarity to living or dead gods -- uh, well...
WALTER M. MILLER, JR
"A Canticle for Leibowitz"
One gem in this post atomic doomsday novel is the monks who spent their lives
copying blueprints from "Saint Leibowitz", filling the sheets of paper with
ink and leaving white lines and letters.
EDGAR PANGBORN
"Davy"
Post atomic doomsday novel set in clerical states. The church, for example,
forbids that anyone "produce, describe or use any substance containing...
atoms".
PHILIP K. DICK
Philip K. Dick Dick wrote many philosophical and thought-provoking short
stories and novels. His stories are bizarre at times, but very approachable.
He wrote mainly SF, but he wrote about people, truth and religion rather than
technology. Although he often believed that he had met some sort of God, he
remained sceptical. Amongst his novels, the following are of some relevance:
"Galactic Pot-Healer"
A fallible alien deity summons a group of Earth craftsmen and women to a
remote planet to raise a giant cathedral from beneath the oceans. When the
deity begins to demand faith from the earthers, pot-healer Joe Fernwright is
unable to comply. A polished, ironic and amusing novel.
"A Maze of Death"
Noteworthy for its description of a technology-based religion.
"VALIS"
The schizophrenic hero searches for the hidden mysteries of Gnostic
Christianity after reality is fired into his brain by a pink laser beam of
unknown but possibly divine origin. He is accompanied by his dogmatic and
dismissively atheist friend and assorted other odd characters.
"The Divine Invasion"
God invades Earth by making a young woman pregnant as she returns from
another star system. Unfortunately she is terminally ill, and must be
assisted by a dead man whose brain is wired to 24-hour easy listening music.
MARGARET ATWOOD
"The Handmaid's Tale"
A story based on the premise that the US Congress is mysteriously
assassinated, and fundamentalists quickly take charge of the nation to set it
"right" again. The book is the diary of a woman's life as she tries to live
under the new Christian theocracy. Women's right to own property is revoked,
and their bank accounts are closed; sinful luxuries are outlawed, and the
radio is only used for readings from the Bible. Crimes are punished
retroactively: doctors who performed legal abortions in the "old world" are
hunted down and hanged. Atwood's writing style is difficult to get used to
at first, but the tale grows more and more chilling as it goes on.
VARIOUS AUTHORS
"The Bible"
This somewhat dull and rambling work has often been criticized. However, it
is probably worth reading, if only so that you'll know what all the fuss is
about. It exists in many different versions, so make sure you get the one
true version.
Books -- Non-fiction
PETER DE ROSA
"Vicars of Christ", Bantam Press, 1988
Although de Rosa seems to be Christian or even Catholic this is a very
enlighting history of papal immoralities, adulteries, fallacies etc.
(German translation: "Gottes erste Diener. Die dunkle Seite des Papsttums",
Droemer-Knaur, 1989)
MICHAEL MARTIN
"Atheism: A Philosophical Justification", Temple University Press,
Philadelphia, USA.
A detailed and scholarly justification of atheism. Contains an outstanding
appendix defining terminology and usage in this (necessarily) tendentious
area. Argues both for "negative atheism" (i.e. the "non-belief in the
existence of god(s)") and also for "positive atheism" ("the belief in the
non-existence of god(s)"). Includes great refutations of the most
challenging arguments for god; particular attention is paid to refuting
contempory theists such as Platinga and Swinburne.
541 pages. ISBN 0-87722-642-3 (hardcover; paperback also available)
"The Case Against Christianity", Temple University Press
A comprehensive critique of Christianity, in which he considers
the best contemporary defences of Christianity and (ultimately)
demonstrates that they are unsupportable and/or incoherent.
273 pages. ISBN 0-87722-767-5
JAMES TURNER
"Without God, Without Creed", The Johns Hopkins University Press, Baltimore,
MD, USA
Subtitled "The Origins of Unbelief in America". Examines the way in which
unbelief (whether agnostic or atheistic) became a mainstream alternative
world-view. Focusses on the period 1770-1900, and while considering France
and Britain the emphasis is on American, and particularly New England
developments. "Neither a religious history of secularization or atheism,
Without God, Without Creed is, rather, the intellectual history of the fate
of a single idea, the belief that God exists."
316 pages. ISBN (hardcover) 0-8018-2494-X (paper) 0-8018-3407-4
GEORGE SELDES (Editor)
"The great thoughts", Ballantine Books, New York, USA
A "dictionary of quotations" of a different kind, concentrating on statements
and writings which, explicitly or implicitly, present the person's philosophy
and world-view. Includes obscure (and often suppressed) opinions from many
people. For some popular observations, traces the way in which various
people expressed and twisted the idea over the centuries. Quite a number of
the quotations are derived from Cardiff's "What Great Men Think of Religion"
and Noyes' "Views of Religion".
490 pages. ISBN (paper) 0-345-29887-X.
RICHARD SWINBURNE
"The Existence of God (Revised Edition)", Clarendon Paperbacks, Oxford
This book is the second volume in a trilogy that began with "The Coherence of
Theism" (1977) and was concluded with "Faith and Reason" (1981). In this
work, Swinburne attempts to construct a series of inductive arguments for the
existence of God. His arguments, which are somewhat tendentious and rely
upon the imputation of late 20th century western Christian values and
aesthetics to a God which is supposedly as simple as can be conceived, were
decisively rejected in Mackie's "The Miracle of Theism". In the revised
edition of "The Existence of God", Swinburne includes an Appendix in which he
makes a somewhat incoherent attempt to rebut Mackie.
J. L. MACKIE
"The Miracle of Theism", Oxford
This (posthumous) volume contains a comprehensive review of the principal
arguments for and against the existence of God. It ranges from the classical
philosophical positions of Descartes, Anselm, Berkeley, Hume et al, through
the moral arguments of Newman, Kant and Sidgwick, to the recent restatements
of the classical theses by Plantinga and Swinburne. It also addresses those
positions which push the concept of God beyond the realm of the rational,
such as those of Kierkegaard, Kung and Philips, as well as "replacements for
God" such as Lelie's axiarchism. The book is a delight to read - less
formalistic and better written than Martin's works, and refreshingly direct
when compared with the hand-waving of Swinburne.
JAMES A. HAUGHT
"Holy Horrors: An Illustrated History of Religious Murder and Madness",
Prometheus Books
Looks at religious persecution from ancient times to the present day -- and
not only by Christians.
Library of Congress Catalog Card Number 89-64079. 1990.
NORM R. ALLEN, JR.
"African American Humanism: an Anthology"
See the listing for African Americans for Humanism above.
GORDON STEIN
"An Anthology of Atheism and Rationalism", Prometheus Books
An anthology covering a wide range of subjects, including 'The Devil, Evil
and Morality' and 'The History of Freethought'. Comprehensive bibliography.
EDMUND D. COHEN
"The Mind of The Bible-Believer", Prometheus Books
A study of why people become Christian fundamentalists, and what effect it
has on them.
Net Resources
There's a small mail-based archive server at mantis.co.uk which carries
archives of old alt.atheism.moderated articles and assorted other files. For
more information, send mail to [email protected] saying
help
send atheism/index
and it will mail back a reply.
mathew
ÿ
the length is 11518
texts[i]= texts[i].strip() #To remove spaces from the beginning and the end
print('after:',texts[i])
print('the length is',len(texts[i]))_____no_output_____texts[i] =re.sub(r'\=+','', texts[i])#To remove any == characters
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from [email protected]) writes:
[I wrote:]
>>When the Pens got Mario, granted there was big publicity, etc, etc,
>>and interest was immediately generated. Gretzky did the same thing for LA.
>>However, imnsho, neither team would have seen a marked improvement in
>>attendance if the team record did not improve. In the year before Lemieux
>>came, Pittsburgh finished with 38 points. Following his arrival, the Pens
>>finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
^^
>>Stanley Cups thrown in.
>It was at this point the Pens attendance was near capacity (34 out of 40
>sellouts) yet they hadn't made the playoffs since 1982. How do you explain
>a 6th place team breaking attendance records when they haven't been to the
>playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
>You could make a case that the *expectation* of an improving team that
>would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
>But I think the reason is Lemieux
>had a 168 point season and was the first non-Gretzky to win the Hart and
>Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winning/competitive/improving/butt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
>Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
>They made the transaction to try and build a winner around Mario, that is
>true. But the improvement in attendance came before they started doing
>this (Coffey late in 1987) and before they even had a playoff bound team.
>A doubling of attendance occured in 1984-85 from the previous year. An
>increase from 38 points to 53 points is not going to do that. The arrival
>of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
> Similar thing happened in L.A. Before
>Gretzky's arrival, about 12000 per game. After, constant sellouts. They
>are STILL selling out every game despite showing little or no improvement
>since Gretzky's first year there. How do you explain it? People are going
>to see Gretzky. they certainly aren't going to see a winner, they haven't
>GOT a winner. They've had MUCH better teams in their past history than
>they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
>I think in the case of a Lemieux or Gretzky, the player can transcend
>winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
>But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
>This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? <couldn't resist...>
>getting a HUGE jump in productivity, yet they ARE getting a huge
>jump in attendance. This is due to the emergence of Teemu Selanne.
>They have the 17th best record in hockey, it sure as hell isn't because
>they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5868
texts[i] =re.sub(r'\|+','', texts[i])#To remove any | characters
print('after:',texts[i])
print('the length is',len(texts[i]))after: Dean J. Falcione (posting from [email protected]) writes:
[I wrote:]
>>When the Pens got Mario, granted there was big publicity, etc, etc,
>>and interest was immediately generated. Gretzky did the same thing for LA.
>>However, imnsho, neither team would have seen a marked improvement in
>>attendance if the team record did not improve. In the year before Lemieux
>>came, Pittsburgh finished with 38 points. Following his arrival, the Pens
>>finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
^^
>>Stanley Cups thrown in.
>It was at this point the Pens attendance was near capacity (34 out of 40
>sellouts) yet they hadn't made the playoffs since 1982. How do you explain
>a 6th place team breaking attendance records when they haven't been to the
>playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
>You could make a case that the *expectation* of an improving team that
>would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
>But I think the reason is Lemieux
>had a 168 point season and was the first non-Gretzky to win the Hart and
>Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winning/competitive/improving/butt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
>Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
>They made the transaction to try and build a winner around Mario, that is
>true. But the improvement in attendance came before they started doing
>this (Coffey late in 1987) and before they even had a playoff bound team.
>A doubling of attendance occured in 1984-85 from the previous year. An
>increase from 38 points to 53 points is not going to do that. The arrival
>of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
> Similar thing happened in L.A. Before
>Gretzky's arrival, about 12000 per game. After, constant sellouts. They
>are STILL selling out every game despite showing little or no improvement
>since Gretzky's first year there. How do you explain it? People are going
>to see Gretzky. they certainly aren't going to see a winner, they haven't
>GOT a winner. They've had MUCH better teams in their past history than
>they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
>I think in the case of a Lemieux or Gretzky, the player can transcend
>winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
>But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
>This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? <couldn't resist...>
>getting a HUGE jump in productivity, yet they ARE getting a huge
>jump in attendance. This is due to the emergence of Teemu Selanne.
>They have the 17th best record in hockey, it sure as hell isn't because
>they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5868
texts[i] =re.sub(r'\(\)+','', texts[i])#To remove any () empty parentheses
print('after:',texts[i])
print('the length is',len(texts[i]))after: Dean J. Falcione (posting from [email protected]) writes:
[I wrote:]
>>When the Pens got Mario, granted there was big publicity, etc, etc,
>>and interest was immediately generated. Gretzky did the same thing for LA.
>>However, imnsho, neither team would have seen a marked improvement in
>>attendance if the team record did not improve. In the year before Lemieux
>>came, Pittsburgh finished with 38 points. Following his arrival, the Pens
>>finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
^^
>>Stanley Cups thrown in.
>It was at this point the Pens attendance was near capacity (34 out of 40
>sellouts) yet they hadn't made the playoffs since 1982. How do you explain
>a 6th place team breaking attendance records when they haven't been to the
>playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
>You could make a case that the *expectation* of an improving team that
>would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
>But I think the reason is Lemieux
>had a 168 point season and was the first non-Gretzky to win the Hart and
>Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winning/competitive/improving/butt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
>Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
>They made the transaction to try and build a winner around Mario, that is
>true. But the improvement in attendance came before they started doing
>this (Coffey late in 1987) and before they even had a playoff bound team.
>A doubling of attendance occured in 1984-85 from the previous year. An
>increase from 38 points to 53 points is not going to do that. The arrival
>of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
> Similar thing happened in L.A. Before
>Gretzky's arrival, about 12000 per game. After, constant sellouts. They
>are STILL selling out every game despite showing little or no improvement
>since Gretzky's first year there. How do you explain it? People are going
>to see Gretzky. they certainly aren't going to see a winner, they haven't
>GOT a winner. They've had MUCH better teams in their past history than
>they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
>I think in the case of a Lemieux or Gretzky, the player can transcend
>winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
>But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
>This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? <couldn't resist...>
>getting a HUGE jump in productivity, yet they ARE getting a huge
>jump in attendance. This is due to the emergence of Teemu Selanne.
>They have the 17th best record in hockey, it sure as hell isn't because
>they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5868
texts[i] =re.sub(r'\[\]+','', texts[i])#To remove any [] characters
print('after:',texts[i])
print('the length is',len(texts[i]))after: Dean J. Falcione (posting from [email protected]) writes:
[I wrote:]
>>When the Pens got Mario, granted there was big publicity, etc, etc,
>>and interest was immediately generated. Gretzky did the same thing for LA.
>>However, imnsho, neither team would have seen a marked improvement in
>>attendance if the team record did not improve. In the year before Lemieux
>>came, Pittsburgh finished with 38 points. Following his arrival, the Pens
>>finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
^^
>>Stanley Cups thrown in.
>It was at this point the Pens attendance was near capacity (34 out of 40
>sellouts) yet they hadn't made the playoffs since 1982. How do you explain
>a 6th place team breaking attendance records when they haven't been to the
>playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
>You could make a case that the *expectation* of an improving team that
>would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
>But I think the reason is Lemieux
>had a 168 point season and was the first non-Gretzky to win the Hart and
>Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winning/competitive/improving/butt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
>Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
>They made the transaction to try and build a winner around Mario, that is
>true. But the improvement in attendance came before they started doing
>this (Coffey late in 1987) and before they even had a playoff bound team.
>A doubling of attendance occured in 1984-85 from the previous year. An
>increase from 38 points to 53 points is not going to do that. The arrival
>of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
> Similar thing happened in L.A. Before
>Gretzky's arrival, about 12000 per game. After, constant sellouts. They
>are STILL selling out every game despite showing little or no improvement
>since Gretzky's first year there. How do you explain it? People are going
>to see Gretzky. they certainly aren't going to see a winner, they haven't
>GOT a winner. They've had MUCH better teams in their past history than
>they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
>I think in the case of a Lemieux or Gretzky, the player can transcend
>winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
>But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
>This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? <couldn't resist...>
>getting a HUGE jump in productivity, yet they ARE getting a huge
>jump in attendance. This is due to the emergence of Teemu Selanne.
>They have the 17th best record in hockey, it sure as hell isn't because
>they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5868
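The length is still 5868 after this step because r'\[\]+' only matches a literal '[' immediately followed by one or more ']', which never occurs in this post; that is why fragments such as [I wrote:] survive. A character class strips both brackets independently; a minimal, hypothetical correction (re is already imported in this notebook):

texts[i] = re.sub(r'[\[\]]+', '', texts[i])  # hypothetical fix: remove every '[' and ']' character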
texts[i] = re.sub('[<>]', ' ', texts[i])  # To replace < and > characters with a space
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from [email protected]) writes:
[I wrote:]
When the Pens got Mario, granted there was big publicity, etc, etc,
and interest was immediately generated. Gretzky did the same thing for LA.
However, imnsho, neither team would have seen a marked improvement in
attendance if the team record did not improve. In the year before Lemieux
came, Pittsburgh finished with 38 points. Following his arrival, the Pens
finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
^^
Stanley Cups thrown in.
It was at this point the Pens attendance was near capacity (34 out of 40
sellouts) yet they hadn't made the playoffs since 1982. How do you explain
a 6th place team breaking attendance records when they haven't been to the
playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
You could make a case that the *expectation* of an improving team that
would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
But I think the reason is Lemieux
had a 168 point season and was the first non-Gretzky to win the Hart and
Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winning/competitive/improving/butt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
They made the transaction to try and build a winner around Mario, that is
true. But the improvement in attendance came before they started doing
this (Coffey late in 1987) and before they even had a playoff bound team.
A doubling of attendance occured in 1984-85 from the previous year. An
increase from 38 points to 53 points is not going to do that. The arrival
of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
Similar thing happened in L.A. Before
Gretzky's arrival, about 12000 per game. After, constant sellouts. They
are STILL selling out every game despite showing little or no improvement
since Gretzky's first year there. How do you explain it? People are going
to see Gretzky. they certainly aren't going to see a winner, they haven't
GOT a winner. They've had MUCH better teams in their past history than
they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
I think in the case of a Lemieux or Gretzky, the player can transcend
winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? couldn't resist...
getting a HUGE jump in productivity, yet they ARE getting a huge
jump in attendance. This is due to the emergence of Teemu Selanne.
They have the 17th best record in hockey, it sure as hell isn't because
they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5868
texts[i] = re.sub(r'([\w.-]+)@([\w.-]+)', '', texts[i])  # To remove any email addresses
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from jrmst+) writes:
[I wrote:]
When the Pens got Mario, granted there was big publicity, etc, etc,
and interest was immediately generated. Gretzky did the same thing for LA.
However, imnsho, neither team would have seen a marked improvement in
attendance if the team record did not improve. In the year before Lemieux
came, Pittsburgh finished with 38 points. Following his arrival, the Pens
finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
^^
Stanley Cups thrown in.
It was at this point the Pens attendance was near capacity (34 out of 40
sellouts) yet they hadn't made the playoffs since 1982. How do you explain
a 6th place team breaking attendance records when they haven't been to the
playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
You could make a case that the *expectation* of an improving team that
would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
But I think the reason is Lemieux
had a 168 point season and was the first non-Gretzky to win the Hart and
Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winning/competitive/improving/butt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
They made the transaction to try and build a winner around Mario, that is
true. But the improvement in attendance came before they started doing
this (Coffey late in 1987) and before they even had a playoff bound team.
A doubling of attendance occured in 1984-85 from the previous year. An
increase from 38 points to 53 points is not going to do that. The arrival
of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
Similar thing happened in L.A. Before
Gretzky's arrival, about 12000 per game. After, constant sellouts. They
are STILL selling out every game despite showing little or no improvement
since Gretzky's first year there. How do you explain it? People are going
to see Gretzky. they certainly aren't going to see a winner, they haven't
GOT a winner. They've had MUCH better teams in their past history than
they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
I think in the case of a Lemieux or Gretzky, the player can transcend
winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? couldn't resist...
getting a HUGE jump in productivity, yet they ARE getting a huge
jump in attendance. This is due to the emergence of Teemu Selanne.
They have the 17th best record in hockey, it sure as hell isn't because
they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5858
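A quick sanity check of the email pattern on a made-up string (the address below is purely illustrative and not from the corpus):

sample = 'reach me at alice.smith@example.org or on the rink'
print(re.sub(r'([\w.-]+)@([\w.-]+)', '', sample))  # prints: reach me at  or on the rink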
texts[i] = re.sub(r'[\\/]+', '', texts[i])  # To remove / and \ characters
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from jrmst+) writes:
[I wrote:]
When the Pens got Mario, granted there was big publicity, etc, etc,
and interest was immediately generated. Gretzky did the same thing for LA.
However, imnsho, neither team would have seen a marked improvement in
attendance if the team record did not improve. In the year before Lemieux
came, Pittsburgh finished with 38 points. Following his arrival, the Pens
finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
^^
Stanley Cups thrown in.
It was at this point the Pens attendance was near capacity (34 out of 40
sellouts) yet they hadn't made the playoffs since 1982. How do you explain
a 6th place team breaking attendance records when they haven't been to the
playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
You could make a case that the *expectation* of an improving team that
would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
But I think the reason is Lemieux
had a 168 point season and was the first non-Gretzky to win the Hart and
Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winningcompetitiveimprovingbutt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
They made the transaction to try and build a winner around Mario, that is
true. But the improvement in attendance came before they started doing
this (Coffey late in 1987) and before they even had a playoff bound team.
A doubling of attendance occured in 1984-85 from the previous year. An
increase from 38 points to 53 points is not going to do that. The arrival
of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
Similar thing happened in L.A. Before
Gretzky's arrival, about 12000 per game. After, constant sellouts. They
are STILL selling out every game despite showing little or no improvement
since Gretzky's first year there. How do you explain it? People are going
to see Gretzky. they certainly aren't going to see a winner, they haven't
GOT a winner. They've had MUCH better teams in their past history than
they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
I think in the case of a Lemieux or Gretzky, the player can transcend
winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? couldn't resist...
getting a HUGE jump in productivity, yet they ARE getting a huge
jump in attendance. This is due to the emergence of Teemu Selanne.
They have the 17th best record in hockey, it sure as hell isn't because
they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5855
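One side effect of deleting the slashes outright is visible above: 'winning/competitive/improving/butt-kicking' collapses into a single token. Substituting a space instead keeps the words separated; a hypothetical alternative (the final step collapses repeated spaces anyway):

texts[i] = re.sub(r'[\\/]+', ' ', texts[i])  # replace / and \ with a space so neighbouring words stay separated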
texts[i] = re.sub(r'\^+', '', texts[i])  # To remove ^ characters
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from jrmst+) writes:
[I wrote:]
When the Pens got Mario, granted there was big publicity, etc, etc,
and interest was immediately generated. Gretzky did the same thing for LA.
However, imnsho, neither team would have seen a marked improvement in
attendance if the team record did not improve. In the year before Lemieux
came, Pittsburgh finished with 38 points. Following his arrival, the Pens
finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
Stanley Cups thrown in.
It was at this point the Pens attendance was near capacity (34 out of 40
sellouts) yet they hadn't made the playoffs since 1982. How do you explain
a 6th place team breaking attendance records when they haven't been to the
playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
You could make a case that the *expectation* of an improving team that
would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
But I think the reason is Lemieux
had a 168 point season and was the first non-Gretzky to win the Hart and
Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winningcompetitiveimprovingbutt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
They made the transaction to try and build a winner around Mario, that is
true. But the improvement in attendance came before they started doing
this (Coffey late in 1987) and before they even had a playoff bound team.
A doubling of attendance occured in 1984-85 from the previous year. An
increase from 38 points to 53 points is not going to do that. The arrival
of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
Similar thing happened in L.A. Before
Gretzky's arrival, about 12000 per game. After, constant sellouts. They
are STILL selling out every game despite showing little or no improvement
since Gretzky's first year there. How do you explain it? People are going
to see Gretzky. they certainly aren't going to see a winner, they haven't
GOT a winner. They've had MUCH better teams in their past history than
they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
I think in the case of a Lemieux or Gretzky, the player can transcend
winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? couldn't resist...
getting a HUGE jump in productivity, yet they ARE getting a huge
jump in attendance. This is due to the emergence of Teemu Selanne.
They have the 17th best record in hockey, it sure as hell isn't because
they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5853
texts[i] = re.sub('_+', ' ', texts[i])  # To replace runs of underscores (divider lines) with a space
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from jrmst+) writes:
[I wrote:]
When the Pens got Mario, granted there was big publicity, etc, etc,
and interest was immediately generated. Gretzky did the same thing for LA.
However, imnsho, neither team would have seen a marked improvement in
attendance if the team record did not improve. In the year before Lemieux
came, Pittsburgh finished with 38 points. Following his arrival, the Pens
finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
Stanley Cups thrown in.
It was at this point the Pens attendance was near capacity (34 out of 40
sellouts) yet they hadn't made the playoffs since 1982. How do you explain
a 6th place team breaking attendance records when they haven't been to the
playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
You could make a case that the *expectation* of an improving team that
would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
But I think the reason is Lemieux
had a 168 point season and was the first non-Gretzky to win the Hart and
Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winningcompetitiveimprovingbutt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
They made the transaction to try and build a winner around Mario, that is
true. But the improvement in attendance came before they started doing
this (Coffey late in 1987) and before they even had a playoff bound team.
A doubling of attendance occured in 1984-85 from the previous year. An
increase from 38 points to 53 points is not going to do that. The arrival
of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
Similar thing happened in L.A. Before
Gretzky's arrival, about 12000 per game. After, constant sellouts. They
are STILL selling out every game despite showing little or no improvement
since Gretzky's first year there. How do you explain it? People are going
to see Gretzky. they certainly aren't going to see a winner, they haven't
GOT a winner. They've had MUCH better teams in their past history than
they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
I think in the case of a Lemieux or Gretzky, the player can transcend
winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? couldn't resist...
getting a HUGE jump in productivity, yet they ARE getting a huge
jump in attendance. This is due to the emergence of Teemu Selanne.
They have the 17th best record in hockey, it sure as hell isn't because
they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5853
texts[i] = re.sub('--+', ' ', texts[i])  # To replace runs of dashes with a space
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from jrmst+) writes:
[I wrote:]
When the Pens got Mario, granted there was big publicity, etc, etc,
and interest was immediately generated. Gretzky did the same thing for LA.
However, imnsho, neither team would have seen a marked improvement in
attendance if the team record did not improve. In the year before Lemieux
came, Pittsburgh finished with 38 points. Following his arrival, the Pens
finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
Stanley Cups thrown in.
It was at this point the Pens attendance was near capacity (34 out of 40
sellouts) yet they hadn't made the playoffs since 1982. How do you explain
a 6th place team breaking attendance records when they haven't been to the
playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
You could make a case that the *expectation* of an improving team that
would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
But I think the reason is Lemieux
had a 168 point season and was the first non-Gretzky to win the Hart and
Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winningcompetitiveimprovingbutt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
They made the transaction to try and build a winner around Mario, that is
true. But the improvement in attendance came before they started doing
this (Coffey late in 1987) and before they even had a playoff bound team.
A doubling of attendance occured in 1984-85 from the previous year. An
increase from 38 points to 53 points is not going to do that. The arrival
of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
Similar thing happened in L.A. Before
Gretzky's arrival, about 12000 per game. After, constant sellouts. They
are STILL selling out every game despite showing little or no improvement
since Gretzky's first year there. How do you explain it? People are going
to see Gretzky. they certainly aren't going to see a winner, they haven't
GOT a winner. They've had MUCH better teams in their past history than
they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
I think in the case of a Lemieux or Gretzky, the player can transcend
winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? couldn't resist...
getting a HUGE jump in productivity, yet they ARE getting a huge
jump in attendance. This is due to the emergence of Teemu Selanne.
They have the 17th best record in hockey, it sure as hell isn't because
they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5853
texts[i] = re.sub(r'~~+', '', texts[i])  # To remove runs of ~~ characters
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from jrmst+) writes:
[I wrote:]
When the Pens got Mario, granted there was big publicity, etc, etc,
and interest was immediately generated. Gretzky did the same thing for LA.
However, imnsho, neither team would have seen a marked improvement in
attendance if the team record did not improve. In the year before Lemieux
came, Pittsburgh finished with 38 points. Following his arrival, the Pens
finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of
Stanley Cups thrown in.
It was at this point the Pens attendance was near capacity (34 out of 40
sellouts) yet they hadn't made the playoffs since 1982. How do you explain
a 6th place team breaking attendance records when they haven't been to the
playoffs in 7 years? Mario Lemieux is the explanation, IMHO.
You could make a case that the *expectation* of an improving team that
would make the playoffs is the reason.
Funny you should mention it...this is exactly the case I was going to make.
But I think the reason is Lemieux
had a 168 point season and was the first non-Gretzky to win the Hart and
Ross since 1980. People turned out to watch him play.
I will grant that a star like Mario will draw fans, even if the team sucks.
But this is short term only; I still do not think the attendance increase
will last, unless the team is a winningcompetitiveimprovingbutt-kicking
one. Pittsburgh was still getting better, so people continued to support
them. If they suddenly dropped to, say, 50 points, you'd have knee surgery
for some of the people jumping off the bandwagon.
Also, the following year (88-89) the Pens had 89 points not 87.
Ok. My numbers came from the NHL Guide and Record Book.
They made the transaction to try and build a winner around Mario, that is
true. But the improvement in attendance came before they started doing
this (Coffey late in 1987) and before they even had a playoff bound team.
A doubling of attendance occured in 1984-85 from the previous year. An
increase from 38 points to 53 points is not going to do that. The arrival
of Mario Lemieux is what did it.
You can give the credit to Mario since he deserves it. But my point is that
it wasn't Mario himself, but it was the *expectation* of things to come (i.e.
a winning team) that he created by being the next great hockey superstar. And
before anybody jumps in and says I'm nit-picking and mincing words, go back
and read from where this thread started...
It might help to think about what would go through a fan's mind who suddenly
found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is
amazing, I'll go watch him play", or was it "gee, now we've got a *kick*
*ass* guy on *our* side, I'll go watch him play". I think it was the latter.
Similar thing happened in L.A. Before
Gretzky's arrival, about 12000 per game. After, constant sellouts. They
are STILL selling out every game despite showing little or no improvement
since Gretzky's first year there. How do you explain it? People are going
to see Gretzky. they certainly aren't going to see a winner, they haven't
GOT a winner. They've had MUCH better teams in their past history than
they currently have, yet they didn't draw as well then.
I don't think this is accurate. The *tickets* sell, but people don't go to
the games. I think this thread has already been discussed...season ticket
holders in LA don't always use their tickets. So in effect, after the Kings
initial success following Gretzky's arrival (68 to 91 points, same source)
and corresponding attendance jump, there has been an effective drop in
attendance even though ticket sales may not have changed much.
Whether or not the Kings are a 'winner' is debatable. I claim that since
Gretzky's arrival they have at the very least been competitive...I also claim
that McNall has made a stupid move in trying to reassemble the Oiler
dynasty...but that's another story and included only because I don't like
McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and
that undoubtedly was also responsible for the attendance and merchandising
sales, etc. But as I said, when the Kings have been in there little
tailspins over the past couple of years there have been empty seats at the
Forum even if the tickets were sold.
I think in the case of a Lemieux or Gretzky, the player can transcend
winning as the major drawing power.
For the short term, IMO. Although I think that it's inevitable that the team
will improve with a player such as Lemieux or Gretzky, simply because they
make people around them better.
But winning sure as hell helps. ;-)
Well, at least we are in full agreement here!
This does not make Roger's point any more valid, but the Jets aren't
So are you saying Roger has ever had a valid point? couldn't resist...
getting a HUGE jump in productivity, yet they ARE getting a huge
jump in attendance. This is due to the emergence of Teemu Selanne.
They have the 17th best record in hockey, it sure as hell isn't because
they are winning.
Yes, but they are doing no worse than last year. I think the same type of
reasoning I applied to a new Pittsburgh fan applies to all the extra people
showing up at Winnipeg games. It's difficult to predict, but do you think
that if the Jets miss the playoffs next season that in the year after they
will maintain their attendance levels? I seriously doubt it, because in that
case the expectation of an improving team would be gone, with or without
Selanne.
I did provide the example of Rocket Ismail and the Toronto Argonauts of the
CFL...did you leave it out because you don't know much about the CFL? If
that's the case then fair enough, but if it isn't the case then I'm curious
to hear your explanation.
the length is 5853
texts[i] = re.sub('\n', ' ', texts[i])  # To replace newlines with spaces, joining the text into a single line
print('after:',texts[i])
print('the length is',len(texts[i]))after: Dean J. Falcione (posting from jrmst+) writes: [I wrote:] When the Pens got Mario, granted there was big publicity, etc, etc, and interest was immediately generated. Gretzky did the same thing for LA. However, imnsho, neither team would have seen a marked improvement in attendance if the team record did not improve. In the year before Lemieux came, Pittsburgh finished with 38 points. Following his arrival, the Pens finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of Stanley Cups thrown in. It was at this point the Pens attendance was near capacity (34 out of 40 sellouts) yet they hadn't made the playoffs since 1982. How do you explain a 6th place team breaking attendance records when they haven't been to the playoffs in 7 years? Mario Lemieux is the explanation, IMHO. You could make a case that the *expectation* of an improving team that would make the playoffs is the reason. Funny you should mention it...this is exactly the case I was going to make. But I think the reason is Lemieux had a 168 point season and was the first non-Gretzky to win the Hart and Ross since 1980. People turned out to watch him play. I will grant that a star like Mario will draw fans, even if the team sucks. But this is short term only; I still do not think the attendance increase will last, unless the team is a winningcompetitiveimprovingbutt-kicking one. Pittsburgh was still getting better, so people continued to support them. If they suddenly dropped to, say, 50 points, you'd have knee surgery for some of the people jumping off the bandwagon. Also, the following year (88-89) the Pens had 89 points not 87. Ok. My numbers came from the NHL Guide and Record Book. They made the transaction to try and build a winner around Mario, that is true. But the improvement in attendance came before they started doing this (Coffey late in 1987) and before they even had a playoff bound team. A doubling of attendance occured in 1984-85 from the previous year. An increase from 38 points to 53 points is not going to do that. The arrival of Mario Lemieux is what did it. You can give the credit to Mario since he deserves it. But my point is that it wasn't Mario himself, but it was the *expectation* of things to come (i.e. a winning team) that he created by being the next great hockey superstar. And before anybody jumps in and says I'm nit-picking and mincing words, go back and read from where this thread started... It might help to think about what would go through a fan's mind who suddenly found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is amazing, I'll go watch him play", or was it "gee, now we've got a *kick* *ass* guy on *our* side, I'll go watch him play". I think it was the latter. Similar thing happened in L.A. Before Gretzky's arrival, about 12000 per game. After, constant sellouts. They are STILL selling out every game despite showing little or no improvement since Gretzky's first year there. How do you explain it? People are going to see Gretzky. they certainly aren't going to see a winner, they haven't GOT a winner. They've had MUCH better teams in their past history than they currently have, yet they didn't draw as well then. I don't think this is accurate. The *tickets* sell, but people don't go to the games. I think this thread has already been discussed...season ticket holders in LA don't always use their tickets. 
So in effect, after the Kings initial success following Gretzky's arrival (68 to 91 points, same source) and corresponding attendance jump, there has been an effective drop in attendance even though ticket sales may not have changed much. Whether or not the Kings are a 'winner' is debatable. I claim that since Gretzky's arrival they have at the very least been competitive...I also claim that McNall has made a stupid move in trying to reassemble the Oiler dynasty...but that's another story and included only because I don't like McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and that undoubtedly was also responsible for the attendance and merchandising sales, etc. But as I said, when the Kings have been in there little tailspins over the past couple of years there have been empty seats at the Forum even if the tickets were sold. I think in the case of a Lemieux or Gretzky, the player can transcend winning as the major drawing power. For the short term, IMO. Although I think that it's inevitable that the team will improve with a player such as Lemieux or Gretzky, simply because they make people around them better. But winning sure as hell helps. ;-) Well, at least we are in full agreement here! This does not make Roger's point any more valid, but the Jets aren't So are you saying Roger has ever had a valid point? couldn't resist... getting a HUGE jump in productivity, yet they ARE getting a huge jump in attendance. This is due to the emergence of Teemu Selanne. They have the 17th best record in hockey, it sure as hell isn't because they are winning. Yes, but they are doing no worse than last year. I think the same type of reasoning I applied to a new Pittsburgh fan applies to all the extra people showing up at Winnipeg games. It's difficult to predict, but do you think that if the Jets miss the playoffs next season that in the year after they will maintain their attendance levels? I seriously doubt it, because in that case the expectation of an improving team would be gone, with or without Selanne. I did provide the example of Rocket Ismail and the Toronto Argonauts of the CFL...did you leave it out because you don't know much about the CFL? If that's the case then fair enough, but if it isn't the case then I'm curious to hear your explanation.
the length is 5853
texts[i] = re.sub('\t', ' ', texts[i])  # To replace tab characters with spaces
print('after:',texts[i])
print('the length is',len(texts[i]))
after: Dean J. Falcione (posting from jrmst+) writes: [I wrote:] When the Pens got Mario, granted there was big publicity, etc, etc, and interest was immediately generated. Gretzky did the same thing for LA. However, imnsho, neither team would have seen a marked improvement in attendance if the team record did not improve. In the year before Lemieux came, Pittsburgh finished with 38 points. Following his arrival, the Pens finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of Stanley Cups thrown in. It was at this point the Pens attendance was near capacity (34 out of 40 sellouts) yet they hadn't made the playoffs since 1982. How do you explain a 6th place team breaking attendance records when they haven't been to the playoffs in 7 years? Mario Lemieux is the explanation, IMHO. You could make a case that the *expectation* of an improving team that would make the playoffs is the reason. Funny you should mention it...this is exactly the case I was going to make. But I think the reason is Lemieux had a 168 point season and was the first non-Gretzky to win the Hart and Ross since 1980. People turned out to watch him play. I will grant that a star like Mario will draw fans, even if the team sucks. But this is short term only; I still do not think the attendance increase will last, unless the team is a winningcompetitiveimprovingbutt-kicking one. Pittsburgh was still getting better, so people continued to support them. If they suddenly dropped to, say, 50 points, you'd have knee surgery for some of the people jumping off the bandwagon. Also, the following year (88-89) the Pens had 89 points not 87. Ok. My numbers came from the NHL Guide and Record Book. They made the transaction to try and build a winner around Mario, that is true. But the improvement in attendance came before they started doing this (Coffey late in 1987) and before they even had a playoff bound team. A doubling of attendance occured in 1984-85 from the previous year. An increase from 38 points to 53 points is not going to do that. The arrival of Mario Lemieux is what did it. You can give the credit to Mario since he deserves it. But my point is that it wasn't Mario himself, but it was the *expectation* of things to come (i.e. a winning team) that he created by being the next great hockey superstar. And before anybody jumps in and says I'm nit-picking and mincing words, go back and read from where this thread started... It might help to think about what would go through a fan's mind who suddenly found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is amazing, I'll go watch him play", or was it "gee, now we've got a *kick* *ass* guy on *our* side, I'll go watch him play". I think it was the latter. Similar thing happened in L.A. Before Gretzky's arrival, about 12000 per game. After, constant sellouts. They are STILL selling out every game despite showing little or no improvement since Gretzky's first year there. How do you explain it? People are going to see Gretzky. they certainly aren't going to see a winner, they haven't GOT a winner. They've had MUCH better teams in their past history than they currently have, yet they didn't draw as well then. I don't think this is accurate. The *tickets* sell, but people don't go to the games. I think this thread has already been discussed...season ticket holders in LA don't always use their tickets. 
So in effect, after the Kings initial success following Gretzky's arrival (68 to 91 points, same source) and corresponding attendance jump, there has been an effective drop in attendance even though ticket sales may not have changed much. Whether or not the Kings are a 'winner' is debatable. I claim that since Gretzky's arrival they have at the very least been competitive...I also claim that McNall has made a stupid move in trying to reassemble the Oiler dynasty...but that's another story and included only because I don't like McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and that undoubtedly was also responsible for the attendance and merchandising sales, etc. But as I said, when the Kings have been in there little tailspins over the past couple of years there have been empty seats at the Forum even if the tickets were sold. I think in the case of a Lemieux or Gretzky, the player can transcend winning as the major drawing power. For the short term, IMO. Although I think that it's inevitable that the team will improve with a player such as Lemieux or Gretzky, simply because they make people around them better. But winning sure as hell helps. ;-) Well, at least we are in full agreement here! This does not make Roger's point any more valid, but the Jets aren't So are you saying Roger has ever had a valid point? couldn't resist... getting a HUGE jump in productivity, yet they ARE getting a huge jump in attendance. This is due to the emergence of Teemu Selanne. They have the 17th best record in hockey, it sure as hell isn't because they are winning. Yes, but they are doing no worse than last year. I think the same type of reasoning I applied to a new Pittsburgh fan applies to all the extra people showing up at Winnipeg games. It's difficult to predict, but do you think that if the Jets miss the playoffs next season that in the year after they will maintain their attendance levels? I seriously doubt it, because in that case the expectation of an improving team would be gone, with or without Selanne. I did provide the example of Rocket Ismail and the Toronto Argonauts of the CFL...did you leave it out because you don't know much about the CFL? If that's the case then fair enough, but if it isn't the case then I'm curious to hear your explanation.
the length is 5853
texts[i] =re.sub(' +', ' ',texts[i])#To remove multiple spaces
print('after:',texts[i])
print('the length is',len(texts[i]))after: Dean J. Falcione (posting from jrmst+) writes: [I wrote:] When the Pens got Mario, granted there was big publicity, etc, etc, and interest was immediately generated. Gretzky did the same thing for LA. However, imnsho, neither team would have seen a marked improvement in attendance if the team record did not improve. In the year before Lemieux came, Pittsburgh finished with 38 points. Following his arrival, the Pens finished with 53, 76, 72, 81, 87, 72, 88, and 87 points, with a couple of Stanley Cups thrown in. It was at this point the Pens attendance was near capacity (34 out of 40 sellouts) yet they hadn't made the playoffs since 1982. How do you explain a 6th place team breaking attendance records when they haven't been to the playoffs in 7 years? Mario Lemieux is the explanation, IMHO. You could make a case that the *expectation* of an improving team that would make the playoffs is the reason. Funny you should mention it...this is exactly the case I was going to make. But I think the reason is Lemieux had a 168 point season and was the first non-Gretzky to win the Hart and Ross since 1980. People turned out to watch him play. I will grant that a star like Mario will draw fans, even if the team sucks. But this is short term only; I still do not think the attendance increase will last, unless the team is a winningcompetitiveimprovingbutt-kicking one. Pittsburgh was still getting better, so people continued to support them. If they suddenly dropped to, say, 50 points, you'd have knee surgery for some of the people jumping off the bandwagon. Also, the following year (88-89) the Pens had 89 points not 87. Ok. My numbers came from the NHL Guide and Record Book. They made the transaction to try and build a winner around Mario, that is true. But the improvement in attendance came before they started doing this (Coffey late in 1987) and before they even had a playoff bound team. A doubling of attendance occured in 1984-85 from the previous year. An increase from 38 points to 53 points is not going to do that. The arrival of Mario Lemieux is what did it. You can give the credit to Mario since he deserves it. But my point is that it wasn't Mario himself, but it was the *expectation* of things to come (i.e. a winning team) that he created by being the next great hockey superstar. And before anybody jumps in and says I'm nit-picking and mincing words, go back and read from where this thread started... It might help to think about what would go through a fan's mind who suddenly found an interest in Mario and the Pens. Was it "gee, Mario Lemieux is amazing, I'll go watch him play", or was it "gee, now we've got a *kick* *ass* guy on *our* side, I'll go watch him play". I think it was the latter. Similar thing happened in L.A. Before Gretzky's arrival, about 12000 per game. After, constant sellouts. They are STILL selling out every game despite showing little or no improvement since Gretzky's first year there. How do you explain it? People are going to see Gretzky. they certainly aren't going to see a winner, they haven't GOT a winner. They've had MUCH better teams in their past history than they currently have, yet they didn't draw as well then. I don't think this is accurate. The *tickets* sell, but people don't go to the games. I think this thread has already been discussed...season ticket holders in LA don't always use their tickets. 
So in effect, after the Kings initial success following Gretzky's arrival (68 to 91 points, same source) and corresponding attendance jump, there has been an effective drop in attendance even though ticket sales may not have changed much. Whether or not the Kings are a 'winner' is debatable. I claim that since Gretzky's arrival they have at the very least been competitive...I also claim that McNall has made a stupid move in trying to reassemble the Oiler dynasty...but that's another story and included only because I don't like McNall:-). Anyway, McNall did do some heavy marketing around Gretzky, and that undoubtedly was also responsible for the attendance and merchandising sales, etc. But as I said, when the Kings have been in there little tailspins over the past couple of years there have been empty seats at the Forum even if the tickets were sold. I think in the case of a Lemieux or Gretzky, the player can transcend winning as the major drawing power. For the short term, IMO. Although I think that it's inevitable that the team will improve with a player such as Lemieux or Gretzky, simply because they make people around them better. But winning sure as hell helps. ;-) Well, at least we are in full agreement here! This does not make Roger's point any more valid, but the Jets aren't So are you saying Roger has ever had a valid point? couldn't resist... getting a HUGE jump in productivity, yet they ARE getting a huge jump in attendance. This is due to the emergence of Teemu Selanne. They have the 17th best record in hockey, it sure as hell isn't because they are winning. Yes, but they are doing no worse than last year. I think the same type of reasoning I applied to a new Pittsburgh fan applies to all the extra people showing up at Winnipeg games. It's difficult to predict, but do you think that if the Jets miss the playoffs next season that in the year after they will maintain their attendance levels? I seriously doubt it, because in that case the expectation of an improving team would be gone, with or without Selanne. I did provide the example of Rocket Ismail and the Toronto Argonauts of the CFL...did you leave it out because you don't know much about the CFL? If that's the case then fair enough, but if it isn't the case then I'm curious to hear your explanation.
the length is 5692
</code>
## Note that changing the order of these steps will change the result ^_____no_output_____# I think we still need to remove the long words and special characters, but that step will be handled in the tokenization process_____no_output_____
<code>
####################### Doesn't work #######################
# match=re.match('Version: ', texts[1])
# if match:
# index = match.start()
# # print(texts[0:index])
# texts[i]=texts[0:index]
# _____no_output_____
</code>
|
{
"repository": "Wabinab/NLP_GroupProject_DG",
"path": "Week_9/cleaning data by re.ipynb",
"matched_keywords": [
"STAR",
"evolution"
],
"stars": null,
"size": 122010,
"hexsha": "cb53f4f1f05deb19400c4e9bb853f4293f94ec19",
"max_line_length": 5875,
"avg_line_length": 59.8088235294,
"alphanum_fraction": 0.6421194984
}
|
# Notebook from rmulton/dl_project
Path: pose_estimatition_updated.ipynb
<code>
import os
from pycocotools.coco import COCO
import numpy as np
import torch.utils.data as data
import torch
from heatmap import heatmaps_from_keypoints
from imageio import imread
from skimage.transform import resize
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo
from torch.nn import init
from torch.autograd.variable import Variable
import matplotlib.pyplot as plt
import pickle_____no_output_____MAIN_FOLDER = "/Volumes/TOSHIBA EXT/data/"
IMAGES_FOLDER = os.path.join(MAIN_FOLDER, "train2017")
IMAGES_FOLDER_TEST = os.path.join(MAIN_FOLDER, "val2017")
ANNOTATION_FILE = os.path.join(MAIN_FOLDER, "annotations/person_keypoints_train2017.json")
ANNOTATION_FILE_TEST = os.path.join(MAIN_FOLDER, "annotations/person_keypoints_val2017.json")
CHECKPOINTS_FOLDER = "./cktp/"_____no_output_____
</code>
### Heatmap_____no_output_____
<code>
def gaussian_heatmap(shape, keypoint_coordinates, std = 1.5):
"""
Computes a square gaussian kernel
:param shape: Shape of the output heatmap
:param keypoint_coordinates: Location of the keypoint
:param std: Standard deviation
:return: Heatmap of shape (1,shape,shape)
"""
# Get the coordinates
x = keypoint_coordinates[0]
y = keypoint_coordinates[1]
a = np.arange(0, shape, 1, float)
b = a[:,np.newaxis]
# Generate the heatmap
heatmap_raw = np.exp(-(((a-x)**2)/(2*std**2) + ((b-y)**2)/(2*std**2)))
# Normalize
heatmap_max = np.amax(heatmap_raw)
heatmap_normalized = heatmap_raw/heatmap_max
# Get it in the accurate format
heatmap = np.expand_dims(heatmap_normalized, axis=0)
return heatmap
def gaussian_heatmaps(xs, ys, vs, shape=32, image_height=512, image_width=640, std=1.):
"""
Computes heatmaps from the keypoints
:param xs: Array of x coordinates for the keypoints
:param ys: Array of y coordinates for the keypoints
:param shape: shape of the heatmaps
:param image_height: Height of the images the keypoints are for
:param image_width: Width of the images the keypoints are for
:param std: Standard deviation of the gaussion function used
:return: Heatmaps as numpy arrays of shape (shape, shape, n_keypoints)
"""
# Rescale keypoints coordinates to the heatmaps scale
# ys
height_scale = shape/image_height
ys = ys*height_scale
# xs
width_scale = shape/image_width
xs = xs*width_scale
# Render a heatmap for each joint
heatmaps = gaussian_heatmap(shape, (xs[0],ys[0]))
for i, v in enumerate(vs):
if i!=0:
# If the joint is visible, generate a heatmaps
if v!=0:
new_heatmap = gaussian_heatmap(shape, (xs[i],ys[i]))
# Otherwise the heatmaps is composed of zeros
else:
new_heatmap = np.zeros((1, shape, shape))
heatmaps = np.append(heatmaps, new_heatmap, axis=0)
return heatmaps
def keypoints_from_heatmap(heatmap):
"""Get the coordinates of the max value heatmap - it is the keypoint"""
max_heatmap = np.amax(heatmap)
keypoints = np.where(heatmap == max_heatmap)
if len(keypoints) == 2:
return keypoints[1][0], keypoints[0][0], max_heatmap
elif len(keypoints) == 3:
return keypoints[2][0], keypoints[1][0], max_heatmap
def keypoints_from_heatmaps(heatmaps, shape=32, image_height=512, image_width=640):
"""Get the coordinates of the keypoints from the 17 heatmaps"""
keypoints = []
for i, heatmap in enumerate(heatmaps):
x, y, max_heatmap = keypoints_from_heatmap(heatmap)
if max_heatmap == 0:
keypoints += [0,0,0]
else:
x = x*image_width/shape
y = y*image_height/shape
keypoints += [x,y,2]
return keypoints
def get_xs_ys_vs(keypoints):
""" Splits MSCOCO keypoints notations from [x0, y0, v0, ...] to [x0, ...], [y0, ...] and [v0, ...] """
keypoints_array = np.asarray(keypoints)
xs = np.take(keypoints_array, [3*i for i in range(17)])
ys = np.take(keypoints_array, [3*i+1 for i in range(17)])
vs = np.take(keypoints_array, [3*i+2 for i in range(17)])
return xs, ys, vs
def heatmaps_from_keypoints(keypoints):
xs, ys, vs = get_xs_ys_vs(keypoints)
heatmaps = gaussian_heatmaps(xs, ys, vs)
return heatmaps_____no_output_____
</code>
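As a quick sanity check of the encoding above (a sketch with made-up keypoint values, assuming the default 640x512 image size and 32x32 heatmaps), we can turn a keypoint list into heatmaps and decode it back. Invisible joints get an all-zero heatmap, so they decode back to (0, 0, 0).
<code>
# Hypothetical keypoints in MSCOCO format: 17 joints as (x, y, visibility) triplets
keypoints = [0, 0, 0] * 17
keypoints[0:3] = [320, 256, 2]    # joint 0 visible at the image centre
keypoints[15:18] = [100, 400, 2]  # joint 5 visible somewhere else
heatmaps = heatmaps_from_keypoints(keypoints)
print(heatmaps.shape)             # (17, 32, 32): one heatmap per joint
recovered = keypoints_from_heatmaps(heatmaps)
print(recovered[:6])              # close to the original, up to the 32x32 quantization
</code>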
### Dataset_____no_output_____
<code>
class MSCOCO(data.Dataset):
""" Represents a MSCOCO Keypoints dataset """
def __init__(self, images_folder, annotations_json, train=False, evalu=False, input_type=0):
""" Instantiate a MSCOCO dataset """
super().__init__()
self.images_folder = images_folder
#Input type indicates if the input is the original image or a combination of original image with filtered image
#0 : original image
#1 : original image + skin filtered
#2 : original image + edge filter
#3 : original image + clustering filter
#4 : original image + skin filter + edge filter
#5 : original image + skin filter + clustering filter
self.input_type = input_type
# Load the annotations
self.annotations = COCO(annotations_json)
imgs_id = self.annotations.getImgIds()
if train:
self.img_ids = imgs_id[:int(len(imgs_id)*2/3)]
elif evalu:
self.img_ids = imgs_id[int(len(imgs_id)*2/3)+1:]
else:
self.img_ids = imgs_id
def __len__(self):
return len(self.img_ids)
def __getitem__(self, index):
""" Returns the index-th image with keypoints annotations, both as tensors """
try:
#L is the list of the input's path for a single image
L = []
input_imgs = []
# Get the image informations
img_id = self.img_ids[index]
img = self.annotations.loadImgs(img_id)[0]
# Load the image from the file
img_path = os.path.join(self.images_folder, img['file_name'])
L.append(img_path)
#Need to adapt it depending on the path of the filtered image
if self.input_type == 1 or self.input_type == 4 or self.input_type == 5:
L.append(img_path) #Need to change with skin filtered image
if self.input_type == 2 or self.input_type == 4:
L.append(img_path) #Need to change with edge filtered image
if self.input_type == 3 or self.input_type == 5:
L.append(img_path) #Need to change with clustering filtered image
for image in L:
img_array = load_image(image)
img_array = MSCOCO.transformGreyImage(img_array)
img_tensor = torch.from_numpy(img_array)
img_tensor = img_tensor.float() # Pytorch needs a float tensor
input_imgs.append(img_tensor)
# Get the keypoints
annIds = self.annotations.getAnnIds(imgIds=img['id'])
anns = self.annotations.loadAnns(annIds)
# Some images do not contain any coco object, so anns = []
if len(anns)>0:
keypoints = anns[0]['keypoints'] # anns is a list with only one element
else:
# keypoints are not visible so
keypoints = [0 for i in range(3*17)]
# Check to avoid errors
if len(keypoints)!=3*17:
print('Warning: Keypoints list for image {} has length {} instead of 51'.format(img_id, len(keypoints)))
# Generate the heatmaps
heatmaps_array = heatmaps_from_keypoints(keypoints)
#img_tensor_input = torch.cat((img_tensor,img_tensor_filtered),0)
keypoints_tensor = torch.from_numpy(heatmaps_array).float() # Pytorch needs a float tensor
img_tensor = torch.cat(input_imgs,0)
return img_tensor, keypoints_tensor
except:
#L is the list of the input's path for a single image
L = []
input_imgs = []
# Get the image informations
img_id = 391895
img = self.annotations.loadImgs(img_id)[0]
# Load the image from the file
img_path = os.path.join(self.images_folder, img['file_name'])
L.append(img_path)
#Need to adapt it depending on the path of the filtered image
if self.input_type == 1 or self.input_type == 4 or self.input_type == 5:
L.append(img_path) #Need to change with skin filtered image
if self.input_type == 2 or self.input_type == 4:
L.append(img_path) #Need to change with edge filtered image
if self.input_type == 3 or self.input_type == 5:
L.append(img_path) #Need to change with clustering filtered image
for image in L:
img_array = load_image(image)
img_array = MSCOCO.transformGreyImage(img_array)
img_tensor = torch.from_numpy(img_array)
img_tensor = img_tensor.float() # Pytorch needs a float tensor
input_imgs.append(img_tensor)
# Get the keypoints
annIds = self.annotations.getAnnIds(imgIds=img['id'])
anns = self.annotations.loadAnns(annIds)
# Some images do not contain any coco object, so anns = []
if len(anns)>0:
keypoints = anns[0]['keypoints'] # anns is a list with only one element
else:
# keypoints are not visible so
keypoints = [0 for i in range(3*17)]
# Check to avoid errors
if len(keypoints)!=3*17:
print('Warning: Keypoints list for image {} has length {} instead of 51'.format(img_id, len(keypoints)))
# Generate the heatmaps
heatmaps_array = heatmaps_from_keypoints(keypoints)
#img_tensor_input = torch.cat((img_tensor,img_tensor_filtered),0)
keypoints_tensor = torch.from_numpy(heatmaps_array).float() # Pytorch needs a float tensor
img_tensor = torch.cat(input_imgs,0)
return img_tensor, keypoints_tensor
@staticmethod
def transformGreyImage(img_array):
# Black and white images
if len(img_array.shape)==2:
# Add a channel axis
img_array = np.expand_dims(img_array, axis=2)
# Fill all the axes with the black&white image
img_array = np.concatenate((img_array, img_array, img_array), axis=2)
img_array = np.transpose(img_array, (2,1,0))
return img_array
# Homemade image loader
def load_image(image_path):
image = imread(image_path)
image = resize(image, (256, 256))
return image_____no_output_____
</code>
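A minimal usage sketch of the dataset class (hypothetical: it assumes the COCO images and annotation file referenced by the path constants above are actually present on disk):
<code>
# Instantiate the training split and inspect one sample
train_set = MSCOCO(IMAGES_FOLDER, ANNOTATION_FILE, train=True, input_type=0)
img, target = train_set[0]
print(img.shape)     # expected: torch.Size([3, 256, 256]), the resized RGB image
print(target.shape)  # expected: torch.Size([17, 32, 32]), one heatmap per joint
</code>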
### Model_____no_output_____
<code>
class ConvRelu(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, training=True, padding=1, stride=1):
super().__init__()
self.conv = nn.Conv2d(in_channels,
out_channels,
kernel_size,
padding=padding,
stride=stride)
self.relu = nn.ReLU()
self.batch_norm = nn.BatchNorm2d(out_channels)
self.training = training
def forward(self, x):
x = self.relu(self.conv(x))
if self.training:
x = self.batch_norm(x)
return x
class Model(nn.Module):
def __init__(self, input_type=0):
super().__init__()
self.pool = nn.MaxPool2d(2)
#1 image
if input_type == 0:
input_size = 3
#2 images
elif input_type == 1 or input_type == 2 or input_type == 3:
input_size = 6
#3 images
elif input_type == 4 or input_type == 5:
input_size = 9
self.feature_extraction = nn.Sequential(
ConvRelu(input_size, 64, 3),
ConvRelu(64, 64, 3),
self.pool,
ConvRelu(64, 128, 3),
#ConvRelu(128, 128, 3),
self.pool,
ConvRelu(128, 128, 3),
#ConvRelu(128, 128, 3),
self.pool,
ConvRelu(128, 512, 3),
#ConvRelu(512, 512, 3),
)
self.features_to_heatmaps = nn.Conv2d(512, 17, 1) # 17 kind of joints, 17 heatmaps
def forward(self, x):
x = self.feature_extraction(x)
heatmaps = self.features_to_heatmaps(x)
return heatmaps
def plotKeypointsOverOutputModel(index,dataset,model,img_folder):
"""Forward a img to the model and display the output keypoints over the image.
It enables us to see the loss evolution over the model visually over the image
index is the index of the img in the dataset argument"""
# Get an image
imgId = dataset.img_ids[index]
img, keypoints = dataset[index]
# Transform into a pytorch model input and Forward pass
y = model(Variable(img.unsqueeze(0)))
#Get the coordinates of the keypoints
keypoints = keypoints_from_heatmaps(y[0].data.numpy())
# Plot the image
img_anno = dataset.annotations.loadImgs(imgId)[0]
img_path = os.path.join(img_folder, img_anno['file_name'])
img_array = load_image(img_path)
img_array_resized = resize(img_array, (512, 640))
plt.figure()
plt.title('Original image')
plt.imshow(img_array_resized)
xs,ys,vs = get_xs_ys_vs(keypoints)
plt.plot(xs,ys,'ro',color='c')
plt.show()_____no_output_____
</code>
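Before wiring the model into a training loop, a quick shape check (a sketch using a random tensor rather than real data) confirms that the three pooling stages map a 256x256 input to 17 heatmaps of size 32x32, matching the heatmap encoding above:
<code>
# Forward a dummy batch through the network and check the output shape
net = Model(input_type=0)
dummy = Variable(torch.randn(1, 3, 256, 256))  # batch of one RGB image
out = net(dummy)
print(out.size())  # expected: torch.Size([1, 17, 32, 32])
</code>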
### Configuration of the training_____no_output_____
<code>
def conf_training(resuming=False, input_type=0, *args):
"""Function that initiates the configuration of the model depending if a last model
is loaded or if it's the beginning of a new model"""
#Data
trainset = MSCOCO(IMAGES_FOLDER, ANNOTATION_FILE, train=True, input_type=input_type)
evalset = MSCOCO(IMAGES_FOLDER, ANNOTATION_FILE, evalu=True, input_type=input_type)
# Loss
criterion = nn.MSELoss()
#criterion = nn.CrossEntropyLoss()
# Number of epochs
epochs = 10
# Batch sizes
batch_size_train = 1
batch_size_val = 1
if not resuming:
# Model
net = Model(input_type=input_type)
# Optimizer
optimizer = torch.optim.Adam(net.parameters())
#First epoch
current_epoch = -1
else:
#Load the last saved model with its configurations
checkpoint = torch.load(os.path.join(MAIN_FOLDER,"model_"+args[0]))
#Model
net = Model(input_type=input_type)
net.load_state_dict(checkpoint['state_dict'])
#Current_epoch
current_epoch = checkpoint['epoch']
#Optimizer
optimizer = torch.optim.Adam(net.parameters())
#Data loaders
trainloader = torch.utils.data.DataLoader(trainset,
batch_size=batch_size_train,
shuffle=True,
num_workers=4
)
evaloader = torch.utils.data.DataLoader(evalset,
batch_size=batch_size_val,
shuffle=True,
num_workers=4
)
evalset_length = len(evalset)
return epochs, trainloader, evaloader, optimizer, net, current_epoch, criterion, evalset_length, evalset_____no_output_____
</code>
### Running the model_____no_output_____
<code>
def training(epochs, trainloader, evaloader, optimizer, net, current_epoch, criterion, evalset_length, evalset):
plt.ion()
if current_epoch == -1:
#If not resuming a model, creating the loss file
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'wb')
pickle.dump({"loss_train":{}, "loss_val":{}},lossFile)
lossFile.close()
start_epoch = current_epoch + 1
for epoch in range(start_epoch, epochs): # loop over the dataset multiple times
print("Epoch number {}".format(epoch))
#plotKeypointsOverOutputModel(0,evalset,net,IMAGES_FOLDER)#Displaying the result over the first element of the evalset
running_loss = 0.0
#For each epoch, we keep the loss in a dictionary with epoch_nb as key and the list of losses as value
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'rb')
loss_dic = pickle.load(lossFile)
lossFile.close()
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'wb')
loss_dic['loss_train'][epoch] = []
loss_dic['loss_val'][epoch] = []
pickle.dump(loss_dic,lossFile)
lossFile.close()
for i, data in enumerate(trainloader, 0):
print("Batch number {}".format(i))
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('Trainset loss[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
#Save the loss_train in disk for each batch
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'rb')
loss_dic = pickle.load(lossFile)
lossFile.close()
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'wb')
loss_dic['loss_train'][epoch] += [loss.data[0]]
pickle.dump(loss_dic,lossFile)
lossFile.close()
#Save the model
#net.cpu()
state = {
'epoch': epoch,
'state_dict': net.state_dict()
}
torch.save(state, os.path.join(MAIN_FOLDER,"model_"+str(epoch))) #Save the torch model after each epoch
#net.cuda()
running_loss_eval = 0.0
print("Starting Eval for Epoch {}".format(epoch))
for i, data in enumerate(evaloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# forward
outputs = net(inputs)
loss = criterion(outputs, labels)
# print statistics
running_loss_eval += loss.data[0]
#Save the loss_val in disk for each batch
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'rb')
loss_dic = pickle.load(lossFile)
lossFile.close()
lossFile = open(os.path.join(MAIN_FOLDER,"loss"),'wb')
loss_dic['loss_val'][epoch] += [loss.data[0]]
pickle.dump(loss_dic,lossFile)
lossFile.close()
print("Evalset Loss for Epoch {0} : {1}".format(epoch,running_loss_eval/evalset_length))
#loss_val[epoch] += [running_loss_eval/evalset_length] #Stock the loss on evalset for each epoch
print('Finished Training')
def launch_training(resuming=False, input_type=0, *args):
"""Function that configurates the model from init or a last model ; and then it trains the model"""
epochs, trainloader, evaloader, optimizer, net, current_epoch, criterion, evalset_length, evalset = conf_training(resuming, input_type, *args)
training(epochs, trainloader, evaloader, optimizer, net, current_epoch, criterion, evalset_length, evalset)
def launch_testing(model_epoch, input_type=0):
"""Function that launches a model over the test dataset"""
testset = MSCOCO(IMAGES_FOLDER_TEST, ANNOTATION_FILE_TEST,input_type=input_type)
#Load the training model
checkpoint = torch.load(os.path.join(MAIN_FOLDER, model_epoch))
net = Model(input_type=input_type)
net.load_state_dict(checkpoint['state_dict'])
# Loss
criterion = nn.MSELoss()
# Batch sizes
batch_size_test = 1
#TestLoader
evaloader = torch.utils.data.DataLoader(testset,
batch_size=batch_size_test,
shuffle=True,
num_workers=4
)
loss_test = 0.0
for i, data in enumerate(evaloader):
inputs, labels = data[0], data[1]
inputs, labels = Variable(inputs), Variable(labels)
outputs = net(inputs)
loss = criterion(outputs, labels)
loss_test += loss.data[0]
if i % 500 ==0:
print("Current loss over the test dataset: {0} after {1}ème iteration".format(loss_test/(i+1),i+1))
loss_test = loss_test/len(testset)
print("Average loss over the test dataset: {}".format(loss_test))_____no_output_____#Launch a training over a new model with inputSize = 0
launch_training(False,0)loading annotations into memory...
Done (t=21.31s)
creating index...
index created!
loading annotations into memory...
Done (t=38.47s)
creating index...
index created!
Epoch number 0
#Launch a training over a model currently trained with inputSize = 0
#launch_training(True,0,path_model)_____no_output_____#Launch a trained model over the test dataset, with inputSize = 0
#launch_testing(path_model,0)_____no_output_____%cd cocoapi
!ls/Users/alexandresioufi/Documents/Projets infos/deeplearning/dl_project/cocoapi
LuaAPI          PythonAPI       common          results
MatlabAPI       README.txt      license.txt
</code>
|
{
"repository": "rmulton/dl_project",
"path": "pose_estimatition_updated.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 53329,
"hexsha": "cb55153daf3c53b9146817e1f7cf189b4d98d642",
"max_line_length": 2109,
"avg_line_length": 57.0363636364,
"alphanum_fraction": 0.5830598736
}
|
# Notebook from ee2110/Natural_Language_Processing-NLP-TensorFlow
Path: Text_Sentiment_Analysis/TextVectorization_layer.ipynb
**General Work Process**
1. Import dataset and preprocess
2. Train model
3. Test model_____no_output_____
<code>
import io
import os
import re
import shutil
import string
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import Sequential, layers, losses
from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers import TextVectorization_____no_output_____url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')Downloading data from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
84131840/84125825 [==============================] - 258s 3us/step
84140032/84125825 [==============================] - 258s 3us/step
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)_____no_output_____# view train data files
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)_____no_output_____# clean unnecessary empty folder
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)_____no_output_____batch_size = 1024
seed = 10
train_data = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
val_data = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)Found 25000 files belonging to 2 classes.
Using 20000 files for training.
Found 25000 files belonging to 2 classes.
Using 5000 files for validation.
# sample batch from train data
for text_batch, label_batch in train_data.take(1):
# view the first 5 samples
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])1 b"This film is more about how children make sense of the world around them, and how they (and we) use myth to make sense of it all. I think it's been misperceived, everyone going in expecting a stalkfest won't enjoy it but if you want a deeper story, it's here......."
0 b'God, I was bored out of my head as I watched this pilot. I had been expecting a lot from it, as I\'m a huge fan of James Cameron (and not just since "Titanic", I might add), and his name in the credits I thought would be a guarantee of quality (Then again, he also wrote the leaden Strange Days..). But the thing failed miserably at grabbing my attention at any point of its almost two hours of duration. In all that time, it barely went beyond its two line synopsis, and I would be very hard pressed to try to figure out any kind of coherent plot out of all the mess of strands that went nowhere. On top of that, I don\'t think the acrobatics outdid even those of any regular "A-Team" episode. As for Alba, yes, she is gorgeous, of course, but the fact that she only displays one single facial expression the entire movie (pouty and surly), makes me also get bored of her "gal wit an attitude" schtick pretty soon. You can count me out of this one, Mr. Cameron!'
0 b'me, my boyfriend, and our friend watched this "movie" if thats what u wanna call it, and we agree with the last person, but we were stupid and bought the damn thing, we thought it really was about diablo so we bought it.<br /><br />we hate it Really SUXZ!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! so beware: DO NOT BUY THIS THING THEY CALL A MOVIE!!!!!!!!!!!!!!!!!!!!!!!<br /><br />we would return it, but don\'t no if anybody would want this stupid movie.<br /><br />oh and another thing, the shouldn\'t call it "The Legend of Diablo" they should of called it "Legend of Azar".<br /><br />and this movie is rated R????? this should not of even been not rated.<br /><br />we think that diablo would be crying his eyes out laughing at this stupid movie.<br /><br />this is a movie that would have been done by a Church.<br /><br />theses "actors" are never gonna become nothing because this movie.'
0 b"SPOILERS THROUGHOUT: <br /><br />The Gettaway is mostly an action movie. And what action there is to!! Shootouts, chases, dumpsters and much much more. It stars Kim Bassenger and Alec Baldwin as the Mc Coy's.<br /><br />This is a remake and I have not seen the original but really didn't care for this one at all although Bassenger and Baldwin have some nice screen chemistry. But the movie itself didn't do it for me.<br /><br />The Gettaway became really tiresome really quickly. The plot is overshadowed by one fight/chase after another and as the violence keeps piling up, Bassenger and Baldwin retain their great looks no matter what perils they maybe in. In fact, by the end of the movie they almost look BETTER then in the beginning. I don't think Bassenger's eye makeup moves once during the whole picture.<br /><br />This isn't the worst movie I've ever seen, certainly not, but it isn't very good and unless one is an action movie purist I can't see really enjoying this movie because there's just not a lot here. The Gettaway isn't terribly original either, and goes every way from unnecessarily brutal to rather dull. It really could have been better I think.<br /><br />Bassenger and Baldwin give OK performances but they don't have a lot to do except get chased and run for their lives. Sometimes less is more, after seeing the same thing over and over again it gets stale. Didn't enjoy this one to much."
0 b'This was a "cute" movie at first, then then got too sappy and featured mediocre songs, at best.<br /><br />There is too much King James English spoken with is not only annoying in today\'s world but not always easy to interpret. Can you imagine young people of today trying to listen to this film? Forget it.<br /><br />Bing Crosby has some good lines in here and is likable as "Hank Martin." Rhonda Fleming ("Alisande La Carteloise") was, too, in addition to her good looks and beautiful, long red hair. <br /><br />It\'s a nice movie with a feel-good ending, and I can\'t knock that. Maybe this is worthy of a rental, for historical sake or if you\'re a big Crosby fan but, overall, it\'s not that much.'
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_data.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_data.cache().prefetch(buffer_size=AUTOTUNE)_____no_output_____# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)_____no_output_____embedding_dim=16
model = Sequential([
vectorize_layer,
Embedding(vocab_size, embedding_dim, name="embedding"),
GlobalAveragePooling1D(),
Dense(32, activation='relu'),
Dense(1)
])
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])_____no_output_____model.fit(
train_ds,
validation_data=val_ds,
epochs=20,
callbacks=[tensorboard_callback])Epoch 1/20
20/20 [==============================] - 47s 2s/step - loss: 0.6920 - accuracy: 0.5003 - val_loss: 0.6898 - val_accuracy: 0.4986
Epoch 2/20
20/20 [==============================] - 4s 200ms/step - loss: 0.6863 - accuracy: 0.5003 - val_loss: 0.6818 - val_accuracy: 0.4986
Epoch 3/20
20/20 [==============================] - 4s 198ms/step - loss: 0.6749 - accuracy: 0.5004 - val_loss: 0.6677 - val_accuracy: 0.4986
Epoch 4/20
20/20 [==============================] - 4s 195ms/step - loss: 0.6559 - accuracy: 0.5052 - val_loss: 0.6462 - val_accuracy: 0.5212
Epoch 5/20
20/20 [==============================] - 4s 198ms/step - loss: 0.6286 - accuracy: 0.5525 - val_loss: 0.6175 - val_accuracy: 0.5982
Epoch 6/20
20/20 [==============================] - 4s 197ms/step - loss: 0.5940 - accuracy: 0.6482 - val_loss: 0.5839 - val_accuracy: 0.6822
Epoch 7/20
20/20 [==============================] - 4s 185ms/step - loss: 0.5548 - accuracy: 0.7211 - val_loss: 0.5487 - val_accuracy: 0.7316
Epoch 8/20
20/20 [==============================] - 4s 188ms/step - loss: 0.5145 - accuracy: 0.7621 - val_loss: 0.5152 - val_accuracy: 0.7544
Epoch 9/20
20/20 [==============================] - 4s 186ms/step - loss: 0.4762 - accuracy: 0.7897 - val_loss: 0.4857 - val_accuracy: 0.7698
Epoch 10/20
20/20 [==============================] - 4s 193ms/step - loss: 0.4418 - accuracy: 0.8087 - val_loss: 0.4611 - val_accuracy: 0.7836
Epoch 11/20
20/20 [==============================] - 4s 195ms/step - loss: 0.4115 - accuracy: 0.8239 - val_loss: 0.4411 - val_accuracy: 0.7928
Epoch 12/20
20/20 [==============================] - 4s 196ms/step - loss: 0.3853 - accuracy: 0.8367 - val_loss: 0.4250 - val_accuracy: 0.7992
Epoch 13/20
20/20 [==============================] - 4s 204ms/step - loss: 0.3624 - accuracy: 0.8468 - val_loss: 0.4120 - val_accuracy: 0.8046
Epoch 14/20
20/20 [==============================] - 4s 201ms/step - loss: 0.3422 - accuracy: 0.8565 - val_loss: 0.4018 - val_accuracy: 0.8096
Epoch 15/20
20/20 [==============================] - 4s 194ms/step - loss: 0.3244 - accuracy: 0.8640 - val_loss: 0.3938 - val_accuracy: 0.8144
Epoch 16/20
20/20 [==============================] - 4s 194ms/step - loss: 0.3086 - accuracy: 0.8712 - val_loss: 0.3877 - val_accuracy: 0.8166
Epoch 17/20
20/20 [==============================] - 4s 194ms/step - loss: 0.2945 - accuracy: 0.8773 - val_loss: 0.3832 - val_accuracy: 0.8194
Epoch 18/20
20/20 [==============================] - 4s 195ms/step - loss: 0.2817 - accuracy: 0.8824 - val_loss: 0.3800 - val_accuracy: 0.8228
Epoch 19/20
20/20 [==============================] - 4s 194ms/step - loss: 0.2701 - accuracy: 0.8877 - val_loss: 0.3779 - val_accuracy: 0.8252
Epoch 20/20
20/20 [==============================] - 4s 196ms/step - loss: 0.2595 - accuracy: 0.8931 - val_loss: 0.3767 - val_accuracy: 0.8262
%load_ext tensorboard
%tensorboard --logdir logs_____no_output_____# get the trained word embeddings
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()_____no_output_____vocab[:10]_____no_output_____out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0:
continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()_____no_output_____
</code>
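Before moving on to testing, here is a small sketch of how to inspect one of the learned embeddings directly, using the `weights` matrix and `vocab` list extracted above (the index used is arbitrary):
<code>
idx = 5                                # hypothetical vocabulary index (index 0 is the padding token)
print(vocab[idx], weights[idx].shape)  # the token and its 16-dimensional embedding vector
</code>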
## Test model_____no_output_____
<code>
# view test data files
test_dir = os.path.join(dataset_dir, 'test')
os.listdir(test_dir)_____no_output_____test_data = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/test')Found 25000 files belonging to 2 classes.
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label_____no_output_____# sample batch from test data
for test_text_batch, test_label_batch in test_data.take(1):
# view the first 5 samples
for i in range(5):
print(test_label_batch[i].numpy(), test_text_batch.numpy()[i])0 b"An insult to both poker and cinema, this movie manages to make the most dynamic, brilliant, and fascinating figure in poker history into an utter bore. Still a fun film to make jokes about, from the lame gangster movie clich\xc3\xa9s of the first half to the incomprehensible nonsense of that second hour. Hilariously, Stu Ungar wins all three of his World Series titles without playing a single hand on screen. His infamous dealer abuse? 1 scene. His coke habit? 1 scene. His incredible memory? 0 scenes. They couldn't even get any real poker players. What did they cover? A lot of high angle shots from inside a house in the suburbs. Oh, and a montage of Stu waking up every day and shopping for meat which doesn't come anywhere close to making sense. Why do I care so much about this little Sopranos summer camp trying to cash in on the poker craze? Because I think there's still a great film to be made about Stu Ungar waiting for someone willing to do it right."
0 b'(SMALL SPOILERS) I just bought the DVD of this movie yesterday. I saw it with my friends and I couldn\'t believe what had happened.<br /><br />In the first 3 movies, the critters at least had a sense of humor (especially the 3rd movie), but not only did the critters barely ever make an appearance, they weren\'t funny! They never made me laugh. I must admit that the story did start off nicely. After an hour had gone by I remembered that the Critters movies were always very short. So I thought to myself, "Where the $^%#$ are the critters?!?!" They were barely in this movie! If that didn\'t make me mad enough, the boy named Ethan was sitting on his bed after Charlie had "murdered the ship" and he knew that the critters were still on board! In the first movie the Brown family was scared out of their minds. But here, Ethan didn\'t even care! It was as if the critters weren\'t even a threat!<br /><br />Now what I\'m about to say next may ruin the ending, but I\'m going to say it anyways. In the first movie, at the end, they had to face the giant critter for a final battle. In the second one, there was the great ball of critter. In the third movie, the critter with his fave burned did a spindash (from Sonic the Hedgehog) and was going to attack the little kid. But at the end of the fourth one (which is what made me the angriest) the bald critter charges toward Ethan, and Ethan kills it as if it were nothing.<br /><br />Now something that I really don\'t understand was what happened to Ug. He was one of my favorite characters in the first two. Then after 50 years, he\'s evil. That was very disappointing. Not only that, but wasn\'t he a faceless bounty hunter? Why was he still "Johnny Steele?" Plus he seemed to have a different personality. He seemed much smarter and not as monotone like in the first two.<br /><br />Being someone who actually enjoyed the first two critters movies, and loved the third one, I give Critters 4 a 2/10'
0 b"Very disappointing 7th chapter of this slowly dying series. Very evident that the budget was extremely low. This movie was made for one reason and one reason alone. To sell Puppet Master Toys! Fans, such as myself of the series have decided, from what I have read and heard that the only one in the series worse than this is Curse of the Puppetmaster. In turn, turning us away from the series. <br /><br />Opting to make this a PG-13 film, for whatever reason, did not work in the films favor. The plot seemed almost to be there, but was easily lost in the steady stream of nonsense. <br /><br />The only film in the series worth watching, also directed by Decoteau is part 3 - Toulon's Revenge.<br /><br />Granted, I do favor the scenery in the film. <br /><br />Yuck!"
0 b'Stay away from this movie! It is terrible in every way. Bad acting, a thin recycled plot and the worst ending in film history. Seldom do I watch a movie that makes my adrenaline pump from irritation, in fact the only other movie that immediately springs to mind is another "people in an aircraft in trouble" movie (Airspeed). Please, please don\'t watch this one as it is utterly and totally pathetic from beginning to end. Helge Iversen'
0 b"This film is BORING, BORING, BORING, BORING, and BORING!!! It's not the worse film I ever saw, on the contrary, but.......how shall I put this.......IT'S BORING! There is some very nice scenery and some clever dry wit but that's about it. If it was advertised as a travelogue I would rate it a 7 but it's supposed to be a film with a plot, some drama, and for god's sake a point or a satisfying conclusion.<br /><br />I read some of the comments on this board about this films and I wondered if they saw the same movie as I did.<br /><br />See this film (yawn) at your own risk........one thing for sure- it really is rated correctly= G RATING! (Which most stand for GOD AWFUL BORING!)"
text_batch, label_batch = next(iter(test_data))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label", test_data.class_names[first_label])
print("Vectorized review", vectorize_text(first_review, first_label))Review tf.Tensor(b'This film biography of early rock and roll star Buddy Holly (1936-1959) is a tour de force for Gary Busey. The movie\'s highlights are Busey\'s stage performances where he plays guitar and sings Holly songs. He brings such energy to the performances that Holly\'s own filmed performances almost pale in comparison. Busey\'s infectious toothy grin lights up the screen, he creates a totally believable and winning personality and his Oscar nomination for best actor was well deserved.<br /><br />The film follows Holly\'s career from growing up in Lubbock, Texas, to stardom and New York and his untimely death in a plane crash. One thing I found interesting, if true, was Buddy\'s driving ambition--he had great plans to go beyond recording and performance to producing. As young as he was he was already establishing himself as a shrewd businessman and definitely wanted to take things to a higher level. We will never know if he would have ultimately catapulted his early success into a business brand like The Rolling Stones.<br /><br />The lyrics of many of Holly\'s songs are pretty adolescent; read the lyrics for "Peggy Sue" or "Oh Boy!" and you will see what I mean. Maybe to a great extent this explains his popularity with adolescent audiences, but his instrumentation and stage performances surely account for his influence on groups to follow--both The Rolling Stones and The Beatles have acknowledged his importance.<br /><br />Clearly some liberties were taken for dramatic effect. For example, I doubt that Holly ever punched out a producer in Nashville or that the audience at New York\'s Apollo theater was so immediately responsive as to be wildly dancing in the aisles. If you are interested in getting closer to the truth, see the documentary "The Real Buddy Holly Story" (1985) that is produced and hosted by a very relaxed and engaging Paul McCartney. This contains interviews with Holly\'s family, friends, and band-mates (Holly\'s musical brothers are not even mentioned in "The Buddy Holly Story"). Members of other bands like Keith Richards and Don Everly also offer opinions and stories and there is footage of old Holly performances. The McCartney production can stand on its own, but it makes an excellent companion piece to "The Buddy Holly Story" and perhaps should be required viewing for anyone who watches the fictionalized story.', shape=(), dtype=string)
Label pos
Vectorized review (<tf.Tensor: shape=(1, 100), dtype=int64, numpy=
array([[ 11, 19, 4980, 5, 410, 860, 4, 2072, 355, 1752, 3086,
1, 7, 3, 2918, 1017, 1079, 16, 1864, 5468, 2, 91,
3255, 23, 1, 1025, 367, 116, 28, 295, 4303, 4, 3209,
3086, 761, 28, 969, 137, 1668, 6, 2, 367, 12, 1,
197, 704, 367, 208, 4786, 8, 1716, 1, 9627, 1, 9236,
2363, 55, 2, 270, 28, 2181, 3, 423, 785, 4, 2238,
1556, 4, 24, 980, 4788, 16, 117, 299, 13, 70, 1875,
2, 19, 1039, 1, 640, 35, 1928, 55, 8, 1, 1709,
6, 6158, 4, 172, 962, 4, 24, 1, 316, 8, 3,
1373]], dtype=int64)>, <tf.Tensor: shape=(), dtype=int32, numpy=1>)
# the vectorize function is not required to process the test data
# if the vectorize layer is included in the model
# test_ds = test_data.map(vectorize_text)
# # sample batch from test data
# for test_text_batch, test_label_batch in test_ds.take(1):
# for i in range(1):
# print(test_label_batch[i].numpy(), test_text_batch.numpy()[i])_____no_output_____loss, accuracy = model.evaluate(test_data)
print("Loss: ", loss)
print("Accuracy: ", accuracy)782/782 [==============================] - 26s 34ms/step - loss: 0.4029 - accuracy: 0.8025
Loss: 0.40294232964515686
Accuracy: 0.8024799823760986
export_model = tf.keras.Sequential([
model,
layers.Activation('sigmoid')
])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy']
)
# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(test_data)
print(accuracy)782/782 [==============================] - 20s 24ms/step - loss: 0.9002 - accuracy: 0.5179
0.5178800225257874
text_batch, label_batch = next(iter(test_data))
first_review, first_label = text_batch[0], label_batch[0]_____no_output_____pred_label = export_model.predict(test_data)_____no_output_____pred_label_____no_output_____pred_label.shape_____no_output_____pred_y = []
for i in range(len(pred_label)):
pred_y.append(round(pred_label[i][0]))_____no_output_____len(pred_y)_____no_output_____actual_y = []
for tt, ll in test_data:
for l in ll:
actual_y.append(l.numpy())_____no_output_____correct = 0
for i in range(len(pred_y)):
if pred_y[i] == actual_y[i]:
correct+=1_____no_output_____correct/len(pred_y)*100_____no_output_____
</code>
**Analyze my own review**_____no_output_____
<code>
my_reviews =["The new movie is popular and awesome",
"The background music is annoying and too loud",
"We are very enjoy the movie",
"Negative comment in internent is hurt people",
"The smile is very sweat and cute!",
"The view is so beautiful and attrative",
]_____no_output_____export_model.predict(my_reviews)_____no_output_____
</code>
|
{
"repository": "ee2110/Natural_Language_Processing-NLP-TensorFlow",
"path": "Text_Sentiment_Analysis/TextVectorization_layer.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 33494,
"hexsha": "cb55323af59e4d2184dd1835c6b1fa832f4f1b4e",
"max_line_length": 2440,
"avg_line_length": 41.6592039801,
"alphanum_fraction": 0.5805218845
}
|
# Notebook from keesterbrugge/python-causality-handbook
Path: causal-inference-for-the-brave-and-true/16-Regression-Discontinuity-Design.ipynb
# 16 - Regression Discontinuity Design
We don't stop to think about it much, but it is impressive how smooth nature is. You can't grow a tree without first getting a bud, you can't teleport from one place to another, a wound takes its time to heal. Even in the social realm, smoothness seems to be the norm. You can't grow a business in one day, consistency and hard work are required to build wealth and it takes years before you learn how linear regression works. Under normal circumstances, nature is very cohesive and doesn't jump around much.
> When the intelligent and animal souls are held together in one embrace, they can be kept from separating.
\- Tao Te Ching, Lao Tzu.
Which means that **when we do see jumps and spikes, they are probably artificial** and often man-made situations. These events are usually accompanied by counterfactuals to the normal way of things: if a weird thing happens, this gives us some insight into what would have happened if nature was to work in a different way. Exploring these artificial jumps is at the core of Regression Discontinuity Design.

The basic setup goes like this. Imagine that you have a treatment variable $T$ and potential outcomes $Y_0$ and $Y_1$. The treatment T is a discontinuous function of an observed running variable $R$ such that
$
D_i = \mathcal{1}\{R_i>c\}
$
In other words, this is saying that treatment is zero when $R$ is below a threshold $c$ and one otherwise. This means that we get to observe $Y_1$ when $R>c$ and $Y_0$ when $R<c$. To wrap our head around this, think about the potential outcomes as 2 functions that we can't observe entirely. Both $Y_0(R)$ and $Y_1(R)$ are there, we just can't see that. The threshold acts as a switch that allows us to see one or the other of those function, but never both, much like in the image below:

The idea of regression discontinuity is to compare the outcome just above and just below the threshold to identify the treatment effect at the threshold. This is called a **sharp RD** design, since the probability of getting the treatment jumps from 0 to 1 at the threshold, but we could also think about a **fuzzy RD** design, where the probability also jumps, but in a less dramatic manner.
## Is Alcohol Killing You?
A very relevant public policy question is what the minimum legal drinking age should be. Most countries, Brazil included, set it to 18 years, but in the US (most states) it is currently 21. So, is it the case that the US is being overly prudent and should lower its minimum drinking age? Or is it the case that other countries should raise their legal drinking age?
One way to look at this question is from a [mortality rate perspective (Carpenter and Dobkin, 2009)](https://www.aeaweb.org/articles?id=10.1257/app.1.1.164). From the public policy standpoint, one could argue that we should lower the mortality rate as much as possible. If alcohol consumption increases the mortality rate by a lot, we should avoid lowering the minimum drinking age. This would be consistent with the objective of lowering deaths caused by alcohol consumption.
To estimate the impacts of alcohol on death, we could use the fact that legal drinking age imposes a discontinuity on nature. In the US, those just under 21 years don't drink (or drink much less) while those just older than 21 do drink. This means that the probability of drinking jumps at 21 years and that is something we can explore with an RDD._____no_output_____
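Just to make the indicator $\mathcal{1}\{R_i>c\}$ from the setup above concrete, here is a tiny sketch (with made-up ages, not the mortality data we load below) of how a sharp treatment assignment is built from a running variable and a cutoff:
<code>
import numpy as np

r = np.array([18.5, 20.9, 21.0, 21.2, 23.0])  # hypothetical running variable (age)
c = 21                                        # cutoff: the legal drinking age
t = (r > c).astype(int)                       # sharp RD: treatment switches from 0 to 1 at c
print(t)                                      # [0 0 0 1 1]
</code>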
<code>
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from matplotlib import style
from matplotlib import pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf
%matplotlib inline
style.use("fivethirtyeight")_____no_output_____
</code>
To do so we can grab some mortality data aggregated by age. Each row is the average age of a group of people and the average mortality by all causes (`all`), by moving vehicle accident (`mva`) and by suicide (`suicide`). _____no_output_____
<code>
drinking = pd.read_csv("./data/drinking.csv")
drinking.head()[["agecell", "all", "mva", "suicide"]]_____no_output_____
</code>
Just to aid visibility (and for another important reason we will see later), we will center the running variable `agecell` at the threshold of 21._____no_output_____
<code>
drinking["agecell"] -= 21_____no_output_____
</code>
If we plot the multiple outcome variables (`all`, `mva`, `suicide`) with the running variable on the x axis, we get some visual cue of a jump in mortality as we cross the legal drinking age._____no_output_____
<code>
plt.figure(figsize=(8,8))
ax = plt.subplot(3,1,1)
drinking.plot.scatter(x="agecell", y="all", ax=ax)
plt.title("Death Cause by Age (Centered at 0)")
ax = plt.subplot(3,1,2, sharex=ax)
drinking.plot.scatter(x="agecell", y="mva", ax=ax)
ax = plt.subplot(3,1,3, sharex=ax)
drinking.plot.scatter(x="agecell", y="suicide", ax=ax);
_____no_output_____
</code>
There are some cues, but we need more than that. What exactly is the effect of drinking on mortality at the threshold? And what is the standard error on that estimate?
## RDD Estimation
The key assumption that RDD relies on is the smoothness of the potential outcome at the threshold. Formally, the limits of the potential outcomes as the running variable approaches the threshold from the right and from the left should be the same.
$$
\lim_{r \to c^-} E[Y_{ti}|R_i=r] = \lim_{r \to c^+} E[Y_{ti}|R_i=r]
$$
If this holds true, we can find the causal effect at the threshold
$$
\begin{align}
\lim_{r \to c^+} E[Y_{ti}|R_i=r] - \lim_{r \to c^-} E[Y_{ti}|R_i=r]=&\lim_{r \to c^+} E[Y_{1i}|R_i=r] - \lim_{r \to c^-} E[Y_{0i}|R_i=r] \\
=& E[Y_{1i}|R_i=r] - E[Y_{0i}|R_i=r] \\
=& E[Y_{1i} - Y_{0i}|R_i=r]
\end{align}
$$
This is, in its own way, a sort of Local Average Treatment Effect (LATE), since we can only know it at the threshold. In this setting, we can think of RDD as a local randomized trial. For those at the threshold, the treatment could have gone either way and, by chance, some people fell below the threshold and some people fell above it. In our example, at the same point in time, some people are just above 21 years and some are just below 21. What determines this is whether someone was born a few days earlier or later, which is pretty much random. For this reason, RDD provides a very compelling causal story. It is not the gold standard of an RCT, but it is close.
Now, to estimate the treatment effect at the threshold, all we need to do is estimate both of the limits in the formula above and compare them. The simplest way to do that is by running a linear regression

To make it work, we interact a dummy for being above the threshold with the running variable
$
y_i = \beta_0 + \beta_1 r_i + \beta_2 \mathcal{1}\{r_i>c\} + \beta_3 \mathcal{1}\{r_i>c\} r_i
$
Essentially, this is the same as fitting a linear regression above the threshold and another below it. The parameter $\beta_0$ is the intercept of the regression below the threshold and $\beta_0+\beta_2$ is the intercept for the regression above the threshold.
Here is where the trick of centering the running variable at the threshold comes into play. After this pre-processing step, the threshold becomes zero. This causes the intercept $\beta_0$ to be the predicted value at the threshold for the regression below it. In other words, $\beta_0=\lim_{r \to c^-} E[Y_{ti}|R_i=r]$. By the same reasoning, $\beta_0+\beta_2$ is the limit of the outcome from above. Which means that
$
\lim_{r \to c^+} E[Y_{ti}|R_i=r] - \lim_{r \to c^-} E[Y_{ti}|R_i=r]=\beta_2=E[ATE|R=c]
$
Here is what this looks like in code for the case where we want to estimate the effect of alcohol consumption on death by all causes at 21 years._____no_output_____
<code>
rdd_df = drinking.assign(threshold=(drinking["agecell"] > 0).astype(int))
model = smf.wls("all~agecell*threshold", rdd_df).fit()
model.summary().tables[1]_____no_output_____
</code>
This model is telling us that mortality increases by 7.6627 points with the consumption of alcohol. Another way of putting this is that alcohol increases the chance of death by all causes by 8% ((7.6627+93.6184)/93.6184). Notice that this also gives us standard errors for our causal effect estimate. In this case, the effect is statistically significant, since the p-value is below 0.01.
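The 8% figure can be reproduced directly from the fitted parameters (a small sketch using the `model` object above; `threshold` and `Intercept` are the parameter names statsmodels produces for this formula, the same ones used in the plotting loop below):
<code>
ate = model.params["threshold"]       # jump in mortality at the threshold (~7.66)
baseline = model.params["Intercept"]  # mortality just below 21 years (~93.62)
print(f"{100 * ate / baseline:.1f}% increase in death by all causes")  # ~8.2%
</code>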
If we want to verify this model visually, we can show the predicted values on the data that we have. You can see that it is as though we had 2 regression models: one for those above the threshold and one for below it._____no_output_____
<code>
ax = drinking.plot.scatter(x="agecell", y="all", color="C0")
drinking.assign(predictions=model.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title("Regression Discontinuity");_____no_output_____
</code>
If we do the same for the other causes, this is what we get._____no_output_____
<code>
plt.figure(figsize=(8,8))
for p, cause in enumerate(["all", "mva", "suicide"], 1):
ax = plt.subplot(3,1,p)
drinking.plot.scatter(x="agecell", y=cause, ax=ax)
m = smf.wls(f"{cause}~agecell*threshold", rdd_df).fit()
ate_pct = 100*((m.params["threshold"] + m.params["Intercept"])/m.params["Intercept"] - 1)
drinking.assign(predictions=m.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title(f"Impact of Alcohol on Death: {np.round(ate_pct, 2)}%")
plt.tight_layout()_____no_output_____
</code>
RDD is telling us that alcohol increases the chance of death by suicide and car accidents by 15%, which is a pretty significant amount. These results are compelling arguments to not lower the drinking age, if we want to minimize mortality rates.
### Kernel Weighting
Regression Discontinuity relies heavily on the extrapolations properties of linear regression. Since we are looking at the values at the beginning and end of 2 regression lines, we better get those limits right. What can happen is that regression might focus too much on fitting the other data points at the cost of a poor fit at the threshold. If this happens, we might get the wrong measure of the treatment effect.
One way to solve this is to give higher weights to the points that are closer to the threshold. There are many ways to do this, but a popular one is to reweight the samples with the **triangular kernel**
$
K(R, c, h) = \mathcal{1}\{|R-c| \leq h\} * \bigg(1-\frac{|R-c|}{h}\bigg)
$
The first part of this kernel is an indicator function for whether we are close to the threshold. How close? This is determined by a bandwidth parameter $h$. The second part of this kernel is a weighting function. As we move away from the threshold, the weights get smaller and smaller. The distances are divided by the bandwidth: if the bandwidth is large, the weights shrink at a slower rate; if the bandwidth is small, the weights quickly go to zero.
To make it easier to understand, here is what the weights look like for this kernel applied to our problem. I've set the bandwidth to be 1 here, meaning we will only consider data from people that are no older than 22 years and no younger than 20 years._____no_output_____
<code>
def kernel(R, c, h):
indicator = (np.abs(R-c) <= h).astype(float)
return indicator * (1 - np.abs(R-c)/h)_____no_output_____plt.plot(drinking["agecell"], kernel(drinking["agecell"], c=0, h=1))
plt.xlabel("agecell")
plt.ylabel("Weight")
plt.title("Kernel Weight by Age");_____no_output_____
</code>
If we apply these weights to our original problem, the impact of alcohol gets bigger, at least for all causes. It jumps from 7.6627 to 9.7004. The result remains very significant. Also, notice that I'm using `wls` with the kernel weights, rather than plain `ols`._____no_output_____
<code>
model = smf.wls("all~agecell*threshold", rdd_df,
weights=kernel(drinking["agecell"], c=0, h=1)).fit()
model.summary().tables[1]_____no_output_____ax = drinking.plot.scatter(x="agecell", y="all", color="C0")
drinking.assign(predictions=model.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title("Regression Discontinuity (Local Regression)");_____no_output_____
</code>
And here is what it looks like for the other causes of death. Notice how the regression on the right is more negatively sloped, since it disregards the rightmost points. _____no_output_____
<code>
plt.figure(figsize=(8,8))
weights = kernel(drinking["agecell"], c=0, h=1)
for p, cause in enumerate(["all", "mva", "suicide"], 1):
ax = plt.subplot(3,1,p)
drinking.plot.scatter(x="agecell", y=cause, ax=ax)
m = smf.wls(f"{cause}~agecell*threshold", rdd_df, weights=weights).fit()
ate_pct = 100*((m.params["threshold"] + m.params["Intercept"])/m.params["Intercept"] - 1)
drinking.assign(predictions=m.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title(f"Impact of Alcohol on Death: {np.round(ate_pct, 2)}%")
plt.tight_layout()_____no_output_____
</code>
With the exception of suicide, it looks like adding the kernel weight made the estimated negative impact of alcohol bigger. Once again, if we want to minimize the death rate, we should NOT recommend lowering the legal drinking age, since there is a clear impact of alcohol on the death rates.
This simple case covers what happens when regression discontinuity design works perfectly. Next, we will see some diagnostics that we should run in order to check how much we can trust RDD and talk about a topic that is very dear to our heart: the effect of education on earnings.
## Sheepskin Effect and Fuzzy RDD
When it comes to the effect of education on earnings, there are two major views in economics. The first one is the widely known argument that education increases human capital, increasing productivity and thus, earnings. In this view, education actually changes you for the better. Another view is that education is simply a signaling mechanism. It just puts you through all these hard tests and academic tasks. If you can make it, it signals to the market that you are a good employee. In this way, education doesn't make you more productive. It only tells the market how productive you have always been. What matters here is the diploma. If you have it, you will be paid more. We refer to this as the **sheepskin effect**, since diplomas were printed in sheepskin in the past.
To test this hypothesis, [Clark and Martorell](https://faculty.smu.edu/millimet/classes/eco7321/papers/clark%20martorell%202014.pdf) used regression discontinuity to measure the effect of graduating 12th grade on earnings. In order to do that, they had to think about some running variable where students that fall above it graduate and those who fall below it, don't. They found such data in the Texas education system.
In order to graduate in Texas, one has to pass an exam. Testing starts at 10th grade and students can do it multiple times, but eventually, they face a last chance exam at the end of 12th grade. The idea was to get data from students who took those last chance exams and compare those that had barely failed it to those that barely passed it. These students will have very similar human capital, but different signaling credentials. Namely, those that barely passed it, will receive a diploma. _____no_output_____
<code>
sheepskin = pd.read_csv("./data/sheepskin.csv")[["avgearnings", "minscore", "receivehsd", "n"]]
sheepskin.head()_____no_output_____
</code>
Once again, this data is grouped by the running variable. It contains not only the running variable (minscore, already centered at zero) and the outcome (avgearnings), but it also has the probability of receiving a diploma in that score cell and the size of the cell (n). So, for example, of the 12 students in the cell at -30, below the score threshold, only 5 were able to get the diploma (12 * 0.416 ≈ 5).
This means that there is some slippage in the treatment assignment. Some students that are below the passing threshold managed to get the diploma anyway. Here, the regression discontinuity is **fuzzy**, rather than sharp. Notice how the probability of getting the diploma doesn't jump from zero to one at the threshold. But it does jump from something like 50% to 90%._____no_output_____
<code>
sheepskin.plot.scatter(x="minscore", y="receivehsd", figsize=(10,5))
plt.xlabel("Test Scores Relative to Cut off")
plt.ylabel("Fraction Receiving Diplomas")
plt.title("Last-chance Exams");_____no_output_____
</code>
We can think of fuzzy RD as a sort of non compliance. Passing the threshold should make everyone receive the diploma, but some students, the never takers, don’t get it. Likewise, being below the threshold should prevent you from getting a diploma, but some students, the always takers, manage to get it anyway.
Just like we have potential outcomes, we have potential treatment statuses in this situation. $T_1$ is the treatment everyone would have received had they been above the threshold. $T_0$ is the treatment everyone would have received had they been below the threshold. As you might have noticed, we can think of the **threshold as an Instrumental Variable**. Just as in IV, if we naively estimate the treatment effect, it will be biased towards zero.

The probability of treatment being less than one, even above the threshold, makes the outcome we observe less than the true potential outcome $Y_1$. By the same token, the outcome we observe below the threshold is higher than the true potential outcome $Y_0$. This makes it look like the treatment effect at the threshold is smaller than it actually is and we will have to use IV techniques to correct for that.
Just like when we've assumed smoothness on the potential outcome, we now assume it for the potential treatment. Also, we need to assume monotonicity, just like in IV. In case you don't remember, it states that $T_{i1}>T_{i0} \ \forall i$. This means that crossing the threshold from the left to the right only increases your chance of getting a diploma (or that there are no defiers). With these 2 assumptions, we have a Wald Estimator for LATE.
$$
\dfrac{\lim_{r \to c^+} E[Y_i|R_i=r] - \lim_{r \to c^-} E[Y_i|R_i=r]}{\lim_{r \to c^+} E[T_i|R_i=r] - \lim_{r \to c^-} E[T_i|R_i=r]} = E[Y_{1i} - Y_{0i} | T_{1i} > T_{0i}, R_i=c]
$$
Notice how this is a local estimate in two senses. First, it is local because it only gives the treatment effect at the threshold $c$. This is the RD locality. Second, it is local because it only estimates the treatment effect for the compliers. This is the IV locality.
To estimate this, we will use 2 linear regressions. The numerator can be estimated just like we've done before. To get the denominator, we simply replace the outcome with the treatment. But first, let's talk about a sanity check we need to run to make sure we can trust our RDD estimates.
### The McCrary Test
One thing that could break our RDD argument is if people can manipulate where they stand relative to the threshold. In the sheepskin example, this could happen if students just below the threshold found a way around the system to increase their test score by just a bit. Another example is when you need to be below a certain income level to get a government benefit. Some families might lower their income on purpose, just to be eligible for the program.
In these sorts of situations, we tend to see a phenomenon called bunching on the density of the running variable. This means that we will have a lot of entities just above or just below the threshold. To check for that, we can plot the density function of the running variable and see if there are any spikes around the threshold. For our case, the density is given by the `n` column in our data._____no_output_____
<code>
plt.figure(figsize=(8,8))
ax = plt.subplot(2,1,1)
sheepskin.plot.bar(x="minscore", y="n", ax=ax)
plt.title("McCrary Test")
plt.ylabel("Smoothness at the Threshold")
ax = plt.subplot(2,1,2, sharex=ax)
sheepskin.replace({1877:1977, 1874:2277}).plot.bar(x="minscore", y="n", ax=ax)
plt.xlabel("Test Scores Relative to Cut off")
plt.ylabel("Spike at the Threshold");_____no_output_____
</code>
The first plot shows what our data density looks like. As we can see, there are no spikes around the threshold, meaning there is no bunching. Students are not manipulating where they fall relative to the threshold. Just for illustrative purposes, the second plot shows what bunching would look like if students could manipulate where they fall: we would see a spike in the density for the cells just above the threshold, since many students would land in those cells, barely passing the exam.
Getting this out of the way, we can go back to estimate the sheepskin effect. As I've said before, the numerator of the Wald estimator can be estimated just like we did in the Sharp RD. Here, we will use as weight the kernel with a bandwidth of 15. Since we also have the cell size, we will multiply the kernel by the sample size to get a final weight for the cell. _____no_output_____
<code>
sheepsking_rdd = sheepskin.assign(threshold=(sheepskin["minscore"]>0).astype(int))
model = smf.wls("avgearnings~minscore*threshold",
sheepsking_rdd,
weights=kernel(sheepsking_rdd["minscore"], c=0, h=15)*sheepsking_rdd["n"]).fit()
model.summary().tables[1]_____no_output_____
</code>
This is telling us that the effect of a diploma is -97.7571, but this is not statistically significant (P-value of 0.5). If we plot these results, we get a very continuous line at the threshold. More educated people indeed make more money, but there isn't a jump at the point where they receive the 12th grade diploma. This is an argument in favor of the view that education increases earnings by making people more productive, rather than being just a signal to the market. In other words, there is no sheepskin effect. _____no_output_____
<code>
ax = sheepskin.plot.scatter(x="minscore", y="avgearnings", color="C0")
sheepskin.assign(predictions=model.fittedvalues).plot(x="minscore", y="predictions", ax=ax, color="C1", figsize=(8,5))
plt.xlabel("Test Scores Relative to Cutoff")
plt.ylabel("Average Earnings")
plt.title("Last-chance Exams");_____no_output_____
</code>
However, as we know from the way non compliance bias works, this result is biased towards zero. To correct for that, we need to scale it by the first stage and get the Wald estimator. Unfortunately, there isn't a good Python implementation for this, so we will have to do it manually and use bootstrap to get the standard errors.
The code below runs the numerator of the Wald estimator just like we did before and also constructs the denominator by replacing the target variable with the treatment variable `receivehsd`. The final step just divides the numerator by the denominator. _____no_output_____
<code>
def wald_rdd(data):
weights=kernel(data["minscore"], c=0, h=15)*data["n"]
denominator = smf.wls("receivehsd~minscore*threshold", data, weights=weights).fit()
numerator = smf.wls("avgearnings~minscore*threshold", data, weights=weights).fit()
return numerator.params["threshold"]/denominator.params["threshold"]_____no_output_____from joblib import Parallel, delayed
np.random.seed(45)
bootstrap_sample = 1000
ates = Parallel(n_jobs=4)(delayed(wald_rdd)(sheepsking_rdd.sample(frac=1, replace=True))
for _ in range(bootstrap_sample))
ates = np.array(ates)_____no_output_____
</code>
With the bootstrap samples, we can plot the distribution of ATEs and see where the 95% confidence interval is._____no_output_____
<code>
sns.distplot(ates, kde=False)
plt.vlines(np.percentile(ates, 2.5), 0, 100, linestyles="dotted")
plt.vlines(np.percentile(ates, 97.5), 0, 100, linestyles="dotted", label="95% CI")
plt.title("ATE Bootstrap Distribution")
plt.xlim([-10000, 10000])
plt.legend();_____no_output_____
</code>
As you can see, even when we scale the effect by the first stage, it is still not statistically different from zero. This means that education doesn't increase earnings by a simple sheepskin effect, but rather by increasing one's productivity.
## Key Ideas
We learned how to take advantage of artificial discontinuities to estimate causal effects. The idea is that we will have some artificial threshold that makes the probability of treatment jump. One example that we saw was how age makes the probability of drinking jump at 21 years. We could use that to estimate the impact of drinking on mortality rate. We use the fact that very close to the threshold, we have something close to a randomized trial. Entities very close to the threshold could have gone either way and what determines where they've landed is essentially random. With this, we can compare those just above and just below to get the treatment effect. We saw how we could do that with weighted linear regression using a kernel and how this even gave us, for free, standard errors for our ATE.
Then, we looked at what happens in the fuzzy RD design, where we have non compliance. We saw how we could approach that situation much like we did with IV.
## References
I like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
Another important reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion for the thorniest causal questions I've had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)

## Contribute
Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based in Python. Its goal is to be accessible monetarily and intellectually.
If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers)._____no_output_____
|
{
"repository": "keesterbrugge/python-causality-handbook",
"path": "causal-inference-for-the-brave-and-true/16-Regression-Discontinuity-Design.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 468282,
"hexsha": "cb567b8a3794cd9a9213b5178f51712d3531fa1e",
"max_line_length": 78064,
"avg_line_length": 488.8121085595,
"alphanum_fraction": 0.9307682123
}
|
# Notebook from lwt852/MangroveConservation
Path: models/ODE_Mimi.ipynb
# <center>Using Ordinary Differential Equations (ODEs) in development studies</center>
<center>by Mimi Gong</center>_____no_output_____---
## Definition
An ordinary differential equation is an expression that relates a function to its ordinary derivatives.
One of the most common differential equations used in physical applications is Newton's law, F = ma, where the acceleration is the second derivative of a displacement function x(t).
## Applications
Population models are a common application of ordinary differential equations in my field, conservation studies, and have been widely studied in ecology to model the population growth of many species, including species in mangrove forests. More broadly, population models have been widely adopted in development studies, where ODEs are used to depict developmental changes over time.
In a dynamic system, the 'dynamics' are characterized by constant change. These developments can happen within an individual, between two individuals, or among a group of people (such as a family system). Moreover, development can be measured on short or long time scales, depending on the phenomenon of interest: it can occur over long spans of time (decades), short time spans (seconds or less), or time scales in between.
To quantitatively measure the 'dynamics', we need to be specific about how the system changes and how these interrelationships are defined. Therefore, a mathematical form, such as an ODE, can be assigned to the nature of the changes to achieve this goal. Theoretically, we conceptualize that developmental changes occur in a lawful form and are initiated, moderated, or regulated by forces within and outside of an individual. This is where and how differential equations are applied to dynamical systems theory.
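To make this concrete, here is a minimal sketch (not taken from the references below) of the classic logistic population model, dP/dt = rP(1 - P/K), integrated numerically with SciPy; the growth rate, carrying capacity, and initial population are arbitrary illustrative values._____no_output_____
<code>
import numpy as np
from scipy.integrate import odeint

# Logistic growth: the population P grows at rate r but saturates at the carrying capacity K
def logistic(P, t, r, K):
    return r * P * (1 - P / K)

r, K, P0 = 0.5, 100.0, 5.0              # illustrative growth rate, carrying capacity, initial population
t = np.linspace(0, 30, 200)             # time grid (arbitrary units)
P = odeint(logistic, P0, t, args=(r, K))
print(P[-1])                            # the trajectory approaches the carrying capacity K_____no_output_____
</code>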
In brief, a differential equation describes how a variable changes over a period of time relative to itself and/or other parameters. This is in contrast to traditional growth modeling, where the growth function describes the overall shape (or functional form) of the growth curve. _____no_output_____---
# References
1. Price, G. J., Louys, J., Faith, J. T., Lorenzen, E., & Westaway, M. C. (2018). Big data little help in megafauna mysteries. Nature, 558(7708), 23–25. https://doi.org/10.1038/d41586-018-05330-7
2. Introductory ODEs | Quantdev. (n.d.). Retrieved March 29, 2020, from https://quantdev.ssri.psu.edu/tutorials/introductory-odes
3. Luo, H. (n.d.). Population Modeling by Differential Equations. 31.
_____no_output_____
|
{
"repository": "lwt852/MangroveConservation",
"path": "models/ODE_Mimi.ipynb",
"matched_keywords": [
"ecology"
],
"stars": null,
"size": 3343,
"hexsha": "cb56853c1dee4b6e569c5875704fa870009b3745",
"max_line_length": 514,
"avg_line_length": 49.1617647059,
"alphanum_fraction": 0.6900987137
}
|
# Notebook from vasudev-sharma/course-content
Path: tutorials/W0D3_LinearAlgebra/W0D3_Tutorial3.ipynb
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D3_LinearAlgebra/W0D3_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____
# Bonus Tutorial: Discrete Dynamical Systems
**Week 0, Day 3: Linear Algebra**
**By Neuromatch Academy**
__Content creators:__ Name Surname, Name Surname
__Content reviewers:__ Name Surname, Name Surname.
__Content editors:__ Name Surname, Name Surname.
__Production editors:__ Name Surname, Name Surname. _____no_output_____---
#Tutorial Objectives
In this tutorial, we will start to gain an intuition for how eigenvalues and eigenvectors can be helpful for understanding dynamical systems. We will focus on a discrete dynamical system consisting of two neurons.
By the end of the tutorial, you will:
* Predict whether the firing rates of interconnected model neurons will explode or decay based on the eigenvalues of the weight matrix.
* Apply ideas from previous tutorials (linear combination, basis vectors, etc) to understand a new concept
_____no_output_____---
# Setup_____no_output_____
<code>
# Imports
# Import only the libraries/objects that you use in this tutorial.
# If any external library has to be installed, !pip install library --quiet
# follow this order: numpy>matplotlib.
# import widgets in hidden Figure settings cell
import numpy as np
import matplotlib
import matplotlib.pyplot as plt_____no_output_____#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")_____no_output_____#@title Plotting functions
def plot_circuit_responses(u, W, eigenstuff = False, xlim='default', ylim='default'):
fig, ax = plt.subplots(1, 1, figsize=(10,10))
# Set up axis limits
if xlim =='default':
extreme = np.maximum(np.abs(np.min(u)), np.max(u))
xlim = [- extreme, extreme]
if ylim == 'default':
extreme = np.maximum(np.abs(np.min(u)), np.max(u))
ylim = [- extreme, extreme]
# Set up look
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
cs = plt.rcParams['axes.prop_cycle'].by_key()['color']*10
ax.set_xlim(xlim)
ax.set_ylim(ylim)
# Set up tracking textz
tracker_text = ax.text(.5, .9, "", color='w', fontsize=20, verticalalignment='top', horizontalalignment='left', transform=ax.transAxes)
# Plot eigenvectors
if eigenstuff:
eigvals, eigvecs = np.linalg.eig(W)
if np.abs(eigvals[0]) < np.abs(eigvals[1]):
lc1 = 'c'
lc2 = 'g'
else:
lc1 = 'g'
lc2 = 'c'
ax.plot(np.arange(-10000, 10000)*eigvecs[0, 0], np.arange(-10000, 10000)*eigvecs[1, 0],lc1, alpha=.5, label = r'$\mathbf{v}_1$')
ax.plot(np.arange(-10000, 10000)*eigvecs[0, 1], np.arange(-10000, 10000)*eigvecs[1, 1], lc2, alpha=.5, label = r'$\mathbf{v}_2$')
ax.legend()
# Set up scatter
cmap = plt.cm.Blues_r
norm = plt.Normalize(vmin=0, vmax=u.shape[1])
scatter = ax.scatter(u[0, :], u[1, :], alpha=1, c = cmap(norm(np.arange(u.shape[1]))))
ax.set(xlabel = 'Neuron 1 Firing Rate', ylabel = 'Neuron 2 Firing Rate', title = 'Neural firing over time')
fig.colorbar(matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap),
ax=ax, label = 'Time step')_____no_output_____#@title Helper functions
def get_eigval_specified_matrix(target_eig):
"""Generates matrix with specified eigvals
Args:
target_eig (list): list of target eigenvalues, can be real or complex,
should be length 2 unless you desire repeated eigenvalues
with the same eigenvector, in which case length 1
Returns:
ndarray: 2 x 2 matrix with target eigvals
"""
# Set up two eigenvectors
V = np.array([[1, 1], [-1, 1]]).astype('float')
for i in range(2):
V[:,i] = V[:,i]/np.linalg.norm(V[:,i])
# Get matrix with target eigenvalues
if type(target_eig[0]) == int or type(target_eig[0]) == float:
if len(target_eig) == 2: # distinct eigvecs (not necessarily distinct eigvals)
D = np.diag(target_eig)
A = V @ D @ np.linalg.inv(V)
else: # repeated with same vec
summed = 2*target_eig[0]
a = summed-3
d = 3
bc = target_eig[0]**2 - a*d
factors = [n for n in range(1, bc+ 1) if bc % n == 0]
b = factors[int(np.floor(len(factors)/2))]
c = bc/-b
A = np.array([[a, b], [c, d]])
elif type(target_eig[0]) == complex:
C = [np.real(V[:,0]), np.real(V[:,1])]
B = np.array([[np.real(target_eig[0]), np.imag(target_eig[0])], [-np.imag(target_eig[0]), np.real(target_eig[0])]]).squeeze()
A = C @ B @ np.linalg.inv(C)
return A_____no_output_____
</code>
---
# Section 1: Defining a neural circuit
In previous tutorials, we have looked at static models of postsynaptic neurons based on the responses of presynaptic neurons.
Let's now introduce the concept of time. We will chop time up into little bins and look at the activity of neurons in each bin. That is, we will work in a **discrete** time framework. For example, if each bin is 1 second long, we will look at the firing rate of each neuron at intervals of a second.
Instead of examining pre- and post-synaptic neurons, we will examine two neurons in one area that are connected. In our model, the activity of neuron 1 at one time bin depends on the activities of both neurons during the previous time bin multiplied by the respective weights from itself and neuron 2. It might seem weird for a neuron to have a weight to itself - this is abstracting away some biological details but basically conveys how much the neural activity depends on its history. (Throughout this course, we'll see lots of neuron models and how some model biological detail more faithfully while others abstract.)
We will refer to the activity of neuron i during time bin j as $a_{i, j}$. The weight from neuron x to neuron y will be $w_{y, x}$. With this helpful notation, we can write an equation for the activity of neuron 1 at time bin t:
$$a_{1, t} = w_{1, 1}a_{1, t-1} + w_{1, 2}a_{2, t-1} $$
And the symmetric model is true of neuron 2:
$$a_{2, t} = w_{2, 1}a_{1, t-1} + w_{2, 2}a_{2, t-1} $$
This is already a mess of subscript numbers - luckily we can use matrices and vectors once again and our model becomes:
$$\mathbf{a}_{t} = \mathbf{W}\mathbf{a}_{t-1} $$
where:
$$\mathbf{W} = \begin{bmatrix} w_{1, 1} & w_{1, 2} \\ w_{2, 1} & w_{2, 2} \end{bmatrix}, \mathbf{a}_{t} = \begin{bmatrix} a_{1, t} \\ a_{2, t} \end{bmatrix}$$
It turns out that this is a **discrete dynamical system**. Dynamical systems are concerned with how quantities evolve over time, in this case our neural firing rates. When we model the evolution of quantities over time using a discrete time framework, it is, unsurprisingly, a discrete dynamical system. We will see continuous dynamical systems (where we embrace the full continuity of time) tomorrow and later in the comp neuro course during W2D2: Linear Dynamics.
_____no_output_____## Coding Exercise 1: Implementing the circuit
In this exercise, you will implement the function `circuit_implementation`. Given a weight matrix, initial activities at time 0, and a number of time bins to model, this function calculates the neural firing rates at each time bin.
We will use initial firing rates of 1 for both neurons:
$$\mathbf{a}_0 = \begin{bmatrix}
1 \\
1 \\
\end{bmatrix}$$
and the weight matrix:
$$\mathbf{W} = \begin{bmatrix} 1 & 0.2 \\
0.1 & 1 \\ \end{bmatrix}$$
We will look at activity over 30 time steps. As before, we will allow our firing rates to be negative, despite this not being possible biologically.
_____no_output_____
<code>
def circuit_implementation(W, u0, T):
""" Simulate the responses of N neurons over time given their connections
Args:
W (ndarray): weight matrix of synaptic connections, should be N x N
u0 (ndarray): initial condition or input vector, should be N,
T (scalar): number of time steps to run simulation for
Returns:
u (ndarray): the neural responses over time, should be N x T
"""
# Compute the number of neurons
N = W.shape[0]
# Initialize empty response array and initial condition
u = np.zeros((N, T))
u[:, 0] = u0
#################################################
## TODO for students ##
# Fill out function and remove
raise NotImplementedError("Student exercise: Complete circuit_implementation")
#################################################
# Loop over time steps and compute u(t+1)
for i_t in range(1, T):
u[:, i_t] = ...
return u
# Define W, u0, T
W = np.array([[1, .2], [.1, 1]])
u0 = np.array([1, 1])
T = 30
# Get neural activities
u = circuit_implementation(W, u0, T)
# Visualize neural activities
plot_circuit_responses(u, W)_____no_output_____# to_remove solution
def circuit_implementation(W, u0, T):
""" Simulate the responses of N neurons over time given their connections
Args:
W (ndarray): weight matrix of synaptic connections, should be N x N
u0 (ndarray): initial condition or input vector, should be N,
T (scalar): number of time steps to run simulation for
Returns:
u (ndarray): the neural responses over time, should be N x T
"""
# Compute the number of neurons
N = W.shape[0]
# Initialize empty response array and initial condition
u = np.zeros((N, T))
u[:, 0] = u0
# Loop over time steps and compute u(t+1)
for i_t in range(1, T):
u[:, i_t] = W @ u[:, i_t-1]
return u
# Define W, u0, T
W = np.array([[1, .2], [.1, 1]])
u0 = np.array([1, 1])
T = 30
# Get neural activities
u = circuit_implementation(W, u0, T)
# Visualize neural activities
with plt.xkcd():
plot_circuit_responses(u, W)_____no_output_____
</code>
The firing rates of both neurons are exploding to infinity over time. Let's now see what happens with a different weight matrix:
$$\mathbf{W} = \begin{bmatrix} 0.2 & 0.1 \\
1 & 0.2 \\ \end{bmatrix}$$_____no_output_____
<code>
# @markdown Execute this cell to visualize activity over time
# Define W, u0, T
W = np.array([[.2, .1], [1, .2]])
u0 = np.array([1, 1])
T = 30
# Get neural activities
u = circuit_implementation(W, u0, T)
# Visualize neural activities
with plt.xkcd():
plot_circuit_responses(u, W)_____no_output_____
</code>
We can see that with this weight matrix, the firing rates are decaying towards zero. It turns out that we could have predicted this by looking at the eigenvalues of the weight matrices, as we'll see in the next section._____no_output_____---
# Section 2: Understanding dynamics using eigenstuff
As we'll see in this section, eigenvectors and eigenvalues are incredibly useful for understanding the evolution of the neural firing rates, and discrete dynamical systems in general.
_____no_output_____## Section 2.1: Rewriting our circuit equation
In our neural circuit, we are modeling the activities at a time step as:
$$\mathbf{a}_{t} = \mathbf{W}\mathbf{a}_{t-1} $$
Let's start at time step 1:
$$\mathbf{a}_{1} = \mathbf{W}\mathbf{a}_{0} $$
And move on to time step 2:
$$\mathbf{a}_{2} = \mathbf{W}\mathbf{a}_{1} $$
In the above equation, we can substitute in $\mathbf{a}_{1} = \mathbf{W}\mathbf{a}_{0}$:
$$\mathbf{a}_{2} = \mathbf{W}\mathbf{W}\mathbf{a}_{0} = \mathbf{W}^2 \mathbf{a}_{0}$$
We can keep doing this with subsequent time steps:
$$\mathbf{a}_{3} = \mathbf{W}\mathbf{a}_{2} = \mathbf{W}\mathbf{W}^2 \mathbf{a}_{0} = \mathbf{W}^3\mathbf{a}_{0} $$
$$\mathbf{a}_{4} = \mathbf{W}\mathbf{a}_{3} = \mathbf{W}\mathbf{W}^3 \mathbf{a}_{0} = \mathbf{W}^4\mathbf{a}_{0} $$
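As a quick numerical sanity check (a sketch using the example `W` and `u0` from the coding exercise above, not part of the exercises), we can confirm that iterating the update rule gives the same result as applying the matrix power directly:_____no_output_____
<code>
import numpy as np

# Iterate a_t = W a_{t-1} four times and compare with W**4 applied to a_0
W = np.array([[1, .2], [.1, 1]])
a0 = np.array([1., 1.])

a = a0.copy()
for _ in range(4):
    a = W @ a

a_direct = np.linalg.matrix_power(W, 4) @ a0
print(np.allclose(a, a_direct))   # True_____no_output_____
</code>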
This means that we can write the activity at any point as:
$$\mathbf{a}_{i} = \mathbf{W}^i\mathbf{a}_{0} $$_____no_output_____## Section 2.2: Initial firing rates along an eigenvector
Remember from the last tutorial, that an eigenvector of matrix $\mathbf{W}$ is a vector that becomes a scalar multiple (eigenvalue) of itself when multiplied by that matrix:
$$\mathbf{W}\mathbf{v} = \lambda\mathbf{v}$$
Let's look at what happens if the initial firing rates in our neural circuit lie along that eigenvector, using the same substitution method as in the previous section:
$$\mathbf{a}_{0} = \mathbf{v} $$
$$\mathbf{a}_{1} = \mathbf{W}\mathbf{a}_0 = \mathbf{W}\mathbf{v} = \lambda\mathbf{v} $$
$$\mathbf{a}_{2} = \mathbf{W}\mathbf{a}_1 = \mathbf{W}\lambda\mathbf{v} = \lambda\mathbf{W}\mathbf{v} = \lambda^2\mathbf{v}$$
$$\mathbf{a}_{3} = \mathbf{W}\mathbf{a}_2 = \mathbf{W}\lambda^2\mathbf{v} = \lambda^2\mathbf{W}\mathbf{v} = \lambda^3\mathbf{v}$$
$$...$$
$$\mathbf{a}_i = \lambda^i\mathbf{v}$$
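(As a quick sketch, not part of the exercises, we can check this numerically by starting the circuit on an eigenvector of the example weight matrix and confirming that the activity stays proportional to it.)_____no_output_____
<code>
import numpy as np

# Start on an eigenvector v of W and verify that after i steps the activity equals lambda**i * v
W = np.array([[1, .2], [.1, 1]])
eigvals, eigvecs = np.linalg.eig(W)
lam, v = eigvals[0], eigvecs[:, 0]

a = v.copy()
for _ in range(5):
    a = W @ a

print(np.allclose(a, lam**5 * v))   # True: the activity never leaves the eigenvector's line_____no_output_____
</code>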
The activities at any time step equal a scalar times the initial activities. In other words, if the initial activities lie along an eigenvector, the activities will only evolve along that eigenvector. _____no_output_____### Interactive demo 2.2: Changing the eigenvalue
Let's visualize what happens if the initial activities of the neurons lie along an eigenvector and think about how this depends on the eigenvalue.
The interactive demo below is the same visualization you saw in Section 1, but now we also plot the eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$.
Questions:
1. What happens if the eigenvalue is large (2)?
2. What happens if you move the eigenvalue from 2 towards 0?
3. What happens with negative eigenvalues?_____no_output_____
<code>
# @markdown Execute this cell to enable the widget
@widgets.interact(eigenvalue = widgets.FloatSlider(value=0.5, min=-2, max=2, step=0.2))
def plot_system(eigenvalue):
# Get weight matrix with specified eigenvalues
W = get_eigval_specified_matrix([eigenvalue, eigenvalue])
# Get initial condition
u0 = np.array([1, 1])
# Get neural activities
u = circuit_implementation(W, u0, 10)
# Visualize neural activities
plot_circuit_responses(u, W, eigenstuff = True, xlim = [-15, 15], ylim = [-15, 15])_____no_output_____# to_remove explanation
# 1) With the eigenvalue = 2, the activities of the neurons explode towards infinity, along
#. the eigenvector.
# 2) At eigenvalue = 1, there is a shift in what happens. With the eigenvalue above 1,
#. the activites always explode. Once the eigenvalue is below 1, the activities decay to 0.
#. If the eigenvalue equals 1, the activities never differ from the initial condition.
#. This makes sense with the equation above. Lambda is raised to a power when computing activities:
#. if it's a fraction, this term will get smaller so the activities will. If above 1, this term
#. will explode so the activities will.
# 3) If the eigenvalue is between -1 and 0, the neural activities jump across the
#. origin repeatedly along the eigenvector but eventually decay to 0. If the eigenvalue is below -1, the
#. activities jump across the origin repeatedly along the eigenvector but explode to
#. positive or negative infinity. Once again, this makes sense if you think through the equation above._____no_output_____
</code>
## Section 2.3: Other initial conditions
We now know that if our initial activities (or initial condition) fall on an eigenvector of $\mathbf{W}$, the activities will evolve along that line, either exploding to infinity if the absolute value of the eigenvalue is above 1 or decaying to the origin if it is below 1. What if our initial condition doesn't fall along the eigenvector though?
To understand what will happen, we will use the ideas of basis vectors and linear combinations from Tutorial 1.
Let's assume for now that our weight matrix has two distinct eigenvectors ($\mathbf{v}_1$ and $\mathbf{v}_2$) with corresponding eigenvalues $\lambda_1$ and $\lambda_2$, and that these eigenvectors form a basis for 2D space. That means we can write any vector in 2D space as a linear combination of our eigenvectors, including our initial activity vector:
$$\mathbf{a}_0 = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 $$
Let's compute the next time step, using our previous strategy of substitution:
$$\begin{align}
\mathbf{a}_1 &= \mathbf{W}\mathbf{a}_0
\\ &= \mathbf{W}(c_1\mathbf{v}_1 + c_2\mathbf{v}_2) \\ &= c_1\mathbf{W}\mathbf{v}_1 + c_2\mathbf{W}\mathbf{v}_2 \\ &= c_1\lambda_1\mathbf{v}_1 + c_2\lambda_2\mathbf{v}_2 \end{align} $$
All activities can be written as:
$$\mathbf{a}_i = c_1\lambda_1^i\mathbf{v}_1 + c_2\lambda_2^i\mathbf{v}_2 $$
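As a sketch (not part of the tutorial exercises), the coefficients $c_1$ and $c_2$ can be found by solving a small linear system in the eigenvector basis, and the closed form can be checked against direct iteration:_____no_output_____
<code>
import numpy as np

# Decompose a0 into the eigenvector basis of W, then compare the closed form
# c1*lam1**i*v1 + c2*lam2**i*v2 with direct iteration of the update rule.
W = np.array([[1, .2], [.1, 1]])
a0 = np.array([1., 2.])

eigvals, V = np.linalg.eig(W)    # columns of V are the eigenvectors
c = np.linalg.solve(V, a0)       # coefficients such that V @ c = a0

i = 6
closed_form = V @ (eigvals**i * c)

a = a0.copy()
for _ in range(i):
    a = W @ a

print(np.allclose(a, closed_form))   # True_____no_output_____
</code>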
We'll see what this means for our system in the next demo._____no_output_____### Interactive demo 2.3: Changing both eigenvalues
In the demo below, you can now change both eigenvalues and the initial condition (with `a0_1` setting neuron 1 initial activity and `a0_2` setting neuron 2 initial activity). We will only look at positive eigenvalues to keep things a little more simple.
Think each of the following questions through based on the equation we just arrived at and then play with the demo to see if you are correct.
$$\mathbf{a}_i = c_1\lambda_1^i\mathbf{v}_1 + c_2\lambda_2^i\mathbf{v}_2 $$
1. What will happen when both eigenvalues are greater than 1? Does this depend on initial condition?
2. What will happen when both eigenvalues are less than 1?
3. Set eigenvalue1 to 2 and eigenvalue2 to 1.2 and try out different initial conditions. What do you see? Why are you seeing this?
4. What happens if one eigenvalue is below 1 and the other is above 1?_____no_output_____
<code>
# @markdown Execute this cell to enable the widget
@widgets.interact(eigenvalue1 = widgets.FloatSlider(value=0.5, min=0.2, max=2, step=0.2),
eigenvalue2 = widgets.FloatSlider(value=0.5, min=0.2, max=2, step=0.2),
a0_1 = widgets.FloatSlider(value=1, min=-5, max=5, step=0.2),
a0_2 = widgets.FloatSlider(value=2, min=-5, max=5, step=0.2), )
def plot_system(eigenvalue1, eigenvalue2, a0_1, a0_2):
# Get initial condition
a0 = np.array([a0_1, a0_2])
# Get weight matrix with specified eigenvalues
W = get_eigval_specified_matrix([eigenvalue1, eigenvalue2])
# Get neural activities
u = circuit_implementation(W, a0, 10)
# Visualize neural activities
plot_circuit_responses(u, W, eigenstuff = True, xlim = [-15, 15], ylim = [-15, 15])_____no_output_____# to_remove explanation
# 1) If both eigenvalues are above 1, the neural activity will eventually explode
#. to infinity or negative infinity, depending on initial conditions.
# 2) If both eigenvalues are below 1, the neural activity will eventually decay to 0.
# 3) The activities will explode to positive or negative infinity, but the exact trajectory
#. is drawn towards the eigenvector with the larger eigenvalue. This is because the larger eigenvalue
#. will increasingly dominate the other one as it is raised to increasingly larger powers.
#. 4) The activities will eventually explode to positive or negative infinity, unless
#. the initial condition lies exactly on the eigenvector with the small eigenvalue. If the
#. initial condition is near to that eigenvector, the trajectory will first go towards
#. the origin before exploding._____no_output_____
</code>
## Section 2.4: Complex eigenvalues
We've been hiding some complexity from you up until now, namely that eigenvalues can be complex. Complex eigenvalues result in a very specific type of dynamics: rotations.
We will not delve into the proof or intuition behind this here as you'll encounter complex eigenvalues in dynamical systems in W2D2: Linear Dynamics.
Instead, we will simply demonstrate how the nature of the rotations depends on the complex eigenvalues in the animation below. We plot a 3-neuron circuit to better show the rotations. We illustrate each of the following:
* Complex eigenvalues with an absolute value equal to 1 result in a sustained rotation in 3D space.
* Complex eigenvalues with an absolute value below 1 result in a rotation towards the origin.
* Complex eigenvalues with an absolute value above 1 result in a rotation towards the positive/negative infinity.
_____no_output__________no_output_____---
# Summary
You have seen how we can predict what happens in a discrete dynamical system with an update rule of:
$$ \mathbf{a}_t = \mathbf{W}\mathbf{a}_{t-1}$$
The most important takeaway is that inspecting eigenvalues and eigenvectors enables you to predict how discrete dynamical systems evolve. Specifically:
* If all eigenvalues are real and have absolute values above 1, the neural activities explode to infinity or negative infinity.
* If all eigenvalues are real and have absolute values below 1, the neural activities decay to 0.
* If all eigenvalues are real and at least one has an absolute value above 1, the neural activities explode to infinity or negative infinity, except for special cases where the initial condition lies along an eigenvector with an eigenvalue whose absolute value is below 1.
* If eigenvalues are complex, the neural activities rotate in space and decay or explode depending on the amplitude of the complex eigenvalues.
* Even finer details of the trajectories can be predicted by examining the exact relationship of eigenvalues and eigenvectors.
Importantly, these ideas extend far beyond our toy neural circuit. Discrete dynamical systems with the same structure of update rule are common. While the exact dependencies on eigenvalues will change, we will see that we can still use eigenvalues/vectors to understand continuous dynamical systems in W2D2: Linear Dynamics.
_____no_output_____
|
{
"repository": "vasudev-sharma/course-content",
"path": "tutorials/W0D3_LinearAlgebra/W0D3_Tutorial3.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 765371,
"hexsha": "cb57a10b167a863c060b0c84d2a8fe94efbfccdf",
"max_line_length": 733680,
"avg_line_length": 978.7352941176,
"alphanum_fraction": 0.9606269378
}
|
# Notebook from eneskemalergin/OldBlog
Path: _oldnotebooks/Basic_Sequence_Analysis.ipynb
# Performing Basic Sequence Analysis
Now I am continuing my bioinformatics cookbook tutorial series. Today's topic is performing basic sequence analysis, which is the foundation of Next Generation Sequencing work.
We will do some basic sequence analysis on DNA sequences. FASTA files will be our main file format, and Biopython our main Python library.
Let's first download a FASTA sequence_____no_output_____
<code>
from Bio import Entrez, SeqIO
# Using my email
Entrez.email = "[email protected]"
# Get the FASTA file
hdl = Entrez.efetch(db='nucleotide', id=['NM_002299'],rettype='fasta') # Lactase gene
# Read it and store it in seq
seq = SeqIO.read(hdl, 'fasta')
print "First 10 and last 10: " + seq.seq[:10] + "..." + seq.seq[-10:]First 10 and last 10: GTTCCTAGAA...CTGTCCTTTC
</code>
- Let's save the Biopython object in FASTA file;_____no_output_____
<code>
from Bio import SeqIO
# Open a new fasta file and make it ready to write on
w_hdl = open('example.fasta', 'w')
# specify the part to write
w_seq = seq[11:5795]
# Write it
SeqIO.write([w_seq], w_hdl, 'fasta')
# And of course close it
w_hdl.close()_____no_output_____
</code>
> If you want to write many sequences (easily millions with NGS), do not use a list as shown in the preceding code, because this will allocate massive amounts of memory. Either use an iterator or call the ```SeqIO.write``` function several times with a subset of the sequences on each write, as sketched below.
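A minimal sketch of the iterator approach (hypothetical file names; trimming each record only for illustration) might look like this:_____no_output_____
<code>
from Bio import SeqIO

# Stream records one at a time instead of building a list in memory:
# SeqIO.parse returns a lazy iterator and SeqIO.write accepts any iterable of SeqRecord objects.
records = SeqIO.parse('big_input.fasta', 'fasta')    # hypothetical input file
trimmed = (rec[:100] for rec in records)             # generator: first 100 bases of each record
count = SeqIO.write(trimmed, 'trimmed_output.fasta', 'fasta')
print("Wrote %d records" % count)_____no_output_____
</code>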
- Of course, we need to read the sequence back in order to use it:_____no_output_____
<code>
# Parse the fasta file and store it in recs
recs = SeqIO.parse('example.fasta', 'fasta')
# Iterate over each records
for rec in recs:
# Get the sequences of each rec
seq = rec.seq
# Show the desription
print(rec.description)
# Show the first 10 letter in sequence
print(seq[:10])
#
print(seq.alphabet)gi|32481205|ref|NM_002299.2| Homo sapiens lactase (LCT), mRNA
ATGGAGCTGT
SingleLetterAlphabet()
</code>
In our example we have only 1 sequence in the FASTA file, so we did not strictly need to iterate through each record. However, since we usually won't know in advance how many records a FASTA file contains, the loop above is suitable for most cases.
> The first line of the output is the FASTA description of the gene, in this case: ```gi|32481205|ref|NM_002299.2| Homo sapiens lactase (LCT), mRNA```
> The second line shows the first 10 letters of the sequence
> The last line shows how the sequence is represented (its alphabet)
- Now let's change the alphabet of the sequence we got:
> We create a new sequence with a more informative alphabet._____no_output_____
<code>
from Bio import Seq
from Bio.Alphabet import IUPAC
seq = Seq.Seq(str(seq), IUPAC.unambiguous_dna)_____no_output_____
</code>
- Now that we have an unambiguous DNA sequence, we can transcribe it as follows:_____no_output_____
<code>
rna = Seq.Seq(str(seq), IUPAC.unambiguous_dna)
rna = seq.transcribe() # Changing DNA into RNA
print "some of the rna variable: "+rna[:10]+"..."+rna[-10:]some of the rna variable: AUGGAGCUGU...UUCAUUCUGA
</code>
> Note that the ```Seq``` constructor takes a string, not a sequence. You will see that the alphabet of the ```rna``` variable is now ```IUPACUnambigousRNA```.
- Finally let's translate it into Protein:_____no_output_____
<code>
prot = seq.translate() # Translating the coding DNA sequence into the corresponding protein
print "some of the resulting protein sequence: "+prot[:10]+"..."+prot[-10:]some of the resulting protein sequence: MELSWHVVFI...QELSPVSSF*
</code>
Now, we have a protein alphabet with the annotation that there is a stop codon (so, our protein is complete).
---
There are other file formats to store and represent sequences, and we talked about some of them in the [first blog post of the series](http://eneskemalergin.github.io/2015/10/11/Getting_Started_NGS/). Now I will show you how to work with modern file formats such as the FASTQ format.
FASTQ files are the standard format output by modern sequencers. The purpose of the following content is to make you comfortable with quality scores and how to work with them. To be able to explain the concept, we will use real, large-scale data from the 1000 Genomes Project.
> Next-generation datasets are generally very large like 1000 Genomes Project. You will need to download some stuff so, get ready to wait :)
Let's Start by downloading the dataset: (BTW the following snippet is for IPython NB so if you are following this from my blog go ahead and [click here](ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz))
_____no_output_____
<code>
!wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz--2015-10-26 08:21:31-- ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz
=> 'SRR003265.filt.fastq.gz.1'
Resolving ftp.1000genomes.ebi.ac.uk... 193.62.192.8
Connecting to ftp.1000genomes.ebi.ac.uk|193.62.192.8|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /vol1/ftp/phase3/data/NA18489/sequence_read ... done.
==> SIZE SRR003265.filt.fastq.gz ... 28919712
==> PASV ... done. ==> RETR SRR003265.filt.fastq.gz ... done.
Length: 28919712 (28M) (unauthoritative)
SRR003265.filt.fast 100%[=====================>] 27.58M 1.43MB/s in 15s
2015-10-26 08:21:49 (1.88 MB/s) - 'SRR003265.filt.fastq.gz.1' saved [28919712]
</code>
Now we have the file "SRR003265.filt.fastq.gz", which has multiple extensions; one of them is ```fastq```, so we are fine. The last one, ```gz```, means the file is gzip-compressed, which we will handle with Python's ```gzip``` library while opening it.
- First we need to open the file:_____no_output_____
<code>
import gzip # This is the library we need to unzip .gz
from Bio import SeqIO # The usual SeqIO
# Unzip and read the fastq file at the end store it in recs
recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq')
rec = next(recs)
# Print the id, description and sequence of the record
print(rec.id, rec.description, rec.seq)
# Print the letter_annotations
# Biopython will convert all the Phred encoding letters to logarithmic scores
print(rec.letter_annotations)('SRR003265.31', 'SRR003265.31 3042NAAXX:3:1:1252:1819 length=51', Seq('GGGAAAAGAAAAACAAACAAACAAAAACAAAACACAGAAACAAAAAAACCA', SingleLetterAlphabet()))
{'phred_quality': [40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 23, 40, 32, 35, 29, 40, 16, 40, 40, 32, 35, 31, 40, 40, 39, 22, 40, 24, 20, 28, 31, 12, 31, 10, 22, 28, 13, 26, 20, 23, 23]}
</code>
> You should usually store your FASTQ files in a compressed format, to save both space and processing time.
> Don't use list(recs) unless you are willing to sacrifice a lot of memory, since FASTQ files are usually big.
- Then, let's take a look at the distribution of nucleotide reads:_____no_output_____
<code>
from collections import defaultdict
# Unzip and read the fastq file
recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq')
# Make integer dictionary
cnt = defaultdict(int)
# Iterate over records
for rec in recs:
# In each letter of the sequence
for letter in rec.seq:
# Count the letters and store the number of count in dictionary cnt
cnt[letter] += 1
# Find the total of cnt counts
tot = sum(cnt.values())
# Iterate over the dictionary cnt
for letter, cnt_value in cnt.items():
print('%s: %.2f %d' % (letter, 100. * cnt_value / tot, cnt_value))
# Prints the following
# For each Letter inside
# Print the percentage of apperance in sequences
# and the total number of letter
# Do this for each letter (even for NONE(N))A: 28.60 7411965
C: 21.00 5444053
T: 29.58 7666885
G: 20.68 5359334
N: 0.14 37289
</code>
> Note that there is a residual number for N calls. These are calls in which a sequencer reports an unknown base.
- Now, let's plot the distribution of Ns according to its read position:_____no_output_____
<code>
%matplotlib inline
# Plot it in IPython Directly
# Calling libraries
import seaborn as sns
import matplotlib.pyplot as plt
# Again unzip, read the fastq file
recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'), 'fastq')
# Make a dictionary
n_cnt = defaultdict(int)
# The same code as before until here
# iterate through the file and get the position of any references to N.
for rec in recs:
for i, letter in enumerate(rec.seq):
pos = i + 1
if letter == 'N':
n_cnt[pos] += 1
seq_len = max(n_cnt.keys())
positions = range(1, seq_len + 1)
fig, ax = plt.subplots()
ax.plot(positions, [n_cnt[x] for x in positions])
ax.set_xlim(1, seq_len)_____no_output_____
</code>
> Until position 25, there are no errors. This is not what you will get from a typical sequencer output, because our example file is already filtered and the 1000 Genomes filtering rules enforce that no N calls can occur before position 25.
> The quantity of uncalled bases is position-dependent.
- So, what about the quality of reads?
- Let's study the distribution of Phred scores and plot the distribution of qualities according to their read position:_____no_output_____
<code>
# Reopen and read
recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq')
# default dictionary
qual_pos = defaultdict(list)
for rec in recs:
for i, qual in enumerate(rec.letter_annotations['phred_quality']):
if i < 25 or qual == 40:
continue
pos = i + 1
qual_pos[pos].append(qual)
vps = []
poses = qual_pos.keys()
poses.sort()
for pos in poses:
vps.append(qual_pos[pos])
fig, ax = plt.subplots()
ax.boxplot(vps)
ax.set_xticklabels([str(x) for x in range(26, max(qual_pos.keys()) + 1)])_____no_output_____
</code>
> We will ignore both positions sequenced 25 base pairs from start (again, remove this rule if you have unfiltered sequencer data) and the maximum quality score for this file (40). However, in your case, you can consider starting your plotting analysis also with the maximum. You may want to check the maximum possible value for your sequencer hardware. Generally, as most calls can be performed with maximum quality, you may want to remove them if you are trying to understand where quality problems lie.
---_____no_output_____
|
{
"repository": "eneskemalergin/OldBlog",
"path": "_oldnotebooks/Basic_Sequence_Analysis.ipynb",
"matched_keywords": [
"BioPython",
"RNA",
"bioinformatics"
],
"stars": null,
"size": 51397,
"hexsha": "cb580fe6bf7f836db51f43f71ee1665c601fa17d",
"max_line_length": 19912,
"avg_line_length": 97.1587901701,
"alphanum_fraction": 0.8292507345
}
|
# Notebook from biocore/tcga
Path: jupyter_notebooks/TCGA Batch Correction -- Final Analysis.ipynb
<code>
import os, numpy, warnings
import pandas as pd_____no_output_____os.environ['R_HOME'] = '/home/gdpoore/anaconda3/envs/tcgaAnalysisPythonR/lib/R'
warnings.filterwarnings('ignore')
%config InlineBackend.figure_format = 'retina'_____no_output_____%reload_ext rpy2.ipython_____no_output_____%%R
require(ggplot2)
require(snm)
require(limma)
require(edgeR)
require(dplyr)
require(edgeR)
require(pvca)
require(lme4)
require(ggsci)
require(cowplot)
require(doMC)
numCores <- detectCores()
registerDoMC(cores=numCores)_____no_output_____%%R
load("tcgaVbDataAndMetadataAndSNM.RData")_____no_output_____%%R
print(dim(vbDataBarnDFReconciled))
print(dim(vbDataBarnDFReconciledQC))
print(dim(metadataSamplesAllQC))_____no_output_____%%R
metadataSamplesAllQCAML <- droplevels(metadataSamplesAll[! (is.na(metadataSamplesAll$race) |
is.na(metadataSamplesAll$portion_is_ffpe) |
is.na(metadataSamplesAll$age_at_diagnosis)),])
# metadataSamplesAllQCAML <- droplevels(metadataSamplesAllQCAML[metadataSamplesAllQCAML$disease_type == "Acute Myeloid Leukemia",])
vbDataBarnDFReconciledQCAML <- vbDataBarnDFReconciled[rownames(metadataSamplesAllQCAML),]
print(dim(metadataSamplesAllQCAML))
print(dim(vbDataBarnDFReconciledQCAML))_____no_output_____%%R
qcMetadata <- metadataSamplesAllQC # metadataSamplesAllQCAML
qcData <- vbDataBarnDFReconciledQC # vbDataBarnDFReconciledQCAML
# Set up design matrix
covDesignNorm <- model.matrix(~0 + sample_type +
data_submitting_center_label +
platform +
experimental_strategy +
tissue_source_site_label +
portion_is_ffpe,
data = qcMetadata)
print(colnames(covDesignNorm))
colnames(covDesignNorm) <- gsub('([[:punct:]])|\\s+','',colnames(covDesignNorm))
print(colnames(covDesignNorm))
# Set up counts matrix
counts <- t(qcData) # DGEList object from a table of counts (rows=features, columns=samples)
# Normalize using edgeR and then plug into voom
dge <- DGEList(counts = counts)
keep <- filterByExpr(dge, covDesignNorm)
dge <- dge[keep,,keep.lib.sizes=FALSE]
print("Now normalizing data...")
dge <- calcNormFactors(dge, method = "TMM")
print("Now applying voom on normalized data...")
vdge <- voom(dge, design = covDesignNorm, plot = TRUE, save.plot = TRUE, normalize.method="none")_____no_output_____%%R
print(table(metadataSamplesAllQCAML$sample_type))_____no_output_____%%R
# Apply
bio.var.sample.type <- model.matrix(~sample_type, #sample_type, # histological_diagnosis_label and disease_type tried but cause function to fail
data=qcMetadata)
bio.var.gender <- model.matrix(~gender, #sample_type, # histological_diagnosis_label and disease_type tried but cause function to fail
data=qcMetadata)
adj.var <- model.matrix(~data_submitting_center_label +
platform +
experimental_strategy +
tissue_source_site_label +
portion_is_ffpe,
data=qcMetadata)
colnames(bio.var.sample.type) <- gsub('([[:punct:]])|\\s+','',colnames(bio.var.sample.type))
colnames(bio.var.gender) <- gsub('([[:punct:]])|\\s+','',colnames(bio.var.gender))
colnames(adj.var) <- gsub('([[:punct:]])|\\s+','',colnames(adj.var))
print(dim(adj.var))
print(dim(bio.var.sample.type))
print(dim(bio.var.gender))
print(dim(t(vdge$E)))
print(dim(covDesignNorm))_____no_output_____%%R
snmDataObjSampleTypeWithExpStrategyFA <- snm(raw.dat = vdge$E,
bio.var = bio.var.sample.type,
adj.var = adj.var,
rm.adj=TRUE,
verbose = TRUE,
diagnose = TRUE)
snmDataSampleTypeWithExpStrategyFA <- t(snmDataObjSampleTypeWithExpStrategyFA$norm.dat)
print(dim(snmDataSampleTypeWithExpStrategyFA))_____no_output_____%%R
save(snmDataSampleTypeWithExpStrategyFA, file = "snmDataSampleTypeWithExpStrategyFA.RData")_____no_output_____
</code>
# PCA plotting to visually examine batch effects and batch correction_____no_output_____
<code>
%%R
pcaPlotting <- function(pcaObject,pcChoices, dataLabels, factorString, titleString){
require(ggbiplot)
theme_update(plot.title = element_text(hjust = 0.5))
g <- ggbiplot(pcaObject,pcChoices, obs.scale = 1, var.scale = 1,
groups = dataLabels, ellipse = TRUE,
alpha = 0.2,
circle = TRUE,var.axes=FALSE) +
scale_color_nejm(name = factorString) +
theme_bw() +
#theme(legend.direction = "horizontal", legend.position = "top") +
ggtitle(titleString) + theme(plot.title = element_text(hjust = 0.5))
print(g)
}_____no_output_____%%R
unnormalizedPCAPlotFA <- pcaPlotting(pcaObject = prcomp(t(vdge$E)),
pcChoices = c(1,2),
dataLabels = qcMetadata$data_submitting_center_label,
factorString = "Batch",
titleString = "PCA w/o Batch Correction")_____no_output_____%%R
snmPCAPlotSampleTypeFA <- pcaPlotting(pcaObject = prcomp(snmDataSampleTypeWithExpStrategyFA),
pcChoices = c(1,2),
dataLabels = qcMetadata$data_submitting_center_label,
factorString = "Sequencing Center",
titleString = "PCA w/ SNM Correction\n(Target: Sample Type)")_____no_output_____# %%R
# snmPCAPlotGender <- pcaPlotting(pcaObject = prcomp(snmDataGenderWithAML),
# pcChoices = c(1,2),
# dataLabels = qcMetadata$data_submitting_center_label,
# factorString = "Sequencing Center",
# titleString = "PCA w/ SNM Correction\n(Target: Gender)")_____no_output_____%%R
ggsave(plot = unnormalizedPCAPlotFA,
filename = "unnormalizedPCAPlotFA_DecreasedOpacity_NEJM.png",
width = 16.2,
height = 5.29,
units = "in",
dpi = "retina")
ggsave(plot = snmPCAPlotSampleTypeFA,
filename = "snmPCAPlotSampleTypeFA_DecreasedOpacity_NEJM.png",
width = 16.2,
height = 5.29,
units = "in",
dpi = "retina")
# save(snmDataGenderWithAML, metadataSamplesAllQCAML,
# vbDataBarnDFReconciledQCAML,
# file = "amlVbDataAndMetadataAndSNMByGender.RData")_____no_output_____# %%R
# snmDataObjGenderWithAML <- snm(raw.dat = vdge$E,
# bio.var = bio.var.gender,
# adj.var = adj.var,
# rm.adj=TRUE,
# verbose = TRUE,
# diagnose = TRUE)
# snmDataGenderWithAML <- t(snmDataObjGenderWithAML$norm.dat)
# print(dim(snmDataGenderWithAML))_____no_output_____
</code>
# PVCA using key filtered metadata features (i.e. narrowing down the extended version of this)_____no_output_____
<code>
%%R
# Implement PVCA
# From extended model, remove variables that contribute very little if at all:
# ethnicity, gender, reference_genome
pct_threshold <- 0.8
metaPVCAExtendedFiltered <- metadataSamplesAllQC[,c("sample_type",
"disease_type",
"data_submitting_center_label",
"platform",
"experimental_strategy",
"tissue_source_site_label",
"portion_is_ffpe")]
print(dim(metaPVCAExtendedFiltered))
print(dim(snmDataSampleTypeWithExpStrategy))
print(dim(vbDataBarnDFReconciledQC))_____no_output_____%%R
pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA <- PVCA(counts = t(vbDataBarnDFReconciledQC),
meta = metaPVCAExtendedFiltered,
threshold = pct_threshold,
inter = FALSE)
save(pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA, file = "pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA.RData")
PlotPVCA(pvcaVbRawNoVoomNoSNM_ExtendedFiltered_FA, "Raw count data")_____no_output_____%%R
pvcaVoomNoSNM_ExtendedFiltered_FA <- PVCA(counts = vdge$E,
meta = metaPVCAExtendedFiltered,
threshold = pct_threshold,
inter = FALSE)
save(pvcaVoomNoSNM_ExtendedFiltered_FA, file = "pvcaVoomNoSNM_ExtendedFiltered_FA.RData")
PlotPVCA(pvcaVoomNoSNM_ExtendedFiltered_FA, "Voom Normalized")_____no_output_____%%R
pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA <- PVCA(counts = t(snmDataSampleTypeWithExpStrategyFA),
meta = metaPVCAExtendedFiltered,
threshold = pct_threshold,
inter = FALSE)
save(pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA,
file = "pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA.RData")
PlotPVCA(pvcaSampleWithExpStrategySNM_ExtendedFiltered_FA,
"Voom Normalized & SNM Corrected Plus Exp Strategy (Target is Sample Type)")_____no_output_____%%R
1+2_____no_output_____
</code>
# Examining sample and taxa ratio changes due to batch correction_____no_output_____
<code>
%%R
require(ggplot2)
require(matrixStats)
divSNMDataSampleType <- snmDataSampleType / t(snmDataObjSampleType$raw.dat)
taxaMedians <- data.frame(Medians = colMedians(divSNMDataSampleType),
Taxa = colnames(divSNMDataSampleType),
pval = factor(ifelse(snmDataObjSampleType$pval <=0.05,
yes = "P-value <= 0.05", no = "P-value > 0.05")))
sampleMedians <- data.frame(Medians = rowMedians(divSNMDataSampleType),
Samples = rownames(divSNMDataSampleType),
SeqCenter = metadataSamplesAllQC$data_submitting_center_label,
SampleType = metadataSamplesAllQC$sample_type,
CancerType = metadataSamplesAllQC$disease_type)
gt <- ggplot(taxaMedians, aes(x = reorder(Taxa, -Medians), y = Medians, fill = pval)) +
geom_bar(stat = "identity") +
theme(axis.title.x=element_blank(), axis.text.x=element_blank(), axis.ticks.x=element_blank()) +
labs(y = "Median of Normalizing Ratios Per Taxa", x = "Samples", fill = "ANOVA Result Per Taxa")
gs <- ggplot(sampleMedians, aes(x = reorder(Samples, -Medians), y = Medians, fill = CancerType)) +
geom_bar(stat = "identity") + coord_flip() +
theme(axis.text.y=element_blank(), axis.ticks.y=element_blank()) +
scale_y_log10() + labs(y = "Median of Normalizing Ratios Per Sample", x = "Samples", fill='Cancer Type') _____no_output_____%%R
gt_____no_output_____%%R
ggsave(plot = gt,
filename = "snmNormMedianPerTaxaPval.png",
width = 8.5,
height = 6,
units = "in",
dpi = "retina")_____no_output_____%%R
require(pheatmap)
pheatmap(snmDataSampleTypeLMFit$coefficients,
clustering_distance_rows = "correlation",
clustering_distance_cols = "correlation",
show_rownames = FALSE,
show_colnames = FALSE,
filename = "snmLMFitCoefCorr.png")_____no_output_____# %%R
# save(snmDataObjPathStage, snmDataPathStage, metadataSamplesAllQCPath, file = "snmResultsPathBinned.RData")_____no_output_____
</code>
|
{
"repository": "biocore/tcga",
"path": "jupyter_notebooks/TCGA Batch Correction -- Final Analysis.ipynb",
"matched_keywords": [
"limma",
"edgeR"
],
"stars": 60,
"size": 383804,
"hexsha": "cb5832e71d849204f50f4b8fd700b31d73cffbd9",
"max_line_length": 66647,
"avg_line_length": 340.5536823425,
"alphanum_fraction": 0.880480662
}
|
# Notebook from theislab/AutoGeneS
Path: tests_jupyter/special_weights.ipynb
<code>
#import scanpy as sc
import anndata
import numpy as np
import pandas as pd
import importlib
#import pickle
import sys
sys.path.append("..")
import autogenes_____no_output_____data = pd.read_csv('../datasets/GSE75748_bulk_data.csv',index_col='index')
data = data.T.iloc[:,:100].values
ag = autogenes.AutoGeneS(data)_____no_output_____ag.run(ngen=10,offspring_size=100,seed=0,weights=(1,),objectives=('distance',))gen nevals pareto distance
0 100 1 8.27 - 237.81
1 100 1 68.86 - 237.81
2 100 1 141.96 - 237.81
3 100 1 159.75 - 237.81
4 100 1 166.6 - 237.81
5 100 1 230.26 - 237.81
6 100 1 233.67 - 237.81
7 100 1 233.67 - 237.81
8 100 1 237.81 - 237.81
9 100 1 237.81 - 237.81
10 100 1 237.81 - 237.81
ag.fitness_matrix_____no_output_____ag.plot(objectives=(0,0))_____no_output_____ag.run(ngen=10,offspring_size=100,seed=0,weights=(-1,),objectives=('correlation',))gen nevals pareto correlation
0 100 1 3.56 - 14.24
1 100 1 3.56 - 8.08
2 100 1 3.56 - 6.18
3 100 1 3.56 - 5.2
4 100 1 3.56 - 4.4
5 100 1 3.56 - 4.23
6 100 1 3.56 - 4.0
7 100 1 3.56 - 4.0
8 100 1 3.56 - 3.56
9 100 1 3.56 - 3.56
10 100 1 3.56 - 3.56
ag.fitness_matrix_____no_output_____ag.run(ngen=10,offspring_size=100,seed=0,weights=(1,0),objectives=('distance','correlation'))../autogenes/core.py:84: UserWarning: Ignoring objective 'correlation'
warnings.warn(f"Ignoring objective '{str(objectives[i])}'")
ag.fitness_matrix_____no_output_____ag.pareto[0].fitness.wvalues_____no_output_____def num_genes(data): return data.shape[0]
ag.run(ngen=10,offspring_size=100,seed=0,weights=(1,-1,0),objectives=('distance',num_genes,'correlation'))gen nevals pareto distance num_genes
0 100 1 8.27 - 237.81 6.0 - 6.0
1 100 1 68.86 - 237.81 6.0 - 6.0
2 100 1 141.96 - 237.81 6.0 - 6.0
3 100 1 159.75 - 237.81 6.0 - 6.0
4 100 1 166.6 - 237.81 6.0 - 6.0
5 100 1 230.26 - 237.81 6.0 - 6.0
6 100 1 233.67 - 237.81 6.0 - 6.0
7 100 1 233.67 - 237.81 6.0 - 6.0
8 100 1 237.81 - 237.81 6.0 - 6.0
9 100 1 237.81 - 237.81 6.0 - 6.0
10 100 1 237.81 - 237.81 6.0 - 6.0
ag.select()_____no_output_____ag.run(ngen=10,offspring_size=100,seed=0,weights=(1,-1,0.5),objectives=('distance',num_genes,'correlation'),verbose=False)_____no_output_____ag.select()_____no_output_____
</code>
|
{
"repository": "theislab/AutoGeneS",
"path": "tests_jupyter/special_weights.ipynb",
"matched_keywords": [
"Scanpy"
],
"stars": 46,
"size": 11305,
"hexsha": "cb584b5e5653520a36002fe302a583cfd81ced40",
"max_line_length": 956,
"avg_line_length": 31.5782122905,
"alphanum_fraction": 0.4993365767
}
|
# Notebook from SalishSeaCast/analysis-keegan
Path: notebooks/Tools/full_model_timeseries.ipynb
This notebook contains a prototype for a workflow that would allow you to compare observations that were sampled in discrete time to the model output in continuous time. Only the first 14 cells work, and even then they are slow enough to be almost unusable in practice. _____no_output_____
<code>
import sys
sys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')_____no_output_____import numpy as np
import numpy.polynomial.polynomial as poly
import matplotlib.pyplot as plt
import os
import math
import pandas as pd
from erddapy import ERDDAP
import netCDF4 as nc
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools, places
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import pickle
import cmocean
import json
import f90nml
import xarray as xr
import datetime as dt
import Keegan_eval_tools as ket
from collections import OrderedDict
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
%matplotlib inline_____no_output_____
</code>
<code>
year=2010
modelversion='nowcast-green.201905'
PATH= '/results2/SalishSea/nowcast-green.201905/'
datadir='/ocean/eolson/MEOPAR/obs/WADE/ptools_data/ecology'_____no_output_____##### Loading in pickle file data
saveloc='/ocean/kflanaga/MEOPAR/savedData/WADE_nutribot_pickles'
with open(os.path.join(saveloc,f'data_WADE_{modelversion}_{year}.pkl'),'rb') as hh:
data=pickle.load(hh)_____no_output_____#creating new dictionaries that make it easy to call on specific years.
datstat=dict()
for ind, istation in enumerate(data.Station.unique()):
datstat[istation]=data.loc[data.Station == istation]_____no_output_____%%time
start= dt.datetime(2010,1,1)
end=dt.datetime(2010,12,31) # the code called below (evaltools.index_model_files) includes the end date
# in the values returned
basedir='/results2/SalishSea/nowcast-green.201905/'
nam_fmt='nowcast'
flen=1 # files contain 1 day of data each
ftype= 'ptrc_T' # load bio files
tres=24 # 1: hourly resolution; 24: daily resolution <- try changing to 1 and loading hourly data
flist=et.index_model_files(start,end,basedir,nam_fmt,flen,ftype,tres)
# flist contains paths: file pathes; t_0 timestemp of start of each file; t_n: timestamp of start of next file
CPU times: user 18.7 ms, sys: 16.4 ms, total: 35.1 ms
Wall time: 797 ms
# get model i,j of location S3 from places
ij,ii=places.PLACES['S3']['NEMO grid ji']
ik=2 # choose surface level_____no_output_____ii=data[data.Station == 'BUD005'].i.unique()[0]
ij=data[data.Station == 'BUD005'].j.unique()[0]
ik=2_____no_output_____bio=xr.open_mfdataset(flist['paths'])_____no_output_____%%time
tt=bio.time_counter
NO23=bio.nitrate.isel(deptht=ik,y=ij,x=ii) #.cell will give closest to two meters
#this is where we have the depth problem. CPU times: user 2.43 ms, sys: 327 µs, total: 2.76 ms
Wall time: 2.76 ms
def TsByStation_ind2 (df,datstat,regions,obsvar,modvar,year,ylim,figsize=(14,40),loc='lower left',depth=5):
stations=[]
for r in regions:
sta0=df[df['Basin']==r].Station.unique()
stations.append(sta0)
stations = [val for sublist in stations for val in sublist]
fig,ax=plt.subplots(math.ceil(len(stations)/2),2,figsize=figsize)
new_stat = [stations[i:i+2] for i in range(0, len(stations), 2)]
for si,axi in zip(new_stat,ax):
for sj,axj in zip(si,axi):
#The creation of the observed data points
ps=[]
obs0=et._deframe(df.loc[(df['dtUTC'] >= dt.datetime(year,1,1))&(df['dtUTC']<= dt.datetime(year,12,31))&(df['Station']==sj)&(df['Z']<=depth),[obsvar]])
time0=et._deframe(df.loc[(df['dtUTC'] >= dt.datetime(year,1,1))&(df['dtUTC']<= dt.datetime(year,12,31))&(df['Station']==sj)&(df['Z']<=depth),['dtUTC']])
p0,=axj.plot(time0,obs0,'.',color='blue',label=f'Observed {obsvar}',marker='o',fillstyle='none')
ps.append(p0)
# The creation of the model data line
ii=data[data.Station == sj].i.unique()[0]
ij=data[data.Station == sj].j.unique()[0]
ik=0
tt=bio.time_counter
NO23=bio[modvar].isel(deptht=ik,y=ij,x=ii)
p0,=axj.plot(tt,NO23,'-',color='darkorange',label='Nitrate')
ps.append(p0)
#labeling and formatting
axj.set_ylabel('Concentration ($\mu$M)')
axj.set_xlim(tt[0],tt[-1])
axj.legend(handles=ps,prop={'size': 10},loc=loc)
axj.set_xlabel(f'Date',fontsize=13)
axj.set_ylabel(f'{obsvar} ($\mu$M)',fontsize=13)
axj.set_title(f'{df[df.Station==sj].Basin.unique()[0]} ({sj})', fontsize=13)
axj.set_ylim(ylim)
yearsFmt = mdates.DateFormatter('%d %b')
axj.xaxis.set_major_formatter(yearsFmt)
for tick in axj.xaxis.get_major_ticks():
tick.label.set_fontsize(13)
for tick in axj.yaxis.get_major_ticks():
tick.label.set_fontsize(13)
plt.tight_layout()
plt.setp(axj.get_xticklabels(), rotation=30, horizontalalignment='right')_____no_output_____obsvar='NO23'
modvar='nitrate'
regions=['Hood Canal Basin']
lims=(0,40)
TsByStation_ind2(data,datstat,regions,obsvar,modvar,year,lims,figsize=(14,14),loc='lower left')_____no_output_____bio.close()_____no_output_____
</code>
Hmmm. The fact that there are multiple observation points at different depths makes this technique mostly useless. Even if I fix it so that there are multiple lines, it will take so long that it will still be almost useless. Perhaps if I only look at observations at a certain depth it can be at least a little helpful. _____no_output_____
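One way to make that workable, sketched below, is to cut the WADE observations down to a single near-surface depth bin before plotting, so each station contributes one comparable time series. This is only a sketch against the `data` frame loaded above (columns `Station`, `Z`, `dtUTC`, `NO23`); the 5 m cut-off is an arbitrary choice.
```
# Sketch: keep only near-surface observations before comparing to the model.
# Assumes the `data` DataFrame from the pickle file above, with columns
# 'Station', 'Z' (depth in m), 'dtUTC' and the observed variable 'NO23'.
max_depth = 5  # metres; arbitrary near-surface cut-off

surface_obs = data.loc[data['Z'] <= max_depth, ['Station', 'dtUTC', 'Z', 'NO23']]

# e.g. one station's near-surface series, ready to overlay on the model line
bud005 = surface_obs.loc[surface_obs['Station'] == 'BUD005'].sort_values('dtUTC')
print(bud005.head())
```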
<code>
# Now we are actually loading everything from a website/ online database instead of from our own results storage.
server = "https://salishsea.eos.ubc.ca/erddap"
protocol = "griddap"
dataset_id = "ubcSSg3DBiologyFields1hV19-05"
response = "nc"
variables = [
"nitrate",
"time",
]
fourkmlat = 4/110.574
fourkmlon = 4/(111.320*np.cos(50*np.pi/180.))
lon, lat = places.PLACES['S3']['lon lat']
constraints = {
"time>=": "2015-02-01T00:00:00Z",
"time<=": "2015-04-01T00:00:00Z",
}
print(constraints){'time>=': '2015-02-01T00:00:00Z', 'time<=': '2015-04-01T00:00:00Z'}
obs = ERDDAP(server=server, protocol=protocol,)
obs.dataset_id = dataset_id
obs.variables = variables
obs.constraints = constraints_____no_output_____obs
print(obs.get_download_url())https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DBiologyFields1hV19-05.html?nitrate,time&time>=1422748800.0&time<=1427846400.0
obs_pd = obs.to_pandas(index_col="time (UTC)", parse_dates=True,).dropna()
obs_pd_____no_output_____server = "https://salishsea.eos.ubc.ca/erddap"
protocol = "tabledap"
dataset_id = "ubcONCTWDP1mV18-01"
response = "nc"
variables = [
"latitude",
"longitude",
"chlorophyll",
"time",
]
fourkmlat = 4/110.574
fourkmlon = 4/(111.320*np.cos(50*np.pi/180.))
lon, lat = places.PLACES['S3']['lon lat']
constraints = {
"time>=": "2015-02-01T00:00:00Z",
"time<=": "2015-04-01T00:00:00Z",
"latitude>=": lat - fourkmlat,
"latitude<=": lat + fourkmlat,
"longitude>=": lon - fourkmlon,
"longitude<=": lon + fourkmlon,
}
print(constraints)_____no_output_____obs = ERDDAP(server=server, protocol=protocol,)
obs.dataset_id = dataset_id
obs.variables = variables
obs.constraints = constraints_____no_output_____obs_pd = obs.to_pandas(index_col="time (UTC)", parse_dates=True,).dropna()_____no_output_____obs_pd_____no_output_____
</code>
|
{
"repository": "SalishSeaCast/analysis-keegan",
"path": "notebooks/Tools/full_model_timeseries.ipynb",
"matched_keywords": [
"ecology"
],
"stars": null,
"size": 140037,
"hexsha": "cb5aac5d03124a6a929312ef8343d5774c72caed",
"max_line_length": 116360,
"avg_line_length": 281.7645875252,
"alphanum_fraction": 0.9023829417
}
|
# Notebook from bryansho/PCOS_WGS_16S_metabolome
Path: Revision/ANCOM/WGS/WGS_ANCOM.ipynb
# ANCOM: WGS_____no_output_____
<code>
library(tidyverse)
library(magrittr)
source("/Users/Cayla/ANCOM/scripts/ancom_v2.1.R")_____no_output_____
</code>
## T2_____no_output_____
<code>
t2 <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/WGS/T2/T2_filtered_greater_00001.csv')
head(t2,n=1)Warning message:
“Missing column names filled in: 'X1' [1]”
── Column specification ──────────────────────────────────────────────────
cols(
.default = col_double(),
X1 = col_character()
)
ℹ Use `spec()` for the full column specifications.
t2.meta <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/WGS/T2/Deseq2_T2_mapping.csv')
head(t2.meta,n=1)
── Column specification ──────────────────────────────────────────────────
cols(
Sample = col_character(),
Treatment = col_character(),
Timepoint = col_double()
)
# subset data
t2.meta.PvL <- t2.meta %>% filter(Treatment == 'Placebo' | Treatment == 'Let')
t2.PvL <- t2 %>% select(X1, any_of(t2.meta.PvL$Sample)) %>% column_to_rownames('X1')
t2.meta.LvLCH <- t2.meta %>% filter(Treatment == 'Let' | Treatment == 'CoL')
t2.LvLCH <- t2 %>% select(X1, any_of(t2.meta.LvLCH$Sample)) %>% column_to_rownames('X1')_____no_output_____
</code>
### Placebo vs. Let_____no_output_____
<code>
# Data Preprocessing
# feature_table is a df/matrix with features as rownames and samples in columns
feature_table <- t2.PvL
# character vector/column containing sample IDs
sample_var <- "Sample"
# grouping variable to detect structural zeros and outliers
group_var <- "Treatment"
# 0 < fraction < 1. For each feature, observations with proportion of mixture
# distribution < out_cut will be detected as outlier zeros;
# > (1 - out_cut) will be detected as outlier values
out_cut <- 0.05
# 0 < fraction < 1. Features with proportion of zeros > zero_cut are removed.
zero_cut <- 0.90
# samples with library size < lib_cut will be excluded in the analysis
lib_cut <- 0
# TRUE indicates a taxon would be classified as a structural zero in the
# corresponding experimental group using its asymptotic lower bound. More
# specifically, ```neg_lb = TRUE``` indicates you are using both criteria
# stated in section 3.2 of [ANCOM-II]
# (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5682008/) to detect structural
# zeros; Otherwise, ```neg_lb = FALSE``` will only use the equation 1 in
# section 3.2 of [ANCOM-II](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5682008/)
# for declaring structural zeros.
neg_lb <- TRUE
prepro <- feature_table_pre_process(feature_table, t2.meta.PvL, sample_var, group_var,
out_cut, zero_cut, lib_cut, neg_lb)
# Preprocessed feature table
feature_table1 <- prepro$feature_table
# Preprocessed metadata
meta_data1 <- prepro$meta_data
# Structural zero info
struc_zero1 <- prepro$structure_zeros _____no_output_____# Run ANCOM
# name of the main variable of interest (character)
main_var <- "Treatment"
p_adj_method <- "BH" # number of taxa > 10, therefore use Benjamini-Hochberg correction
alpha <- 0.05
# character string representing the formula for adjustment
adj_formula <- NULL
# character string representing the formula for random effects in lme
rand_formula <- NULL
t_start <- Sys.time()
res <- ANCOM(feature_table1, meta_data1, struc_zero1, main_var, p_adj_method,
alpha, adj_formula, rand_formula)
t_end <- Sys.time()
t_end - t_start
# write output to file
# output contains the "W" statistic for each taxa - a count of the number of times
# the null hypothesis is rejected for each taxa
# detected_x are logicals indicating detection at specified FDR cut-off
write_csv(res$out, "2021-07-25_WGS_T2_PvL_ANCOM_data.csv")_____no_output_____n_taxa <- ifelse(is.null(struc_zero1), nrow(feature_table1), sum(apply(struc_zero1, 1, sum) == 0))
res$fig + scale_y_continuous(sec.axis = sec_axis(~ . * 100 / n_taxa, name = 'W proportion'))
ggsave(filename = paste(lubridate::today(),'volcano_WGS_T2_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')_____no_output_____# to find most significant taxa, I will sort the data
# 1) y (W statistic)
# 2) according to the absolute value of CLR mean difference
sig <- res$fig$data %>%
mutate(taxa_id = str_split_fixed(res$fig$data$taxa_id, pattern='s_', n=2)[,2]) %>% # remove leading 's_'
arrange(desc(y), desc(abs(x))) %>%
filter(y >= (0.7*n_taxa), !is.na(taxa_id)) # keep significant taxa, remove unidentified taxa
write.csv(sig, paste(lubridate::today(),'SigFeatures_WGS_T2_PvL.csv',sep='_'))_____no_output_____# save features with W > 0
non.zero <- res$fig$data %>%
arrange(desc(y), desc(abs(x))) %>%
mutate(taxa_id = str_split_fixed(res$fig$data$taxa_id, pattern='s_', n=2)[,2], # remove leading 's_'
W.proportion = y/(n_taxa-1)) %>% # add W
filter(y > 0) %>%
rowid_to_column()
write.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_WGS_T2_PvL.csv',sep='_'))_____no_output_____# plot top 20 taxa
sig %>%
slice_head(n=20) %>%
ggplot(aes(x, taxa_id)) +
geom_point(aes(size = 1)) +
theme_bw(base_size = 16) +
guides(size = FALSE) +
labs(x = 'CLR Mean Difference', y = NULL)
ggsave(filename = paste(lubridate::today(),'Top20_WGS_T2_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina', width = 10)Saving 10 x 7 in image
</code>
### Let v Let-co-housed_____no_output_____
<code>
# Data Preprocessing
feature_table <- t2.LvLCH
sample_var <- "Sample"
group_var <- "Treatment"
out_cut <- 0.05
zero_cut <- 0.90
lib_cut <- 0
neg_lb <- TRUE
prepro <- feature_table_pre_process(feature_table, t2.meta.LvLCH, sample_var, group_var,
out_cut, zero_cut, lib_cut, neg_lb)
# Preprocessed feature table
feature_table2 <- prepro$feature_table
# Preprocessed metadata
meta_data2 <- prepro$meta_data
# Structural zero info
struc_zero2 <- prepro$structure_zeros _____no_output_____# Run ANCOM
# name of the main variable of interest (character)
main_var <- "Treatment"
p_adj_method <- "BH" # number of taxa > 10, therefore use Benjamini-Hochberg correction
alpha <- 0.05
# character string representing the formula for adjustment
adj_formula <- NULL
# character string representing the formula for random effects in lme
rand_formula <- NULL
t_start <- Sys.time()
res2 <- ANCOM(feature_table2, meta_data2, struc_zero2, main_var, p_adj_method,
alpha, adj_formula, rand_formula)
t_end <- Sys.time()
t_end - t_start
# write output to file
# output contains the "W" statistic for each taxa - a count of the number of times
# the null hypothesis is rejected for each taxa
# detected_x are logicals indicating detection at specified FDR cut-off
write_csv(res2$out, "2021-07-25_WGS_T2_LvLCH_ANCOM_data.csv")_____no_output_____n_taxa <- ifelse(is.null(struc_zero2), nrow(feature_table2), sum(apply(struc_zero2, 1, sum) == 0))
res2$fig + scale_y_continuous(sec.axis = sec_axis(~ . * 100 / n_taxa, name = 'W proportion'))
ggsave(filename = paste(lubridate::today(),'volcano_WGS_T2_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')_____no_output_____# save features with W > 0
non.zero <- res2$fig$data %>%
arrange(desc(y), desc(abs(x))) %>%
mutate(taxa_id = str_split_fixed(res2$fig$data$taxa_id, pattern='s_', n=2)[,2], # remove leading 's_'
W.proportion = y/(n_taxa-1)) %>% # add W
filter(y > 0) %>%
rowid_to_column()
write.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_WGS_T2_LvLCH.csv',sep='_'))_____no_output_____# to find most significant taxa, I will sort the data
# 1) y (W statistic)
# 2) according to the absolute value of CLR mean difference
sig <- res2$fig$data %>%
mutate(taxa_id = str_split_fixed(res2$fig$data$taxa_id, pattern='s_', n=2)[,2]) %>% # remove leading 's_'
arrange(desc(y), desc(abs(x))) %>%
filter(y >= (0.7*n_taxa), !is.na(taxa_id)) # keep significant taxa, remove unidentified taxa
write.csv(sig, paste(lubridate::today(),'SigFeatures_WGS_T2_LvLCH.csv',sep='_'))_____no_output_____# plot top 20 taxa
sig %>%
slice_head(n=20) %>%
ggplot(aes(x, taxa_id)) +
geom_point(aes(size = 1)) +
theme_bw(base_size = 16) +
guides(size = FALSE) +
labs(x = 'CLR Mean Difference', y = NULL)
ggsave(filename = paste(lubridate::today(),'Top20_WGS_T2_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina', width = 10)Saving 10 x 7 in image
</code>
## T5_____no_output_____
<code>
t5 <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/WGS/T5/T5_filtered_greater_00001.csv')
head(t5,n=1)Warning message:
“Missing column names filled in: 'X1' [1]”
── Column specification ──────────────────────────────────────────────────
cols(
.default = col_double(),
X1 = col_character()
)
ℹ Use `spec()` for the full column specifications.
t5.meta <- read_csv('https://github.com/bryansho/PCOS_WGS_16S_metabolome/raw/master/DESEQ2/WGS/T5/Deseq2_T5_mapping.csv')
head(t5.meta,n=1)
── Column specification ──────────────────────────────────────────────────
cols(
SampleID = col_character(),
Treatment = col_character(),
Timepoint = col_double()
)
# subset data
t5.meta.PvL <- t5.meta %>% filter(Treatment == 'Placebo' | Treatment == 'Let')
t5.PvL <- t5 %>% select(X1, any_of(t5.meta.PvL$SampleID)) %>% column_to_rownames('X1')
t5.meta.LvLCH <- t5.meta %>% filter(Treatment == 'Let' | Treatment == 'CoL')
t5.LvLCH <- t5 %>% select(X1, any_of(t5.meta.LvLCH$SampleID)) %>% column_to_rownames('X1')_____no_output_____
</code>
### Placebo v Let_____no_output_____
<code>
# Data Preprocessing
feature_table <- t5.PvL
sample_var <- "SampleID"
group_var <- "Treatment"
out_cut <- 0.05
zero_cut <- 0.90
lib_cut <- 0
neg_lb <- TRUE
prepro <- feature_table_pre_process(feature_table, t5.meta.PvL, sample_var, group_var,
out_cut, zero_cut, lib_cut, neg_lb)
# Preprocessed feature table
feature_table3 <- prepro$feature_table
# Preprocessed metadata
meta_data3 <- prepro$meta_data
# Structural zero info
struc_zero3 <- prepro$structure_zeros _____no_output_____# Run ANCOM
# name of the main variable of interest (character)
main_var <- "Treatment"
p_adj_method <- "BH" # number of taxa > 10, therefore use Benjamini-Hochberg correction
alpha <- 0.05
# character string representing the formula for adjustment
adj_formula <- NULL
# character string representing the formula for random effects in lme
rand_formula <- NULL
t_start <- Sys.time()
res3 <- ANCOM(feature_table3, meta_data3, struc_zero3, main_var, p_adj_method,
alpha, adj_formula, rand_formula)
t_end <- Sys.time()
t_end - t_start
# write output to file
# output contains the "W" statistic for each taxa - a count of the number of times
# the null hypothesis is rejected for each taxa
# detected_x are logicals indicating detection at specified FDR cut-off
write_csv(res3$out, "2021-07-25_WGS_T5_PvL_ANCOM_data.csv")_____no_output_____n_taxa <- ifelse(is.null(struc_zero3), nrow(feature_table3), sum(apply(struc_zero3, 1, sum) == 0))
res3$fig + scale_y_continuous(sec.axis = sec_axis(~ . * 100 / n_taxa, name = 'W proportion'))
ggsave(filename = paste(lubridate::today(),'volcano_WGS_T5_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')_____no_output_____# save features with W > 0
non.zero <- res3$fig$data %>%
arrange(desc(y), desc(abs(x))) %>%
mutate(taxa_id = str_split_fixed(res3$fig$data$taxa_id, pattern='s_', n=2)[,2], # remove leading 's_'
W.proportion = y/(n_taxa-1)) %>% # add W
filter(y > 0) %>%
rowid_to_column()
write.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_WGS_T5_PvL.csv',sep='_'))_____no_output_____# to find most significant taxa, I will sort the data
# 1) y (W statistic)
# 2) according to the absolute value of CLR mean difference
sig <- res3$fig$data %>%
mutate(taxa_id = str_split_fixed(res3$fig$data$taxa_id, pattern='s_', n=2)[,2]) %>% # remove leading 's_'
arrange(desc(y), desc(abs(x))) %>%
filter(y >= (0.7*n_taxa), !is.na(taxa_id)) # keep significant taxa, remove unidentified taxa
write.csv(sig, paste(lubridate::today(),'SigFeatures_WGS_T5_PvL.csv',sep='_'))_____no_output_____# plot top 20 taxa
sig %>%
slice_head(n=20) %>%
ggplot(aes(x, taxa_id)) +
geom_point(aes(size = 1)) +
theme_bw(base_size = 16) +
guides(size = FALSE) +
labs(x = 'CLR Mean Difference', y = NULL)
ggsave(filename = paste(lubridate::today(),'Top20_WGS_T5_PvL.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina', width = 10)Saving 10 x 7 in image
</code>
### Let v Let-co-housed_____no_output_____
<code>
# Data Preprocessing
feature_table <- t5.LvLCH
sample_var <- "SampleID"
group_var <- "Treatment"
out_cut <- 0.05
zero_cut <- 0.90
lib_cut <- 0
neg_lb <- TRUE
prepro <- feature_table_pre_process(feature_table, t5.meta.LvLCH, sample_var, group_var,
out_cut, zero_cut, lib_cut, neg_lb)
# Preprocessed feature table
feature_table4 <- prepro$feature_table
# Preprocessed metadata
meta_data4 <- prepro$meta_data
# Structural zero info
struc_zero4 <- prepro$structure_zeros _____no_output_____# Run ANCOM
# name of the main variable of interest (character)
main_var <- "Treatment"
p_adj_method <- "BH" # number of taxa > 10, therefore use Benjamini-Hochberg correction
alpha <- 0.05
# character string representing the formula for adjustment
adj_formula <- NULL
# character string representing the formula for random effects in lme
rand_formula <- NULL
t_start <- Sys.time()
res4 <- ANCOM(feature_table4, meta_data4, struc_zero4, main_var, p_adj_method,
alpha, adj_formula, rand_formula)
t_end <- Sys.time()
t_end - t_start
# write output to file
# output contains the "W" statistic for each taxa - a count of the number of times
# the null hypothesis is rejected for each taxa
# detected_x are logicals indicating detection at specified FDR cut-off
write_csv(res4$out, "2021-07-25_WGS_T5_LvLCH_ANCOM_data.csv")_____no_output_____n_taxa <- ifelse(is.null(struc_zero4), nrow(feature_table4), sum(apply(struc_zero4, 1, sum) == 0))
res4$fig + scale_y_continuous(sec.axis = sec_axis(~ . * 100 / n_taxa, name = 'W proportion'))
ggsave(filename = paste(lubridate::today(),'volcano_WGS_T5_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina')Saving 7 x 7 in image
# save features with W > 0
non.zero <- res4$fig$data %>%
arrange(desc(y), desc(abs(x))) %>%
mutate(taxa_id = str_split_fixed(res4$fig$data$taxa_id, pattern='s_', n=2)[,2], # remove leading 's_'
W.proportion = y/(n_taxa-1)) %>% # add W
filter(y > 0) %>%
rowid_to_column()
write.csv(non.zero, paste(lubridate::today(),'NonZeroW_Features_WGS_T5_LvLCH.csv',sep='_'))_____no_output_____# to find most significant taxa, I will sort the data
# 1) y (W statistic)
# 2) according to the absolute value of CLR mean difference
sig <- res4$fig$data %>%
mutate(taxa_id = str_split_fixed(res4$fig$data$taxa_id, pattern='s_', n=2)[,2]) %>% # remove leading 's_'
arrange(desc(y), desc(abs(x))) %>%
filter(y >= (0.7*n_taxa), !is.na(taxa_id)) # keep significant taxa, remove unidentified taxa
write.csv(sig, paste(lubridate::today(),'SigFeatures_WGS_T5_LvLCH.csv',sep='_'))_____no_output_____# plot top 20 taxa
sig %>%
slice_head(n=20) %>%
ggplot(aes(x, taxa_id)) +
geom_point(aes(size = 1)) +
theme_bw(base_size = 16) +
guides(size = FALSE) +
labs(x = 'CLR Mean Difference', y = NULL)
ggsave(filename = paste(lubridate::today(),'Top20_WGS_T5_LvLCH.pdf',sep='_'), bg = 'transparent', device = 'pdf', dpi = 'retina', width=10)ERROR while rich displaying an object: Error: Aesthetics must be either length 1 or the same as the data (1): x and y
Traceback:
1. FUN(X[[i]], ...)
2. tryCatch(withCallingHandlers({
. if (!mime %in% names(repr::mime2repr))
. stop("No repr_* for mimetype ", mime, " in repr::mime2repr")
. rpr <- repr::mime2repr[[mime]](obj)
. if (is.null(rpr))
. return(NULL)
. prepare_content(is.raw(rpr), rpr)
. }, error = error_handler), error = outer_handler)
3. tryCatchList(expr, classes, parentenv, handlers)
4. tryCatchOne(expr, names, parentenv, handlers[[1L]])
5. doTryCatch(return(expr), name, parentenv, handler)
6. withCallingHandlers({
. if (!mime %in% names(repr::mime2repr))
. stop("No repr_* for mimetype ", mime, " in repr::mime2repr")
. rpr <- repr::mime2repr[[mime]](obj)
. if (is.null(rpr))
. return(NULL)
. prepare_content(is.raw(rpr), rpr)
. }, error = error_handler)
7. repr::mime2repr[[mime]](obj)
8. repr_text.default(obj)
9. paste(capture.output(print(obj)), collapse = "\n")
10. capture.output(print(obj))
11. evalVis(expr)
12. withVisible(eval(expr, pf))
13. eval(expr, pf)
14. eval(expr, pf)
15. print(obj)
16. print.ggplot(obj)
17. ggplot_build(x)
18. ggplot_build.ggplot(x)
19. by_layer(function(l, d) l$compute_aesthetics(d, plot))
20. f(l = layers[[i]], d = data[[i]])
21. l$compute_aesthetics(d, plot)
22. f(..., self = self)
23. check_aesthetics(evaled, n)
24. abort(glue("Aesthetics must be either length 1 or the same as the data ({n}): ",
. glue_collapse(names(which(!good)), ", ", last = " and ")))
25. signal_abort(cnd)
Saving 10 x 7 in image
</code>
|
{
"repository": "bryansho/PCOS_WGS_16S_metabolome",
"path": "Revision/ANCOM/WGS/WGS_ANCOM.ipynb",
"matched_keywords": [
"DESeq2"
],
"stars": 3,
"size": 367115,
"hexsha": "cb5ad604a1f9273d1cd5785e9e5328f94877bef2",
"max_line_length": 82300,
"avg_line_length": 303.1502890173,
"alphanum_fraction": 0.9115508764
}
|
# Notebook from georgedeath/egreedy
Path: notebooks/New_fitness_functions_and_acquisition_functions.ipynb
# Using a new function to evaluate or evaluating a new acquisition function_____no_output_____In this notebook we describe how to integrate a new fitness function to the testing framework as well as how to integrate a new acquisition function._____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt
# add the egreedy module to the path (one directory up from this)
import sys, os
sys.path.append(os.path.realpath(os.path.pardir))_____no_output_____
</code>
## New fitness function_____no_output_____The `perform_experiment` function in the `optimizer` class, used to carry out the optimisation runs (see its docstring and `run_all_experiments.py` for usage examples), imports a fitness **class**. This class, when instantiated is also callable. The class is imported from the `test_problems` module. Therefore, the easiest way to incorporate your own fitness function is to add it to the `test_problems` module by creating a python file in the `egreedy/test_problems/` directory and adding a line importing it into the namespace (see `egreedy/test_problems/__init__.py` for examples) so that it can be directly imported from `test_problems`.
If, for example, your fitness class is called `xSquared` and is located in the file `xs.py`, you would place the python file in the directory `egreedy/test_problems` and add the line:
```
from .xs import xSquared
```
to the `egreedy/test_problems/__init__.py` file.
We will now detail how to structure your fitness class and show the required class methods by creating a new fitness class for the function
\begin{equation}
f( \mathbf{x} ) = \sum_{i=1}^2 x_i^2,
\end{equation}
where $\mathbf{x} \in [-5, 5]^2.$_____no_output_____
<code>
class xSquared:
"""Example fitness class.
.. math::
f(x) = \sum_{i=1}^2 x_i^2
This demonstration class shows all the required attributes and
functionality of the fitness function class.
"""
def __init__(self):
"""Initialisation function.
This is called when the class is instantiated and sets up its
attributes as well as any other internal variables that may
be needed.
"""
# problem dimensionality
self.dim = 2
# lower and upper bounds for each dimension (must be numpy.ndarray)
self.lb = np.array([-5., -5.])
self.ub = np.array([5., 5.])
# location(s) of the optima (optional - not always known)
self.xopt = np.array([0., 0.])
# its/their fitness value(s)
self.yopt = np.array([0.])
# callable constraint function for the problem - should return
# True if the argument value is **valid** - if no constraint function
# is required then this can take the value of None
self.cf = None
def __call__(self, x):
"""Main callable function.
This is called after the class is instantiated, e.g.
>>> f = xSquared()
>>> f(np.array([2., 2.]))
array([8.])
Note that it is useful to have a function that is able to deal with
multiple inputs, which should be a numpy.ndarray of shape (N, dim)
"""
# ensure the input is at least 2d, this will cause one-dimensional
# vectors to be reshaped to shape (1, dim)
x = np.atleast_2d(x)
# evaluate the function
val = np.sum(np.square(x), axis=1)
# return the evaluations
return val_____no_output_____
</code>
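Before wiring the class into the test suite, a minimal sanity check of the class defined above (nothing here beyond the attributes and call behaviour shown in the docstring, and assuming the imports from the first cell) might look like:
```
# Minimal check of the xSquared class defined above.
f = xSquared()

# a single decision vector is reshaped to (1, dim) and evaluated
print(f(np.array([2., 2.])))   # expected: [8.]

# a batch of shape (N, dim) gives N evaluations
X_test = np.array([[0., 0.], [1., -1.], [-5., 5.]])
print(f(X_test))               # expected: [ 0.  2. 50.]

# attributes the optimiser relies on
print(f.dim, f.lb, f.ub, f.yopt)
```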
This class can then either be placed in the directories discussed above and used to evaluate multiple techniques, or used for testing purposes._____no_output_____### Optimising the new test function with an acquisition function_____no_output_____The following code outlines how to optimise your newly created test function with the $\epsilon$-greedy with Pareto front selection ($\epsilon$-PF) algorithm._____no_output_____
<code>
from pyDOE2 import lhs
from egreedy.optimizer import perform_BO_iteration
# ---- instantiate the test problem
f = xSquared()
# ---- Generate testing data by Latin hypercube sampling across the domain
n_training = 2 * f.dim
# LHS sample in [0, 1]^2 and rescale to problem domain
Xtr = lhs(f.dim, n_training, criterion='maximin')
Xtr = (f.ub - f.lb) * Xtr + f.lb
# expensively evaluate and ensure shape is (n_training, 1)
Ytr = np.reshape(f(Xtr), (n_training, 1))
# ---- Select an acquisition function with optimiser.
# In this case we select e-greedy with Pareto front selection (e-PF)
# known as eFront.
#
# All the acquisition functions have the same parameters:
# lb : lower-bound constraints (numpy.ndarray)
# ub : upper-bound constraints (numpy.ndarray)
# acq_budget : max number of calls to the GP model
# cf : callable constraint function that returns True if
# the argument vector is VALID. Optional, has a value of None
# if not used
# acquisition_args : optional dictionary containing key:value pairs
# of arguments to a specific acquisition function.
# e.g. for an e-greedy method then the dict
# {'epsilon': 0.1} would dictate the epsilon value.
# e-greedy with Pareto front selection (e-PF), known as eFront
from egreedy.acquisition_functions import eFront
# instantiate the optimiser with a budget of 5000d and epsilon = 0.1
acq_budget = 5000 * f.dim
acquisition_args = {'epsilon': 0.1}
acq_func = eFront(lb=f.lb, ub=f.ub, cf=None, acq_budget=acq_budget,
acquisition_args=acquisition_args)
# ---- Perform the bayesian optimisation loop for a total budget of 20
# function evaluations (including those used for LHS sampling)
total_budget = 20
while Xtr.shape[0] < total_budget:
# perform one iteration of BO:
# this returns the new location and function value (Xtr, Ynew) as well
# as the trained model used to select the new location
Xnew, Ynew, model = perform_BO_iteration(Xtr, Ytr,f, acq_func, verbose=True)
# augment the training data and repeat
Xtr = np.concatenate((Xtr, np.atleast_2d(Xnew)))
Ytr = np.concatenate((Ytr, np.atleast_2d(Ynew)))
print('Best function value so far: {:g}'.format(np.min(Ytr)))
print()Training a GP model with 4 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [2.74739209 1.85140059]
Function value: 10.9758
Best function value so far: 2.34029
Training a GP model with 5 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.8734869 0.23262576]
Function value: 0.817094
Best function value so far: 0.817094
Training a GP model with 6 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.48106594 0.1616895 ]
Function value: 0.257568
Best function value so far: 0.257568
Training a GP model with 7 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.18353004 0.31916069]
Function value: 0.135547
Best function value so far: 0.135547
Training a GP model with 8 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.13640659 0.16686483]
Function value: 0.0464506
Best function value so far: 0.0464506
Training a GP model with 9 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.01887471 0.03552285]
Function value: 0.00161813
Best function value so far: 0.00161813
Training a GP model with 10 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00498028 0.00452991]
Function value: 4.53233e-05
Best function value so far: 4.53233e-05
Training a GP model with 11 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00296791 0.01886975]
Function value: 0.000364876
Best function value so far: 4.53233e-05
Training a GP model with 12 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00228287 -0.0035852 ]
Function value: 1.80651e-05
Best function value so far: 1.80651e-05
Training a GP model with 13 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00136503 0.01035887]
Function value: 0.00010917
Best function value so far: 1.80651e-05
Training a GP model with 14 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.00031969 -0.00034095]
Function value: 2.18449e-07
Best function value so far: 2.18449e-07
Training a GP model with 15 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.0022843 0.00156171]
Function value: 7.65699e-06
Best function value so far: 2.18449e-07
Training a GP model with 16 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.00435965 0.00438578]
Function value: 3.82417e-05
Best function value so far: 2.18449e-07
Training a GP model with 17 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [-0.0038385 -0.01013353]
Function value: 0.000117422
Best function value so far: 2.18449e-07
Training a GP model with 18 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.01889642 0.02033263]
Function value: 0.000770491
Best function value so far: 2.18449e-07
Training a GP model with 19 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [ 0.00050108 -0.02413065]
Function value: 0.000582539
Best function value so far: 2.18449e-07
</code>
The plot below shows the difference between the best seen function value and the true minimum, i.e. $|f^\star - f_{min}|$, over each iteration._____no_output_____
<code>
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.semilogy(np.minimum.accumulate(np.abs(Ytr - f.yopt)))
ax.set_xlabel('Iteration', fontsize=15)
ax.set_ylabel('$|f^\star - f_{min}|$', fontsize=15)
plt.show()_____no_output_____
</code>
## New acquisition function_____no_output_____We now detail how to create your own acquisition function class and integrate it into the testing suite.
In a similar manner to the fitness function classes, the acquisition function classes are imported from the `egreedy.acquisition_functions` module, with the specific classes available determined by the `__init__.py` file in the same module.
If, for example, your acquisition function class is called `greed` and is located in the file `gr.py`, you would place the python file in the directory `egreedy/acquisition_functions` and add the line:
```
from .gr import greed
```
to the `egreedy/acquisition_functions/__init__.py` file.
The python file `egreedy/acquisition_functions/acq_func_optimisers.py` contains base classes for the acquisition function classes. We will now demonstrate how to implement two simple acquisition functions and then show how to optimise one of the test functions included in the suite._____no_output_____The `BaseOptimiser` class is the base acquisition function class that implements the standard interface for acquisition function optimizers. It only contains an initialisation function with several arguments:
- lb: lower-bound constraint
- ub: upper-bound constraint
- acq_budget : maximum number of calls to the Gaussian Process
- cf : callable constraint function that returns True if the argument decision vector is VALID (optional, default value: None)
- acquisition_args : Optional dictionary containing additional arguments that are unpacked into key=value arguments for an internal acquisition function; e.g. {'epsilon':0.1}.
The `ParetoFrontOptimiser` class implements the base class as well as an additional function named `get_front(model)` that takes in a GPRegression model from GPy and approximates its Pareto front of model prediction and uncertainty. It returns the decision vectors belonging to the members of the front, an array containing their corresponding predicted values, and an array containing the prediction uncertainties.
We first create a simple acquisition function, extending the base class, that generates uniform samples in space and uses the Gaussian Process's mean prediction to select the best (lowest value) predicted location._____no_output_____
<code>
from egreedy.acquisition_functions.acq_func_optimisers import BaseOptimiser
class greedy_sample(BaseOptimiser):
"""Greedy function that uniformly samples the GP posterior
and returns the location with the best (lowest) mean predicted value.
"""
# note we do not need to implement an __init__ method because the
# base class already does this. Here we will include a commented
# version for clarity.
# def __init__(self, lb, ub, acq_budget, cf=None, acquisition_args={}):
# self.lb = lb
# self.ub = ub
# self.cf = cf
# self.acquisition_args = acquisition_args
# self.acq_budget = acq_budget
def __call__(self, model):
"""Returns the location with the best (lowest) predicted value
after uniformly sampling decision space.
"""
# generate samples
X = np.random.uniform(self.lb, self.ub,
size=(self.acq_budget, self.lb.size))
# evaluate them with the gp
mu, sigmasqr = model.predict(X, full_cov=False)
# find the index of the best value
argmin = np.argmin(mu.flat)
# return the best found value
return X[argmin, :]_____no_output_____from egreedy.acquisition_functions.acq_func_optimisers import ParetoFrontOptimiser
class greedy_pfront(ParetoFrontOptimiser):
"""Exploitative method that calculates the approximate Pareto front
of a GP model and returns the Pareto set member that has the best
(lowest) predicted value.
"""
# again here we do not need to implement an __init__ method.
def __call__(self, model):
"""Returns the location with the best (lowest) predicted value
from the approximate Pareto set of the GP's predicted value and
its corresponding uncertainty.
"""
# approximate the pareto set; here X are the locations of the
# members of the set and mu and sigma are their predicted values
# and uncertainty
X, mu, sigma = self.get_front(model)
# find the index of the best value
argmin = np.argmin(mu.flat)
# return the best found value
return X[argmin, :]_____no_output_____
</code>
We now create a similar script to the one used above in the function example. This time we will optimise the `push4` function included in the test suite and load the training data associated with the first optimisation run that all techniques evaluated in the paper carried out.
Note that in this case the training data contains arguments to be passed into the function during instantiation. This is because the `push4` runs are evaluated on a *problem instance* basis._____no_output_____
<code>
from egreedy.optimizer import perform_BO_iteration
from egreedy import test_problems
# ---- optimisation run details
problem_name = 'push4'
run_no = 1
acq_budget = 5000 * 4 # because the problem dimensionality is 4
total_budget = 25
# ---- load the training data
data_file = f'../training_data/{problem_name:}_{run_no:}.npz'
with np.load(data_file, allow_pickle=True) as data:
Xtr = data['arr_0']
Ytr = data['arr_1']
if 'arr_2' in data:
f_optional_arguments = data['arr_2'].item()
else:
f_optional_arguments = {}
# ---- instantiate the test problem
f_class = getattr(test_problems, problem_name)
f = f_class(**f_optional_arguments)
# ---- instantiate the acquisition function we created earlier
acq_func = greedy_sample(lb=f.lb, ub=f.ub, cf=None, acq_budget=acq_budget,
acquisition_args=acquisition_args)
while Xtr.shape[0] < total_budget:
# perform one iteration of BO:
# this returns the new location and function value (Xtr, Ynew) as well
# as the trained model used to select the new location
Xnew, Ynew, model = perform_BO_iteration(Xtr, Ytr, f, acq_func, verbose=True)
# augment the training data and repeat
Xtr = np.concatenate((Xtr, np.atleast_2d(Xnew)))
Ytr = np.concatenate((Ytr, np.atleast_2d(Ynew)))
print('Best function value so far: {:g}'.format(np.min(Ytr)))
print()Training a GP model with 8 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.99363897 0.49599772 0.99868045 0.97496078]
Function value: 8.38627
Best function value so far: 2.01458
Training a GP model with 9 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.99735174 0.05688416 0.95975881 0.09915249]
Function value: 6.39738
Best function value so far: 2.01458
Training a GP model with 10 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.99426994 0.48176988 0.04539143 0.52413627]
Function value: 4.30593
Best function value so far: 2.01458
Training a GP model with 11 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.97676317 0.20990921 0.03083971 0.94382162]
Function value: 4.32446
Best function value so far: 2.01458
Training a GP model with 12 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.98233984 0.45681325 0.32258875 0.65155143]
Function value: 1.9939
Best function value so far: 1.9939
Training a GP model with 13 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.98055465 0.40088946 0.31056256 0.72057199]
Function value: 2.33662
Best function value so far: 1.9939
Training a GP model with 14 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.87735007 0.4604941 0.26450847 0.67341351]
Function value: 0.679518
Best function value so far: 0.679518
Training a GP model with 15 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.77766143 0.49442689 0.30924655 0.68061374]
Function value: 1.68864
Best function value so far: 0.679518
Training a GP model with 16 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.86687143 0.5127836 0.19990311 0.69685941]
Function value: 0.951315
Best function value so far: 0.679518
Training a GP model with 17 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.85716692 0.48996299 0.19234286 0.6498631 ]
Function value: 2.24849
Best function value so far: 0.679518
Training a GP model with 18 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.87333699 0.56790019 0.27448691 0.80146525]
Function value: 1.56872
Best function value so far: 0.679518
Training a GP model with 19 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.88507765 0.4508956 0.22725763 0.75254124]
Function value: 1.00578
Best function value so far: 0.679518
Training a GP model with 20 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.90001785 0.5103195 0.25189873 0.67868624]
Function value: 2.1292
Best function value so far: 0.679518
Training a GP model with 21 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.82927784 0.53929787 0.12721344 0.76115955]
Function value: 1.65907
Best function value so far: 0.679518
Training a GP model with 22 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.88407381 0.3162302 0.39928955 0.58348638]
Function value: 1.58757
Best function value so far: 0.679518
Training a GP model with 23 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.02661341 0.95209027 0.97135965 0.98684695]
Function value: 7.67696
Best function value so far: 0.679518
Training a GP model with 24 data points.
Optimising the acquisition function.
Expensively evaluating the new selected location.
New location : [0.89053271 0.37854249 0.30893649 0.67180526]
Function value: 3.55493
Best function value so far: 0.679518
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(np.minimum.accumulate(np.abs(Ytr - f.yopt)))
ax.set_xlabel('Iteration', fontsize=15)
ax.set_ylabel('$|f^\star - f_{min}|$', fontsize=15)
plt.show()_____no_output_____
</code>
|
{
"repository": "georgedeath/egreedy",
"path": "notebooks/New_fitness_functions_and_acquisition_functions.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 3,
"size": 52944,
"hexsha": "cb5cc59de7c542d82d6feecac845394ce7bc41a0",
"max_line_length": 13804,
"avg_line_length": 73.9441340782,
"alphanum_fraction": 0.765299184
}
|
# Notebook from matty-tran/test-blog
Path: _notebooks/2022-03-14-dh140Final.ipynb
# **G.G.: Good Game?** by Matthew Tran _____no_output_____## March 14, 2022_____no_output_____## **Introduction** _____no_output_____In the modern age, video games have become a mainstream pastime enjoyed by people of various ages. A now lucrative industry, video games come in a variety of genres, experiences, and platforms. When asked about successful video games, a handful of titles might come to mind: ones that are iconic because of their characters, revolutionary because of the way they engage with storytelling, or perhaps nostalgic because of how long they have been around.
This project seeks to define top performing video games and the traits that may have contributed to the success of these titles. Subsequently, I would like to conduct a more qualitative investigation on these titles, mainly examining reviews to paint a clearer picture of what consumers like about top games. _____no_output_____## **The Data**_____no_output_____Initial exploration of defining what makes a good game will be conducted using the Video Games CORGIS dataset which can be accessed [here.](https://corgis-edu.github.io/corgis/python/video_games/) This data was originally collected by Dr. Joe Cox who conducted an empirical investigation of U.S. sales data of video games. Dr. Cox concluded that the major factors that predict for a title's ability to attain "blockbuster" status were threefold: the company that produced the title, the console, and the critic reviews.
I would like to use the data that Dr. Cox collected, which spans thousands of titles that were released between 2004 and 2010, and conduct my own analysis agnostic to his findings.
The categories that I am interested in and their possible effects on the success of a game are:
1. Maximum number of players: how many people can play this game at one time?
2. Online Features: does the game support online play?
3. Genre: what genre does this game belong to?
Within these categories, I would like to measure the success of a game using the following metrics (a short sketch after this list shows how they map onto dataset columns):
1. Review score: the typical review score out of 100
2. Sales: the total sales made on the game measured in millions of dollars
3. Completionist: players reported completing everything in the game
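As a rough sketch of how these measures map onto the dataset (the column names are the ones used in the exploration cells below; the category columns for players, online play, and genre are not shown here), the success metrics can be pulled into a small summary frame:
```
# Sketch: the three success metrics as they appear in the CORGIS table.
import pandas as pd

df = pd.read_csv('video_games.csv')

success = df[['Title',
              'Metrics.Review Score',          # critic review score (out of 100)
              'Metrics.Sales',                 # total sales (millions)
              'Length.Completionists.Polled']] # players reporting full completion
print(success.describe())
```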
_____no_output_____## **Data Exploration**_____no_output_____
<code>
#hide
import pandas as pd
import seaborn as sns_____no_output_____#hide
import video_games_____no_output_____#hide
video_game = video_games.get_video_game()_____no_output_____#hide
df = pd.read_csv('video_games.csv')_____no_output_____#hide-input
df.head()_____no_output_____
</code>
### 1. What are the top games by critic reviews? _____no_output_____
<code>
#hide-input
df[['Title','Metrics.Review Score']].sort_values('Metrics.Review Score', ascending = False )_____no_output_____
</code>
### 2. What are the top games by sales? _____no_output_____
<code>
#hide-input
df[['Title', 'Metrics.Sales']].sort_values('Metrics.Sales', ascending = False) _____no_output_____
</code>
### 3. Which games have the largest number of people who report completing the game?
* This will be skewed by how many people played the game in the first place (a rough popularity adjustment is sketched below). _____no_output_____
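One rough way to control for this, sketched here with only the columns already used in this notebook, is to scale the completionist counts by sales so that heavily played titles do not dominate purely because of their install base:
```
# Sketch: completionist reports per million in sales, a crude popularity adjustment.
adjusted = df[['Title', 'Length.Completionists.Polled', 'Metrics.Sales']].copy()
adjusted = adjusted[adjusted['Metrics.Sales'] > 0]
adjusted['Completionists.PerMillionSold'] = (
    adjusted['Length.Completionists.Polled'] / adjusted['Metrics.Sales'])
adjusted.sort_values('Completionists.PerMillionSold', ascending=False).head(10)
```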
<code>
#hide-input
df[['Title', 'Length.Completionists.Polled']].sort_values ('Length.Completionists.Polled', ascending = False) _____no_output_____
</code>
### 4. What genre of game was popular on the market during this time period (2004-2010)? _____no_output_____
<code>
#collapse-output
df['Metadata.Genres'].value_counts()_____no_output_____
</code>
### I would like to take the "top games" from questions 1-3 and get a closer look at these titles, since they are considered "top performing" in their respective categories. _____no_output_____
<code>
#collapse-output
df.iloc[837]_____no_output_____#collapse-output
df.iloc[156]_____no_output_____#collapse-output
df.iloc[442]_____no_output_____#hide-input
df.iloc[[837,156,442]]_____no_output_____
</code>
Observed similarities and differences:
1. Action is one of the genres for all three, though none falls exclusively into Action.
2. All 3 were sequels of some kind, based on a previously licensed entity.
3. Max players does not go above 2; two of the three games are single-player only.
4. All games came from different publishers.
5. All released for different consoles. _____no_output_____Because I am interested in the intersection of video games and pedagogy, I wanted to see the games that were considered "Educational."
* These were only the titles exclusively listed as 'Educational' as the genre_____no_output_____
<code>
#hide-input
df[df['Metadata.Genres'] == 'Educational']_____no_output_____#collapse-output
df.iloc[549]_____no_output_____#collapse-output
df.iloc[1000]_____no_output_____
</code>
Takeaways from initial data exploration:
1. Because of the saturation of Action games, I would like to take a closer look at the metrics for success in that specific genre, as well as the other genres that are well-represented in the market.
2. Because the games that were successful in these categories were all sequels of some kind, I think it would be interesting to investigate whether there are any titles that were successful without being a sequel, which would speak to the degree to which a factor like nostalgia or investment in a story/universe contributes to a title's success.
3. Because these three games did not have a max player capacity above 2, are there any titles that support multiplayer that are also finding success?
4. Are there certain publishers or consoles that are finding more general success with their titles than others? _____no_output_____## **Further Exploration** _____no_output_____Based on the preliminary findings from my first data exploration, I would like to take a closer look at the data in certain places. _____no_output_____### Defining Success
Using the metrics I established previously, I would like to examine the top-performing games in the categories of critic reviews, sales, and number of completionists. _____no_output_____### 1. Critic Reviews _____no_output_____
<code>
#hide
df_reviews = df[['Title','Metrics.Review Score']]_____no_output_____#hide
df_reviews_top = df_reviews[df_reviews['Metrics.Review Score'] > 90].sort_values('Metrics.Review Score', ascending = False)_____no_output_____#hide
df_reviews_top.index_____no_output_____#hide
df2 = df.iloc[df_reviews_top.index]_____no_output_____#hide-input
sns.regplot(x = df2['Metrics.Review Score'], y = df2['Metrics.Sales'])_____no_output_____
</code>
Here, a successful game by critic review was defined as having a critic review score over 90, of which there were 29 games. It does not seem to be the case, however, that a high critic score correlates strongly with commercial success in sales. In fact, the games that received the highest critic scores were not the ones with the most sales: a handful of games received more commercial success, and the highest seller (in this group) had the lowest critic score. _____no_output_____
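To put a rough number on how weak this relationship is, the correlation coefficient can be computed directly (a quick sketch reusing the `df2` frame defined above; the exact value depends on the data):

<code>
#hide-input
# Pearson correlation between critic review score and sales among the top-reviewed games
df2['Metrics.Review Score'].corr(df2['Metrics.Sales'])
</code>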
<code>
#hide-input
sns.regplot(x = df2['Metrics.Review Score'], y = df2['Length.Completionists.Polled'])_____no_output_____
</code>
I observed an even weaker relationship between critic review scores and the number of completionists for these games.
This could, however, be because the games which received the highest critic review scores, such as Grand Theft Auto IV, are known for being "open-world" games in which the player can freely navigate the world without the story being a central part of interacting with the game. _____no_output_____
<code>
#collapse-output
df2[['Title', 'Metrics.Review Score', 'Metrics.Sales', 'Length.Completionists.Polled', 'Metadata.Genres']].sort_values('Metrics.Sales', ascending = False)_____no_output_____
</code>
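As a quick check on the genre breakdown discussed next, the number of top-reviewed titles whose genre string mentions Action can be counted directly (a sketch; it assumes `Metadata.Genres` stores genres as plain strings, possibly listing several genres per title):

<code>
#hide-input
# count how many of the 29 top-reviewed games include 'Action' among their genres
df2['Metadata.Genres'].str.contains('Action').sum()
</code>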
Notably, 27 out of the 29 titles that were considered top performers by their critic review scores had Action as one of their genre descriptors. The two games that did not belong to this genre were classified as Role-Playing and Racing/Driving games. _____no_output_____### 2. Commercial Sales _____no_output_____
<code>
#hide
df_sales = df[['Title', 'Metrics.Sales']]_____no_output_____#hide
df['Metrics.Sales'].mean()_____no_output_____#hide
df_sales_top = df_sales[df_sales['Metrics.Sales'] > 4.69]_____no_output_____#hide
len(df_sales_top.index)_____no_output_____#hide
df3 = df.iloc[df_sales_top.index]_____no_output_____#hide-input
sns.regplot(x = df3['Metrics.Sales'], y =df3['Metrics.Review Score'] )_____no_output_____
</code>
Very interestingly, for the top-performing games in terms of sales (14 games in total), there was actually a negative correlation between sales and critic scores. Shockingly, the game with the most sales had the lowest (sub-60) score of the group! However, the games with the highest critic scores in this set still had sales above the mean of the entire dataset, so these games were by no means unsuccessful. _____no_output_____
<code>
#hide-input
sns.regplot(x = df3['Metrics.Sales'], y =df3['Length.Completionists.Polled'])_____no_output_____
</code>
A similar negative relationship was observed between sales and the number of completionist players. For reasons similar to those in the critic scores grouping, the top game, Wii Play, is not well-known for having a definitive plot that players follow, but rather is a game that is often played socially with family and friends. _____no_output_____
<code>
#hide-input
df3[['Title', 'Metrics.Review Score', 'Metrics.Sales', 'Length.Completionists.Polled', 'Metadata.Genres']].sort_values('Metrics.Sales', ascending = False)_____no_output_____
</code>
The distribution of genres in this group was slightly more diverse than that of the critic scores group. While Action games still held a slight majority, with 8 out of 14 games belonging to the Action genre, Role-Playing, Sports, and Driving games made up the remainder of the group. _____no_output_____### 3. Completionists (or not?) _____no_output_____Following my analysis of the top-performing games under critic scores and commercial sales, I have decided not to continue using the number of completionists as a measure of success, for a few reasons. Firstly, this number is already skewed by how many people played each game, so completionist data would require standardization. While standardizing this data would not be much additional work, I also chose not to use the number of completionists in the remainder of my analysis because of how easily this number could be affected by the type of game. Many games are made simply to be enjoyed and do not have a story or plot to follow the way other games do. In the former case, players would not be as motivated to "complete" the game, which would skew how well the number of completionists reflects a game's success. _____no_output_____### Action Games and Reviews? _____no_output_____Because of the overrepresentation of Action games among the games with high critic reviews, I wanted to explore the idea that critics tend to favor games of the Action genre. _____no_output_____
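Rather than building a separate frame for each genre as in the cells below, the same comparison can be made in one pass with a groupby (a sketch; note that it treats each exact genre string as its own group, so combined labels such as 'Action,Adventure' are counted separately):

<code>
#hide-input
# mean critic review score for every genre label in the dataset
df.groupby('Metadata.Genres')['Metrics.Review Score'].mean().sort_values(ascending=False)
</code>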
<code>
#hide
df_action = df[df['Metadata.Genres'] == 'Action'] _____no_output_____#collapse-output
df_action['Metrics.Review Score'].mean()_____no_output_____#hide
df_sports = df[df['Metadata.Genres'] == 'Sports'] _____no_output_____#collapse-output
df_sports['Metrics.Review Score'].mean()_____no_output_____#hide
df_strategy = df[df['Metadata.Genres'] == 'Strategy'] _____no_output_____#collapse-output
df_strategy['Metrics.Review Score'].mean()_____no_output_____
</code>
Looking at the 3 most common genres and examining the mean critic review scores, there does not seem to be an inherent bias toward Action games amongst critics, since Strategy games had a higher mean score, though I think this is one area of analysis that could benefit from more investigation. _____no_output_____## **Who's at the Top?**_____no_output_____From my own personal perspective, as well as how I assume businesses and consumers would define success, I think commercial sales is the best way to measure the success of a game. However, because critic reviews may encapsulate some measure of the quality of a game, I think it is beneficial to include critic reviews as a measure of success in some way. Therefore, I decided that when choosing the "top games," I would choose the games that were top performers in both categories: critic scores and sales. That is, games that both received a critic score above 90 and had sales above 4.69 million dollars.
To account for any phenomenon that goes beyond conventional measures of success, I would also like to include those titles that had extremely high sales but perhaps were not deemed a "good game" by critics. These three games are Wii Play, Mario Kart Wii, and New Super Mario Bros., all titles with commercial sales greater than 10 million dollars. _____no_output_____
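The same selection can also be expressed directly with boolean masks instead of Python sets (a sketch; it reuses the 90-point review threshold and the 4.69- and 10-million-dollar sales cutoffs described above):

<code>
#hide
top_by_both = df[(df['Metrics.Review Score'] > 90) & (df['Metrics.Sales'] > 4.69)]
top_by_sales_only = df[df['Metrics.Sales'] > 10]
candidate_top_games = pd.concat([top_by_both, top_by_sales_only]).drop_duplicates('Title')
candidate_top_games[['Title', 'Metrics.Review Score', 'Metrics.Sales']]
</code>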
<code>
#hide
top_reviews = df2['Title'].tolist()
top_sales = df3['Title'].tolist()_____no_output_____#collapse-output
top_sales_____no_output_____#collapse-output
top_reviews_____no_output_____#collapse-output
print(set(top_sales).intersection(set(top_reviews))){'Mario Kart DS', 'Call of Duty 4: Modern Warfare', 'Super Mario Galaxy', 'Super Smash Bros.: Brawl', 'Halo 3', 'Grand Theft Auto IV'}
#hide
top_games = set(top_sales).intersection(set(top_reviews))_____no_output_____#hide
top_games_dict = {'Grand Theft Auto IV' : 837,
'Mario Kart DS' : 22,
'Halo 3' : 420,
'Call of Duty 4: Modern Warfare' : 421,
'Super Mario Galaxy' : 422,
'Super Smash Bros.: Brawl' : 835
}_____no_output_____#hide
target_indices = [837, 22, 420, 421, 422, 835, 156, 833, 157]
top_games = df.iloc[target_indices]_____no_output_____#hide
top_games = top_games[['Title', 'Metrics.Review Score', 'Metrics.Sales', 'Metadata.Genres', 'Metadata.Sequel?', 'Metadata.Publishers', 'Features.Max Players', 'Release.Console', 'Release.Year']]_____no_output_____#hide-input
top_games.sort_values('Metrics.Sales', ascending = False)_____no_output_____#hide-input
sns.countplot(x = top_games['Metadata.Genres'], palette = 'ch:.25')_____no_output_____#hide-input
sns.countplot(x = top_games['Metadata.Publishers'], palette = 'ch:.25')_____no_output_____#hide-input
sns.countplot(x = top_games['Features.Max Players'], palette = 'ch:.25')_____no_output_____#hide-input
sns.countplot(x = top_games['Release.Console'], palette = 'ch:.25')_____no_output_____
</code>
## **Discussion**_____no_output_____Examining the commonalities among the top-performing games, it is clear that Nintendo games have the highest sales. They make up 6 of the 9 games that I identified as top-performing, and represent the 6 highest-earning games in the entire dataset. This seems to operate independently of critic reviews, as the three highest-selling games did not receive scores above 90 from critics.
I think that there are factors, especially metadata about each game beyond the scope of what was included in this dataset, that contribute to why games from Nintendo, especially those that came out at the top of this dataset, were considered top performers by sales.
Three of the top four games (Wii Play, Mario Kart Wii, and Mario Kart DS) are titles that do not have a strong storyline for the player to follow. Rather, they are multiplayer games centered around gaming as a social activity. With family or friends, players can compete on teams with or against each other. Because players are constantly playing with real people in a competitive environment, the gaming experience is kept dynamic and engaging, rather than relying on progression through a story line.
When considering what kinds of games are successful in the market, it may be helpful to consider whether a game is player-versus-player (PVP) or player-versus-environment (PVE). Wii Play, Mario Kart Wii, and Mario Kart DS are examples of PVP games; that is, players do not play by themselves against computers, but rather against other real players, and these kinds of games inherently carry a competitive aspect. Players are motivated to constantly return to the game in order to hone their skills. In many PVE games, players are instead motivated by the desire to progress through the game itself.
The other game represented among the top-performing games, despite not having the same PVP quality as the others, was New Super Mario Bros. I think the reason this title in particular was so successful is its recognisability. Just the name Mario in the gaming sphere is already enough for people, gamer or not, to have a mental image of what the game will entail. As a game that has had many remakes and iterations, I think that this game's success largely comes from its capacity to combine the nostalgia of players with the refreshing nature of a game remake or sequel. A series beloved by many, Super Mario is one that people are invested in because of their emotional attachment to the games and characters.
When it comes to learning, motivation is a crucial part of pedagogy. In both the conventional sense and in the realm of possibly gamifying learning, I think it would be helpful to incorporate a healthy amount of competition, whether against the self or against others. I think it is also important for students to have the ability to engage with other students, as this social aspect of learning and gaming provides additional motivation. _____no_output_____## **Nintendo: A Closer Look** _____no_output_____Looking at the top-performing games, it is clear that Nintendo has a firm grip on the gaming market when it comes to sales. As such, I would like to examine just what about these games makes them so desirable to players, looking to Nintendo itself to see how it markets and describes these games. _____no_output_____
<code>
#hide
from wordcloud import WordCloud, ImageColorGenerator
from PIL import Image
import matplotlib.pyplot as plt_____no_output_____#hide
from string import punctuation  # assumed missing import, needed for list(punctuation)
from nltk.corpus import stopwords  # assumed missing import; may require nltk.download('stopwords')
myStopWords = list(punctuation) + stopwords.words('english')_____no_output_____#hide
super_mario_describe = '''
Bowser has taken over the Mushroom Kingdom, and it's up to Mario to put an end to his sinister reign! Battle Bowser's vile henchmen through 32 levels in the Original 1985 game mode. Move on to collecting special Red Coins and Yoshi Eggs in Challenge mode. Then, try to unlock a secret mode that's waiting to be found by super players like you! Every mode will give you the chance to beat your own score, and there's a lot more to do than just saving a princess. So get ready for a brick-smashin', pipe-warpin', turtle-stompin' good time!
Mario™ and Luigi™ star in their first ever Mushroom Kingdom adventure! Find out why Super Mario Bros. is instantly recognizable to millions of people across the globe, and what made it the best-selling game in the world for three decades straight. Jump over obstacles, grab coins, kick shells, and throw fireballs through eight action-packed worlds in this iconic NES classic. Only you and the Mario Bros. can rescue Princess Toadstool from the clutches of the evil Bowser.
Pick up items and throw them at your adversaries to clear levels in seven fantastical worlds. Even enemies can be picked up and tossed across the screen. Each character has a unique set of abilities: Luigi can jump higher and farther than any of the other characters, Toad can dig extremely fast and pull items out of the ground quicker than anyone, and the princess is the only one who can jump and hover temporarily. This unique installment in the Mario series will keep you coming back for more!
Relive the classic that brought renowned power-ups such as the Tanooki Suit to the world of Super Mario Bros.!
Bowser™ and the Koopalings are causing chaos yet again, but this time they’re going beyond the Mushroom Kingdom into the seven worlds that neighbor it. Now Mario™ and Luigi™ must battle a variety of enemies, including a Koopaling in each unique and distinctive world, on their way to ultimately taking on Bowser himself. Lucky for the brothers, they have more power-ups available than ever before. Fly above the action using the Super Leaf, swim faster by donning the Frog Suit, or defeat enemies using the Hammer Bros. Suit. Use the brand-new overworld map to take the chance to play a minigame in hopes of gaining extra lives or to find a Toad’s House where you can pick up additional items. All this (and more) combines into one of gaming’s most well-known and beloved titles—are you ready to experience gaming bliss?
'''_____no_output_____#hide-input
wc = WordCloud().generate_from_text(super_mario_describe)
#Use matplotlib.pyplot to display the fitted wordcloud
#Turn axis off to get rid of axis numbers
plt.imshow(wc)
plt.axis('off')
plt.show()_____no_output_____#hide
mario_kart_describe = '''
Select one of eight characters from the Mario™ series—offering a variety of driving styles—and take on three championship cups in three different kart classes. Win enough, and you'll unlock a fourth circuit: the ultra-tough Special Cup. Crossing the finish line in first place isn't an easy task, though, as each track has unique obstacles to conquer and racers can obtain special power-ups that boost them to victory. With more than 15 tracks to master and nearly endless replay value, Super Mario Kart is classic gaming…with some banana peels thrown in for good measure!
The newest installment of the fan-favorite Mario Kart™ franchise brings Mushroom Kingdom racing fun into glorious 3D. For the first time, drivers explore new competitive kart possibilities, such as soaring through the skies or plunging into the depths of the sea. New courses, strategic new abilities and customizable karts bring the racing excitement to new heights.
FEATURES:
The Mario Kart franchise continues to evolve. New kart abilities add to the wild fun that the games are known for. On big jumps, a kart deploys a wing to let it glide over the track shortcut. When underwater, a propeller pops out to help the kart cruise across the sea floor.
Players can show their own style by customizing their vehicles with accessories that give them a competitive advantage. For instance, giant tires help a kart drive off-road, while smaller tires accelerate quickly on paved courses.
People can choose to race as one of their favorite Mushroom Kingdom characters or even as their Mii™ character.
New courses take players on wild rides over mountains, on city streets and through a dusty desert. Nintendo fans will recognize new courses on Wuhu Island and in the jungles from Donkey Kong Country™ Returns.
The game supports both SpotPass™ and StreetPass™ features.
Players can compete in local wireless matches or online over a broadband Internet connection.
The newest installment of the fan-favorite Mario Kart™ franchise brings Mushroom Kingdom racing fun into glorious 3D. For the first time, drivers explore new competitive kart possibilities, such as soaring through the skies or plunging into the depths of the sea. New courses, strategic new abilities and customizable karts bring the racing excitement to new heights.
FEATURES:
The Mario Kart franchise continues to evolve. New kart abilities add to the wild fun that the games are known for. On big jumps, a kart deploys a wing to let it glide over the track shortcut. When underwater, a propeller pops out to help the kart cruise across the sea floor.
Players can show their own style by customizing their vehicles with accessories that give them a competitive advantage. For instance, giant tires help a kart drive off-road, while smaller tires accelerate quickly on paved courses.
People can choose to race as one of their favorite Mushroom Kingdom characters or even as their Mii™ character.
New courses take players on wild rides over mountains, on city streets and through a dusty desert. Nintendo fans will recognize new courses on Wuhu Island and in the jungles from Donkey Kong Country™ Returns.
The game supports both SpotPass™ and StreetPass™ features.
Players can compete in local wireless matches or online over a broadband Internet connection.
'''_____no_output_____#hide-input
wc2 = WordCloud().generate_from_text(mario_kart_describe)
#Use matplotlib.pyplot to display the fitted wordcloud
#Turn axis off to get rid of axis numbers
plt.imshow(wc2)
plt.axis('off')
plt.show()_____no_output_____#hide
smash_bros_describe = '''
Super Smash Bros. for Nintendo 3DS is the first portable entry in the renowned series, in which game worlds collide. Up to four players battle each other locally or online using some of Nintendo’s most well-known and iconic characters across beautifully designed stages inspired by classic portable Nintendo games. It’s a genuine, massive Super Smash Bros. experience that’s available to play on the go, anytime, anywhere.
FEATURES:
Smash and crash through “Smash Run” mode, a new mode exclusive to the Nintendo 3DS version that gives up to four players five minutes to fight solo through a huge battlefield while taking down recognizable enemies from almost every major Nintendo franchise and multiple third-party partners. Defeated enemies leave behind power-ups to collect. Players who collect more power-ups have an advantage once time runs out and the battle with opponents begins.
Compete with classic characters from the Super Smash Bros. series like Mario, Link, Samus and Pikachu, along with new challengers like Mega Man, Little Mac and newly announced Palutena, the Goddess of Light from the Kid Icarus games. For the first time players can even compete as their own Mii characters.
Customize different aspects of your character when playing locally or online with friends in a variety of multiplayer modes.
View most elements of the high-energy action at silky-smooth 60 frames per second and in eye-popping stereoscopic 3D.
Fight against friends and family locally or online, or battle random challengers all over the world online in “For Fun” or “For Glory” modes.
Gaming icons clash in the ultimate brawl you can play anytime, anywhere! Smash rivals off the stage as new characters Simon Belmont and King K. Rool join Inkling, Ridley, and every fighter in Super Smash Bros. history. Enjoy enhanced speed and combat at new stages based on the Castlevania series, Super Mario Odyssey, and more!
Having trouble choosing a stage? Then select the Stage Morph option to transform one stage into another while battling—a series first! Plus, new echo fighters Dark Samus, Richter Belmont, and Chrom join the battle. Whether you play locally or online, savor the faster combat, new attacks, and new defensive options, like a perfect shield. Jam out to 900 different music compositions and go 1-on-1 with a friend, hold a 4-player free-for-all, kick it up to 8-player battles and more! Feel free to bust out your GameCube controllers—legendary couch competitions await—or play together anytime, anywhere!
'''_____no_output_____#hide-input
wc3 = WordCloud().generate_from_text(smash_bros_describe)
#Use matplotlib.pyplot to display the fitted wordcloud
#Turn axis off to get rid of axis numbers
plt.imshow(wc3)
plt.axis('off')
plt.show()_____no_output_____
</code>
### It's Mario's World and We're Just Playing in It _____no_output_____After creating word clouds from Nintendo's descriptions of its highest selling titles from 2004-2010, there are some recurring themes that we see when Nintendo describes its games to players and potential customers. Words unique to the game, such as "stage," "kart", and "world" are combined with descriptors such as "new," "fun," and "unique," as well as familiar terms such as "Nintendo," "Mario," and "Bowser," to create a sense that the player will be buying into a refreshing, updated, and modernized version of a product that they know and love. I think that much of Nintendo's success in the gaming market comes from the so-called empire that it has created both with its consistency of creating modern versions of its classic titles and capitalizing off of the nostalgia for these titles as well.
For developers that are not Nintendo, I think it is important to create characters that people will love and to build a universe around these characters, incorporating them into different games and genres. While Mario is one character that has definitely become a poster child for Nintendo, I think that other characters such as Link and Zelda, or the Pokemon franchise in general, have also achieved a similar status of recognizability for the company, and their titles would likely be top performers in a more modern dataset. _____no_output_____## **Conclusion** _____no_output_____Through conducting this analysis of the video games dataset from CORGIS, I was able to learn a lot about the market in general and what makes a "successful" game. My findings contrasted with my expectations, but I was able to come to conclusions that I believe would be helpful both for game developers and for my own interests in gamifying learning.
In my exploration of both this project and the course Digital Humanities 140, I learned many Python tools and became more comfortable working with new libraries as well as datasets. Although I used pandas for the majority of my analysis, two other libraries I found helpful were seaborn and wordcloud for data visualization. Seaborn allowed me to combine aesthetic graphical information with statistical information, and wordcloud allowed me to create easy-to-understand visualizations, both of which reminded me of the importance of being able to tell a story with your data.
In the future, it would be fascinating to conduct a similar study of the modern video game market. Nowadays, gaming has expanded to PC and mobile platforms, which were not represented in the CORGIS dataset. Additionally, many games are now free-to-play, so I think the metrics used for success may be a bit different than they were in my investigation. With the rise of e-sports and streaming, gaming is consumed in ways outside of simply playing the game, and has become a form of entertainment similar to movies, sports, and YouTube.
I would like to acknowledge Professor Winjum for his dedication to instruction this quarter, and his continual understanding. Thank you! _____no_output_____
|
{
"repository": "matty-tran/test-blog",
"path": "_notebooks/2022-03-14-dh140Final.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 534190,
"hexsha": "cb5dbc4e33fb2244722b46afe6b59c2779cd56ea",
"max_line_length": 103660,
"avg_line_length": 156.1046171829,
"alphanum_fraction": 0.836964376
}
|
# Notebook from sdabhi23/q-cloud-programming
Path: ibm/basics_of_qiskit/Challenge1_BasicQuantumCircuits_Solutions.ipynb
# Introduction to Qiskit
Welcome to the Quantum Challenge! Here you will be using Qiskit, the open source quantum software development kit developed by IBM Quantum and community members around the globe. The following exercises will familiarize you with the basic elements of Qiskit and quantum circuits.
To begin, let us define what a quantum circuit is:
> **"A quantum circuit is a computational routine consisting of coherent quantum operations on quantum data, such as qubits. It is an ordered sequence of quantum gates, measurements, and resets, which may be conditioned on real-time classical computation."** (https://qiskit.org/textbook/ch-algorithms/defining-quantum-circuits.html)
While this might be clear to a quantum physicist, don't worry if it is not self-explanatory to you. During this exercise you will learn what a qubit is, how to apply quantum gates to it, and how to measure its final state. You will then be able to create your own quantum circuits! By the end, you should be able to explain the fundamentals of quantum circuits to your colleagues.
Before starting with the exercises, please run cell *Cell 1* below by clicking on it and pressing 'shift' + 'enter'. This is the general way to execute a code cell in the Jupyter notebook environment that you are using now. While it is running, you will see `In [*]:` in the top left of that cell. Once it finishes running, you will see a number instead of the star, which indicates how many cells you've run. You can find more information about Jupyter notebooks here: https://qiskit.org/textbook/ch-prerequisites/python-and-jupyter-notebooks.html.
---
For useful tips to complete this exercise as well as pointers for communicating with other participants and asking questions, please take a look at the following [repository](https://github.com/qiskit-community/may4_challenge_exercises). You will also find a copy of these exercises, so feel free to edit and experiment with these notebooks.
---_____no_output_____
<code>
# Cell 1
import numpy as np
from qiskit import Aer, QuantumCircuit, execute
from qiskit.visualization import plot_histogram
from IPython.display import display, Math, Latex
from may4_challenge import plot_state_qsphere
from may4_challenge.ex1 import minicomposer
from may4_challenge.ex1 import check1, check2, check3, check4, check5, check6, check7, check8
from may4_challenge.ex1 import return_state, vec_in_braket, statevec_____no_output_____
</code>
## Exercise I: Basic Operations on Qubits and Measurements
### Writing down single-qubit states
Let us start by looking at a single qubit. The main difference from a classical bit, which can only take the values 0 and 1, is that a quantum bit, or **qubit**, can be in the state $\vert0\rangle$, the state $\vert1\rangle$, or a linear combination of these two states. This feature is known as superposition, and allows us to write the most general state of a qubit as:
$$\vert\psi\rangle = \sqrt{1-p}\vert0\rangle + e^{i \phi} \sqrt{p} \vert1\rangle$$
If we were to measure the state of this qubit, we would find the result $1$ with probability $p$, and the result $0$ with probability $1-p$. As you can see, the total probability is $1$, meaning that we will indeed measure either $0$ or $1$; no other outcomes exist.
In addition to $p$, you might have noticed another parameter above. The variable $\phi$ indicates the relative quantum phase between the two states $\vert0\rangle$ and $\vert1\rangle$. As we will discover later, this relative phase is quite important. For now, it suffices to note that the quantum phase is what enables interference between quantum states, resulting in our ability to write quantum algorithms for solving specific tasks.
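As a plain-Python illustration of the formula above (a small sketch, independent of Qiskit): picking example values for $p$ and $\phi$, the two amplitudes can be written down directly, and their squared magnitudes recover the measurement probabilities.

<code>
import numpy as np

p, phi = 0.25, np.pi / 2                 # example values
amp_0 = np.sqrt(1 - p)                   # amplitude of |0>
amp_1 = np.exp(1j * phi) * np.sqrt(p)    # amplitude of |1>

print(abs(amp_0) ** 2)   # probability of measuring 0 -> 0.75
print(abs(amp_1) ** 2)   # probability of measuring 1 -> 0.25
</code>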
If you are interested in learning more, we refer you to [the section in the Qiskit textbook on representations of single-qubit states](https://qiskit.org/textbook/ch-states/representing-qubit-states.html).
### Visualizing quantum states
We visualize quantum states throughout this exercise using what is known as a `qsphere`. Here is how the `qsphere` looks for the states $\vert0\rangle$ and $\vert1\rangle$, respectively. Note that the top-most part of the sphere represents the state $\vert0\rangle$, while the bottom represents $\vert1\rangle$.
<img src="qsphere01.png" alt="qsphere with states 0 and 1" style="width: 400px;"/>
It should be no surprise that the superposition state with quantum phase $\phi = 0$ and probability $p = 1/2$ (meaning an equal likelihood of measuring both 0 and 1) is shown on the `qsphere` with two points. However, note also that the size of the circles at the two points is smaller than when we had simply $\vert0\rangle$ and $\vert1\rangle$ above. This is because the size of the circles is proportional to the probability of measuring each one, which is now reduced by half.
<img src="qsphereplus.png" alt="qsphere with superposition 1" style="width: 200px;"/>
In the case of superposition states, where the quantum phase is non-zero, the qsphere allows us to visualize that phase by changing the color of the respective blob. For example, the state with $\phi = 90^\circ$ (degrees) and probability $p = 1/2$ is shown in the `qsphere` below.
<img src="qspherey.png" alt="qsphere with superposition 2" style="width: 200px;"/>
### Manipulating qubits
Qubits are manipulated by applying quantum gates. Let's go through an overview of the different gates that we will consider in the following exercises.
First, let's describe how we can change the value of $p$ for our general quantum state. To do this, we will use two gates:
1. **$X$-gate**: This gate flips between the two states $\vert0\rangle$ and $\vert1\rangle$. This operation is the same as the classical NOT gate. As a result, the $X$-gate is sometimes referred to as a bit flip or NOT gate. Mathematically, the $X$ gate changes $p$ to $1-p$, so in particular from 0 to 1, and vice versa.
2. **$H$-gate**: This gate allows us to go from the state $\vert0\rangle$ to the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + \vert1\rangle\right)$. This state is also known as the $\vert+\rangle$ state. Mathematically, this means going from $p=0, \phi=0$ to $p=1/2, \phi=0$. As the final state of the qubit is a superposition of $\vert0\rangle$ and $\vert1\rangle$, the Hadamard gate represents a true quantum operation.
Notice that both gates changed the value of $p$, but not $\phi$. Fortunately for us, it's quite easy to visualize the action of these gates by looking at the figure below.
<img src="quantumgates.png" alt="quantum gates" style="width: 400px;"/>
Once we have the state $\vert+\rangle$, we can then change the quantum phase by applying several other gates. For example, an $S$ gate adds a phase of $90$ degrees to $\phi$, while the $Z$ gate adds a phase of $180$ degrees to $\phi$. To subtract a phase of $90$ degrees, we can apply the $S^\dagger$ gate, which is read as S-dagger, and commonly written as `sdg`. Finally, there is a $Y$ gate which applies a sequence of $Z$ and $X$ gates.
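For readers who prefer matrices, here is a minimal numpy sketch (independent of the widget below) of how $H$ creates the superposition and $S$ then rotates the quantum phase of the $\vert1\rangle$ amplitude by $90$ degrees:

<code>
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
S = np.array([[1, 0], [0, 1j]])                # S gate: 90-degree phase on |1>

ket0 = np.array([1, 0])                        # state |0>
plus = H @ ket0                                # (|0> + |1>)/sqrt(2)
rotated = S @ plus                             # (|0> + i|1>)/sqrt(2)

print(plus)      # approximately [0.707, 0.707]
print(rotated)   # approximately [0.707+0j, 0+0.707j]
</code>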
You can experiment with the gates $X$, $Y$, $Z$, $H$, $S$ and $S^\dagger$ to become accustomed to the different operations and how they affect the state of a qubit. To do so, you can run *Cell 2* which starts our circuit widget. After running the cell, choose a gate to apply to a qubit, and then choose the qubit (in the first examples, the only qubit to choose is qubit 0). Watch how the corresponding state changes with each gate, as well as the description of that state. It will also provide you with the code that creates the corresponding quantum circuit in Qiskit below the qsphere.
If you want to learn more about describing quantum states, Pauli operators, and other single-qubit gates, see chapter 1 of our textbook: https://qiskit.org/textbook/ch-states/introduction.html._____no_output_____
<code>
# Cell 2
# press shift + return to run this code cell
# then, click on the gate that you want to apply to your qubit
# next, you have to choose the qubit that you want to apply it to (choose '0' here)
# click on clear to restart
minicomposer(1, dirac=True, qsphere=True)_____no_output_____
</code>
Here are four small exercises to attain different states on the qsphere. You can either solve them with the widget above and copy paste the code it provides into the respective cells to create the quantum circuits, or you can directly insert a combination of the following code lines into the program to apply the different gates:
qc.x(0) # bit flip
qc.y(0) # bit and phase flip
qc.z(0) # phase flip
qc.h(0) # superpostion
qc.s(0) # quantum phase rotation by pi/2 (90 degrees)
qc.sdg(0) # quantum phase rotation by -pi/2 (90 degrees)
The '(0)' indicates that we apply this gate to qubit 'q0', which is the first (and in this case only) qubit.
Try to attain the given state on the qsphere in each of the following exercises.
### I.i) Let us start by performing a bit flip. The goal is to reach the state $\vert1\rangle$ starting from state $\vert0\rangle$. <img src="state1.png" width="300">
If you have reached the desired state with the widget, copy and paste the code from *Cell 2* into *Cell 3* (where it says "FILL YOUR CODE IN HERE") and run it to check your solution._____no_output_____
<code>
# Cell 3
def create_circuit():
qc = QuantumCircuit(1)
#
#
qc.x(0)
#
#
return qc
# check solution
qc = create_circuit()
state = statevec(qc)
check1(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True) Missing environment variable MAY4_CHALLENGE_VALIDATION_ENDPOINT. Set it with the URL to the root of the validation server.
Using staging server at https://eu-gb.functions.cloud.ibm.com/api/v1/web/salvador.de.la.puente.gonzalez%40ibm.com_dev/default/may4_challenge
</code>
### I.ii) Next, let's create a superposition. The goal is to reach the state $|+\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$. <img src="stateplus.png" width="300">
Fill in the code in the lines indicated in *Cell 4*. If you prefer the widget, you can still copy the code that the widget gives in *Cell 2* and paste it into *Cell 4*._____no_output_____
<code>
# Cell 4
def create_circuit2():
qc = QuantumCircuit(1)
#
#
qc.h(0)
#
#
return qc
qc = create_circuit2()
state = statevec(qc)
check2(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True) Correct 🎉! Well done!
Your progress: 2/8
</code>
### I.iii) Let's combine those two. The goal is to reach the state $|-\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)$. <img src="stateminus.png" width="300">
Can you combine the above two tasks to come up with the solution?_____no_output_____
<code>
# Cell 5
def create_circuit3():
qc = QuantumCircuit(1)
#
#
qc.x(0)
qc.h(0)
#
#
return qc
qc = create_circuit3()
state = statevec(qc)
check3(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True) Correct 🎉! Well done!
Your progress: 3/8
</code>
### I.iv) Finally, we move on to the complex numbers. The goal is to reach the state $|\circlearrowleft\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle - i|1\rangle\right)$. <img src="stateleft.png" width="300"> _____no_output_____
<code>
# Cell 6
def create_circuit4():
qc = QuantumCircuit(1)
#
#
qc.h(0)
qc.sdg(0)
#
#
return qc
qc = create_circuit4()
state = statevec(qc)
check4(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True) Correct 🎉! Well done!
Your progress: 4/8
</code>
## Exercise II: Quantum Circuits Using Multi-Qubit Gates
Great job! Now that you've understood the single-qubit gates, let us look at gates operating on multiple qubits. The basic gates on two qubits are given by
qc.cx(c,t) # controlled-X (= CNOT) gate with control qubit c and target qubit t
qc.cz(c,t) # controlled-Z gate with control qubit c and target qubit t
qc.swap(a,b) # SWAP gate that swaps the states of qubit a and qubit b
If you'd like to read more about the different multi-qubit gates and their relations, visit chapter 2 of our textbook: https://qiskit.org/textbook/ch-gates/introduction.html.
As before, you can use the two-qubit circuit widget below to see how the combined two qubit state evolves when applying different gates (run *Cell 7*) and get the corresponding code that you can copy and paste into the program. Note that for two qubits a general state is of the form $a|00\rangle + b |01\rangle + c |10\rangle + d|11\rangle$, where $a$, $b$, $c$, and $d$ are complex numbers whose absolute values squared give the probability to measure the respective state; e.g., $|a|^2$ would be the probability to end in state '0' on both qubits. This means we can now have up to four points on the qsphere._____no_output_____
<code>
# Cell 7
# press shift + return to run this code cell
# then, click on the gate that you want to apply followed by the qubit(s) that you want it to apply to
# for controlled gates, the first qubit you choose is the control qubit and the second one the target qubit
# click on clear to restart
minicomposer(2, dirac = True, qsphere = True)_____no_output_____
</code>
We start with the canonical two qubit gate, the controlled-NOT (also CNOT or CX) gate. Here, as with all controlled two qubit gates, one qubit is labelled as the "control", while the other is called the "target". If the control qubit is in state $|0\rangle$, it applies the identity $I$ gate to the target, i.e., no operation is performed. Instead, if the control qubit is in state $|1\rangle$, an X-gate is performed on the target qubit. Therefore, with both qubits in one of the two classical states, $|0\rangle$ or $|1\rangle$, the CNOT gate is limited to classical operations.
This situation changes dramatically when we first apply a Hadamard gate to the control qubit, bringing it into the superposition state $|+\rangle$. The action of a CNOT gate on this non-classical input can produce highly entangled states between control and target qubits. If the target qubit is initially in the $|0\rangle$ state, the resulting state is denoted by $|\Phi^+\rangle$, and is one of the so-called Bell states.
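One way to see why this circuit creates entanglement is to follow the state through both gates, writing $c$ for the control qubit and $t$ for the target:
$$\vert0\rangle_c\vert0\rangle_t \;\xrightarrow{H \text{ on } c}\; \frac{1}{\sqrt{2}}\left(\vert0\rangle_c + \vert1\rangle_c\right)\vert0\rangle_t \;\xrightarrow{\text{CNOT}}\; \frac{1}{\sqrt{2}}\left(\vert0\rangle_c\vert0\rangle_t + \vert1\rangle_c\vert1\rangle_t\right) = \vert\Phi^+\rangle$$
The CNOT leaves the target alone in the $\vert0\rangle_c$ branch and flips it in the $\vert1\rangle_c$ branch, which is exactly what correlates the two qubits.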
### II.i) Construct the Bell state $|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$. <img src="phi+.png" width="300">
For this state we would have probability $\frac{1}{2}$ to measure "00" and probability $\frac{1}{2}$ to measure "11". Thus, the outcomes of both qubits are perfectly correlated._____no_output_____
<code>
# Cell 8
def create_circuit():
qc = QuantumCircuit(2)
#
#
qc.h(0)
qc.cx(0, 1)
#
#
return qc
qc = create_circuit()
state = statevec(qc) # determine final state after running the circuit
display(Math(vec_in_braket(state.data)))
check5(state)
qc.draw(output='mpl') # we draw the circuit_____no_output_____
</code>
Next, try to create the state of perfectly anti-correlated qubits. Note the minus sign here, which indicates the relative phase between the two states.
### II.ii) Construct the Bell state $\vert\Psi^-\rangle = \frac{1}{\sqrt{2}}\left(\vert01\rangle - \vert10\rangle\right)$. <img src="psi-.png" width="300"> _____no_output_____
<code>
# Cell 9
def create_circuit6():
qc = QuantumCircuit(2,2) # this time, we not only want two qubits, but also
# two classical bits for the measurement later
#
#
qc.h(0)
qc.x(1)
qc.cx(0, 1)
qc.z(1)
#
#
return qc
qc = create_circuit6()
state = statevec(qc) # determine final state after running the circuit
display(Math(vec_in_braket(state.data)))
check6(state)
qc.measure(0, 0) # we perform a measurement on qubit q_0 and store the information on the classical bit c_0
qc.measure(1, 1) # we perform a measurement on qubit q_1 and store the information on the classical bit c_1
qc.draw(output='mpl') # we draw the circuit_____no_output_____
</code>
As you can tell from the circuit (and the code) we have added measurement operators to the circuit. Note that in order to store the measurement results, we also need two classical bits, which we have added when creating the quantum circuit: `qc = QuantumCircuit(num_qubits, num_classicalbits)`.
In *Cell 10* we have defined a function `run_circuit()` that will run a circuit on the simulator. If the right state is prepared, we have probability $\frac{1}{2}$ to measure each of the two outcomes, "01" and "10". However, performing the measurement with 1000 shots does not imply that we will measure exactly 500 times "01" and 500 times "10". Just like flipping a coin multiple times, it is unlikely that one will get exactly a 50/50 split between the two possible output values. Instead, there are fluctuations about this ideal distribution. You can call `run_circuit` multiple times to see the variance in the output.
_____no_output_____
<code>
# Cell 10
def run_circuit(qc):
backend = Aer.get_backend('qasm_simulator') # we choose the simulator as our backend
result = execute(qc, backend, shots = 1000).result() # we run the simulation
counts = result.get_counts() # we get the counts
return counts
counts = run_circuit(qc)
print(counts)
plot_histogram(counts) # let us plot a histogram to see the possible outcomes and corresponding probabilities{'10': 485, '01': 515}
</code>
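As a rough sanity check on that spread (a small back-of-the-envelope calculation, not part of the original exercise): with $N = 1000$ shots and an ideal probability of $p = 1/2$ per outcome, the counts follow a binomial distribution with standard deviation $\sqrt{N p (1-p)} = \sqrt{250} \approx 15.8$. A split such as 485/515 is therefore well within about one standard deviation of the ideal 500/500.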
### II.iii) You are given the quantum circuit described in the function below. Swap the states of the first and the second qubit.
This should be your final state: <img src="stateIIiii.png" width="300"> _____no_output_____
<code>
# Cell 11
def create_circuit7():
qc = QuantumCircuit(2)
qc.rx(np.pi/3,0)
qc.x(1)
return qc
qc = create_circuit7()
#
#
qc.swap(0, 1)
#
#
state = statevec(qc) # determine final state after running the circuit
display(Math(vec_in_braket(state.data)))
check7(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True) _____no_output_____
</code>
### II.iv) Write a program from scratch that creates the GHZ state (on three qubits), $\vert \text{GHZ}\rangle = \frac{1}{\sqrt{2}} \left(|000\rangle + |111 \rangle \right)$, performs a measurement with 2000 shots, and returns the counts. <img src="ghz.png" width="300">
If you want to track the state as it is evolving, you could use the circuit widget from above for three qubits, i.e., `minicomposer(3, dirac=True, qsphere=True)`. For how to get the counts of a measurement, look at the code in *Cell 9* and *Cell 10*._____no_output_____
<code>
# Cell 12
#
def run_circuit(qc, shots):
backend = Aer.get_backend('qasm_simulator') # we choose the simulator as our backend
result = execute(qc, backend, shots = shots).result() # we run the simulation
counts = result.get_counts() # we get the counts
return counts
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)
qc.measure_all()
counts = run_circuit(qc, 2000)
#
#
#
print(counts)
check8(counts)
plot_histogram(counts){'000': 993, '111': 1007}
Correct 🎉! Well done!
Your progress: 8/8
</code>
Congratulations for finishing this introduction to Qiskit! Once you've reached all 8 points, the solution string will be displayed. You need to copy and paste that string on the IBM Quantum Challenge page to complete the exercise and track your progress.
Now that you have created and run your first quantum circuits, you are ready for the next exercise, where we will make use of the actual hardware and learn how to reduce the noise in the outputs._____no_output_____
|
{
"repository": "sdabhi23/q-cloud-programming",
"path": "ibm/basics_of_qiskit/Challenge1_BasicQuantumCircuits_Solutions.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 4,
"size": 195924,
"hexsha": "cb5f04792d034407ece548286f2f503e012a941e",
"max_line_length": 28768,
"avg_line_length": 230.7703180212,
"alphanum_fraction": 0.9054020947
}
|
# Notebook from sundyCoder/STPD
Path: attacks/.ipynb_checkpoints/non-targeted_attacks_collection-checkpoint.ipynb
<code>
# Python Libraries
%matplotlib inline
import pickle
import numpy as np
import pandas as pd
import matplotlib
from keras.datasets import cifar10
from keras import backend as K
# Custom Networks
from networks.lenet import LeNet
from networks.pure_cnn import PureCnn
from networks.network_in_network import NetworkInNetwork
from networks.resnet import ResNet
from networks.densenet import DenseNet
from networks.wide_resnet import WideResNet
from networks.capsnet import CapsNet
import cv2 as cv
# Helper functions
from differential_evolution import differential_evolution
import helper
#from scipy.misc import imsave
import scipy.misc
matplotlib.style.use('ggplot')
np.random.seed(100)_____no_output_____(x_train, y_train), (x_test, y_test) = cifar10.load_data()
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
_____no_output_____def perturb_image(xs, img):
# If this function is passed just one perturbation vector,
# pack it in a list to keep the computation the same
if xs.ndim < 2:
xs = np.array([xs])
# Copy the image n == len(xs) times so that we can
# create n new perturbed images
tile = [len(xs)] + [1]*(xs.ndim+1)
imgs = np.tile(img, tile)
# Make sure to floor the members of xs as int types
xs = xs.astype(int)
for x,img in zip(xs, imgs):
# Split x into an array of 5-tuples (perturbation pixels)
# i.e., [[x,y,r,g,b], ...]
pixels = np.split(x, len(x) // 5)
for pixel in pixels:
# At each pixel's x,y position, assign its rgb value
x_pos, y_pos, *rgb = pixel
img[x_pos, y_pos] = rgb
return imgs_____no_output_____K.tensorflow_backend._get_available_gpus()
#nin = NetworkInNetwork()
#resnet = ResNet()
densenet = DenseNet()
models = [densenet]
Successfully loaded densenet
x_test.shape_____no_output_____def predict_classes(xs, img, target_class, model, minimize=True):
# Perturb the image with the given pixel(s) x and get the prediction of the model
imgs_perturbed = perturb_image(xs, img)
predictions = model.predict(imgs_perturbed)[:,target_class]
# This function should always be minimized, so return its complement if needed
return predictions if minimize else 1 - predictions_____no_output_____def attack_success(x, img, target_class, model, targeted_attack=False, verbose=False):
# Perturb the image with the given pixel(s) and get the prediction of the model
attack_image = perturb_image(x, x_test[img])
confidence = model.predict(attack_image)[0]
predicted_class = np.argmax(confidence)
# If the prediction is what we want (misclassification or
# targeted classification), return True
if (verbose):
print('Confidence:', confidence[target_class])
if ((targeted_attack and predicted_class == target_class) or
(not targeted_attack and predicted_class != target_class)):
return True
# NOTE: return None otherwise (not False), due to how Scipy handles its callback function_____no_output_____# def save_success(img, name):
# scipy.misc.imsave('data/'+name + tail, img)_____no_output_____count = 0
import os
def attack(img, model,cls_id, case_path, target=None, pixel_count=1,
maxiter=75, popsize=400,verbose=False):
# Change the target class based on whether this is a targeted attack or not
targeted_attack = target is not None
target_class = target if targeted_attack else y_test[img,0]
# Define bounds for a flat vector of x,y,r,g,b values
# For more pixels, repeat this layout
bounds = [(0,32), (0,32), (0,256), (0,256), (0,256)] * pixel_count
# Population multiplier, in terms of the size of the perturbation vector x
popmul = max(1, popsize // len(bounds))
# Format the predict/callback functions for the differential evolution algorithm
predict_fn = lambda xs: predict_classes(
xs, x_test[img], target_class, model, target is None)
callback_fn = lambda x, convergence: attack_success(
x, img, target_class, model, targeted_attack, verbose)
# Call Scipy's Implementation of Differential Evolution
attack_result = differential_evolution(
predict_fn, bounds, maxiter=maxiter, popsize=popmul,
recombination=1, atol=-1, callback=callback_fn, polish=False)
# Calculate some useful statistics to return from this function
attack_image = perturb_image(attack_result.x, x_test[img])[0]
prior_probs = model.predict_one(x_test[img])
predicted_probs = model.predict_one(attack_image)
predicted_class = np.argmax(predicted_probs)
actual_class = y_test[img,0]
success = predicted_class != actual_class
# if(success):
# #count += 1
# name = 'horrse_attacked_'+str(img)+'_'+str(actual_class) +'_'+str(predicted_class)+'.png'
# save_success(attack_image,name)
cdiff = prior_probs[actual_class] - predicted_probs[actual_class]
import scipy.misc
if(predicted_probs[actual_class] < 0.5):
# Show the best attempt at a solution (successful or not)
helper.plot_image(attack_image, actual_class, class_names, predicted_class)
#saved
cls_name = case_path + str(cls_id)+'_'+class_names[cls_id]
ori_name = cls_name + '/original/'+str(img) + '_' + str(actual_class) + '.png'
ori_path = cls_name + '/original/'
if not os.path.exists(ori_path):
#os.makedirs(Annotations_path)
os.system('mkdir -p %s' % (ori_path))
scipy.misc.imsave(ori_name, x_test[img])
at_name = cls_name + '/attacked/'+str(img) +'_'+str(actual_class) +'_'+str(predicted_class)+'.png'
at_path = cls_name + '/attacked/'
if not os.path.exists(at_path):
#os.makedirs(Annotations_path)
os.system('mkdir -p %s' %(at_path))
#scipy.misc.imsave(at_name, attack_image)
cv.imwrite(at_name, attack_image)
#np.savetxt('horse_cor_'+str(img)+'.txt', attack_result.x,delimiter=',')
#np.savetxt('test.out', x, delimiter=',')
print("success:", prior_probs[actual_class], predicted_probs[actual_class])
else:
ok_cls_name = case_path+str(cls_id)+'_'+class_names[cls_id]
ok_name = ok_cls_name + '/OK/'+str(img) +'_' + str(actual_class)+ '.png'
ok_path = ok_cls_name + '/OK/'
if not os.path.exists(ok_path):
#os.makedirs(Annotations_path)
os.system('mkdir -p %s' %(ok_path))
cv.imwrite(ok_name, x_test[img])
#scipy.misc.imsave(ok_name, x_test[img])
# Show the best attempt at a solution (successful or not)
helper.plot_image(attack_image, actual_class, class_names, predicted_class)
return [model.name, pixel_count, img, actual_class, predicted_class, success, cdiff, prior_probs, predicted_probs, attack_result.x]_____no_output_____# pixels = 1 # Number of pixels to attack
# model = resnet
# for i in range(10000):
# if(y_test[i] == 0):
# image = i
# cls = 0
# _ = attack(image, model, cls, pixel_count=pixels,verbose=True)
pixels = 1 # Number of pixels to attack
model = densenet
case_path = 'densenet_data_p1/'
for i in range(10000):
print(i)
cls = y_test[i][0]
image = i
_ = attack(image, model, cls, case_path,pixel_count=pixels,verbose=True)605
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
Confidence: 1.0
15+52+6+40+28+30+29+27+22+29+33+30+13+57+16+41+10+54+8+63_____no_output_____print(y_test.shape)_____no_output_____
</code>
|
{
"repository": "sundyCoder/STPD",
"path": "attacks/.ipynb_checkpoints/non-targeted_attacks_collection-checkpoint.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 52647,
"hexsha": "cb5f91afdf8250f063833d4a50dc9e36de39b823",
"max_line_length": 10980,
"avg_line_length": 94.5188509874,
"alphanum_fraction": 0.8083081657
}
|
# Notebook from NehaKoppikar/multi-omics-state-of-the-field
Path: notebooks/Omics_terms.ipynb
**Aims**:
- extract the omics mentioned in multi-omics articles
**NOTE**: the articles not in PMC/with no full text need to be analysed separately, or at least highlighted._____no_output_____
<code>
%run notebook_setup.ipynb_____no_output_____import pandas
pandas.set_option('display.max_colwidth', 100)_____no_output_____%vault from pubmed_derived_data import literature, literature_subjects_____no_output_____literature['title_abstract_text_subjects'] = (
literature['title']
+ ' ' + literature['abstract_clean'].fillna('')
+ ' ' + literature_subjects.apply(lambda x: ' '.join(x[x == True].index), axis=1)
+ ' ' + literature['full_text'].fillna('')
)_____no_output_____omics_features = literature.index.to_frame().drop(columns='uid').copy()_____no_output_____from functools import partial
from helpers.text_processing import check_usage
from pandas import Series
check_usage_in_input = partial(
check_usage,
data=literature,
column='title_abstract_text_subjects',
limit=5 # show only first 5 results
)_____no_output_____TERM_IN_AT_LEAST_N_ARTICLES = 5_____no_output_____
</code>
# Omics_____no_output_____## 1. Lookup by words which end with -ome_____no_output_____
<code>
cellular_structures = {
# organelles
'peroxisome',
'proteasome',
'ribosome',
'exosome',
'nucleosome',
'polysome',
'autosome',
'autophagosome',
'endosome',
'lysosome',
# proteins and molecular complexes
'spliceosome',
'cryptochrome',
# chromosmes
'autosome',
'chromosome',
'x-chromosome',
'y-chromosome',
}
species = {
'trichome'
}
tools_and_methods = {
# dry lab
'dphenome',
'dgenome',
'reactome',
'rexposome',
'phytozome',
'rgenome',
'igenome', # iGenomes
# wet lab
'microtome'
}_____no_output_____not_an_ome = {
'outcome',
'middle-income',
'welcome',
'wellcome', # :)
'chrome',
'some',
'cumbersome',
'become',
'home',
'come',
'overcome',
'cytochrome',
'syndrome',
'ubiome',
    'biome', # this IS an ome, but more into environmental studies, rather than molecular biology!
'fluorochrome',
'post-genome',
'ubiquitin-proteasome', # UPS
*tools_and_methods,
*cellular_structures,
*species
}_____no_output_____from omics import get_ome_regexp
ome_re = get_ome_regexp()
get_ome_regexp??_____no_output_____ome_occurrences = (
literature['title_abstract_text_subjects'].str.lower()
.str.extractall(ome_re)[0]
.to_frame('term').reset_index()
)
ome_occurrences = ome_occurrences[~ome_occurrences.term.isin(not_an_ome)]
ome_occurrences.head(3)_____no_output_____
</code>
### 1.1 Harmonise hyphenation_____no_output_____
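`report_hyphenation_trends` and `harmonise_hyphenation` are local helpers from `helpers.text_processing`. As a rough sketch of the underlying idea only (the real helpers may differ), each hyphenated variant can be mapped to whichever spelling of the same term occurs more often:_____no_output_____
<code>
from collections import Counter

def harmonise_hyphenation_sketch(terms):
    """Illustrative sketch: map each hyphenated variant (e.g. 'micro-biome') to the
    more frequent spelling of the same term (e.g. 'microbiome')."""
    counts = Counter(terms)
    mapping = {}
    for term, n in counts.items():
        if '-' in term:
            dehyphenated = term.replace('-', '')
            if counts.get(dehyphenated, 0) >= n:
                mapping[term] = dehyphenated
            else:
                mapping[dehyphenated] = term
    return [mapping.get(t, t) for t in terms]

harmonise_hyphenation_sketch(['micro-biome', 'microbiome', 'microbiome', 'proteome'])
# ['microbiome', 'microbiome', 'microbiome', 'proteome']_____no_output_____
</code>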
<code>
from helpers.text_processing import report_hyphenation_trends, harmonise_hyphenation_____no_output_____hyphenation_rules = report_hyphenation_trends(ome_occurrences.term)
hyphenation_rules_____no_output_____ome_occurrences.term = harmonise_hyphenation(ome_occurrences.term, hyphenation_rules)_____no_output_____
</code>
### 1.2 Fix typos_____no_output_____
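`find_term_typos` and `create_typos_map` are also project helpers. A hedged illustration of the general approach (compare rare terms against frequent terms with a string-similarity cutoff and flag candidates for manual review; the real helper may use a different metric and thresholds) could look like this:_____no_output_____
<code>
import difflib

def find_term_typos_sketch(term_counts, min_count):
    """Illustrative sketch: flag rare terms that closely resemble frequent terms,
    as candidate typos/variants to review manually."""
    frequent = [t for t, n in term_counts.items() if n >= min_count]
    rare = [t for t, n in term_counts.items() if n < min_count]
    candidates = {}
    for term in rare:
        close = difflib.get_close_matches(term, frequent, n=1, cutoff=0.85)
        if close:
            candidates[term] = close[0]
    return candidates

find_term_typos_sketch({'transcriptome': 50, 'transcritome': 1, 'proteome': 30}, min_count=5)
# {'transcritome': 'transcriptome'}_____no_output_____
</code>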
<code>
from helpers.text_processing import find_term_typos, create_typos_map_____no_output_____ome_counts = ome_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
potential_ome_typos = find_term_typos(ome_counts, TERM_IN_AT_LEAST_N_ARTICLES - 1)
potential_ome_typos_____no_output_____check_usage_in_input('1-metabolome')_____no_output_____check_usage_in_input('miRNAome')_____no_output_____check_usage_in_input('miRome')_____no_output_____check_usage_in_input('rexposome')_____no_output_____check_usage_in_input('glycol-proteome')_____no_output_____check_usage_in_input('rgenome')_____no_output_____check_usage_in_input('iGenomes')_____no_output_____check_usage_in_input('cancergenome')_____no_output_____is_typo_subset_or_variant = {
('transcritome', 'transcriptome'): True,
('transciptome', 'transcriptome'): True,
('tanscriptome', 'transcriptome'): True,
('trascriptome', 'transcriptome'): True,
('microbome', 'microbiome'): True,
('protenome', 'proteome'): True,
# (neither n- nor o- is frequent enough on its own)
('o-glycoproteome', 'glycoproteome'): True,
('n-glycoproteome', 'glycoproteome'): True,
('glycol-proteome', 'glycoproteome'): True, # note "glycol" instead of "glyco"
('mirome', 'mirnome'): True,
('1-metabolome', 'metabolome'): True
}
ome_typos_map = create_typos_map(potential_ome_typos, is_typo_subset_or_variant)_____no_output_____replaced = ome_occurrences.term[ome_occurrences.term.isin(ome_typos_map)]
replaced.value_counts()_____no_output_____len(replaced)_____no_output_____ome_occurrences.term = ome_occurrences.term.replace(ome_typos_map)_____no_output_____
</code>
### 1.3 Replace synonymous and narrow terms_____no_output_____
<code>
ome_replacements = {}_____no_output_____
</code>
#### miRNAomics → miRNomics_____no_output_____miRNAome is the more popular name for the -ome, while miRNomics is more popular for the -omics._____no_output_____
<code>
ome_occurrences.term.value_counts().loc[['mirnome', 'mirnaome']]_____no_output_____
</code>
As I use the -omics forms later on, for consistency I will change miRNAome → miRNome_____no_output_____
<code>
ome_replacements['miRNAome'] = 'miRNome'_____no_output_____
</code>
#### Cancer genome → genome_____no_output_____
<code>
ome_occurrences.term.value_counts().loc[['genome', 'cancer-genome']]_____no_output_____ome_replacements['cancer-genome'] = 'genome'_____no_output_____
</code>
#### Host microbiome → microbiome_____no_output_____
<code>
ome_occurrences.term.value_counts().loc[['microbiome', 'host-microbiome']]_____no_output_____ome_replacements['host-microbiome'] = 'microbiome'_____no_output_____
</code>
#### Replace the values_____no_output_____
<code>
ome_occurrences.term = ome_occurrences.term.replace(
{k.lower(): v.lower() for k, v in ome_replacements.items()}
)_____no_output_____
</code>
### 1.4 Summarise popular \*ome terms_____no_output_____
<code>
ome_counts = ome_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
ome_common_counts = ome_counts[ome_counts >= TERM_IN_AT_LEAST_N_ARTICLES]
ome_common_counts_____no_output_____ome_common_terms = Series(ome_common_counts.index)
ome_common_terms[ome_common_terms.str.endswith('some')]_____no_output_____
</code>
## 2. Lookup by omics and adjectives_____no_output_____
<code>
from omics import get_omics_regexp
omics_re = get_omics_regexp()
get_omics_regexp??_____no_output_____check_usage_in_input('integromics')_____no_output_____check_usage_in_input('meta-omics')_____no_output_____check_usage_in_input('post-genomic')_____no_output_____check_usage_in_input('3-omics')_____no_output_____multi_omic = {
'multi-omic',
'muti-omic',
'mutli-omic',
'multiomic',
'cross-omic',
'panomic',
'pan-omic',
'trans-omic',
'transomic',
'four-omic',
'multiple-omic',
'inter-omic',
'poly-omic',
'polyomic',
'integromic',
'integrated-omic',
'integrative-omic',
'3-omic'
}
tools = {
# MixOmics
'mixomic',
# MetaRbolomics
'metarbolomic',
# MinOmics
'minomic',
# LinkedOmics - TCGA portal
'linkedomic',
# Mergeomics - https://doi.org/10.1186/s12864-016-3198-9
'mergeomic'
}
vague = {
'single-omic'
}
adjectives = {
'economic',
'socio-economic',
'socioeconomic',
'taxonomic',
'syndromic',
'non-syndromic',
'agronomic',
'anatomic',
'autonomic',
'atomic',
'palindromic',
# temporal
'postgenomic',
'post-genomic'
}
not_an_omic = {
    'non-omic', # this one was straightforward :)
*adjectives,
*multi_omic,
*tools,
*vague
}_____no_output_____omic_occurrences = (
literature['title_abstract_text_subjects'].str.lower()
.str.extractall(omics_re)[0]
.to_frame('term').reset_index()
)
omic_occurrences = omic_occurrences[~omic_occurrences.term.isin(not_an_omic)]
omic_occurrences.head(2)_____no_output_____
</code>
### 2.1 Harmonise hyphenation_____no_output_____
<code>
hyphenation_rules = report_hyphenation_trends(omic_occurrences.term)
hyphenation_rules_____no_output_____omic_occurrences.term = harmonise_hyphenation(omic_occurrences.term, hyphenation_rules)_____no_output_____
</code>
### 2.2 Fix typos_____no_output_____
<code>
omic_counts = omic_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
potential_omic_typos = find_term_typos(omic_counts, TERM_IN_AT_LEAST_N_ARTICLES - 1)
potential_omic_typos_____no_output_____check_usage_in_input('non-omic')_____no_output_____check_usage_in_input('C-metabolomics')_____no_output_____
</code>
Not captured in the abstract text, but the full version has 13C (carbon-13), so this is a type of metabolomics._____no_output_____
<code>
check_usage_in_input('miRNAomics')_____no_output_____check_usage_in_input('miRomics')_____no_output_____check_usage_in_input('MinOmics')_____no_output_____check_usage_in_input('onomic', words=True)_____no_output_____literature.loc[omic_occurrences[omic_occurrences.term == 'onomic'].uid].title_abstract_text_subjects_____no_output_____check_usage_in_input(r'\bonomic', words=False, highlight=' onomic')_____no_output_____check_usage_in_input(' ionomic', words=False)_____no_output_____check_usage_in_input('integratomic', words=False)_____no_output_____
</code>
Note: integratomics has literally three hits in PubMed, two because of http://www.integratomics-time.com/_____no_output_____
<code>
is_typo_subset_or_variant = {
('phoshphoproteomic', 'phosphoproteomic'): True,
('transriptomic', 'transcriptomic'): True,
('transcripomic', 'transcriptomic'): True,
('transciptomic', 'transcriptomic'): True,
('trancriptomic', 'transcriptomic'): True,
('trascriptomic', 'transcriptomic'): True,
('metageonomic', 'metagenomic'): True,
('metaobolomic', 'metabolomic'): True,
('metabotranscriptomic', 'metatranscriptomic'): False,
('mirnaomic', 'mirnomic'): True,
('metranscriptomic', 'metatranscriptomic'): True,
('metranscriptomic', 'transcriptomic'): False,
('miromic', 'mirnomic'): True,
('n-glycoproteomic', 'glycoproteomic'): True,
('onomic', 'ionomic'): False,
('c-metabolomic', 'metabolomic'): True,
('integratomic', 'interactomic'): False,
('pharmacoepigenomic', 'pharmacogenomic'): False,
('metobolomic', 'metabolomic'): True,
# how to treat single-cell?
('scepigenomic', 'epigenomic'): True,
#('epitranscriptomic', 'transcriptomic'): False
('epigenomomic', 'epigenomic'): True,
}
omic_typos_map = create_typos_map(potential_omic_typos, is_typo_subset_or_variant)_____no_output_____replaced = omic_occurrences.term[omic_occurrences.term.isin(omic_typos_map)]
replaced.value_counts()_____no_output_____len(replaced)_____no_output_____omic_occurrences.term = omic_occurrences.term.replace(omic_typos_map)_____no_output_____
</code>
### 2.3 Popular *omic(s) terms:_____no_output_____
<code>
omic_counts = omic_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
omic_counts[omic_counts >= TERM_IN_AT_LEAST_N_ARTICLES].add_suffix('s')_____no_output_____
</code>
### Crude overview_____no_output_____
<code>
ome_terms = Series(ome_counts[ome_counts >= TERM_IN_AT_LEAST_N_ARTICLES].index)
omic_terms = Series(omic_counts[omic_counts >= TERM_IN_AT_LEAST_N_ARTICLES].index)_____no_output_____assert omics_features.index.name == 'uid'
for term in ome_terms:
mentioned_by_uid = set(ome_occurrences[ome_occurrences.term == term].uid)
omics_features['mentions_' + term] = omics_features.index.isin(mentioned_by_uid)
for term in omic_terms:
mentioned_by_uid = set(omic_occurrences[omic_occurrences.term == term].uid)
omics_features['mentions_' + term] = omics_features.index.isin(mentioned_by_uid)_____no_output_____from helpers.text_processing import prefix_remover
ome_terms_mentioned = omics_features['mentions_' + ome_terms].rename(columns=prefix_remover('mentions_'))
omic_terms_mentioned = omics_features['mentions_' + omic_terms].rename(columns=prefix_remover('mentions_'))_____no_output_____%R library(ComplexUpset);_____no_output_____%%R -i ome_terms_mentioned -w 800 -r 100
upset(ome_terms_mentioned, colnames(ome_terms_mentioned), min_size=10, width_ratio=0.1)[1] "Dropping 22 empty groups"
</code>
## Merge -ome and -omic terms_____no_output_____
<code>
from warnings import warn
terms_associated_with_omic = {
omic + 's': [omic]
for omic in omic_terms
}
for ome in ome_terms:
assert ome.endswith('ome')
auto_generate_omic_term = ome[:-3] + 'omics'
omic = auto_generate_omic_term
if omic not in terms_associated_with_omic:
if omic in omic_counts.index:
warn(f'{omic} was removed at thresholding, but it is a frequent -ome!')
else:
print(f'Creating omic {omic}')
terms_associated_with_omic[omic] = []
terms_associated_with_omic[omic].append(ome)Creating omic whole-genomics
Creating omic exomics
Creating omic whole-exomics
Creating omic exposomics
Creating omic whole-transcriptomics
Creating omic translatomics
Creating omic regulomics
Creating omic immunomics
Creating omic degradomics
Creating omic pan-genomics
Creating omic kinomics
Creating omic mycobiomics
from omics import add_entities_to_features
add_entities_to_omic_features = partial(
add_entities_to_features,
features=omics_features,
omics_terms=terms_associated_with_omic
)_____no_output_____omics = {k: [k] for k in terms_associated_with_omic}
add_entities_to_omic_features(omics, entity_type='ome_or_omic')_____no_output_____from omics import omics_by_entity, omics_by_entity_group_____no_output_____
</code>
interactomics is a proper "omics", but by definition it is difficult to assign to a single entity_____no_output_____
<code>
check_usage_in_input('interactomics')_____no_output_____
</code>
phylogenomics is not an omic on its own, but when used in the context of metagenomics it can refer to actual omics data_____no_output_____
<code>
check_usage_in_input('phylogenomics')_____no_output_____
</code>
regulomics is at once the name of a tool, of a group (@MIM UW), and of an omics:_____no_output_____
<code>
check_usage_in_input('regulomics')_____no_output_____from functools import reduce
omics_mapped_to_entities = reduce(set.union, omics_by_entity.values())
set(terms_associated_with_omic) - omics_mapped_to_entities_____no_output_____assert omics_mapped_to_entities - set(terms_associated_with_omic) == set()_____no_output_____omics_mapped_to_entities_groups = reduce(set.union, omics_by_entity_group.values())
set(terms_associated_with_omic) - omics_mapped_to_entities_groups_____no_output_____add_entities_to_omic_features(omics_by_entity, entity_type='entity')_____no_output_____add_entities_to_omic_features(omics_by_entity_group, entity_type='entity_group')_____no_output_____
</code>
### Visualize the entities & entity groups_____no_output_____
<code>
omic_entities = omics_features['entity_' + Series(list(omics_by_entity.keys()))].rename(columns=prefix_remover('entity_'))
omic_entities_groups = omics_features['entity_group_' + Series(list(omics_by_entity_group.keys()))].rename(columns=prefix_remover('entity_group_'))_____no_output_____%%R -i omic_entities -w 800 -r 100
upset(omic_entities, colnames(omic_entities), min_size=10, width_ratio=0.1)_____no_output_____%%R -i omic_entities_groups -w 800 -r 100
upset(omic_entities_groups, colnames(omic_entities_groups), min_size=10, width_ratio=0.1)_____no_output_____
</code>
### Number of omics mentioned in abstract vs the multi-omic term used_____no_output_____
<code>
omes_or_omics_df = omics_features['ome_or_omic_' + Series(list(omics.keys()))].rename(columns=prefix_remover('ome_or_omic_'))_____no_output_____literature['omic_terms_detected'] = omes_or_omics_df.sum(axis=1)_____no_output_____lt = literature[['term', 'omic_terms_detected']]_____no_output_____literature.sort_values('omic_terms_detected', ascending=False)[['title', 'omic_terms_detected']].head(10)_____no_output_____%%R -i lt -w 800
(
ggplot(lt, aes(x=term, y=omic_terms_detected))
+ geom_violin(adjust=2)
+ geom_point()
+ theme_bw()
)_____no_output_____%vault store omics_features in pubmed_derived_data_____no_output_____
</code>
# Current limitations_____no_output_____## Patchy coverage_____no_output_____Currently I have only detected omic-describing terms in less than 70% of the abstracts:_____no_output_____
<code>
omic_entities.any(axis=1).mean()_____no_output_____
</code>
Potential solution: select a random sample of 50 articles, annotate manually, calculate sensitivity and specificity.
If any omic is consistently omitted, reconsider how the search terms are created._____no_output_____## Apostrophes_____no_output_____Are we missing out on \*'omic terms, such as the meta'omic used [here](https://doi.org/10.1053/j.gastro.2014.01.049)?_____no_output_____
<code>
check_usage_in_input(
r'\w+\'omic',
words=False,
highlight='\'omic'
)_____no_output_____
</code>
unlikely (but would be nice to get it in!)_____no_output_____## Fields of study_____no_output_____
<code>
'genetics', 'epigenetics'_____no_output_____
</code>
Some authors may prefer to say "we integrated genetic and proteomic data" rather than "genomic and proteomic"_____no_output_____
|
{
"repository": "NehaKoppikar/multi-omics-state-of-the-field",
"path": "notebooks/Omics_terms.ipynb",
"matched_keywords": [
"single-cell",
"multi-omics"
],
"stars": 10,
"size": 293154,
"hexsha": "cb607166c7306f6127801b3a9864a56f2dc28e75",
"max_line_length": 55183,
"avg_line_length": 82.9994337486,
"alphanum_fraction": 0.784222627
}
|
# Notebook from AndOleAnd/Capstone_N_A_P
Path: Notebooks/Full_pipeline.ipynb
<code>
import math
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
import h3 # h3 bins from uber_____no_output_____from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures_____no_output_____import sys
sys.path.append('../Scripts')
import capstone_functions as cf_____no_output_____
</code>
# Exploring Model Complexity vs Scores
### In this workbook we slowly add complexity to the partitioning model across a number of dimensions.
We use the predicted values for the first half (h1) of 2019 as the train values and the actual h1 2019 values as the test set.
Finally we submit to zindi to get a score against the actual h2 2019 accident data._____no_output_____## Baseline_model
Uses a simple grid based on quantiles to place ambulances around the city
Zindi score = 68.9760227569434_____no_output_____
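The placement logic itself lives inside `cf.full_pipeline` (imported from `capstone_functions`), so it is not shown in this notebook. Purely as an illustration of a quantile-based grid, and assuming hypothetical `lat`/`lon` columns (not taken from the project code), the idea is to drop ambulances on the crossings of the inner latitude/longitude quantiles of the historical accidents:_____no_output_____
<code>
import numpy as np
import pandas as pd

def quantile_grid_placement(accidents, n_rows=2, n_cols=3):
    """Illustrative sketch: place ambulances on a grid built from the inner
    latitude/longitude quantiles of historical accidents ('lat'/'lon' assumed)."""
    lat_q = np.quantile(accidents['lat'], np.linspace(0, 1, n_rows + 2)[1:-1])
    lon_q = np.quantile(accidents['lon'], np.linspace(0, 1, n_cols + 2)[1:-1])
    grid = [(lat, lon) for lat in lat_q for lon in lon_q]
    return pd.DataFrame(grid, columns=['lat', 'lon'])  # 2 x 3 = 6 placements_____no_output_____
</code>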
<code>
cf.full_pipeline(predict_period='2019_h1', frequency_cutoff=0, outlier_filter=0.00, test_period_date_start='2019-01-01', test_period_date_end='2020-07-01',
tw_cluster_strategy='baseline', placement_method='baseline', verbose=2)file created ../Inputs/predictions_for_clustering_c.csv
1 clusters created
using star grid for placement
1 placement sets created
Total size of test set: 1922
Total size of train set: 3227
Score on test set: 0.07503016990729658
Score on train set: 0.05597306216241996 (avg distance per accident)
20201217_prediction_0.0_baseline_baseline.csv saved in ../Outputs/
</code>
## Adding Complexity 1
Use the partitioning algorithm k_means to find optimal locations for ambulances that minimize the Euclidean distance between ambulances and accident points_____no_output_____
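The actual call is `cf.full_pipeline(..., placement_method='k_means', ...)` below; a minimal sketch of the idea with scikit-learn, again assuming `lat`/`lon` columns, is:_____no_output_____
<code>
from sklearn.cluster import KMeans

def k_means_placement(accidents, n_ambulances=6):
    """Illustrative sketch: cluster accident coordinates and use the cluster
    centres as ambulance positions ('lat'/'lon' columns assumed)."""
    km = KMeans(n_clusters=n_ambulances, random_state=42)
    km.fit(accidents[['lat', 'lon']])
    return km.cluster_centers_  # shape (n_ambulances, 2)_____no_output_____
</code>
Because k-means minimises squared distances, a handful of remote accidents can pull a centre away from the dense city centre, which is what the next two complexity steps address._____no_output_____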
<code>
cf.full_pipeline(predict_period='2019_h1', frequency_cutoff=0, outlier_filter=0.00, test_period_date_start='2019-01-01', test_period_date_end='2020-07-01',
tw_cluster_strategy='baseline', placement_method='k_means', verbose=2)file created ../Inputs/predictions_for_clustering_c.csv
1 clusters created
using k-means clustering
1 placement sets created
Total size of test set: 1922
Total size of train set: 3227
Score on test set: 0.05897960880522698
Score on train set: 0.05054601342218318 (avg distance per accident)
20201217_prediction_0.0_baseline_k_means.csv saved in ../Outputs/
</code>
## Adding Complexity 2
Choose a different algorithm that is less influenced by outliers; k_medoids picks a median point as the cluster center.
zindi score = 49.9372135333768_____no_output_____
<code>
cf.full_pipeline(predict_period='2019_h1', frequency_cutoff=0, outlier_filter=0.00, test_period_date_start='2019-01-01', test_period_date_end='2020-07-01',
tw_cluster_strategy='baseline', placement_method='k_medoids', verbose=2)file created ../Inputs/predictions_for_clustering_c.csv
1 clusters created
using k-medoids clustering
1 placement sets created
Total size of test set: 1922
Total size of train set: 3227
Score on test set: 0.05519941541897019
Score on train set: 0.040379031163907037 (avg distance per accident)
20201217_prediction_0.0_baseline_k_medoids.csv saved in ../Outputs/
</code>
## Adding Complexity 3
Filter outliers to reduce overfitting to rare events outside of the center of the city
zindi score = 44.4289573474198_____no_output_____
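The `outlier_filter` argument is handled inside `cf.full_pipeline`; one simple way to express the idea (drop the small fraction of accidents farthest from the overall centroid before fitting placements) is sketched below, again with assumed `lat`/`lon` columns and not the project's actual filter:_____no_output_____
<code>
def filter_outliers_sketch(accidents, outlier_filter=0.003):
    """Illustrative sketch: drop the most remote fraction of accidents (those
    farthest from the overall centroid) before fitting ambulance placements."""
    centre = accidents[['lat', 'lon']].mean()
    dist = ((accidents['lat'] - centre['lat'])**2 +
            (accidents['lon'] - centre['lon'])**2)**0.5
    return accidents[dist <= dist.quantile(1 - outlier_filter)]_____no_output_____
</code>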
<code>
cf.full_pipeline(predict_period='2019_h1', frequency_cutoff=0, outlier_filter=0.003, test_period_date_start='2019-01-01', test_period_date_end='2020-07-01',
tw_cluster_strategy='baseline', placement_method='k_means', verbose=2)file created ../Inputs/predictions_for_clustering_c.csv
1 clusters created
using k-means clustering
1 placement sets created
Total size of test set: 1922
Total size of train set: 3227
Score on test set: 0.05021612166332715
Score on train set: 0.03475499111996446 (avg distance per accident)
20201217_prediction_0.003_baseline_k_means.csv saved in ../Outputs/
</code>
## Adding Complexity 4
Using gradient descent to optimize placement by reducing a loss function that is the Euclidean distance between centroids and points.
zindi score = 56.49581082745_____no_output_____
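The pipeline's gradient-descent placement is parameterised by `lr`, `n_epochs` and `batch_size`; the sketch below is a simplified full-batch version of the same idea in PyTorch (minimise the mean distance from each accident to its nearest ambulance) and is not the project's implementation:_____no_output_____
<code>
import torch

def gradient_descent_placement(points, n_ambulances=6, lr=8e-3, n_epochs=50):
    """Illustrative full-batch sketch: move ambulance coordinates to minimise the
    mean Euclidean distance from each accident to its nearest ambulance.
    `points` is a float tensor of shape (N, 2)."""
    start = points[torch.randperm(len(points))[:n_ambulances]].clone()
    centroids = start.requires_grad_(True)
    optimizer = torch.optim.Adam([centroids], lr=lr)
    for _ in range(n_epochs):
        optimizer.zero_grad()
        dists = torch.cdist(points, centroids)   # (N, n_ambulances)
        loss = dists.min(dim=1).values.mean()    # distance to nearest ambulance
        loss.backward()
        optimizer.step()
    return centroids.detach()_____no_output_____
</code>
Unlike k-means, this optimises the distance-to-nearest-ambulance objective directly; judging by the scores below, it fits the training period more closely but generalises worse here._____no_output_____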
<code>
cf.full_pipeline(predict_period='2019_h1', frequency_cutoff=0, outlier_filter=0.003, test_period_date_start='2019-01-01', test_period_date_end='2020-07-01',
tw_cluster_strategy='baseline', placement_method='gradient_descent', verbose=2,
lr=8e-3, n_epochs=50, batch_size=2)file created ../Inputs/predictions_for_clustering_c.csv
1 clusters created
using gradient descent clustering
1 placement sets created
Total size of test set: 1922
Total size of train set: 3227
Score on test set: 0.06169501641428757
Score on train set: 0.029772715695369757 (avg distance per accident)
20201217_prediction_0.003_baseline_gradient_descent.csv saved in ../Outputs/
</code>
## Adding Complexity 5
Creating different placement sets for different time and day combinations
zindi score = 43.9846518426706_____no_output_____
<code>
cf.full_pipeline(predict_period='2019_h1', frequency_cutoff=0, outlier_filter=0.004, test_period_date_start='2019-01-01', test_period_date_end='2020-07-01',
tw_cluster_strategy='holiday_simple', placement_method='k_means', verbose=2)file created ../Inputs/predictions_for_clustering_c.csv
5 clusters created
using k-means clustering
5 placement sets created
Total size of test set: 1922
Total size of train set: 3227
Score on test set: 0.05199322505912626
Score on train set: 0.03587050084914429 (avg distance per accident)
20201217_prediction_0.004_holiday_simple_k_means.csv saved in ../Outputs/
</code>
|
{
"repository": "AndOleAnd/Capstone_N_A_P",
"path": "Notebooks/Full_pipeline.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 256753,
"hexsha": "cb61814c032dc5c26bfbcb0929a0567bdd017e58",
"max_line_length": 41540,
"avg_line_length": 703.4328767123,
"alphanum_fraction": 0.9494377865
}
|
# Notebook from Teichlab/NaiveDE
Path: Examples/Mouse Cell Atlas brain Astrocyte vs Microglia DE.ipynb
<code>
%pylab inline
import pandas as pd
import plotnine as p
p.theme_set(p.theme_classic())
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.right'] = FalsePopulating the interactive namespace from numpy and matplotlib
counts = pd.read_parquet('mca_brain_counts.parquet')_____no_output_____sample_info = pd.read_parquet('mca_brain_cell_info.parquet')_____no_output_____
</code>
### Differential expression
Now let us investigate how this count depth effect plays into a differential expression analysis. With all published large scale experiments cataloging cell types, it is getting increasingly easy to simply fetch some data and do quick comparisons. We will use data from the recent [single cell Mouse Cell Atlas][paper link]. To get something easy to compare, we use the samples called "Brain" and focus on the cells annotated as "Microglia" and "Astrocyte". Out of the ~400,000 cells in the study, these two cell types have 338 and 199 representative cells. On average they have about 700 total UMI counts each, so while the entire study is a pretty large scale, the individual cell types and cells are on a relatively small scale. The final table has 537 cells and 21,979 genes.
[paper link]: http://www.cell.com/cell/abstract/S0092-8674(18)30116-8_____no_output_____
<code>
sample_info['super_cell_type'].value_counts()_____no_output_____sub_samples = sample_info.query('super_cell_type in ["Microglia", "Astrocyte"]').copy()_____no_output_____sub_counts = counts.reindex(index=sub_samples.index)_____no_output_____sub_counts.shape_____no_output_____sub_samples['is_astrocyte'] = sub_samples['super_cell_type'] == 'Astrocyte'_____no_output_____import NaiveDE_____no_output_____sub_samples['total_count'] = sub_counts.sum(1)_____no_output_____figsize(11, 3)
sub_samples.total_count.hist(grid=False, fc='w', ec='k')_____no_output_____sub_samples.total_count.median(), sub_samples.total_count.mean()_____no_output_____print(sub_samples.head()) ClusterID Tissue Batch Cell Barcode \
Cell name
Brain_1.AAAACGCGAGTAGAATTA Brain_3 Brain Brain_1 AAAACGCGAGTAGAATTA
Brain_1.AAAACGGAGGAGATTTGC Brain_3 Brain Brain_1 AAAACGGAGGAGATTTGC
Brain_1.AAAACGGGCTGCGACACT Brain_2 Brain Brain_1 AAAACGGGCTGCGACACT
Brain_1.AAAACGGTGGTAGCTCAA Brain_3 Brain Brain_1 AAAACGGTGGTAGCTCAA
Brain_1.AAAACGGTTGCCATACAG Brain_3 Brain Brain_1 AAAACGGTTGCCATACAG
cell_type super_cell_type is_astrocyte \
Cell name
Brain_1.AAAACGCGAGTAGAATTA Astrocyte_Mfe8 high Astrocyte True
Brain_1.AAAACGGAGGAGATTTGC Astrocyte_Mfe8 high Astrocyte True
Brain_1.AAAACGGGCTGCGACACT Microglia Microglia False
Brain_1.AAAACGGTGGTAGCTCAA Astrocyte_Mfe8 high Astrocyte True
Brain_1.AAAACGGTTGCCATACAG Astrocyte_Mfe8 high Astrocyte True
total_count gene
Cell name
Brain_1.AAAACGCGAGTAGAATTA 1088.0 0
Brain_1.AAAACGGAGGAGATTTGC 967.0 0
Brain_1.AAAACGGGCTGCGACACT 543.0 0
Brain_1.AAAACGGTGGTAGCTCAA 679.0 0
Brain_1.AAAACGGTTGCCATACAG 957.0 0
</code>
In a differential expression test you simply include a covariate in the design matrix that informs the linear model about the different conditions you want to compare. Here we are comparing microglia and astrocytes._____no_output_____
<code>
%%time
lr_results = NaiveDE.lr_tests(sub_samples, np.log1p(sub_counts.T),
alt_model='C(is_astrocyte) + np.log(total_count) + 1',
null_model='np.log(total_count) + 1')CPU times: user 705 ms, sys: 136 ms, total: 841 ms
Wall time: 707 ms
lr_results.pval = lr_results.pval.clip_lower(lr_results.query('pval != 0')['pval'].min())
lr_results.qval = lr_results.qval.clip_lower(lr_results.query('qval != 0')['qval'].min())_____no_output_____print(lr_results.sort_values('pval').head()) Intercept C(is_astrocyte)[T.True] np.log(total_count) \
Atp1a2 -1.925596 1.840452 0.318532
Sparcl1 -1.008002 1.742278 0.179123
Tmsb4x -3.680027 -2.044908 0.948016
Hexb -2.165802 -2.032087 0.646263
Ctss -1.665139 -1.937761 0.553429
pval qval
Atp1a2 3.058918e-162 1.642639e-159
Sparcl1 3.548817e-158 1.905715e-155
Tmsb4x 2.742131e-153 1.472524e-150
Hexb 3.671724e-145 1.971716e-142
Ctss 8.167943e-144 4.386185e-141
example_genes = ['Apoe', 'Sparcl1', 'Tmsb4x', 'C1qa']
examples = lr_results.loc[example_genes]_____no_output_____img = \
p.qplot('C(is_astrocyte)[T.True]', '-np.log10(pval)', lr_results) \
+ p.annotate('text',
x=examples['C(is_astrocyte)[T.True]'] + 0.33,
y=-np.log10(examples['pval']),
label=examples.index) \
+ p.labs(title='Brain cell data')
img.save('4.png', verbose=False)
img_____no_output_____img = \
p.qplot('C(is_astrocyte)[T.True]', 'np.log(total_count)', lr_results) \
+ p.annotate('text',
x=examples['C(is_astrocyte)[T.True]'] + 0.33,
y=examples['np.log(total_count)'],
label=examples.index) \
+ p.labs(title='Brain cell data')
img.save('5.png', verbose=False)
img_____no_output_____print(lr_results.sort_values('C(is_astrocyte)[T.True]').head()) Intercept C(is_astrocyte)[T.True] np.log(total_count) \
Tmsb4x -3.680027 -2.044908 0.948016
Hexb -2.165802 -2.032087 0.646263
Ctss -1.665139 -1.937761 0.553429
C1qa -0.995722 -1.749257 0.423667
C1qc -2.215866 -1.619052 0.584999
pval qval
Tmsb4x 2.742131e-153 1.472524e-150
Hexb 3.671724e-145 1.971716e-142
Ctss 8.167943e-144 4.386185e-141
C1qa 1.826933e-136 9.810631e-134
C1qc 2.119271e-130 1.138049e-127
print(lr_results.sort_values('C(is_astrocyte)[T.True]').tail()) Intercept C(is_astrocyte)[T.True] np.log(total_count) \
Aldoc -2.687079 1.417820 0.435424
Clu -1.888573 1.539004 0.317413
Sparcl1 -1.008002 1.742278 0.179123
Atp1a2 -1.925596 1.840452 0.318532
Apoe -3.426031 1.907639 0.615229
pval qval
Aldoc 5.683797e-122 3.052199e-119
Clu 9.768731e-122 5.245808e-119
Sparcl1 3.548817e-158 1.905715e-155
Atp1a2 3.058918e-162 1.642639e-159
Apoe 1.250247e-123 6.713825e-121
</code>
Also in this case we can see that the count depth weights are deflated for lowly abundant genes._____no_output_____
<code>
img = \
p.qplot(sub_counts.sum(0).clip_lower(1), lr_results['np.log(total_count)'],
log='x') \
+ p.labs(x='Gene count across dataset', y='np.log(total_count)',
title='Brain cell data')
img.save('6.png', verbose=False)
img_____no_output_____xx = np.linspace(np.log(sub_samples.total_count.min()),
np.log(sub_samples.total_count.max()))
def linres(gene):
yy = \
lr_results.loc[gene, 'np.log(total_count)'] * xx \
+ lr_results.loc[gene, 'Intercept']
yy1 = np.exp(yy)
yy2 = np.exp(yy + lr_results.loc[gene, 'C(is_astrocyte)[T.True]'])
return yy1, yy2_____no_output_____
</code>
Similar to above, we can look at the relation between count depth and observed counts for a few genes, but we can also make sure to plot the stratification into the two cell types and how the regression models are predicting the counts._____no_output_____
<code>
figsize(11, 3)
ax = plt.gca()
for i, gene in enumerate(['Apoe', 'Sparcl1', 'Tmsb4x', 'C1qa']):
sub_samples['gene'] = counts[gene]
plt.subplot(1, 4, i + 1, sharey=ax)
if i == 0:
plt.ylabel('Counts + 1')
plt.loglog()
plt.scatter(sub_samples.loc[~sub_samples.is_astrocyte]['total_count'],
sub_samples.loc[~sub_samples.is_astrocyte]['gene'] + 1,
c='grey', marker='o', label='Microglia')
plt.scatter(sub_samples.loc[sub_samples.is_astrocyte]['total_count'],
sub_samples.loc[sub_samples.is_astrocyte]['gene'] + 1,
c='k', marker='x', label='Astrocyte')
yy1, yy2 = linres(gene)
plt.plot(np.exp(xx), yy1, c='w', lw=5)
plt.plot(np.exp(xx), yy1, c='r', lw=3, ls=':')
plt.plot(np.exp(xx), yy2, c='w', lw=5)
plt.plot(np.exp(xx), yy2, c='r', lw=3)
plt.title(gene)
plt.xlabel('Total counts')
plt.legend(scatterpoints=3);
plt.tight_layout()
plt.savefig('7.png', bbox_inches='tight')_____no_output_____
</code>
Again we can see the overall abundance is related to the slope of the lines. Another thing which seems to pop out in these plots is an interaction between cell type and slope. For example, looking at C1qa, the slope for the microglia seems underestimated. This makes sense if this is an effect of count noise at low abundances.
My takeaway from this is that OLS regression might be OK if counts are large, but at lower levels model parameters are not estimated correctly due to the count nature of the data.
Notebooks of the analysis in this post are available [here](https://github.com/vals/Blog/tree/master/180226-count-offsets)._____no_output_____
|
{
"repository": "Teichlab/NaiveDE",
"path": "Examples/Mouse Cell Atlas brain Astrocyte vs Microglia DE.ipynb",
"matched_keywords": [
"differential expression"
],
"stars": 5,
"size": 254883,
"hexsha": "cb61997bd33617c9cdf57e2f3386e7d583dc86fc",
"max_line_length": 65416,
"avg_line_length": 387.9497716895,
"alphanum_fraction": 0.9327534594
}
|
# Notebook from jradavenport/IU-Aur
Path: known_systems.ipynb
Let's go through the known systems in [Table 1](https://www.aanda.org/articles/aa/full_html/2018/01/aa30655-17/T1.html) of Jurysek+(2018)_____no_output_____
<code>
# 11 systems listed in their Table 1
systems = ['RW Per', 'IU Aur', 'AH Cep', 'AY Mus',
'SV Gem', 'V669 Cyg', 'V685 Cen',
'V907 Sco', 'SS Lac', 'QX Cas', 'HS Hya']
P_EB = [13.1989, 1.81147, 1.7747, 3.2055, 4.0061, 1.5515,
1.19096, 3.77628, 14.4161, 6.004709, 1.568024]
_____no_output_____
</code>
I already know about some...
- [HS Hya](https://github.com/jradavenport/HS-Hya) (yes, the final eclipses!)
- [IU Aur](IU_Aur.ipynb) (yes, still eclipsing)
- [QX Cas](https://github.com/jradavenport/QX-Cas) (yes, but not eclipsing, though new eclipses present...)
- V907 Sco (yes, not sure if eclipsing still)
1. Go through each system. Check [MAST](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html) (has 2-min data); data could be pulled with [lightkurve](https://docs.lightkurve.org/tutorials/) (a minimal sketch follows after this list),
2. if not check for general coverage with the [Web Viewing Tool](https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py)
3. and try to generate a 30-min lightcurve from pixel-level data with [Eleanor](https://adina.feinste.in/eleanor/getting_started/tutorial.html)
4. For every system w/ TESS data, make some basic light curves. Is eclipse still there? Is there rotation?
5. For each, find best paper(s) that characterize the system. Start w/ references in Table 1_____no_output_____
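A minimal sketch of step 1 with lightkurve (assuming a lightkurve >= 2.x style API; this notebook itself uses eleanor below):_____no_output_____
<code>
import lightkurve as lk

# Search MAST for TESS light curves of one system and phase-fold on its known period
search = lk.search_lightcurve('IU Aur', mission='TESS')
if len(search) > 0:
    lc = search.download().remove_nans().normalize()
    lc.fold(period=1.81147).scatter()_____no_output_____
</code>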
<code>
from IPython.display import Image
import warnings
warnings.filterwarnings('ignore')_____no_output_____import eleanor
import numpy as np
from astropy import units as u
import matplotlib.pyplot as plt
from astropy.coordinates import SkyCoord_____no_output_____import matplotlib
matplotlib.rcParams.update({'font.size':18})
matplotlib.rcParams.update({'font.family':'serif'})_____no_output_____for k in range(len(systems)):
try:
star = eleanor.Source(name=systems[k])
print(star.name, star.tic, star.gaia, star.tess_mag)
data = eleanor.TargetData(star)
q = (data.quality == 0)
plt.figure()
plt.plot(data.time[q], data.raw_flux[q]/np.nanmedian(data.raw_flux[q]), 'k')
# plt.plot(data.time[q], data.corr_flux[q]/np.nanmedian(data.corr_flux[q]) + 0.03, 'r')
plt.ylabel('Normalized Flux')
plt.xlabel('Time [BJD - 2457000]')
plt.title(star.name)
plt.show()
plt.figure()
plt.scatter((data.time[q] % P_EB[k])/P_EB[k], data.raw_flux[q]/np.nanmedian(data.raw_flux[q]))
# plt.plot(data.time[q], data.corr_flux[q]/np.nanmedian(data.corr_flux[q]) + 0.03, 'r')
plt.ylabel('Normalized Flux')
plt.xlabel('Phase (P='+str(P_EB[k])+')')
plt.title(star.name)
plt.show()
except:
print('Sorry '+systems[k])No eleanor postcard has been made for your target (yet). Using TessCut instead.
RW Per 410193513 229136921858228096 9.03575
</code>
|
{
"repository": "jradavenport/IU-Aur",
"path": "known_systems.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 570071,
"hexsha": "cb62e8d67c4e1f50bf0c73f5e5cd4302aada292c",
"max_line_length": 42904,
"avg_line_length": 1057.6456400742,
"alphanum_fraction": 0.9554441464
}
|
# Notebook from t-hdd/econ126
Path: Discussion Notebooks/Econ126_Discussion_Week_02_blank.ipynb
<code>
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline_____no_output_____
</code>
# Discussion: Week 2_____no_output_____
<code>
# Import the NumPy module
_____no_output_____
</code>
## Exercise: Capital Evolution in the Solow Model
Suppose that capital per worker $k_t$ evolves according to the following equation:
\begin{align}
k_{t+1} & = 0.12 \cdot 100 \cdot k_t^{1/3} + 0.9\cdot k_t, \tag{1}
\end{align}
where the first term on the right-hand side implies that the economy has a 12 percent savings rate, that total factor productivity equals 100, and that there is no growth in technology (or "labor efficiency"). The second term implies that the rate of capital depreciation is 10 percent (i.e., $1-\delta = 0.9 \Rightarrow \delta = 0.1$). Assume that capital per worker in the initial period $k_0$ is given.
The *steady state* quantity of capital per worker is the number $k^*$ such that if $k_t = k^*$, $k_{t+1} = k^*$. Find $k^*$ by dropping the time subscripts in equation (1) and solving for $k$. Obtain:
\begin{align}
k^* & = \left(\frac{0.12\cdot 100}{0.1}\right)^{3/2} = 1{,}314.53414 \tag{2}
\end{align}_____no_output_____### Part (a): Simulate 100 Periods_____no_output_____
<code>
# Create a variable called 'k0' that stores the initial quantity of capital in the economy. Set 'k0' to 400
# Create a variable called 'T' equal to the number of periods after 0 to simulate. Set T = 100
# Use the function np.zeros to create a variable called 'capital' equal to an array of zeros of length T+1
# Print the value of 'capital'
_____no_output_____# Set the first element of 'capital' to the value in k0
# Print the value of 'capital'
_____no_output_____# Use a for loop to iterate over the additional elemnts of the 'capital' array that need to be computed.
# Hint: capital has length T+1. The first value is filled, so you need fill the remaining T values.
# Print the value of 'capital'
_____no_output_____# Print the value of the last element of 'capital'
_____no_output_____# Plot the simulated capital per worker
_____no_output_____
</code>
### Part (b): Simulate 1,000 Periods_____no_output_____
<code>
# Create a variable called 'T' equal to the number of periods after 0 to simulate. Set T = 1000
# Use the function np.zeros to create a variable called 'capital' equal to an array of zeros of length T+1
# Set the first element of 'capital' to the value in k0
# Use a for loop to iterate over the additional elements of the 'capital' array that need to be computed.
# Print the value of the last element of 'capital'
_____no_output_____
</code>
### Part (c): Evaluation
Provide answers to the following questions in the next cell.
**Question**
1. Why is the final value of capital computed in Part (b) closer to the true steady state than the value computed in Part (a)?_____no_output_____**Answer**
1. _____no_output_____
|
{
"repository": "t-hdd/econ126",
"path": "Discussion Notebooks/Econ126_Discussion_Week_02_blank.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 6900,
"hexsha": "cb63494338e5d63f2002a6904c0ee91af1706123",
"max_line_length": 431,
"avg_line_length": 35.0253807107,
"alphanum_fraction": 0.4034782609
}
|
# Notebook from Study-Repos-Forks/MadeWithML
Path: notebooks/15_Transformers.ipynb
<div align="center">
<h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png"> <a href="https://madewithml.com/">Made With ML</a></h1>
Applied ML · MLOps · Production
<br>
Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML.
<br>
</div>
<br>
<div align="center">
<a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>
<a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>
<a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>
<a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a>
<br>
🔥 Among the <a href="https://github.com/topics/deep-learning" target="_blank">top ML</a> repositories on GitHub
</div>
<br>
<hr>_____no_output_____# Transformers
In this lesson we will learn how to implement the Transformer architecture to extract contextual embeddings for our text classification task._____no_output_____<div align="left">
<a target="_blank" href="https://madewithml.com/courses/foundations/transformers/"><img src="https://img.shields.io/badge/📖 Read-blog post-9cf"></a>
<a href="https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb" role="button"><img src="https://img.shields.io/static/v1?label=&message=View%20On%20GitHub&color=586069&logo=github&labelColor=2f363d"></a>
<a href="https://colab.research.google.com/github/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>_____no_output_____# Overview_____no_output_____Transformers are a very popular architecture that leverage and extend the concept of self-attention to create very useful representations of our input data for a downstream task.
- **advantages**:
- better representation for our input tokens via contextual embeddings where the token representation is based on the specific neighboring tokens using self-attention.
- sub-word tokens, as opposed to character tokens, since they can hold more meaningful representation for many of our keywords, prefixes, suffixes, etc.
- attend (in parallel) to all the tokens in our input, as opposed to being limited by filter spans (CNNs) or memory issues from sequential processing (RNNs).
- **disadvantages**:
- computationally intensive
- required large amounts of data (mitigated using pretrained models)_____no_output_____<div align="left">
<img src="https://madewithml.com/static/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>_____no_output_____# Set up_____no_output_____
<code>
!pip install transformers==3.0.2 -q_____no_output_____import numpy as np
import pandas as pd
import random
import torch
import torch.nn as nn_____no_output_____SEED = 1234_____no_output_____def set_seeds(seed=1234):
"""Set seeds for reproducibility."""
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed) # multi-GPU_____no_output_____# Set seeds for reproducibility
set_seeds(seed=SEED)_____no_output_____# Set device
cuda = True
device = torch.device("cuda" if (
torch.cuda.is_available() and cuda) else "cpu")
torch.set_default_tensor_type("torch.FloatTensor")
if device.type == "cuda":
torch.set_default_tensor_type("torch.cuda.FloatTensor")
print (device)cuda
</code>
## Load data_____no_output_____We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120K text samples from 4 unique classes (`Business`, `Sci/Tech`, `Sports`, `World`)_____no_output_____
<code>
import numpy as np
import pandas as pd
import re
import urllib_____no_output_____# Load data
url = "https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/datasets/news.csv"
df = pd.read_csv(url, header=0) # load
df = df.sample(frac=1).reset_index(drop=True) # shuffle
df.head()_____no_output_____# Reduce data size (too large to fit in Colab's limited memory)
df = df[:10000]
print (len(df))10000
</code>
## Preprocessing_____no_output_____We're going to clean up our input data first by doing operations such as lower text, removing stop (filler) words, filters using regular expressions, etc._____no_output_____
<code>
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
import re_____no_output_____nltk.download("stopwords")
STOPWORDS = stopwords.words("english")
print (STOPWORDS[:5])
porter = PorterStemmer()[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Unzipping corpora/stopwords.zip.
['i', 'me', 'my', 'myself', 'we']
def preprocess(text, stopwords=STOPWORDS):
"""Conditional preprocessing on our text unique to our task."""
# Lower
text = text.lower()
# Remove stopwords
pattern = re.compile(r'\b(' + r'|'.join(stopwords) + r')\b\s*')
text = pattern.sub('', text)
# Remove words in paranthesis
text = re.sub(r'\([^)]*\)', '', text)
# Spacing and filters
text = re.sub(r"([-;;.,!?<=>])", r" \1 ", text)
text = re.sub('[^A-Za-z0-9]+', ' ', text) # remove non alphanumeric chars
text = re.sub(' +', ' ', text) # remove multiple spaces
text = text.strip()
return text_____no_output_____# Sample
text = "Great week for the NYSE!"
preprocess(text=text)_____no_output_____# Apply to dataframe
preprocessed_df = df.copy()
preprocessed_df.title = preprocessed_df.title.apply(preprocess)
print (f"{df.title.values[0]}\n\n{preprocessed_df.title.values[0]}")Sharon Accepts Plan to Reduce Gaza Army Operation, Haaretz Says
sharon accepts plan reduce gaza army operation haaretz says
</code>
## Split data_____no_output_____
<code>
import collections
from sklearn.model_selection import train_test_split_____no_output_____TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15_____no_output_____def train_val_test_split(X, y, train_size):
"""Split dataset into data splits."""
X_train, X_, y_train, y_ = train_test_split(X, y, train_size=TRAIN_SIZE, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(X_, y_, train_size=0.5, stratify=y_)
return X_train, X_val, X_test, y_train, y_val, y_test_____no_output_____# Data
X = preprocessed_df["title"].values
y = preprocessed_df["category"].values_____no_output_____# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X=X, y=y, train_size=TRAIN_SIZE)
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"Sample point: {X_train[0]} → {y_train[0]}")X_train: (7000,), y_train: (7000,)
X_val: (1500,), y_val: (1500,)
X_test: (1500,), y_test: (1500,)
Sample point: lost flu paydays → Business
</code>
## Label encoder_____no_output_____
<code>
class LabelEncoder(object):
"""Label encoder for tag labels."""
def __init__(self, class_to_index={}):
self.class_to_index = class_to_index
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
def __len__(self):
return len(self.class_to_index)
def __str__(self):
return f"<LabelEncoder(num_classes={len(self)})>"
def fit(self, y):
classes = np.unique(y)
for i, class_ in enumerate(classes):
self.class_to_index[class_] = i
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
return self
def encode(self, y):
y_one_hot = np.zeros((len(y), len(self.class_to_index)), dtype=int)
for i, item in enumerate(y):
y_one_hot[i][self.class_to_index[item]] = 1
return y_one_hot
def decode(self, y):
classes = []
for i, item in enumerate(y):
index = np.where(item == 1)[0][0]
classes.append(self.index_to_class[index])
return classes
def save(self, fp):
with open(fp, "w") as fp:
contents = {'class_to_index': self.class_to_index}
json.dump(contents, fp, indent=4, sort_keys=False)
@classmethod
def load(cls, fp):
with open(fp, "r") as fp:
kwargs = json.load(fp=fp)
return cls(**kwargs)_____no_output_____# Encode
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
num_classes = len(label_encoder)
label_encoder.class_to_index_____no_output_____# Class weights
counts = np.bincount([label_encoder.class_to_index[class_] for class_ in y_train])
class_weights = {i: 1.0/count for i, count in enumerate(counts)}
print (f"counts: {counts}\nweights: {class_weights}")counts: [1746 1723 1725 1806]
weights: {0: 0.000572737686139748, 1: 0.0005803830528148578, 2: 0.0005797101449275362, 3: 0.0005537098560354374}
# Convert labels to tokens
print (f"y_train[0]: {y_train[0]}")
y_train = label_encoder.encode(y_train)
y_val = label_encoder.encode(y_val)
y_test = label_encoder.encode(y_test)
print (f"y_train[0]: {y_train[0]}")
print (f"decode([y_train[0]]): {label_encoder.decode([y_train[0]])}")y_train[0]: Business
y_train[0]: [1 0 0 0]
decode([y_train[0]]): ['Business']
</code>
## Tokenizer_____no_output_____We'll be using the [BertTokenizer](https://huggingface.co/transformers/model_doc/bert.html#berttokenizer) to tokenize our input text in to sub-word tokens._____no_output_____
<code>
from transformers import DistilBertTokenizer
from transformers import BertTokenizer_____no_output_____# Load tokenizer and model
# tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
vocab_size = len(tokenizer)
print (vocab_size)_____no_output_____# Tokenize inputs
encoded_input = tokenizer(X_train.tolist(), return_tensors="pt", padding=True)
X_train_ids = encoded_input["input_ids"]
X_train_masks = encoded_input["attention_mask"]
print (X_train_ids.shape, X_train_masks.shape)
encoded_input = tokenizer(X_val.tolist(), return_tensors="pt", padding=True)
X_val_ids = encoded_input["input_ids"]
X_val_masks = encoded_input["attention_mask"]
print (X_val_ids.shape, X_val_masks.shape)
encoded_input = tokenizer(X_test.tolist(), return_tensors="pt", padding=True)
X_test_ids = encoded_input["input_ids"]
X_test_masks = encoded_input["attention_mask"]
print (X_test_ids.shape, X_test_masks.shape)torch.Size([7000, 27]) torch.Size([7000, 27])
torch.Size([1500, 21]) torch.Size([1500, 21])
torch.Size([1500, 26]) torch.Size([1500, 26])
# Decode
print (f"{X_train_ids[0]}\n{tokenizer.decode(X_train_ids[0])}")tensor([ 102, 6677, 1441, 3982, 17973, 103, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0])
[CLS] lost flu paydays [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
# Sub-word tokens
print (tokenizer.convert_ids_to_tokens(ids=X_train_ids[0]))['[CLS]', 'lost', 'flu', 'pay', '##days', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']
</code>
## Datasets_____no_output_____We're going to create Datasets and DataLoaders to be able to efficiently create batches with our data splits._____no_output_____
<code>
class TransformerTextDataset(torch.utils.data.Dataset):
def __init__(self, ids, masks, targets):
self.ids = ids
self.masks = masks
self.targets = targets
def __len__(self):
return len(self.targets)
def __str__(self):
return f"<Dataset(N={len(self)})>"
def __getitem__(self, index):
ids = torch.tensor(self.ids[index], dtype=torch.long)
masks = torch.tensor(self.masks[index], dtype=torch.long)
targets = torch.FloatTensor(self.targets[index])
return ids, masks, targets
def create_dataloader(self, batch_size, shuffle=False, drop_last=False):
return torch.utils.data.DataLoader(
dataset=self,
batch_size=batch_size,
shuffle=shuffle,
drop_last=drop_last,
pin_memory=False)_____no_output_____# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
f" Train dataset:{train_dataset.__str__()}\n"
f" Val dataset: {val_dataset.__str__()}\n"
f" Test dataset: {test_dataset.__str__()}\n"
"Sample point:\n"
f" ids: {train_dataset[0][0]}\n"
f" masks: {train_dataset[0][1]}\n"
f" targets: {train_dataset[0][2]}")Data splits:
Train dataset:<Dataset(N=7000)>
Val dataset: <Dataset(N=1500)>
Test dataset: <Dataset(N=1500)>
Sample point:
ids: tensor([ 102, 6677, 1441, 3982, 17973, 103, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0])
masks: tensor([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0])
targets: tensor([1., 0., 0., 0.], device="cpu")
# Create dataloaders
batch_size = 128
train_dataloader = train_dataset.create_dataloader(
batch_size=batch_size)
val_dataloader = val_dataset.create_dataloader(
batch_size=batch_size)
test_dataloader = test_dataset.create_dataloader(
batch_size=batch_size)
batch = next(iter(train_dataloader))
print ("Sample batch:\n"
f" ids: {batch[0].size()}\n"
f" masks: {batch[1].size()}\n"
f" targets: {batch[2].size()}")Sample batch:
ids: torch.Size([128, 27])
masks: torch.Size([128, 27])
targets: torch.Size([128, 4])
</code>
## Trainer_____no_output_____Let's create the `Trainer` class that we'll use to facilitate training for our experiments._____no_output_____
<code>
import torch.nn.functional as F_____no_output_____class Trainer(object):
def __init__(self, model, device, loss_fn=None, optimizer=None, scheduler=None):
# Set params
self.model = model
self.device = device
self.loss_fn = loss_fn
self.optimizer = optimizer
self.scheduler = scheduler
def train_step(self, dataloader):
"""Train step."""
# Set model to train mode
self.model.train()
loss = 0.0
# Iterate over train batches
for i, batch in enumerate(dataloader):
# Step
batch = [item.to(self.device) for item in batch] # Set device
inputs, targets = batch[:-1], batch[-1]
self.optimizer.zero_grad() # Reset gradients
z = self.model(inputs) # Forward pass
J = self.loss_fn(z, targets) # Define loss
J.backward() # Backward pass
self.optimizer.step() # Update weights
# Cumulative Metrics
loss += (J.detach().item() - loss) / (i + 1)
return loss
def eval_step(self, dataloader):
"""Validation or test step."""
# Set model to eval mode
self.model.eval()
loss = 0.0
y_trues, y_probs = [], []
# Iterate over val batches
with torch.inference_mode():
for i, batch in enumerate(dataloader):
# Step
batch = [item.to(self.device) for item in batch] # Set device
inputs, y_true = batch[:-1], batch[-1]
z = self.model(inputs) # Forward pass
J = self.loss_fn(z, y_true).item()
# Cumulative Metrics
loss += (J - loss) / (i + 1)
# Store outputs
y_prob = F.softmax(z).cpu().numpy()
y_probs.extend(y_prob)
y_trues.extend(y_true.cpu().numpy())
return loss, np.vstack(y_trues), np.vstack(y_probs)
def predict_step(self, dataloader):
"""Prediction step."""
# Set model to eval mode
self.model.eval()
y_probs = []
# Iterate over val batches
with torch.inference_mode():
for i, batch in enumerate(dataloader):
# Forward pass w/ inputs
inputs, targets = batch[:-1], batch[-1]
z = self.model(inputs)
# Store outputs
y_prob = F.softmax(z).cpu().numpy()
y_probs.extend(y_prob)
return np.vstack(y_probs)
def train(self, num_epochs, patience, train_dataloader, val_dataloader):
best_val_loss = np.inf
for epoch in range(num_epochs):
# Steps
train_loss = self.train_step(dataloader=train_dataloader)
val_loss, _, _ = self.eval_step(dataloader=val_dataloader)
self.scheduler.step(val_loss)
# Early stopping
if val_loss < best_val_loss:
best_val_loss = val_loss
best_model = self.model
_patience = patience # reset _patience
else:
_patience -= 1
if not _patience: # 0
print("Stopping early!")
break
# Logging
print(
f"Epoch: {epoch+1} | "
f"train_loss: {train_loss:.5f}, "
f"val_loss: {val_loss:.5f}, "
f"lr: {self.optimizer.param_groups[0]['lr']:.2E}, "
f"_patience: {_patience}"
)
return best_model_____no_output_____
</code>
# Transformer_____no_output_____## Scaled dot-product attention_____no_output_____The most popular type of self-attention is scaled dot-product attention from the widely-cited [Attention is all you need](https://arxiv.org/abs/1706.03762) paper. This type of attention involves projecting our encoded input sequences onto three matrices, queries (Q), keys (K) and values (V), whose weights we learn._____no_output_____$ inputs \in \mathbb{R}^{NXMXH} $ ($N$ = batch size, $M$ = sequence length, $H$ = hidden dim)
$ Q = XW_q $ where $ W_q \in \mathbb{R}^{HXd_q} $
$ K = XW_k $ where $ W_k \in \mathbb{R}^{HXd_k} $
$ V = XW_v $ where $ W_v \in \mathbb{R}^{HXd_v} $
$ attention (Q, K, V) = softmax( \frac{Q K^{T}}{\sqrt{d_k}} )V \in \mathbb{R}^{MXd_v} $_____no_output_____## Multi-head attention_____no_output_____Instead of applying self-attention only once across the entire encoded input, we can also separate the input and apply self-attention in parallel (heads) to each input section and concatenate them. This allows the different head to learn unique representations while maintaining the complexity since we split the input into smaller subspaces._____no_output_____$ MultiHead(Q, K, V) = concat({head}_1, ..., {head}_{h})W_O $
* ${head}_i = attention(Q_i, K_i, V_i) $
* $h$ = # of self-attention heads
* $W_O \in \mathbb{R}^{hd_vXH} $
* $H$ = hidden dim. (or dimension of the model $d_{model}$)
_____no_output_____## Positional encoding_____no_output_____With self-attention, we aren't able to account for the sequential position of our input tokens. To address this, we can use positional encoding to create a representation of the location of each token with respect to the entire sequence. This can either be learned (with weights) or we can use a fixed function that can better extend to create positional encoding for lengths during inference that were not observed during training._____no_output_____$ PE_{(pos,2i)} = sin({pos}/{10000^{2i/H}}) $
$ PE_{(pos,2i+1)} = cos({pos}/{10000^{2i/H}}) $
where:
* $pos$ = position of the token $(1...M)$
* $i$ = hidden dim $(1..H)$_____no_output_____This effectively allows us to represent each token's relative position using a fixed function for very large sequences. And because we've constrained the positional encodings to have the same dimensions as our encoded inputs, we can simply concatenate them before feeding them into the multi-head attention heads._____no_output_____## Architecture_____no_output_____And here's how it all fits together! It's an end-to-end architecture that creates these contextual representations and uses an encoder-decoder architecture to predict the outcomes (one-to-one, many-to-one, many-to-many, etc.) Due to the complexity of the architecture, they require massive amounts of data for training without overfitting, however, they can be leveraged as pretrained models to finetune with smaller datasets that are similar to the larger set it was initially trained on._____no_output_____<div align="left">
<img src="https://madewithml.com/static/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>_____no_output_____> We're not going to implement the Transformer [from scratch](https://nlp.seas.harvard.edu/2018/04/03/attention.html) but we will use the [Hugging Face library](https://github.com/huggingface/transformers) to load a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), which we'll use as a feature extractor and fine-tune on our own dataset._____no_output_____## Model_____no_output_____We're going to use a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) to act as a feature extractor. We'll only use the encoder to receive sequential and pooled outputs (`is_decoder=False` is default)._____no_output_____
<code>
from transformers import BertModel_____no_output_____# transformer = BertModel.from_pretrained("distilbert-base-uncased")
# embedding_dim = transformer.config.dim
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size_____no_output_____class Transformer(nn.Module):
def __init__(self, transformer, dropout_p, embedding_dim, num_classes):
super(Transformer, self).__init__()
self.transformer = transformer
self.dropout = torch.nn.Dropout(dropout_p)
self.fc1 = torch.nn.Linear(embedding_dim, num_classes)
def forward(self, inputs):
ids, masks = inputs
seq, pool = self.transformer(input_ids=ids, attention_mask=masks)
z = self.dropout(pool)
z = self.fc1(z)
return z_____no_output_____
</code>
> We decided to work with the pooled output, but we could have just as easily worked with the sequential output (encoder representation for each sub-token) and applied a CNN (or other decoder options) on top of it._____no_output_____
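Not part of the original notebook, but as an illustration of that alternative, here is a minimal sketch of a head over the sequential output using a 1D convolution; it reuses the `transformer` and `embedding_dim` objects from above, while `TransformerCNN` and `num_filters` are hypothetical names:_____no_output_____
<code>
import torch
import torch.nn as nn

class TransformerCNN(nn.Module):
    """Hypothetical variant: decode the sequential output with a Conv1d + max-pool head."""
    def __init__(self, transformer, dropout_p, embedding_dim, num_filters, num_classes):
        super(TransformerCNN, self).__init__()
        self.transformer = transformer
        self.conv = nn.Conv1d(in_channels=embedding_dim, out_channels=num_filters,
                              kernel_size=3, padding=1)
        self.dropout = nn.Dropout(dropout_p)
        self.fc1 = nn.Linear(num_filters, num_classes)

    def forward(self, inputs):
        ids, masks = inputs
        seq, pool = self.transformer(input_ids=ids, attention_mask=masks)  # seq: (batch, seq_len, embedding_dim)
        z = self.conv(seq.permute(0, 2, 1))     # (batch, num_filters, seq_len)
        z = torch.relu(z).max(dim=-1).values    # max-pool over the sequence dimension
        z = self.dropout(z)
        return self.fc1(z)_____no_output_____
</code>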
<code>
# Initialize model
dropout_p = 0.5
model = Transformer(
transformer=transformer, dropout_p=dropout_p,
embedding_dim=embedding_dim, num_classes=num_classes)
model = model.to(device)
print (model.named_parameters)<bound method Module.named_parameters of Transformer(
(transformer): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(31090, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(3): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(4): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(5): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(6): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(7): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(8): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(9): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(10): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(dropout): Dropout(p=0.5, inplace=False)
(fc1): Linear(in_features=768, out_features=4, bias=True)
)>
</code>
## Training_____no_output_____
<code>
# Arguments
lr = 1e-4
num_epochs = 100
patience = 10_____no_output_____# Define loss
class_weights_tensor = torch.Tensor(np.array(list(class_weights.values())))
loss_fn = nn.BCEWithLogitsLoss(weight=class_weights_tensor)_____no_output_____# Define optimizer & scheduler
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode="min", factor=0.1, patience=5)_____no_output_____# Trainer module
trainer = Trainer(
model=model, device=device, loss_fn=loss_fn,
optimizer=optimizer, scheduler=scheduler)_____no_output_____# Train
best_model = trainer.train(num_epochs, patience, train_dataloader, val_dataloader)/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:14: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:15: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
from ipykernel import kernelapp as app
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:55: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
</code>
## Evaluation_____no_output_____
<code>
import json
from sklearn.metrics import precision_recall_fscore_support_____no_output_____def get_performance(y_true, y_pred, classes):
"""Per-class performance metrics."""
# Performance
performance = {"overall": {}, "class": {}}
# Overall performance
metrics = precision_recall_fscore_support(y_true, y_pred, average="weighted")
performance["overall"]["precision"] = metrics[0]
performance["overall"]["recall"] = metrics[1]
performance["overall"]["f1"] = metrics[2]
performance["overall"]["num_samples"] = np.float64(len(y_true))
# Per-class performance
metrics = precision_recall_fscore_support(y_true, y_pred, average=None)
for i in range(len(classes)):
performance["class"][classes[i]] = {
"precision": metrics[0][i],
"recall": metrics[1][i],
"f1": metrics[2][i],
"num_samples": np.float64(metrics[3][i]),
}
return performance_____no_output_____# Get predictions
test_loss, y_true, y_prob = trainer.eval_step(dataloader=test_dataloader)
y_pred = np.argmax(y_prob, axis=1)/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:14: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:15: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
from ipykernel import kernelapp as app
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:55: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
# Determine performance
performance = get_performance(
y_true=np.argmax(y_true, axis=1), y_pred=y_pred, classes=label_encoder.classes)
print (json.dumps(performance["overall"], indent=2)){
"precision": 0.8085194951783808,
"recall": 0.8086666666666666,
"f1": 0.8083051845125695,
"num_samples": 1500.0
}
# Save artifacts
from pathlib import Path
dir = Path("transformers")
dir.mkdir(parents=True, exist_ok=True)
label_encoder.save(fp=Path(dir, "label_encoder.json"))
torch.save(best_model.state_dict(), Path(dir, "model.pt"))
with open(Path(dir, "performance.json"), "w") as fp:
json.dump(performance, indent=2, sort_keys=False, fp=fp)_____no_output_____
</code>
## Inference_____no_output_____
<code>
def get_probability_distribution(y_prob, classes):
"""Create a dict of class probabilities from an array."""
results = {}
for i, class_ in enumerate(classes):
results[class_] = np.float64(y_prob[i])
sorted_results = {k: v for k, v in sorted(
results.items(), key=lambda item: item[1], reverse=True)}
return sorted_results_____no_output_____# Load artifacts
device = torch.device("cpu")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
label_encoder = LabelEncoder.load(fp=Path(dir, "label_encoder.json"))
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size
model = Transformer(
transformer=transformer, dropout_p=dropout_p,
embedding_dim=embedding_dim, num_classes=num_classes)
model.load_state_dict(torch.load(Path(dir, "model.pt"), map_location=device))
model.to(device);_____no_output_____# Initialize trainer
trainer = Trainer(model=model, device=device)_____no_output_____# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
f" Train dataset:{train_dataset.__str__()}\n"
f" Val dataset: {val_dataset.__str__()}\n"
f" Test dataset: {test_dataset.__str__()}\n"
"Sample point:\n"
f" ids: {train_dataset[0][0]}\n"
f" masks: {train_dataset[0][1]}\n"
f" targets: {train_dataset[0][2]}")Data splits:
Train dataset:<Dataset(N=7000)>
Val dataset: <Dataset(N=1500)>
Test dataset: <Dataset(N=1500)>
Sample point:
ids: tensor([ 102, 6677, 1441, 3982, 17973, 103, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0])
masks: tensor([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0])
targets: tensor([1., 0., 0., 0.], device="cpu")
# Dataloader
text = "The final tennis tournament starts next week."
X = preprocess(text)
encoded_input = tokenizer(X, return_tensors="pt", padding=True).to(torch.device("cpu"))
ids = encoded_input["input_ids"]
masks = encoded_input["attention_mask"]
y_filler = label_encoder.encode([label_encoder.classes[0]]*len(ids))
dataset = TransformerTextDataset(ids=ids, masks=masks, targets=y_filler)
dataloader = dataset.create_dataloader(batch_size=int(batch_size))_____no_output_____# Inference
y_prob = trainer.predict_step(dataloader)
y_pred = np.argmax(y_prob, axis=1)
label_encoder.index_to_class[y_pred[0]]/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:14: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:15: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
from ipykernel import kernelapp as app
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:76: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
# Class distributions
prob_dist = get_probability_distribution(y_prob=y_prob[0], classes=label_encoder.classes)
print (json.dumps(prob_dist, indent=2)){
"Sports": 0.9999359846115112,
"World": 4.0660612285137177e-05,
"Sci/Tech": 1.1774928680097219e-05,
"Business": 1.1545793313416652e-05
}
</code>
## Interpretability_____no_output_____Let's visualize the self-attention weights from each of the attention heads in the encoder._____no_output_____
<code>
import sys
!rm -r bertviz_repo
!test -d bertviz_repo || git clone https://github.com/jessevig/bertviz bertviz_repo
if not "bertviz_repo" in sys.path:
sys.path += ["bertviz_repo"]rm: cannot remove 'bertviz_repo': No such file or directory
Cloning into 'bertviz_repo'...
remote: Enumerating objects: 1416, done.[K
remote: Counting objects: 100% (213/213), done.[K
remote: Compressing objects: 100% (142/142), done.[K
remote: Total 1416 (delta 137), reused 133 (delta 71), pack-reused 1203[K
Receiving objects: 100% (1416/1416), 213.85 MiB | 23.27 MiB/s, done.
Resolving deltas: 100% (900/900), done.
from bertviz import head_view_____no_output_____# Print input ids
print (ids)
print (tokenizer.batch_decode(ids))tensor([[ 102, 2531, 3617, 8869, 23589, 4972, 8553, 2205, 4082, 103]],
device="cpu")
['[CLS] final tennis tournament starts next week [SEP]']
# Get encoder attentions
seq, pool, attn = model.transformer(input_ids=ids, attention_mask=masks, output_attentions=True)
print (len(attn)) # 12 attention layers (one per encoder layer)
print (attn[0].shape)12
torch.Size([1, 12, 10, 10])
# HTML set up
def call_html():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
"d3": "https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.8/d3.min",
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
},
});
</script>
'''))_____no_output_____# Visualize self-attention weights
call_html()
tokens = tokenizer.convert_ids_to_tokens(ids[0])
head_view(attention=attn, tokens=tokens)_____no_output_____
</code>
> Now you're ready to start the [MLOps lessons](https://madewithml.com/#mlops) to learn how to apply all this foundational modeling knowledge to responsibly deliver value._____no_output_____
|
{
"repository": "Study-Repos-Forks/MadeWithML",
"path": "notebooks/15_Transformers.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 3599,
"size": 881493,
"hexsha": "cb6389b28745817f370b49d507dea27617ea4762",
"max_line_length": 633235,
"avg_line_length": 245.0633861551,
"alphanum_fraction": 0.8056524555
}
|
# Notebook from naderabdalghani/udacity-deep-learning-nanodegree
Path: sagemaker-deployment/Project/solution/SageMaker Project.ipynb
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app._____no_output_____## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011._____no_output_____
<code>
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../datamkdir: cannot create directory ‘../data’: File exists
--2020-09-10 12:02:29-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 21.3MB/s in 6.3s
2020-09-10 12:02:36 (12.7 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
</code>
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set._____no_output_____
<code>
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels_____no_output_____data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
</code>
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records._____no_output_____
<code>
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test_____no_output_____train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))IMDb reviews (combined): train = 25000, test = 25000
</code>
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly._____no_output_____
<code>
print(train_X[100])
print(len(train_X[100]))
print(train_y[100])We purchased this series on DVD because of all of the glowing reviews we had seen here. I gave it three stars because there can be little doubt that sometimes the acting, directing and writing are brilliant. In fact they are so brilliant we did not see the propaganda that was being transmitted so smoothly on the series. If one watches it with discernment, one will see the entire litany of the radical right wing beliefs being promulgated by the Fox (Faux) News Network. To avoid giving away any spoilers I will refrain from pointing out all of the dozens of specific instances. A brief look at the plots found here on IMDb will disclose that everything from torture to gun control to the right of a network to provide "Infomercials" and call them news is justified with cute plot twists and impassioned speeches given by some of the best actors in the world. We watched many shows and finally gave up in disgust when they justified torture using Attorney General Gonzales as a shining example of why all kinds of torture should be used in the name of protecting all of us. The series also manages to demean male and female gays in subtle ways by using them as plot devices depicting evil people. All in all the complete litany of the radical religious right wing.<br /><br />No doubt the popularity of this program will be used by future historians as proof that America lost its way in the early part of the this century. As a student of history myself I would characterize this program as being in a league with the propaganda produced by Goebbels for Hitler and some of the propaganda produced by Hollywood for the American audience during WWII.<br /><br />So if you want to use this as a teaching tool to help your students understand how subtle propaganda can be then by all means do so. Just be sure to purchase an inexpensive used copy so you can avoid enriching the ultra right wingers at Faux Network who produced this travesty.
1940
0
</code>
The first step in processing the reviews is to make sure that any HTML tags that appear are removed. In addition, we wish to tokenize our input so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis._____no_output_____
<code>
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Remove punctuation and convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer defined above
return words_____no_output_____
</code>
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set._____no_output_____
<code>
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])_____no_output_____
</code>
**Question:** Above we mentioned that the `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?_____no_output_____**Answer:** Besides removing HTML formatting and stemming words, it removes punctuation marks, converts all letters to lowercase, and removes English stopwords (e.g. "and", "the", "a")._____no_output_____The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition, it caches the results, because performing this processing step can take a long time. This way, if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time._____no_output_____
<code>
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test_____no_output_____# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)Read preprocessed data from cache file: preprocessed_data.pkl
</code>
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews._____no_output_____### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'._____no_output_____
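To make the special labels concrete, here is a tiny hypothetical example (not part of the original template) of how such a dictionary is used once it has been built: vocabulary words map to integers starting at `2`, any word outside the vocabulary falls back to the 'infrequent' label `1`, and `0` stays reserved for 'no word' padding._____no_output_____
<code>
# Toy illustration with made-up values: the real word_dict is built by build_dict() below
word_dict = {'movi': 2, 'film': 3, 'great': 4}         # most frequent words get labels starting at 2
review = ['movi', 'veri', 'rare', 'film']
encoded = [word_dict.get(word, 1) for word in review]  # out-of-vocabulary words map to 1
print(encoded)  # [2, 1, 1, 3]_____no_output_____
</code>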
<code>
import numpy as np
from collections import Counter
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
words = [j for i in data for j in i]
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
word_count = Counter(words)
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words = sorted(word_count, key=word_count.get, reverse=True)
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict_____no_output_____word_dict = build_dict(train_X)_____no_output_____
</code>
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set?_____no_output_____**Answer:** The five most frequently appearing words in the training set are: 'movi', 'film', 'one', 'like' and 'time'. Since the reviews are all about movies, this makes total sense._____no_output_____
<code>
# TODO: Use this space to determine the five most frequently appearing words in the training set.
print(word_dict){'movi': 2, 'film': 3, 'one': 4, 'like': 5, 'time': 6, 'good': 7, 'make': 8, 'charact': 9, 'get': 10, 'see': 11, 'watch': 12, 'stori': 13, 'even': 14, 'would': 15, 'realli': 16, 'well': 17, 'scene': 18, 'look': 19, 'show': 20, 'much': 21, 'end': 22, 'peopl': 23, 'bad': 24, 'go': 25, 'great': 26, 'also': 27, 'first': 28, 'love': 29, 'think': 30, 'way': 31, 'act': 32, 'play': 33, 'made': 34, 'thing': 35, 'could': 36, 'know': 37, 'say': 38, 'seem': 39, 'work': 40, 'plot': 41, 'two': 42, 'actor': 43, 'year': 44, 'come': 45, 'mani': 46, 'seen': 47, 'take': 48, 'life': 49, 'want': 50, 'never': 51, 'littl': 52, 'best': 53, 'tri': 54, 'man': 55, 'ever': 56, 'give': 57, 'better': 58, 'still': 59, 'perform': 60, 'find': 61, 'feel': 62, 'part': 63, 'back': 64, 'use': 65, 'someth': 66, 'director': 67, 'actual': 68, 'interest': 69, 'lot': 70, 'real': 71, 'old': 72, 'cast': 73, 'though': 74, 'live': 75, 'star': 76, 'enjoy': 77, 'guy': 78, 'anoth': 79, 'new': 80, 'role': 81, 'noth': 82, '10': 83, 'funni': 84, 'music': 85, 'point': 86, 'start': 87, 'set': 88, 'girl': 89, 'origin': 90, 'day': 91, 'world': 92, 'everi': 93, 'believ': 94, 'turn': 95, 'quit': 96, 'us': 97, 'direct': 98, 'thought': 99, 'fact': 100, 'minut': 101, 'horror': 102, 'kill': 103, 'action': 104, 'comedi': 105, 'pretti': 106, 'young': 107, 'wonder': 108, 'happen': 109, 'around': 110, 'got': 111, 'effect': 112, 'right': 113, 'long': 114, 'howev': 115, 'big': 116, 'line': 117, 'famili': 118, 'enough': 119, 'seri': 120, 'may': 121, 'need': 122, 'fan': 123, 'bit': 124, 'script': 125, 'beauti': 126, 'person': 127, 'becom': 128, 'without': 129, 'must': 130, 'alway': 131, 'friend': 132, 'tell': 133, 'reason': 134, 'saw': 135, 'last': 136, 'final': 137, 'kid': 138, 'almost': 139, 'put': 140, 'least': 141, 'sure': 142, 'done': 143, 'whole': 144, 'place': 145, 'complet': 146, 'kind': 147, 'differ': 148, 'expect': 149, 'shot': 150, 'far': 151, 'mean': 152, 'anyth': 153, 'book': 154, 'laugh': 155, 'might': 156, 'name': 157, 'sinc': 158, 'begin': 159, '2': 160, 'probabl': 161, 'woman': 162, 'help': 163, 'entertain': 164, 'let': 165, 'screen': 166, 'call': 167, 'tv': 168, 'moment': 169, 'away': 170, 'read': 171, 'yet': 172, 'rather': 173, 'worst': 174, 'run': 175, 'fun': 176, 'lead': 177, 'hard': 178, 'audienc': 179, 'idea': 180, 'anyon': 181, 'episod': 182, 'american': 183, 'found': 184, 'appear': 185, 'bore': 186, 'especi': 187, 'although': 188, 'hope': 189, 'keep': 190, 'cours': 191, 'anim': 192, 'job': 193, 'goe': 194, 'move': 195, 'sens': 196, 'version': 197, 'dvd': 198, 'war': 199, 'money': 200, 'someon': 201, 'mind': 202, 'mayb': 203, 'problem': 204, 'true': 205, 'hous': 206, 'everyth': 207, 'nice': 208, 'second': 209, 'rate': 210, 'three': 211, 'night': 212, 'follow': 213, 'face': 214, 'recommend': 215, 'product': 216, 'main': 217, 'worth': 218, 'leav': 219, 'human': 220, 'special': 221, 'excel': 222, 'togeth': 223, 'wast': 224, 'everyon': 225, 'sound': 226, 'john': 227, 'hand': 228, '1': 229, 'father': 230, 'later': 231, 'eye': 232, 'said': 233, 'view': 234, 'instead': 235, 'review': 236, 'boy': 237, 'high': 238, 'hour': 239, 'miss': 240, 'talk': 241, 'classic': 242, 'wife': 243, 'understand': 244, 'left': 245, 'care': 246, 'black': 247, 'death': 248, 'open': 249, 'murder': 250, 'write': 251, 'half': 252, 'head': 253, 'rememb': 254, 'chang': 255, 'viewer': 256, 'fight': 257, 'gener': 258, 'surpris': 259, 'includ': 260, 'short': 261, 'die': 262, 'fall': 263, 'less': 264, 'els': 265, 'entir': 266, 'piec': 267, 'involv': 268, 
'pictur': 269, 'simpli': 270, 'top': 271, 'power': 272, 'home': 273, 'total': 274, 'usual': 275, 'budget': 276, 'attempt': 277, 'suppos': 278, 'releas': 279, 'hollywood': 280, 'terribl': 281, 'song': 282, 'men': 283, 'possibl': 284, 'featur': 285, 'portray': 286, 'disappoint': 287, 'poor': 288, '3': 289, 'coupl': 290, 'stupid': 291, 'camera': 292, 'dead': 293, 'wrong': 294, 'produc': 295, 'low': 296, 'either': 297, 'video': 298, 'aw': 299, 'definit': 300, 'except': 301, 'rest': 302, 'given': 303, 'absolut': 304, 'women': 305, 'lack': 306, 'word': 307, 'writer': 308, 'titl': 309, 'talent': 310, 'decid': 311, 'full': 312, 'perfect': 313, 'along': 314, 'style': 315, 'close': 316, 'truli': 317, 'school': 318, 'emot': 319, 'save': 320, 'sex': 321, 'age': 322, 'next': 323, 'bring': 324, 'mr': 325, 'case': 326, 'killer': 327, 'heart': 328, 'comment': 329, 'sort': 330, 'creat': 331, 'perhap': 332, 'came': 333, 'brother': 334, 'sever': 335, 'joke': 336, 'art': 337, 'dialogu': 338, 'game': 339, 'small': 340, 'base': 341, 'flick': 342, 'written': 343, 'sequenc': 344, 'meet': 345, 'earli': 346, 'often': 347, 'other': 348, 'mother': 349, 'develop': 350, 'humor': 351, 'actress': 352, 'consid': 353, 'dark': 354, 'guess': 355, 'amaz': 356, 'unfortun': 357, 'lost': 358, 'light': 359, 'exampl': 360, 'cinema': 361, 'drama': 362, 'ye': 363, 'white': 364, 'experi': 365, 'imagin': 366, 'mention': 367, 'stop': 368, 'natur': 369, 'forc': 370, 'manag': 371, 'felt': 372, 'cut': 373, 'present': 374, 'children': 375, 'fail': 376, 'son': 377, 'qualiti': 378, 'support': 379, 'car': 380, 'ask': 381, 'hit': 382, 'side': 383, 'voic': 384, 'extrem': 385, 'impress': 386, 'wors': 387, 'evil': 388, 'went': 389, 'stand': 390, 'certainli': 391, 'basic': 392, 'oh': 393, 'overal': 394, 'favorit': 395, 'horribl': 396, 'mysteri': 397, 'number': 398, 'type': 399, 'danc': 400, 'wait': 401, 'hero': 402, 'alreadi': 403, '5': 404, 'learn': 405, 'matter': 406, '4': 407, 'michael': 408, 'genr': 409, 'fine': 410, 'despit': 411, 'throughout': 412, 'walk': 413, 'success': 414, 'histori': 415, 'question': 416, 'zombi': 417, 'town': 418, 'realiz': 419, 'relationship': 420, 'child': 421, 'past': 422, 'daughter': 423, 'late': 424, 'b': 425, 'wish': 426, 'credit': 427, 'hate': 428, 'event': 429, 'theme': 430, 'touch': 431, 'citi': 432, 'today': 433, 'sometim': 434, 'behind': 435, 'god': 436, 'twist': 437, 'sit': 438, 'deal': 439, 'annoy': 440, 'stay': 441, 'abl': 442, 'rent': 443, 'pleas': 444, 'edit': 445, 'blood': 446, 'deserv': 447, 'comic': 448, 'anyway': 449, 'appar': 450, 'soon': 451, 'gave': 452, 'etc': 453, 'level': 454, 'slow': 455, 'chanc': 456, 'score': 457, 'bodi': 458, 'brilliant': 459, 'incred': 460, 'figur': 461, 'situat': 462, 'self': 463, 'major': 464, 'stuff': 465, 'decent': 466, 'element': 467, 'dream': 468, 'return': 469, 'obvious': 470, 'continu': 471, 'order': 472, 'pace': 473, 'ridicul': 474, 'happi': 475, 'group': 476, 'add': 477, 'highli': 478, 'thank': 479, 'ladi': 480, 'novel': 481, 'pain': 482, 'speak': 483, 'career': 484, 'shoot': 485, 'strang': 486, 'heard': 487, 'sad': 488, 'polic': 489, 'husband': 490, 'import': 491, 'break': 492, 'took': 493, 'cannot': 494, 'strong': 495, 'robert': 496, 'predict': 497, 'violenc': 498, 'hilari': 499, 'recent': 500, 'countri': 501, 'known': 502, 'particularli': 503, 'pick': 504, 'documentari': 505, 'season': 506, 'critic': 507, 'jame': 508, 'compar': 509, 'alon': 510, 'obviou': 511, 'told': 512, 'state': 513, 'visual': 514, 'rock': 515, 'offer': 516, 'exist': 517, 'theater': 518, 
'opinion': 519, 'gore': 520, 'crap': 521, 'hold': 522, 'result': 523, 'room': 524, 'realiti': 525, 'hear': 526, 'effort': 527, 'clich': 528, 'thriller': 529, 'caus': 530, 'sequel': 531, 'explain': 532, 'serious': 533, 'king': 534, 'local': 535, 'ago': 536, 'hell': 537, 'none': 538, 'note': 539, 'allow': 540, 'david': 541, 'sister': 542, 'simpl': 543, 'femal': 544, 'deliv': 545, 'ok': 546, 'class': 547, 'convinc': 548, 'check': 549, 'suspens': 550, 'win': 551, 'buy': 552, 'oscar': 553, 'huge': 554, 'valu': 555, 'sexual': 556, 'scari': 557, 'cool': 558, 'similar': 559, 'excit': 560, 'provid': 561, 'apart': 562, 'exactli': 563, 'shown': 564, 'avoid': 565, 'seriou': 566, 'english': 567, 'taken': 568, 'whose': 569, 'cinematographi': 570, 'shock': 571, 'polit': 572, 'spoiler': 573, 'offic': 574, 'across': 575, 'middl': 576, 'street': 577, 'pass': 578, 'messag': 579, 'somewhat': 580, 'silli': 581, 'charm': 582, 'modern': 583, 'filmmak': 584, 'confus': 585, 'form': 586, 'tale': 587, 'singl': 588, 'jack': 589, 'mostli': 590, 'attent': 591, 'william': 592, 'carri': 593, 'sing': 594, 'subject': 595, 'five': 596, 'richard': 597, 'prove': 598, 'stage': 599, 'team': 600, 'unlik': 601, 'cop': 602, 'georg': 603, 'monster': 604, 'televis': 605, 'earth': 606, 'cover': 607, 'villain': 608, 'pay': 609, 'marri': 610, 'toward': 611, 'build': 612, 'pull': 613, 'parent': 614, 'due': 615, 'fill': 616, 'respect': 617, 'four': 618, 'dialog': 619, 'remind': 620, 'futur': 621, 'weak': 622, 'typic': 623, '7': 624, 'cheap': 625, 'intellig': 626, 'atmospher': 627, 'british': 628, 'clearli': 629, '80': 630, 'non': 631, 'dog': 632, 'paul': 633, 'fast': 634, '8': 635, 'artist': 636, 'knew': 637, 'crime': 638, 'easili': 639, 'escap': 640, 'doubt': 641, 'adult': 642, 'detail': 643, 'date': 644, 'member': 645, 'fire': 646, 'romant': 647, 'drive': 648, 'gun': 649, 'straight': 650, 'fit': 651, 'beyond': 652, 'attack': 653, 'imag': 654, 'upon': 655, 'posit': 656, 'whether': 657, 'peter': 658, 'fantast': 659, 'aspect': 660, 'captur': 661, 'appreci': 662, 'ten': 663, 'plan': 664, 'discov': 665, 'remain': 666, 'near': 667, 'period': 668, 'realist': 669, 'air': 670, 'mark': 671, 'red': 672, 'dull': 673, 'adapt': 674, 'within': 675, 'lose': 676, 'spend': 677, 'color': 678, 'materi': 679, 'chase': 680, 'mari': 681, 'storylin': 682, 'forget': 683, 'bunch': 684, 'clear': 685, 'lee': 686, 'victim': 687, 'nearli': 688, 'box': 689, 'york': 690, 'inspir': 691, 'match': 692, 'mess': 693, 'finish': 694, 'standard': 695, 'easi': 696, 'truth': 697, 'suffer': 698, 'busi': 699, 'dramat': 700, 'bill': 701, 'space': 702, 'western': 703, 'e': 704, 'list': 705, 'battl': 706, 'notic': 707, 'de': 708, 'french': 709, 'ad': 710, '9': 711, 'tom': 712, 'larg': 713, 'among': 714, 'eventu': 715, 'accept': 716, 'train': 717, 'agre': 718, 'soundtrack': 719, 'spirit': 720, 'third': 721, 'teenag': 722, 'soldier': 723, 'adventur': 724, 'drug': 725, 'suggest': 726, 'sorri': 727, 'famou': 728, 'normal': 729, 'cri': 730, 'babi': 731, 'ultim': 732, 'troubl': 733, 'contain': 734, 'certain': 735, 'cultur': 736, 'romanc': 737, 'rare': 738, 'lame': 739, 'somehow': 740, 'mix': 741, 'disney': 742, 'gone': 743, 'cartoon': 744, 'student': 745, 'reveal': 746, 'fear': 747, 'kept': 748, 'suck': 749, 'attract': 750, 'appeal': 751, 'premis': 752, 'greatest': 753, 'secret': 754, 'design': 755, 'shame': 756, 'throw': 757, 'copi': 758, 'scare': 759, 'wit': 760, 'admit': 761, 'america': 762, 'relat': 763, 'brought': 764, 'particular': 765, 'screenplay': 766, 'whatev': 767, 'pure': 
768, '70': 769, 'averag': 770, 'harri': 771, 'master': 772, 'describ': 773, 'treat': 774, 'male': 775, '20': 776, 'fantasi': 777, 'issu': 778, 'warn': 779, 'inde': 780, 'forward': 781, 'background': 782, 'project': 783, 'free': 784, 'memor': 785, 'japanes': 786, 'poorli': 787, 'award': 788, 'locat': 789, 'amus': 790, 'potenti': 791, 'struggl': 792, 'magic': 793, 'weird': 794, 'societi': 795, 'okay': 796, 'doctor': 797, 'accent': 798, 'imdb': 799, 'hot': 800, 'water': 801, 'dr': 802, 'alien': 803, 'express': 804, '30': 805, 'odd': 806, 'crazi': 807, 'choic': 808, 'fiction': 809, 'studio': 810, 'becam': 811, 'control': 812, 'masterpiec': 813, 'difficult': 814, 'fli': 815, 'joe': 816, 'scream': 817, 'costum': 818, 'lover': 819, 'uniqu': 820, 'refer': 821, 'remak': 822, 'girlfriend': 823, 'vampir': 824, 'prison': 825, 'execut': 826, 'wear': 827, 'jump': 828, 'wood': 829, 'unless': 830, 'creepi': 831, 'cheesi': 832, 'superb': 833, 'otherwis': 834, 'parti': 835, 'ghost': 836, 'roll': 837, 'public': 838, 'mad': 839, 'depict': 840, 'earlier': 841, 'badli': 842, 'moral': 843, 'week': 844, 'jane': 845, 'fi': 846, 'dumb': 847, 'grow': 848, 'flaw': 849, 'sci': 850, 'deep': 851, 'maker': 852, 'cat': 853, 'footag': 854, 'connect': 855, 'older': 856, 'plenti': 857, 'bother': 858, 'outsid': 859, 'stick': 860, 'gay': 861, 'catch': 862, 'co': 863, 'plu': 864, 'popular': 865, 'equal': 866, 'social': 867, 'disturb': 868, 'quickli': 869, 'perfectli': 870, 'dress': 871, '90': 872, 'era': 873, 'mistak': 874, 'lie': 875, 'previou': 876, 'ride': 877, 'combin': 878, 'concept': 879, 'band': 880, 'surviv': 881, 'answer': 882, 'rich': 883, 'front': 884, 'christma': 885, 'sweet': 886, 'insid': 887, 'bare': 888, 'eat': 889, 'concern': 890, 'ben': 891, 'beat': 892, 'listen': 893, 'c': 894, 'serv': 895, 'term': 896, 'la': 897, 'german': 898, 'meant': 899, 'hardli': 900, 'stereotyp': 901, 'law': 902, 'innoc': 903, 'desper': 904, 'promis': 905, 'memori': 906, 'intent': 907, 'cute': 908, 'variou': 909, 'inform': 910, 'steal': 911, 'brain': 912, 'post': 913, 'tone': 914, 'island': 915, 'amount': 916, 'nuditi': 917, 'compani': 918, 'track': 919, 'claim': 920, 'store': 921, 'flat': 922, 'hair': 923, '50': 924, 'univers': 925, 'land': 926, 'kick': 927, 'fairli': 928, 'danger': 929, 'scott': 930, 'player': 931, 'plain': 932, 'step': 933, 'crew': 934, 'toni': 935, 'share': 936, 'tast': 937, 'centuri': 938, 'engag': 939, 'achiev': 940, 'cold': 941, 'travel': 942, 'record': 943, 'suit': 944, 'rip': 945, 'manner': 946, 'sadli': 947, 'wrote': 948, 'tension': 949, 'spot': 950, 'fascin': 951, 'intens': 952, 'familiar': 953, 'remark': 954, 'depth': 955, 'burn': 956, 'histor': 957, 'destroy': 958, 'sleep': 959, 'purpos': 960, 'languag': 961, 'ignor': 962, 'ruin': 963, 'delight': 964, 'italian': 965, 'unbeliev': 966, 'collect': 967, 'soul': 968, 'abil': 969, 'clever': 970, 'detect': 971, 'violent': 972, 'rape': 973, 'reach': 974, 'door': 975, 'scienc': 976, 'trash': 977, 'liter': 978, 'caught': 979, 'commun': 980, 'reveng': 981, 'creatur': 982, 'trip': 983, 'approach': 984, 'fashion': 985, 'intrigu': 986, 'skill': 987, 'paint': 988, 'introduc': 989, 'complex': 990, 'channel': 991, 'camp': 992, 'christian': 993, 'hole': 994, 'extra': 995, 'mental': 996, 'ann': 997, 'limit': 998, 'immedi': 999, '6': 1000, 'comput': 1001, 'million': 1002, 'slightli': 1003, 'mere': 1004, 'conclus': 1005, 'slasher': 1006, 'imposs': 1007, 'suddenli': 1008, 'neither': 1009, 'teen': 1010, 'crimin': 1011, 'nation': 1012, 'physic': 1013, 'spent': 1014, 'respons': 
1015, 'planet': 1016, 'fake': 1017, 'receiv': 1018, 'blue': 1019, 'sick': 1020, 'bizarr': 1021, 'embarrass': 1022, 'indian': 1023, 'ring': 1024, '15': 1025, 'pop': 1026, 'drop': 1027, 'drag': 1028, 'haunt': 1029, 'suspect': 1030, 'pointless': 1031, 'edg': 1032, 'search': 1033, 'handl': 1034, 'common': 1035, 'biggest': 1036, 'arriv': 1037, 'faith': 1038, 'hurt': 1039, 'technic': 1040, 'angel': 1041, 'genuin': 1042, 'dad': 1043, 'solid': 1044, 'f': 1045, 'awesom': 1046, 'focu': 1047, 'colleg': 1048, 'van': 1049, 'former': 1050, 'count': 1051, 'tear': 1052, 'heavi': 1053, 'wall': 1054, 'rais': 1055, 'visit': 1056, 'younger': 1057, 'laughabl': 1058, 'sign': 1059, 'excus': 1060, 'fair': 1061, 'cult': 1062, 'key': 1063, 'tough': 1064, 'motion': 1065, 'super': 1066, 'desir': 1067, 'addit': 1068, 'stun': 1069, 'exploit': 1070, 'cloth': 1071, 'smith': 1072, 'tortur': 1073, 'race': 1074, 'davi': 1075, 'cross': 1076, 'author': 1077, 'jim': 1078, 'minor': 1079, 'consist': 1080, 'compel': 1081, 'focus': 1082, 'chemistri': 1083, 'commit': 1084, 'pathet': 1085, 'park': 1086, 'obsess': 1087, 'tradit': 1088, 'frank': 1089, 'grade': 1090, 'asid': 1091, '60': 1092, 'brutal': 1093, 'steve': 1094, 'somewher': 1095, 'depress': 1096, 'rule': 1097, 'opportun': 1098, 'grant': 1099, 'u': 1100, 'explor': 1101, 'honest': 1102, 'besid': 1103, 'anti': 1104, 'dub': 1105, 'intend': 1106, 'trailer': 1107, 'bar': 1108, 'regard': 1109, 'west': 1110, 'longer': 1111, 'scientist': 1112, 'decad': 1113, 'judg': 1114, 'silent': 1115, 'armi': 1116, 'creativ': 1117, 'wild': 1118, 'g': 1119, 'south': 1120, 'stewart': 1121, 'draw': 1122, 'road': 1123, 'govern': 1124, 'ex': 1125, 'boss': 1126, 'practic': 1127, 'club': 1128, 'festiv': 1129, 'motiv': 1130, 'gang': 1131, 'surprisingli': 1132, 'redeem': 1133, 'green': 1134, 'page': 1135, 'london': 1136, 'machin': 1137, 'display': 1138, 'idiot': 1139, 'aliv': 1140, 'militari': 1141, 'thrill': 1142, 'repeat': 1143, 'nobodi': 1144, 'yeah': 1145, '100': 1146, 'folk': 1147, '40': 1148, 'garbag': 1149, 'journey': 1150, 'smile': 1151, 'ground': 1152, 'tire': 1153, 'mood': 1154, 'bought': 1155, 'cost': 1156, 'sam': 1157, 'stone': 1158, 'mouth': 1159, 'noir': 1160, 'terrif': 1161, 'agent': 1162, 'requir': 1163, 'utterli': 1164, 'sexi': 1165, 'honestli': 1166, 'area': 1167, 'report': 1168, 'geniu': 1169, 'enter': 1170, 'glad': 1171, 'humour': 1172, 'investig': 1173, 'serial': 1174, 'occasion': 1175, 'passion': 1176, 'narr': 1177, 'marriag': 1178, 'climax': 1179, 'studi': 1180, 'industri': 1181, 'ship': 1182, 'center': 1183, 'demon': 1184, 'charli': 1185, 'nowher': 1186, 'hors': 1187, 'bear': 1188, 'loos': 1189, 'wow': 1190, 'hang': 1191, 'graphic': 1192, 'admir': 1193, 'giant': 1194, 'send': 1195, 'damn': 1196, 'loud': 1197, 'profession': 1198, 'subtl': 1199, 'rel': 1200, 'nake': 1201, 'blow': 1202, 'bottom': 1203, 'insult': 1204, 'batman': 1205, 'kelli': 1206, 'r': 1207, 'doubl': 1208, 'boyfriend': 1209, 'initi': 1210, 'frame': 1211, 'gem': 1212, 'opera': 1213, 'affect': 1214, 'challeng': 1215, 'drawn': 1216, 'cinemat': 1217, 'church': 1218, 'evid': 1219, 'nightmar': 1220, 'j': 1221, 'seek': 1222, 'fulli': 1223, 'l': 1224, 'arm': 1225, 'conflict': 1226, 'essenti': 1227, 'wind': 1228, 'henri': 1229, 'christoph': 1230, 'grace': 1231, 'assum': 1232, 'narrat': 1233, 'witch': 1234, 'push': 1235, 'hunt': 1236, 'wise': 1237, 'chri': 1238, 'repres': 1239, 'month': 1240, 'nomin': 1241, 'avail': 1242, 'sceneri': 1243, 'affair': 1244, 'hide': 1245, 'smart': 1246, 'justic': 1247, 'thu': 1248, 'bond': 1249, 
'interview': 1250, 'flashback': 1251, 'outstand': 1252, 'constantli': 1253, 'presenc': 1254, 'satisfi': 1255, 'central': 1256, 'bed': 1257, 'iron': 1258, 'sell': 1259, 'content': 1260, 'everybodi': 1261, 'gag': 1262, 'slowli': 1263, 'hotel': 1264, 'hire': 1265, 'system': 1266, 'adam': 1267, 'individu': 1268, 'charl': 1269, 'thrown': 1270, 'hey': 1271, 'allen': 1272, 'mediocr': 1273, 'jone': 1274, 'lesson': 1275, 'billi': 1276, 'ray': 1277, 'cameo': 1278, 'photographi': 1279, 'fellow': 1280, 'pari': 1281, 'strike': 1282, 'rise': 1283, 'absurd': 1284, 'brief': 1285, 'independ': 1286, 'neg': 1287, 'impact': 1288, 'phone': 1289, 'model': 1290, 'born': 1291, 'ill': 1292, 'spoil': 1293, 'angl': 1294, 'fresh': 1295, 'likabl': 1296, 'abus': 1297, 'discuss': 1298, 'hill': 1299, 'ahead': 1300, 'sight': 1301, 'photograph': 1302, 'sent': 1303, 'logic': 1304, 'occur': 1305, 'blame': 1306, 'shine': 1307, 'mainli': 1308, 'bruce': 1309, 'forev': 1310, 'commerci': 1311, 'skip': 1312, 'held': 1313, 'surround': 1314, 'segment': 1315, 'teacher': 1316, 'blond': 1317, 'zero': 1318, 'trap': 1319, 'satir': 1320, 'summer': 1321, 'resembl': 1322, 'queen': 1323, 'six': 1324, 'ball': 1325, 'fool': 1326, 'twice': 1327, 'sub': 1328, 'tragedi': 1329, 'reaction': 1330, 'pack': 1331, 'bomb': 1332, 'will': 1333, 'protagonist': 1334, 'hospit': 1335, 'sport': 1336, 'mile': 1337, 'drink': 1338, 'trust': 1339, 'vote': 1340, 'mom': 1341, 'jerri': 1342, 'encount': 1343, 'plane': 1344, 'program': 1345, 'current': 1346, 'station': 1347, 'al': 1348, 'celebr': 1349, 'martin': 1350, 'choos': 1351, 'join': 1352, 'favourit': 1353, 'lord': 1354, 'tragic': 1355, 'round': 1356, 'field': 1357, 'robot': 1358, 'vision': 1359, 'jean': 1360, 'tie': 1361, 'arthur': 1362, 'fortun': 1363, 'random': 1364, 'roger': 1365, 'dread': 1366, 'psycholog': 1367, 'intern': 1368, 'epic': 1369, 'nonsens': 1370, 'prefer': 1371, 'improv': 1372, 'formula': 1373, 'pleasur': 1374, 'legend': 1375, 'highlight': 1376, '11': 1377, 'tape': 1378, 'dollar': 1379, 'porn': 1380, 'wide': 1381, 'object': 1382, 'fox': 1383, 'thin': 1384, 'gorgeou': 1385, 'ugli': 1386, 'buddi': 1387, 'influenc': 1388, 'prepar': 1389, 'nasti': 1390, 'ii': 1391, 'progress': 1392, 'supposedli': 1393, 'reflect': 1394, 'warm': 1395, 'youth': 1396, 'worthi': 1397, 'unusu': 1398, 'length': 1399, 'latter': 1400, 'crash': 1401, 'superior': 1402, 'shop': 1403, 'seven': 1404, 'childhood': 1405, 'theatr': 1406, 'remot': 1407, 'funniest': 1408, 'disgust': 1409, 'pilot': 1410, 'paid': 1411, 'trick': 1412, 'fell': 1413, 'convers': 1414, 'castl': 1415, 'rob': 1416, 'establish': 1417, 'disast': 1418, 'gangster': 1419, 'suicid': 1420, 'disappear': 1421, 'heaven': 1422, 'ident': 1423, 'mine': 1424, 'forgotten': 1425, 'singer': 1426, 'decis': 1427, 'mask': 1428, 'tend': 1429, 'heroin': 1430, 'brian': 1431, 'partner': 1432, 'desert': 1433, 'alan': 1434, 'recogn': 1435, 'p': 1436, 'ms': 1437, 'thoroughli': 1438, 'stuck': 1439, 'sky': 1440, 'replac': 1441, 'accur': 1442, 'market': 1443, 'commentari': 1444, 'seemingli': 1445, 'andi': 1446, 'uncl': 1447, 'clue': 1448, 'eddi': 1449, 'danni': 1450, 'devil': 1451, 'jackson': 1452, 'that': 1453, 'pair': 1454, 'refus': 1455, 'therefor': 1456, 'ed': 1457, 'unit': 1458, 'accid': 1459, 'fault': 1460, 'river': 1461, 'fate': 1462, 'tune': 1463, 'afraid': 1464, 'russian': 1465, 'hidden': 1466, 'clean': 1467, 'stephen': 1468, 'captain': 1469, 'convey': 1470, 'irrit': 1471, 'test': 1472, 'instanc': 1473, 'readi': 1474, 'quick': 1475, 'european': 1476, 'insan': 1477, 'daniel': 
1478, 'frustrat': 1479, '1950': 1480, 'food': 1481, 'rescu': 1482, 'chines': 1483, 'wed': 1484, 'dirti': 1485, 'angri': 1486, 'lock': 1487, 'joy': 1488, 'price': 1489, 'steven': 1490, 'bland': 1491, 'cage': 1492, 'anymor': 1493, 'rang': 1494, 'wooden': 1495, 'news': 1496, 'rush': 1497, 'jason': 1498, 'n': 1499, 'twenti': 1500, 'led': 1501, 'martial': 1502, 'board': 1503, '12': 1504, 'worri': 1505, 'transform': 1506, 'cgi': 1507, 'hunter': 1508, 'symbol': 1509, 'piti': 1510, 'onto': 1511, 'invent': 1512, 'x': 1513, 'sentiment': 1514, 'johnni': 1515, 'explan': 1516, 'process': 1517, 'attitud': 1518, 'awar': 1519, 'owner': 1520, 'aim': 1521, 'favor': 1522, 'energi': 1523, 'floor': 1524, 'target': 1525, 'necessari': 1526, 'religi': 1527, 'opposit': 1528, 'chick': 1529, 'insight': 1530, 'blind': 1531, 'window': 1532, 'movement': 1533, 'comparison': 1534, 'research': 1535, 'deepli': 1536, 'mountain': 1537, 'possess': 1538, 'grand': 1539, 'comed': 1540, 'whatsoev': 1541, 'rain': 1542, 'bank': 1543, 'mid': 1544, 'shadow': 1545, 'began': 1546, 'parodi': 1547, 'princ': 1548, 'weapon': 1549, 'credibl': 1550, 'taylor': 1551, 'friendship': 1552, 'pre': 1553, 'flesh': 1554, 'teach': 1555, 'dougla': 1556, 'bloodi': 1557, 'hint': 1558, 'protect': 1559, 'terror': 1560, 'marvel': 1561, 'leader': 1562, 'anybodi': 1563, 'superman': 1564, 'accord': 1565, 'load': 1566, 'watchabl': 1567, 'drunk': 1568, 'brown': 1569, 'freddi': 1570, 'hitler': 1571, 'tim': 1572, 'seat': 1573, 'jeff': 1574, 'appropri': 1575, 'villag': 1576, 'unknown': 1577, 'keaton': 1578, 'charg': 1579, 'knock': 1580, 'media': 1581, 'unnecessari': 1582, 'empti': 1583, 'england': 1584, 'enemi': 1585, 'strength': 1586, 'perspect': 1587, 'craft': 1588, 'utter': 1589, 'dare': 1590, 'buck': 1591, 'wave': 1592, 'nativ': 1593, 'ford': 1594, 'correct': 1595, 'kiss': 1596, 'contrast': 1597, 'nazi': 1598, 'chill': 1599, 'magnific': 1600, 'knowledg': 1601, 'distract': 1602, 'soap': 1603, 'speed': 1604, 'anywher': 1605, 'mission': 1606, 'fred': 1607, 'breath': 1608, '1980': 1609, 'ice': 1610, 'crowd': 1611, 'moon': 1612, 'joan': 1613, 'jr': 1614, 'frighten': 1615, 'soft': 1616, '000': 1617, 'kate': 1618, 'dan': 1619, 'nick': 1620, 'hundr': 1621, 'dick': 1622, 'somebodi': 1623, 'dozen': 1624, 'radio': 1625, 'simon': 1626, 'shakespear': 1627, 'thousand': 1628, 'loss': 1629, 'academi': 1630, 'andrew': 1631, 'account': 1632, 'root': 1633, 'quot': 1634, 'sum': 1635, 'vehicl': 1636, '1970': 1637, 'behavior': 1638, 'convent': 1639, 'leg': 1640, 'regular': 1641, 'gold': 1642, 'compet': 1643, 'demand': 1644, 'worker': 1645, 'pretenti': 1646, 'notabl': 1647, 'privat': 1648, 'stretch': 1649, 'lynch': 1650, 'candi': 1651, 'explos': 1652, 'japan': 1653, 'interpret': 1654, 'constant': 1655, 'debut': 1656, 'tarzan': 1657, 'prais': 1658, 'sea': 1659, 'translat': 1660, 'revolv': 1661, 'spi': 1662, 'failur': 1663, 'technolog': 1664, 'threaten': 1665, 'jesu': 1666, 'sat': 1667, 'ass': 1668, 'quiet': 1669, 'franc': 1670, 'toy': 1671, 'aid': 1672, 'punch': 1673, 'kevin': 1674, 'met': 1675, 'higher': 1676, 'interact': 1677, 'abandon': 1678, 'vh': 1679, 'mike': 1680, 'bet': 1681, 'command': 1682, 'separ': 1683, 'confront': 1684, 'site': 1685, 'servic': 1686, 'gotten': 1687, 'recal': 1688, 'techniqu': 1689, 'stunt': 1690, 'belong': 1691, 'cabl': 1692, 'foot': 1693, 'bug': 1694, 'freak': 1695, 'fu': 1696, 'african': 1697, 'bright': 1698, 'jimmi': 1699, 'capabl': 1700, 'stock': 1701, 'succeed': 1702, 'fat': 1703, 'presid': 1704, 'clark': 1705, 'boat': 1706, 'gene': 1707, 'spanish': 
1708, 'structur': 1709, 'paper': 1710, 'kidnap': 1711, 'factor': 1712, 'belief': 1713, 'whilst': 1714, 'educ': 1715, 'tree': 1716, 'witti': 1717, 'bob': 1718, 'complic': 1719, 'realis': 1720, 'attend': 1721, 'realism': 1722, 'finest': 1723, 'broken': 1724, 'assist': 1725, 'santa': 1726, 'smoke': 1727, 'v': 1728, 'determin': 1729, 'depart': 1730, 'up': 1731, 'observ': 1732, 'rubbish': 1733, 'fame': 1734, 'hat': 1735, 'domin': 1736, 'lewi': 1737, 'routin': 1738, 'oper': 1739, 'advanc': 1740, 'foreign': 1741, 'hook': 1742, 'morgan': 1743, 'kinda': 1744, 'safe': 1745, 'lone': 1746, 'numer': 1747, 'rank': 1748, 'shallow': 1749, 'vs': 1750, 'washington': 1751, 'shape': 1752, 'civil': 1753, 'rose': 1754, 'werewolf': 1755, 'morn': 1756, 'gari': 1757, 'accomplish': 1758, 'winner': 1759, 'ordinari': 1760, 'kong': 1761, 'virtual': 1762, 'peac': 1763, 'grab': 1764, 'whenev': 1765, 'offens': 1766, 'h': 1767, 'luck': 1768, 'bigger': 1769, 'complain': 1770, 'activ': 1771, 'patient': 1772, 'unfunni': 1773, 'contriv': 1774, 'welcom': 1775, 'trek': 1776, 'pretend': 1777, 'dimension': 1778, 'con': 1779, 'dri': 1780, 'lesbian': 1781, 'cain': 1782, 'wake': 1783, 'eric': 1784, 'flash': 1785, 'code': 1786, 'guard': 1787, 'statu': 1788, 'manipul': 1789, 'albert': 1790, 'dancer': 1791, 'corrupt': 1792, 'gain': 1793, 'signific': 1794, 'awkward': 1795, 'speech': 1796, 'context': 1797, 'sourc': 1798, 'clip': 1799, 'psycho': 1800, 'sean': 1801, '13': 1802, 'corni': 1803, 'anthoni': 1804, 'advic': 1805, 'priest': 1806, 'curiou': 1807, 'theatric': 1808, 'religion': 1809, 'w': 1810, 'reli': 1811, 'addict': 1812, 'flow': 1813, 'jennif': 1814, 'skin': 1815, 'asian': 1816, 'howard': 1817, 'specif': 1818, 'secur': 1819, 'core': 1820, 'organ': 1821, 'luke': 1822, 'golden': 1823, 'comfort': 1824, 'promot': 1825, 'cash': 1826, 'lucki': 1827, 'cheat': 1828, 'dislik': 1829, 'associ': 1830, 'lower': 1831, 'regret': 1832, 'devic': 1833, 'wing': 1834, 'degre': 1835, 'frankli': 1836, 'spell': 1837, 'frequent': 1838, 'balanc': 1839, 'contribut': 1840, 'forgiv': 1841, 'lake': 1842, 'sake': 1843, 'print': 1844, 'thoma': 1845, 'mass': 1846, 'betti': 1847, 'crack': 1848, 'unexpect': 1849, 'gordon': 1850, 'construct': 1851, 'unfold': 1852, 'grown': 1853, 'categori': 1854, 'depend': 1855, 'amateur': 1856, 'invit': 1857, 'walter': 1858, 'intellectu': 1859, 'condit': 1860, 'grew': 1861, 'honor': 1862, 'matur': 1863, 'anna': 1864, 'sole': 1865, 'veteran': 1866, 'spectacular': 1867, 'mirror': 1868, 'sudden': 1869, 'experienc': 1870, 'meanwhil': 1871, 'grip': 1872, 'freedom': 1873, 'overli': 1874, 'card': 1875, 'robin': 1876, 'gift': 1877, 'liner': 1878, 'demonstr': 1879, 'brilliantli': 1880, 'colour': 1881, 'theori': 1882, 'unabl': 1883, 'circumst': 1884, 'oliv': 1885, 'section': 1886, 'drew': 1887, 'subtitl': 1888, 'sheriff': 1889, 'crappi': 1890, 'cook': 1891, 'sheer': 1892, 'pile': 1893, 'laughter': 1894, 'matt': 1895, 'altern': 1896, 'path': 1897, 'parker': 1898, 'relief': 1899, 'lawyer': 1900, 'treatment': 1901, 'wander': 1902, 'hall': 1903, 'accident': 1904, 'defin': 1905, 'sinatra': 1906, 'captiv': 1907, 'hank': 1908, 'dragon': 1909, 'gratuit': 1910, 'moor': 1911, 'halloween': 1912, 'wound': 1913, 'unintent': 1914, 'kung': 1915, 'k': 1916, 'jacki': 1917, 'broadway': 1918, 'barbara': 1919, 'wayn': 1920, 'cowboy': 1921, 'spoof': 1922, 'statement': 1923, 'canadian': 1924, 'surreal': 1925, 'winter': 1926, 'compos': 1927, 'gonna': 1928, 'fish': 1929, 'cheer': 1930, 'treasur': 1931, 'fare': 1932, 'unrealist': 1933, 'sensit': 1934, 'emerg': 
1935, 'woodi': 1936, 'victor': 1937, 'ran': 1938, 'neighbor': 1939, 'sympathet': 1940, 'driven': 1941, 'authent': 1942, 'glass': 1943, 'topic': 1944, 'expos': 1945, 'overlook': 1946, 'menac': 1947, 'handsom': 1948, 'gross': 1949, 'michel': 1950, 'chief': 1951, 'ancient': 1952, 'feet': 1953, 'comedian': 1954, 'stranger': 1955, 'nevertheless': 1956, 'russel': 1957, 'cinderella': 1958, 'contemporari': 1959, 'built': 1960, 'network': 1961, 'pleasant': 1962, 'miser': 1963, 'letter': 1964, 'consider': 1965, 'earn': 1966, 'underr': 1967, 'endless': 1968, 'gori': 1969, 'blockbust': 1970, 'switch': 1971, 'brook': 1972, 'solv': 1973, 'joseph': 1974, 'virgin': 1975, 'convict': 1976, 'edward': 1977, 'bullet': 1978, 'victoria': 1979, 'alex': 1980, 'scale': 1981, 'scenario': 1982, 'chosen': 1983, 'cynic': 1984, '0': 1985, 'outrag': 1986, 'com': 1987, 'sword': 1988, 'gut': 1989, 'curs': 1990, 'monkey': 1991, 'substanc': 1992, 'driver': 1993, 'uk': 1994, 'screenwrit': 1995, 'proper': 1996, 'wrap': 1997, 'juli': 1998, 'par': 1999, 'court': 2000, 'indic': 2001, 'bird': 2002, 'remov': 2003, 'roy': 2004, 'rental': 2005, 'inevit': 2006, 'advertis': 2007, 'loser': 2008, 'nanci': 2009, 'consequ': 2010, 'grave': 2011, 'naiv': 2012, 'germani': 2013, 'invis': 2014, 'fatal': 2015, 'slap': 2016, 'bridg': 2017, 'brave': 2018, 'le': 2019, 'footbal': 2020, 'anger': 2021, 'provok': 2022, 'loui': 2023, 'ador': 2024, 'chan': 2025, 'anderson': 2026, 'alcohol': 2027, 'willi': 2028, 'stumbl': 2029, 'ryan': 2030, 'professor': 2031, '1930': 2032, 'patrick': 2033, 'bat': 2034, 'sharp': 2035, 'australian': 2036, 'assassin': 2037, 'lousi': 2038, 'amateurish': 2039, 'cell': 2040, 'eight': 2041, 'saturday': 2042, 'liber': 2043, 'deni': 2044, 'refresh': 2045, 'trilog': 2046, 'strongli': 2047, 'heck': 2048, 'ape': 2049, 'sin': 2050, 'san': 2051, 'vagu': 2052, 'justifi': 2053, 'resid': 2054, 'mini': 2055, 'sympathi': 2056, 'reput': 2057, 'creator': 2058, 'defeat': 2059, 'terrifi': 2060, 'indi': 2061, 'prevent': 2062, 'endur': 2063, 'task': 2064, 'tediou': 2065, 'expert': 2066, 'tabl': 2067, 'trial': 2068, 'offend': 2069, 'rival': 2070, 'employ': 2071, 'che': 2072, 'basebal': 2073, 'imit': 2074, 'max': 2075, 'weekend': 2076, 'fairi': 2077, 'beach': 2078, 'pitch': 2079, 'complaint': 2080, 'europ': 2081, 'dig': 2082, 'risk': 2083, 'format': 2084, 'murphi': 2085, 'purchas': 2086, 'tini': 2087, 'glimps': 2088, 'reminisc': 2089, 'bite': 2090, 'harsh': 2091, 'titan': 2092, 'powel': 2093, 'nois': 2094, 'hype': 2095, 'fals': 2096, 'till': 2097, 'north': 2098, '14': 2099, 'asleep': 2100, 'prime': 2101, 'strip': 2102, 'africa': 2103, 'revel': 2104, 'destruct': 2105, 'descript': 2106, 'texa': 2107, 'surfac': 2108, 'uninterest': 2109, 'semi': 2110, 'arrest': 2111, 'spin': 2112, 'inner': 2113, 'excess': 2114, 'sitcom': 2115, 'massiv': 2116, 'maintain': 2117, 'controversi': 2118, 'twin': 2119, 'hitchcock': 2120, 'makeup': 2121, 'dinosaur': 2122, 'argu': 2123, 'reject': 2124, 'ludicr': 2125, 'kim': 2126, 'ideal': 2127, 'expens': 2128, 'stare': 2129, 'melodrama': 2130, 'insist': 2131, 'subplot': 2132, 'ala': 2133, 'forest': 2134, 'press': 2135, 'supernatur': 2136, 'erot': 2137, 'atroci': 2138, 'ga': 2139, 'nail': 2140, 'host': 2141, 'columbo': 2142, 'notch': 2143, 'identifi': 2144, 'cant': 2145, 'dude': 2146, 'presum': 2147, 'guest': 2148, 'character': 2149, 'crude': 2150, 'forgett': 2151, 'closer': 2152, 'plagu': 2153, 'method': 2154, 'ear': 2155, 'landscap': 2156, 'foster': 2157, 'princess': 2158, 'lion': 2159, 'border': 2160, 'beast': 2161, 'damag': 
2162, 'jungl': 2163, 'birth': 2164, 'previous': 2165, 'accus': 2166, 'bound': 2167, 'storytel': 2168, 'aunt': 2169, 'urban': 2170, 'pacino': 2171, 'propaganda': 2172, 'thirti': 2173, 'chose': 2174, 'jess': 2175, 'emma': 2176, 'nude': 2177, 'guid': 2178, 'doll': 2179, 'mainstream': 2180, 'pet': 2181, '25': 2182, 'whoever': 2183, 'warrior': 2184, 'mate': 2185, 'gritti': 2186, 'poster': 2187, 'exact': 2188, 'upset': 2189, 'latest': 2190, 'deadli': 2191, 'cooper': 2192, 'friday': 2193, 'size': 2194, 'merit': 2195, 'citizen': 2196, 'sun': 2197, 'ton': 2198, 'contact': 2199, 'warner': 2200, '1990': 2201, 'popul': 2202, 'rough': 2203, 'wilson': 2204, 'blend': 2205, 'contest': 2206, 'settl': 2207, 'corps': 2208, 'buff': 2209, 'select': 2210, 'alic': 2211, 'rat': 2212, 'bu': 2213, 'overcom': 2214, 'metal': 2215, 'pitt': 2216, 'environ': 2217, 'mgm': 2218, 'widow': 2219, 'guilti': 2220, 'lift': 2221, 'revolut': 2222, 'link': 2223, 'particip': 2224, 'ted': 2225, 'corpor': 2226, 'afternoon': 2227, 'matrix': 2228, 'moron': 2229, 'exagger': 2230, 'prostitut': 2231, '1960': 2232, 'corner': 2233, 'johnson': 2234, 'accompani': 2235, 'instal': 2236, 'multipl': 2237, 'clair': 2238, 'leagu': 2239, 'hood': 2240, 'doom': 2241, 'friendli': 2242, 'holm': 2243, 'sincer': 2244, 'defend': 2245, 'string': 2246, 'examin': 2247, 'advis': 2248, 'campi': 2249, 'junk': 2250, 'hip': 2251, 'sunday': 2252, 'grim': 2253, 'irish': 2254, 'aka': 2255, 'lugosi': 2256, 'blah': 2257, 'tight': 2258, 'icon': 2259, 'pro': 2260, 'rachel': 2261, 'confid': 2262, 'shut': 2263, 'shake': 2264, 'varieti': 2265, 'mexican': 2266, 'directli': 2267, 'jaw': 2268, 'medic': 2269, 'denni': 2270, 'goal': 2271, 'attach': 2272, 'sullivan': 2273, 'prior': 2274, 'terrorist': 2275, 'breast': 2276, 'legendari': 2277, 'bourn': 2278, 'courag': 2279, 'sarah': 2280, 'duke': 2281, 'vietnam': 2282, 'sentenc': 2283, 'dean': 2284, 'truck': 2285, 'donald': 2286, 'split': 2287, 'entri': 2288, 'yell': 2289, 'un': 2290, 'behav': 2291, 'hong': 2292, 'nose': 2293, 'proceed': 2294, 'stolen': 2295, 'borrow': 2296, 'buri': 2297, 'swim': 2298, 'confess': 2299, 'crush': 2300, 'forth': 2301, 'unconvinc': 2302, 'jerk': 2303, 'lifetim': 2304, 'concentr': 2305, 'everywher': 2306, 'gather': 2307, 'turkey': 2308, 'california': 2309, 'deliveri': 2310, 'julia': 2311, 'pan': 2312, 'lip': 2313, 'spite': 2314, 'proud': 2315, 'freeman': 2316, 'flight': 2317, 'downright': 2318, 'reward': 2319, 'offici': 2320, 'hoffman': 2321, 'quest': 2322, 'china': 2323, 'fade': 2324, 'notori': 2325, 'worthwhil': 2326, 'fabul': 2327, 'betray': 2328, 'jail': 2329, 'jon': 2330, 'lazi': 2331, 'sink': 2332, 'inept': 2333, 'encourag': 2334, 'sir': 2335, 'retard': 2336, 'storm': 2337, 'lisa': 2338, 'survivor': 2339, 'bag': 2340, 'teeth': 2341, 'cousin': 2342, 'susan': 2343, 'relev': 2344, 'shower': 2345, 'branagh': 2346, 'bell': 2347, 'imageri': 2348, 'toler': 2349, 'hugh': 2350, 'tremend': 2351, 'bride': 2352, 'trade': 2353, 'alright': 2354, 'summari': 2355, 'facial': 2356, 'shark': 2357, 'mexico': 2358, 'quirki': 2359, 'finger': 2360, 'stab': 2361, 'hyster': 2362, 'blown': 2363, 'ha': 2364, 'bitter': 2365, 'pose': 2366, 'von': 2367, 'ron': 2368, 'christ': 2369, 'larri': 2370, 'scheme': 2371, 'address': 2372, 'bone': 2373, 'cruel': 2374, 'afterward': 2375, 'ned': 2376, 'thumb': 2377, 'screw': 2378, 'pursu': 2379, 'traci': 2380, 'beg': 2381, 'swear': 2382, 'snake': 2383, 'tour': 2384, 'feed': 2385, 'distinct': 2386, 'occas': 2387, 'chair': 2388, 'mechan': 2389, 'raw': 2390, 'obscur': 2391, 'photo': 2392, 
'stomach': 2393, 'southern': 2394, 'sidney': 2395, 'heavili': 2396, 'argument': 2397, 'gruesom': 2398, 'resist': 2399, 'chain': 2400, 'hardi': 2401, 'cabin': 2402, 'holiday': 2403, 'render': 2404, 'necessarili': 2405, 'understood': 2406, 'indulg': 2407, 'philip': 2408, 'satan': 2409, 'racist': 2410, 'india': 2411, 'fourth': 2412, 'integr': 2413, 'belov': 2414, 'forgot': 2415, 'pregnant': 2416, 'tongu': 2417, 'lay': 2418, 'stalk': 2419, 'outfit': 2420, 'midnight': 2421, 'obnoxi': 2422, '17': 2423, 'magazin': 2424, 'slapstick': 2425, 'garden': 2426, 'ticket': 2427, 'restor': 2428, 'inhabit': 2429, 'carol': 2430, 'deeper': 2431, 'incid': 2432, 'brad': 2433, 'devot': 2434, 'lincoln': 2435, 'shoe': 2436, 'divorc': 2437, 'anticip': 2438, 'benefit': 2439, 'sandler': 2440, 'underground': 2441, 'maria': 2442, 'disbelief': 2443, 'guarante': 2444, 'lili': 2445, 'elizabeth': 2446, 'explod': 2447, 'creation': 2448, 'cring': 2449, 'mildli': 2450, 'slave': 2451, 'amazingli': 2452, 'capit': 2453, 'princip': 2454, 'bbc': 2455, 'greater': 2456, 'lesli': 2457, 'extraordinari': 2458, 'introduct': 2459, 'halfway': 2460, 'funnier': 2461, 'overwhelm': 2462, 'transfer': 2463, 'enhanc': 2464, 'text': 2465, 'advantag': 2466, 'punish': 2467, 'extent': 2468, 'tap': 2469, 'wreck': 2470, 'east': 2471, 'plant': 2472, 'jessica': 2473, 'error': 2474, 'deliber': 2475, 'dynam': 2476, 'preview': 2477, 'lo': 2478, 'horrif': 2479, 'lane': 2480, 'homosexu': 2481, 'sophist': 2482, 'vacat': 2483, 'miscast': 2484, 'ensu': 2485, '2000': 2486, 'miller': 2487, 'basi': 2488, 'appli': 2489, 'vincent': 2490, 'sleazi': 2491, 'mansion': 2492, 'extend': 2493, 'elev': 2494, 'spoken': 2495, 'via': 2496, 'measur': 2497, 'steel': 2498, 'reed': 2499, 'bollywood': 2500, 'uncomfort': 2501, 'overact': 2502, 'beer': 2503, 'mous': 2504, 'goofi': 2505, 'stanley': 2506, 'fix': 2507, 'assign': 2508, 'daili': 2509, 'conceiv': 2510, 'savag': 2511, 'blair': 2512, 'alter': 2513, 'cathol': 2514, 'dentist': 2515, 'breathtak': 2516, 'hippi': 2517, 'melt': 2518, 'subsequ': 2519, 'properli': 2520, 'sacrific': 2521, 'succe': 2522, 'oppos': 2523, 'everyday': 2524, 'carpent': 2525, 'burt': 2526, 'nowaday': 2527, 'inspector': 2528, 'massacr': 2529, 'circl': 2530, 'laura': 2531, 'block': 2532, 'neck': 2533, 'grey': 2534, 'lesser': 2535, 'fallen': 2536, 'mob': 2537, 'portrait': 2538, 'pool': 2539, 'fay': 2540, 'concert': 2541, 'access': 2542, 'christi': 2543, 'seagal': 2544, 'competit': 2545, 'usa': 2546, 'relax': 2547, 'jewish': 2548, 'isol': 2549, 'react': 2550, 'sinist': 2551, 'chees': 2552, 'jake': 2553, 'chop': 2554, 'appal': 2555, 'suitabl': 2556, 'immens': 2557, 'spiritu': 2558, 'nonetheless': 2559, 'nine': 2560, 'creep': 2561, '2006': 2562, 'lyric': 2563, 'stink': 2564, 'ironi': 2565, 'franchis': 2566, 'needless': 2567, 'nut': 2568, 'shirt': 2569, 'sold': 2570, 'reduc': 2571, 'rage': 2572, 'navi': 2573, 'adopt': 2574, 'user': 2575, 'showcas': 2576, 'spring': 2577, 'luci': 2578, 'retir': 2579, 'nurs': 2580, 'asham': 2581, 'digit': 2582, 'uninspir': 2583, 'jay': 2584, 'per': 2585, 'bath': 2586, 'zone': 2587, 'bulli': 2588, 'stanwyck': 2589, 'oddli': 2590, '2001': 2591, 'upper': 2592, 'laid': 2593, 'illustr': 2594, 'sutherland': 2595, '1940': 2596, 'broadcast': 2597, 'amongst': 2598, 'aspir': 2599, 'disguis': 2600, 'throat': 2601, 'brando': 2602, 'baker': 2603, 'stylish': 2604, 'fulfil': 2605, 'wanna': 2606, 'pound': 2607, '18': 2608, 'pride': 2609, 'neighborhood': 2610, 'nobl': 2611, 'thief': 2612, 'endear': 2613, 'wwii': 2614, 'em': 2615, 'impli': 2616, 
'cinematograph': 2617, 'distribut': 2618, 'diseas': 2619, 'albeit': 2620, '16': 2621, 'prop': 2622, 'coher': 2623, 'shift': 2624, 'tens': 2625, 'shoulder': 2626, 'dawn': 2627, 'bo': 2628, 'rochest': 2629, 'dinner': 2630, 'bett': 2631, 'forti': 2632, 'rebel': 2633, 'poignant': 2634, 'surf': 2635, 'function': 2636, 'knife': 2637, 'silenc': 2638, 'wash': 2639, 'snow': 2640, 'contract': 2641, 'shout': 2642, 'matthau': 2643, 'eeri': 2644, 'internet': 2645, 'henc': 2646, 'height': 2647, 'duti': 2648, 'chuck': 2649, 'derek': 2650, 'widmark': 2651, 'proof': 2652, 'horrend': 2653, 'instinct': 2654, 'silver': 2655, 'cancel': 2656, 'heat': 2657, 'cannib': 2658, 'reunion': 2659, 'mindless': 2660, 'elvira': 2661, 'repetit': 2662, 'alik': 2663, 'mill': 2664, 'innov': 2665, 'absorb': 2666, 'pie': 2667, 'premier': 2668, 'etern': 2669, 'torn': 2670, 'neat': 2671, 'spielberg': 2672, 'incoher': 2673, 'elvi': 2674, 'greatli': 2675, 'glori': 2676, 'musician': 2677, 'homag': 2678, 'infam': 2679, 'crisi': 2680, 'itali': 2681, 'burton': 2682, 'diamond': 2683, 'britain': 2684, 'precis': 2685, 'nelson': 2686, 'redempt': 2687, 'trite': 2688, 'announc': 2689, 'racism': 2690, 'lovabl': 2691, 'bang': 2692, 'horrifi': 2693, 'wealthi': 2694, 'blank': 2695, 'fbi': 2696, 'dedic': 2697, 'flop': 2698, 'hammer': 2699, 'resolut': 2700, 'streisand': 2701, 'parallel': 2702, 'happili': 2703, 'ensembl': 2704, 'wilder': 2705, 'helen': 2706, 'chaplin': 2707, 'pat': 2708, 'mar': 2709, 'factori': 2710, 'disagre': 2711, 'plastic': 2712, 'triumph': 2713, 'st': 2714, 'conclud': 2715, 'carter': 2716, 'cube': 2717, 'oil': 2718, 'broke': 2719, 'weight': 2720, 'march': 2721, 'fighter': 2722, 'climb': 2723, 'bush': 2724, 'row': 2725, 'vega': 2726, 'chuckl': 2727, 'rocket': 2728, 'own': 2729, 'wherea': 2730, 'spare': 2731, 'unforgett': 2732, 'kurt': 2733, 'mst3k': 2734, 'meaning': 2735, 'dane': 2736, 'lust': 2737, 'thug': 2738, 'dump': 2739, 'luca': 2740, 'sensibl': 2741, 'boot': 2742, 'enorm': 2743, 'stress': 2744, 'difficulti': 2745, 'caricatur': 2746, 'dear': 2747, 'adequ': 2748, 'engin': 2749, 'butt': 2750, 'threat': 2751, 'fifti': 2752, 'brand': 2753, 'karloff': 2754, 'bobbi': 2755, 'rap': 2756, 'arnold': 2757, 'secretari': 2758, 'journalist': 2759, 'fest': 2760, 'homeless': 2761, 'barri': 2762, 'elabor': 2763, 'ego': 2764, 'ralph': 2765, 'polish': 2766, 'swing': 2767, 'hamlet': 2768, 'arrog': 2769, 'flynn': 2770, 'fanci': 2771, 'conspiraci': 2772, 'induc': 2773, 'spike': 2774, 'resort': 2775, 'simpson': 2776, 'unbear': 2777, 'arrang': 2778, 'grate': 2779, 'float': 2780, 'puppet': 2781, 'tool': 2782, 'tribut': 2783, 'boll': 2784, 'cruis': 2785, 'exercis': 2786, 'guilt': 2787, 'pig': 2788, 'phillip': 2789, 'choreograph': 2790, 'basement': 2791, 'muppet': 2792, 'puzzl': 2793, 'document': 2794, 'editor': 2795, 'item': 2796, 'medium': 2797, 'toilet': 2798, 'tower': 2799, 'slip': 2800, 'fianc': 2801, 'babe': 2802, '24': 2803, 'stan': 2804, 'layer': 2805, 'ward': 2806, 'ham': 2807, 'korean': 2808, 'scarecrow': 2809, 'file': 2810, 'superfici': 2811, 'slaughter': 2812, 'denzel': 2813, 'assur': 2814, 'orient': 2815, 'librari': 2816, 'portion': 2817, 'philosoph': 2818, 'doc': 2819, 'catherin': 2820, 'minim': 2821, 'territori': 2822, 'persona': 2823, 'spark': 2824, 'glover': 2825, 'larger': 2826, 'inexplic': 2827, 'transit': 2828, 'jeremi': 2829, 'wolf': 2830, 'owe': 2831, 'curti': 2832, 'boredom': 2833, 'financi': 2834, 'sneak': 2835, 'walken': 2836, 'pg': 2837, 'shi': 2838, 'jet': 2839, 'dorothi': 2840, 'ban': 2841, 'multi': 2842, 'metaphor': 
2843, 'cusack': 2844, 'ambigu': 2845, 'backdrop': 2846, 'profound': 2847, 'hudson': 2848, 'eleph': 2849, 'whale': 2850, 'stiff': 2851, '2005': 2852, 'rave': 2853, 'birthday': 2854, 'elsewher': 2855, 'union': 2856, 'ultra': 2857, 'hack': 2858, 'implaus': 2859, 'notion': 2860, 'viru': 2861, 'gadget': 2862, 'canada': 2863, 'squar': 2864, 'disc': 2865, 'bibl': 2866, 'slight': 2867, 'eastwood': 2868, 'pad': 2869, 'newspap': 2870, 'afford': 2871, '1st': 2872, 'reader': 2873, 'poison': 2874, 'distanc': 2875, 'hawk': 2876, 'deriv': 2877, 'lloyd': 2878, 'eva': 2879, 'urg': 2880, 'superhero': 2881, 'skit': 2882, 'heston': 2883, 'button': 2884, 'essenc': 2885, 'cure': 2886, 'sadist': 2887, 'charisma': 2888, 'spread': 2889, 'huh': 2890, 'health': 2891, 'drown': 2892, 'montag': 2893, 'restaur': 2894, 'maniac': 2895, 'gradual': 2896, 'muslim': 2897, 'scoobi': 2898, 'fetch': 2899, 'estat': 2900, 'peak': 2901, 'godfath': 2902, 'dealt': 2903, 'invest': 2904, 'lab': 2905, 'companion': 2906, 'subtleti': 2907, 'cup': 2908, 'tea': 2909, 'alli': 2910, 'countless': 2911, 'servant': 2912, 'kane': 2913, 'gothic': 2914, 'miik': 2915, 'ritter': 2916, 'iii': 2917, 'electr': 2918, 'charismat': 2919, 'elect': 2920, 'salli': 2921, 'heroic': 2922, 'briefli': 2923, 'resourc': 2924, 'nuanc': 2925, 'reel': 2926, 'tender': 2927, 'grandmoth': 2928, 'toss': 2929, 'ingredi': 2930, 'wannab': 2931, 'admittedli': 2932, 'neil': 2933, 'bud': 2934, 'cole': 2935, 'stood': 2936, 'stronger': 2937, 'carrey': 2938, 'kubrick': 2939, 'punk': 2940, 'pit': 2941, 'mafia': 2942, 'mild': 2943, 'poverti': 2944, 'label': 2945, 'shall': 2946, 'pauli': 2947, 'gate': 2948, 'dawson': 2949, 'reev': 2950, 'cox': 2951, 'fond': 2952, 'assault': 2953, 'cardboard': 2954, 'tag': 2955, 'useless': 2956, 'outcom': 2957, 'astair': 2958, 'ian': 2959, 'easier': 2960, 'smash': 2961, 'smooth': 2962, 'updat': 2963, 'burst': 2964, 'terri': 2965, 'bakshi': 2966, 'increasingli': 2967, 'samurai': 2968, 'exchang': 2969, 'divers': 2970, 'qualifi': 2971, 'vari': 2972, '2002': 2973, 'melodramat': 2974, 'sketch': 2975, 'resolv': 2976, 'vulner': 2977, 'fist': 2978, 'rex': 2979, 'coincid': 2980, 'insert': 2981, 'conveni': 2982, 'reynold': 2983, 'brillianc': 2984, 'blast': 2985, 'suspend': 2986, 'tame': 2987, 'be': 2988, 'scratch': 2989, 'luckili': 2990, 'templ': 2991, 'ambiti': 2992, 'seventi': 2993, 'coach': 2994, 'meat': 2995, 'hamilton': 2996, 'fisher': 2997, 'matthew': 2998, 'strictli': 2999, 'gotta': 3000, 'nuclear': 3001, 'farm': 3002, 'jami': 3003, 'walker': 3004, 'soprano': 3005, 'pin': 3006, 'ninja': 3007, 'eccentr': 3008, 'spooki': 3009, 'monk': 3010, 'instantli': 3011, 'kudo': 3012, 'recreat': 3013, 'struck': 3014, 'grasp': 3015, 'revers': 3016, 'butcher': 3017, 'worthless': 3018, 'convolut': 3019, 'clock': 3020, 'brosnan': 3021, 'closet': 3022, 'joey': 3023, 'discoveri': 3024, 'cave': 3025, 'empir': 3026, 'timeless': 3027, 'fifteen': 3028, 'inconsist': 3029, 'importantli': 3030, 'wipe': 3031, 'eighti': 3032, 'communist': 3033, 'declar': 3034, 'sidekick': 3035, 'miracl': 3036, 'bleak': 3037, 'pal': 3038, 'gray': 3039, 'cliff': 3040, 'sloppi': 3041, 'mitchel': 3042, 'partli': 3043, 'selfish': 3044, 'clown': 3045, 'seller': 3046, 'evok': 3047, 'norman': 3048, 'enthusiast': 3049, 'stoog': 3050, 'piano': 3051, 'aforement': 3052, 'chew': 3053, 'lifestyl': 3054, 'websit': 3055, 'flawless': 3056, 'psychiatrist': 3057, '45': 3058, 'debat': 3059, 'ho': 3060, 'cheek': 3061, 'farc': 3062, 'superbl': 3063, 'australia': 3064, 'destin': 3065, 'seed': 3066, 'dash': 3067, 
'regardless': 3068, 'incompet': 3069, 'directori': 3070, 'bash': 3071, 'soviet': 3072, 'kitchen': 3073, 'drivel': 3074, 'pressur': 3075, 'abc': 3076, 'splatter': 3077, 'dire': 3078, 'akshay': 3079, 'slice': 3080, 'wrestl': 3081, 'wick': 3082, 'anni': 3083, 'emili': 3084, 'suppli': 3085, 'distant': 3086, 'lou': 3087, 'cameron': 3088, 'helicopt': 3089, 'flower': 3090, 'doo': 3091, 'increas': 3092, 'chapter': 3093, 'seduc': 3094, 'beaten': 3095, 'artifici': 3096, 'duo': 3097, 'jar': 3098, 'blob': 3099, 'pleasantli': 3100, 'curios': 3101, 'recov': 3102, 'cagney': 3103, 'judi': 3104, 'boil': 3105, 'dave': 3106, 'ken': 3107, 'glow': 3108, 'prize': 3109, 'cia': 3110, 'mann': 3111, 'psychot': 3112, 'drunken': 3113, 'francisco': 3114, 'ellen': 3115, 'favour': 3116, 'craven': 3117, 'eleg': 3118, 'glenn': 3119, 'craig': 3120, 'panic': 3121, 'laurel': 3122, 'web': 3123, 'combat': 3124, 'ranger': 3125, 'goldberg': 3126, 'perri': 3127, 'splendid': 3128, 'hop': 3129, 'turner': 3130, 'plausibl': 3131, 'modesti': 3132, '20th': 3133, 'shortli': 3134, 'gandhi': 3135, 'slightest': 3136, 'alexand': 3137, 'gentl': 3138, 'hatr': 3139, 'philosophi': 3140, 'rid': 3141, 'graduat': 3142, 'wizard': 3143, 'min': 3144, 'greek': 3145, 'flip': 3146, 'fx': 3147, 'ruth': 3148, 'falk': 3149, 'fund': 3150, 'preciou': 3151, 'harm': 3152, 'jealou': 3153, 'ocean': 3154, 'holi': 3155, 'we': 3156, 'legal': 3157, 'lend': 3158, 'felix': 3159, 'manhattan': 3160, 'dracula': 3161, 'unpleas': 3162, 'tall': 3163, 'knight': 3164, 'futurist': 3165, 'digniti': 3166, 'forbidden': 3167, 'mock': 3168, 'scientif': 3169, 'tank': 3170, 'overdon': 3171, 'ami': 3172, 'bless': 3173, 'childish': 3174, 'thread': 3175, 'giallo': 3176, 'nod': 3177, 'reviv': 3178, 'explicit': 3179, 'margaret': 3180, '99': 3181, 'awaken': 3182, '2004': 3183, 'yesterday': 3184, 'awe': 3185, 'fever': 3186, 'eve': 3187, 'repeatedli': 3188, 'torment': 3189, 'thick': 3190, 'nerv': 3191, 'elderli': 3192, 'unwatch': 3193, 'verhoeven': 3194, 'mel': 3195, 'pirat': 3196, 'broad': 3197, 'uniform': 3198, 'timothi': 3199, 'griffith': 3200, 'automat': 3201, 'ambit': 3202, 'roman': 3203, 'absenc': 3204, 'bin': 3205, 'publish': 3206, 'ah': 3207, 'lean': 3208, 'rivet': 3209, 'eas': 3210, 'acclaim': 3211, 'kay': 3212, 'politician': 3213, 'custom': 3214, 'royal': 3215, 'stiller': 3216, 'romero': 3217, 'launch': 3218, 'pulp': 3219, 'crook': 3220, 'warren': 3221, 'darker': 3222, 'pierc': 3223, 'bathroom': 3224, 'wallac': 3225, 'transport': 3226, 'tomato': 3227, 'phrase': 3228, 'antic': 3229, 'termin': 3230, 'stinker': 3231, 'gabriel': 3232, 'purpl': 3233, 'homicid': 3234, 'sunshin': 3235, 'foul': 3236, 'q': 3237, 'kenneth': 3238, 'sixti': 3239, 'karen': 3240, 'album': 3241, 'pray': 3242, 'marin': 3243, 'revolutionari': 3244, 'hollow': 3245, 'contrari': 3246, 'donna': 3247, 'juvenil': 3248, 'eyr': 3249, 'choreographi': 3250, 'packag': 3251, 'awak': 3252, '2003': 3253, 'prom': 3254, 'rambo': 3255, 'evolv': 3256, 'coloni': 3257, 'li': 3258, 'saint': 3259, 'brazil': 3260, 'viciou': 3261, 'ought': 3262, 'horrid': 3263, 'blade': 3264, 'nerd': 3265, 'overr': 3266, 'beatti': 3267, 'conserv': 3268, 'candid': 3269, 'ireland': 3270, 'twelv': 3271, 'option': 3272, 'ramon': 3273, 'defi': 3274, 'boast': 3275, 'mildr': 3276, 'dose': 3277, 'stole': 3278, 'kapoor': 3279, 'mummi': 3280, 'funer': 3281, 'jazz': 3282, 'global': 3283, 'altman': 3284, 'collabor': 3285, 'flame': 3286, 'confirm': 3287, 'kirk': 3288, 'detract': 3289, 'astonish': 3290, 'natali': 3291, 'trio': 3292, 'fulci': 3293, 'protest': 3294, 
'audio': 3295, 'blake': 3296, 'nicholson': 3297, 'leap': 3298, 'bottl': 3299, 'yellow': 3300, 'destini': 3301, 'racial': 3302, 'delici': 3303, 'spit': 3304, 'enterpris': 3305, 'mystic': 3306, 'tommi': 3307, 'shade': 3308, 'bull': 3309, 'whip': 3310, 'staff': 3311, 'threw': 3312, 'inherit': 3313, 'meaningless': 3314, 'neo': 3315, 'pseudo': 3316, 'popcorn': 3317, 'fonda': 3318, 'vivid': 3319, 'adolesc': 3320, 'visibl': 3321, 'swedish': 3322, 'enchant': 3323, 'harder': 3324, 'bedroom': 3325, 'todd': 3326, 'altogeth': 3327, 'reunit': 3328, 'merci': 3329, 'leonard': 3330, 'fanat': 3331, 'tip': 3332, 'roommat': 3333, 'await': 3334, 'ruthless': 3335, 'suspici': 3336, 'lawrenc': 3337, 'exhibit': 3338, 'voight': 3339, 'bust': 3340, 'synopsi': 3341, 'befriend': 3342, 'reserv': 3343, 'kennedi': 3344, 'wire': 3345, 'madonna': 3346, 'crocodil': 3347, 'moodi': 3348, 'lemmon': 3349, 'edi': 3350, 'uneven': 3351, 'decor': 3352, 'jew': 3353, 'atlanti': 3354, 'respond': 3355, 'dimens': 3356, 'voyag': 3357, 'clint': 3358, 'garner': 3359, 'bargain': 3360, 'incident': 3361, 'chao': 3362, 'clumsi': 3363, 'bold': 3364, '2007': 3365, 'ventur': 3366, 'carl': 3367, 'audit': 3368, 'bradi': 3369, 'abysm': 3370, 'centr': 3371, 'rural': 3372, 'unsettl': 3373, 'holli': 3374, 'palma': 3375, 'lit': 3376, 'versu': 3377, 'mall': 3378, 'humili': 3379, 'immigr': 3380, 'imperson': 3381, 'cd': 3382, '2nd': 3383, 'acknowledg': 3384, 'elimin': 3385, 'neglect': 3386, 'cuba': 3387, 'wealth': 3388, 'hart': 3389, 'trail': 3390, 'characterist': 3391, 'cari': 3392, 'nearbi': 3393, 'poetic': 3394, 'daddi': 3395, 'timon': 3396, 'ant': 3397, 'tiger': 3398, 'echo': 3399, 'troop': 3400, 'saga': 3401, 'pun': 3402, 'solo': 3403, 'domest': 3404, 'jeffrey': 3405, 'collaps': 3406, 'mistaken': 3407, 'celluloid': 3408, 'prejudic': 3409, 'paus': 3410, 'infect': 3411, 'marshal': 3412, 'mickey': 3413, 'repuls': 3414, 'homer': 3415, 'hbo': 3416, 'inappropri': 3417, 'milk': 3418, 'apolog': 3419, 'chest': 3420, 'coffe': 3421, 'coat': 3422, 'ginger': 3423, 'harvey': 3424, 'interrupt': 3425, 'leon': 3426, 'assembl': 3427, 'undoubtedli': 3428, 'pant': 3429, 'tribe': 3430, '1996': 3431, 'inan': 3432, 'promin': 3433, 'olivi': 3434, 'sore': 3435, 'equip': 3436, 'gear': 3437, 'cake': 3438, 'embrac': 3439, 'trace': 3440, 'pen': 3441, 'pot': 3442, 'colleagu': 3443, 'colonel': 3444, 'humbl': 3445, 'institut': 3446, 'maggi': 3447, 'instant': 3448, 'highest': 3449, 'solut': 3450, 'florida': 3451, 'aveng': 3452, 'furthermor': 3453, 'jenni': 3454, 'exot': 3455, 'primari': 3456, 'brooklyn': 3457, 'vulgar': 3458, 'consum': 3459, 'devast': 3460, 'retain': 3461, 'airplan': 3462, 'polanski': 3463, 'illog': 3464, 'seduct': 3465, '3rd': 3466, 'dutch': 3467, 'sale': 3468, 'smaller': 3469, 'descend': 3470, '1999': 3471, 'principl': 3472, 'outer': 3473, 'ya': 3474, 'wive': 3475, 'gender': 3476, 'rick': 3477, 'dian': 3478, 'godzilla': 3479, 'linda': 3480, 'strain': 3481, 'disabl': 3482, 'cope': 3483, 'poke': 3484, 'bowl': 3485, 'gloriou': 3486, 'predecessor': 3487, 'cue': 3488, 'inferior': 3489, 'secondli': 3490, 'glamor': 3491, 'primarili': 3492, 'yard': 3493, 'bubbl': 3494, 'beneath': 3495, 'scope': 3496, 'vast': 3497, 'lol': 3498, 'devoid': 3499, 'rabbit': 3500, 'mixtur': 3501, 'dive': 3502, 'blatant': 3503, 'gundam': 3504, 'dud': 3505, 'hal': 3506, 'disjoint': 3507, 'trademark': 3508, 'invas': 3509, 'aggress': 3510, 'streep': 3511, 'myer': 3512, 'alfr': 3513, 'alert': 3514, 'april': 3515, 'museum': 3516, 'z': 3517, 'hideou': 3518, 'simplist': 3519, 'breed': 3520, 'pearl': 
3521, 'garbo': 3522, 'countrysid': 3523, 'talki': 3524, 'shirley': 3525, 'shelf': 3526, 'casual': 3527, 'senseless': 3528, 'et': 3529, 'arab': 3530, 'grinch': 3531, 'domino': 3532, 'uwe': 3533, 'vanish': 3534, 'stir': 3535, 'sh': 3536, 'experiment': 3537, 'boom': 3538, 'obtain': 3539, 'hardcor': 3540, 'mayor': 3541, 'defens': 3542, 'disgrac': 3543, 'slide': 3544, 'robberi': 3545, 'oz': 3546, 'maci': 3547, 'applaud': 3548, 'robinson': 3549, 'acid': 3550, 'hopeless': 3551, 'illeg': 3552, 'stellar': 3553, 'rendit': 3554, 'loyal': 3555, 'unhappi': 3556, 'stack': 3557, 'mail': 3558, 'khan': 3559, 'span': 3560, 'emphasi': 3561, 'declin': 3562, 'grandfath': 3563, 'tempt': 3564, 'blew': 3565, 'recruit': 3566, 'rifl': 3567, 'soccer': 3568, 'counter': 3569, 'fri': 3570, 'spider': 3571, 'wont': 3572, 'amanda': 3573, 'diana': 3574, 'dismiss': 3575, 'psychic': 3576, 'incomprehens': 3577, 'tenant': 3578, 'dicken': 3579, 'hartley': 3580, 'berlin': 3581, 'scroog': 3582, 'craze': 3583, 'topless': 3584, 'porno': 3585, 'sibl': 3586, 'ration': 3587, 'sympath': 3588, 'niro': 3589, 'parad': 3590, 'riot': 3591, 'faster': 3592, 'goer': 3593, 'bitch': 3594, 'resurrect': 3595, 'shed': 3596, 'lumet': 3597, 'trashi': 3598, 'shaw': 3599, 'justin': 3600, 'intim': 3601, 'woo': 3602, 'wet': 3603, 'revolt': 3604, 'ethnic': 3605, 'rider': 3606, 'wendi': 3607, 'partial': 3608, 'choru': 3609, 'hesit': 3610, 'patriot': 3611, 'immort': 3612, 'biographi': 3613, 'farmer': 3614, 'gap': 3615, 'dealer': 3616, 'unreal': 3617, 'commend': 3618, 'nephew': 3619, 'worm': 3620, 'slick': 3621, 'weakest': 3622, 'ballet': 3623, 'lena': 3624, 'hopper': 3625, 'feminist': 3626, 'mario': 3627, 'andr': 3628, 'honesti': 3629, '00': 3630, 'region': 3631, 'enlighten': 3632, 'wheel': 3633, 'eager': 3634, 'steam': 3635, 'ensur': 3636, 'jonathan': 3637, 'victori': 3638, 'wore': 3639, 'prequel': 3640, 'nostalg': 3641, 'skull': 3642, 'vice': 3643, 'psychopath': 3644, 'repress': 3645, 'snap': 3646, 'util': 3647, 'owen': 3648, 'safeti': 3649, 'confin': 3650, 'mutant': 3651, 'sappi': 3652, 'hung': 3653, 'properti': 3654, 'franco': 3655, 'morri': 3656, 'charlott': 3657, 'macarthur': 3658, 'composit': 3659, 'leo': 3660, 'sandra': 3661, 'similarli': 3662, 'kingdom': 3663, 'blunt': 3664, 'cg': 3665, 'compens': 3666, 'valuabl': 3667, 'rocki': 3668, 'emperor': 3669, 'repli': 3670, 'drain': 3671, 'del': 3672, 'drum': 3673, 'recycl': 3674, 'pattern': 3675, 'rambl': 3676, 'bumbl': 3677, 'hyde': 3678, 'rope': 3679, 'heartbreak': 3680, 'tad': 3681, 'thru': 3682, 'deed': 3683, 'despair': 3684, 'miseri': 3685, 'speci': 3686, 'whoopi': 3687, 'tail': 3688, 'acquir': 3689, 'bergman': 3690, 'latin': 3691, '1972': 3692, 'compass': 3693, 'exit': 3694, 'bonu': 3695, 'strand': 3696, 'snl': 3697, 'kyle': 3698, 'dust': 3699, 'montana': 3700, 'campbel': 3701, 'farrel': 3702, 'nervou': 3703, 'bow': 3704, 'dalton': 3705, 'tonight': 3706, 'rotten': 3707, 'airport': 3708, 'rapist': 3709, 'gimmick': 3710, 'contempl': 3711, 'radic': 3712, 'carradin': 3713, 'romp': 3714, 'chess': 3715, 'slug': 3716, 'pour': 3717, 'roth': 3718, 'mistress': 3719, 'bleed': 3720, 'orson': 3721, 'percept': 3722, 'downhil': 3723, 'da': 3724, '35': 3725, 'martian': 3726, 'wacki': 3727, 'olli': 3728, 'gal': 3729, 'oppress': 3730, 'belt': 3731, 'arguabl': 3732, 'shelley': 3733, 'edgar': 3734, 'taught': 3735, 'preach': 3736, 'unpredict': 3737, 'programm': 3738, 'banal': 3739, 'attorney': 3740, 'pervert': 3741, 'slash': 3742, 'tooth': 3743, 'stilt': 3744, 'tackl': 3745, 'heal': 3746, 'pursuit': 3747, 'pervers': 
3748, 'melodi': 3749, 'mislead': 3750, '1983': 3751, 'arc': 3752, 'champion': 3753, 'dazzl': 3754, 'paltrow': 3755, 'graham': 3756, 'orang': 3757, 'chicken': 3758, 'duval': 3759, 'employe': 3760, 'raymond': 3761, 'closest': 3762, 'gambl': 3763, 'maid': 3764, 'mesmer': 3765, 'vocal': 3766, 'cleverli': 3767, 'plight': 3768, 'bela': 3769, 'uplift': 3770, 'marti': 3771, 'passeng': 3772, 'tiresom': 3773, 'rubi': 3774, 'sensat': 3775, 'poem': 3776, 'franki': 3777, 'virginia': 3778, 'conneri': 3779, 'vengeanc': 3780, 'dixon': 3781, 'inject': 3782, 'convincingli': 3783, 'numb': 3784, '1968': 3785, 'yawn': 3786, 'quarter': 3787, 'giggl': 3788, 'habit': 3789, 'engross': 3790, 'gerard': 3791, 'crystal': 3792, 'iran': 3793, 'lundgren': 3794, 'paranoia': 3795, 'outing': 3796, 'abraham': 3797, 'bay': 3798, 'calm': 3799, 'secretli': 3800, 'climact': 3801, 'suffic': 3802, 'amitabh': 3803, 'clone': 3804, 'swallow': 3805, 'tube': 3806, 'extens': 3807, 'mute': 3808, 'monologu': 3809, 'pokemon': 3810, 'scottish': 3811, 'volum': 3812, 'profan': 3813, 'whine': 3814, 'sirk': 3815, 'backward': 3816, 'underst': 3817, 'meander': 3818, 'profess': 3819, 'plod': 3820, 'junior': 3821, 'trend': 3822, 'bend': 3823, 'ethan': 3824, 'franci': 3825, 'grotesqu': 3826, 'richardson': 3827, 'nichola': 3828, 'chicago': 3829, 'fed': 3830, 'frankenstein': 3831, 'im': 3832, 'dispos': 3833, 'taxi': 3834, 'surpass': 3835, 'austen': 3836, 'lowest': 3837, 'poetri': 3838, 'abort': 3839, 'expand': 3840, 'earl': 3841, 'septemb': 3842, 'linger': 3843, 'spock': 3844, 'descent': 3845, 'der': 3846, 'myth': 3847, 'sue': 3848, 'mundan': 3849, 'greedi': 3850, 'tourist': 3851, 'rant': 3852, 'econom': 3853, 'simplic': 3854, 'household': 3855, 'compliment': 3856, 'dysfunct': 3857, 'lure': 3858, 'instrument': 3859, 'stallon': 3860, 'literatur': 3861, 'spoke': 3862, 'hum': 3863, 'catchi': 3864, 'cannon': 3865, 'waitress': 3866, 'rubber': 3867, 'muddl': 3868, 'eugen': 3869, 'nostalgia': 3870, 'firstli': 3871, 'mortal': 3872, 'irrelev': 3873, 'omen': 3874, 'june': 3875, 'occupi': 3876, 'dictat': 3877, 'crucial': 3878, 'lang': 3879, 'randi': 3880, 'mankind': 3881, 'recognis': 3882, 'louis': 3883, 'hello': 3884, 'dement': 3885, 'recognit': 3886, 'duck': 3887, 'furi': 3888, 'carel': 3889, 'map': 3890, 'insur': 3891, 'stale': 3892, 'phoni': 3893, 'alongsid': 3894, 'equival': 3895, 'coast': 3896, 'flee': 3897, 'eaten': 3898, 'deaf': 3899, 'phantom': 3900, 'molli': 3901, 'cent': 3902, 'damon': 3903, 'bacal': 3904, 'sissi': 3905, 'lengthi': 3906, 'blackmail': 3907, '1973': 3908, 'biko': 3909, 'bike': 3910, 'bump': 3911, 'newli': 3912, 'wisdom': 3913, 'labor': 3914, 'antwon': 3915, 'freez': 3916, 'dreari': 3917, 'heel': 3918, 'onlin': 3919, 'ashley': 3920, 'daisi': 3921, 'rooney': 3922, 'loyalti': 3923, 'drake': 3924, 'likewis': 3925, 'damm': 3926, 'rude': 3927, 'distinguish': 3928, 'grayson': 3929, 'cyborg': 3930, 'twilight': 3931, 'reign': 3932, 'buffalo': 3933, 'interior': 3934, 'approv': 3935, 'unorigin': 3936, 'basketbal': 3937, 'nineti': 3938, 'pink': 3939, 'attribut': 3940, 'emphas': 3941, 'worn': 3942, 'keith': 3943, 'analysi': 3944, 'butler': 3945, 'prey': 3946, 'incorpor': 3947, 'baddi': 3948, 'vein': 3949, 'barrymor': 3950, 'provoc': 3951, 'tunnel': 3952, 'proce': 3953, 'chronicl': 3954, 'ridden': 3955, 'startl': 3956, 'sailor': 3957, 'inher': 3958, 'exposur': 3959, 'boxer': 3960, 'er': 3961, 'unrel': 3962, 'elm': 3963, 'degrad': 3964, 'nicol': 3965, 'bunni': 3966, 'underli': 3967, 'robbin': 3968, 'predat': 3969, 'drift': 3970, 'walsh': 3971, 
'condemn': 3972, 'hypnot': 3973, 'barrel': 3974, 'fleet': 3975, 'carla': 3976, 'stalker': 3977, 'indiffer': 3978, 'substitut': 3979, 'meg': 3980, 'belushi': 3981, 'undeni': 3982, 'julian': 3983, 'mormon': 3984, 'improvis': 3985, 'simmon': 3986, 'millionair': 3987, 'mighti': 3988, 'othello': 3989, 'meyer': 3990, 'shove': 3991, 'lampoon': 3992, 'roof': 3993, '3d': 3994, 'firm': 3995, 'vital': 3996, 'edgi': 3997, 'agenda': 3998, 'dolph': 3999, 'alison': 4000, 'unawar': 4001, 'greed': 4002, 'exquisit': 4003, 'hay': 4004, 'reid': 4005, 'nyc': 4006, 'palac': 4007, 'rukh': 4008, 'disord': 4009, 'alarm': 4010, 'priceless': 4011, 'errol': 4012, 'watson': 4013, 'warmth': 4014, 'marion': 4015, 'enthusiasm': 4016, 'novak': 4017, 'mtv': 4018, 'simultan': 4019, 'what': 4020, 'peck': 4021, 'championship': 4022, 'sergeant': 4023, 'coup': 4024, 'drip': 4025, 'profit': 4026, '1933': 4027, 'cassidi': 4028, 'distort': 4029, 'minimum': 4030, 'crown': 4031, 'angela': 4032, 'randomli': 4033, 'ponder': 4034, 'thompson': 4035, 'gestur': 4036, 'showdown': 4037, 'session': 4038, 'glanc': 4039, 'unleash': 4040, 'eastern': 4041, 'peril': 4042, 'orlean': 4043, 'testament': 4044, '13th': 4045, 'campaign': 4046, 'nun': 4047, 'beatl': 4048, 'preserv': 4049, 'pamela': 4050, 'israel': 4051, 'iraq': 4052, 'zizek': 4053, 'valentin': 4054, 'petti': 4055, 'spain': 4056, 'empathi': 4057, 'valley': 4058, 'cooki': 4059, 'perpetu': 4060, 'bro': 4061, 'travesti': 4062, 'stake': 4063, 'climat': 4064, 'regist': 4065, 'stroke': 4066, 'shootout': 4067, 'crawl': 4068, 'buster': 4069, 'sabrina': 4070, '1984': 4071, 'unimagin': 4072, 'represent': 4073, 'kurosawa': 4074, 'din': 4075, 'realm': 4076, 'jan': 4077, 'rout': 4078, 'reson': 4079, 'brenda': 4080, 'miyazaki': 4081, 'wig': 4082, 'quinn': 4083, 'gentleman': 4084, 'cream': 4085, 'han': 4086, 'scotland': 4087, 'exposit': 4088, 'crow': 4089, 'calib': 4090, 'restrain': 4091, 'mon': 4092, 'contradict': 4093, 'fido': 4094, 'cloud': 4095, 'pole': 4096, 'perceiv': 4097, 'warrant': 4098, 'traumat': 4099, 'absent': 4100, '1997': 4101, 'pretens': 4102, '1987': 4103, 'sucker': 4104, 'soderbergh': 4105, 'monoton': 4106, 'meryl': 4107, 'wax': 4108, 'delic': 4109, 'josh': 4110, 'compromis': 4111, 'unsatisfi': 4112, 'femm': 4113, 'demis': 4114, 'tacki': 4115, 'painter': 4116, 'crawford': 4117, 'fuller': 4118, 'unseen': 4119, 'dana': 4120, 'sammi': 4121, 'distress': 4122, 'abomin': 4123, 'ross': 4124, 'stargat': 4125, 'greg': 4126, 'shoddi': 4127, 'passabl': 4128, 'baldwin': 4129, 'mclaglen': 4130, 'shaki': 4131, 'businessman': 4132, 'darren': 4133, 'censor': 4134, 'ustinov': 4135, 'spacey': 4136, 'derang': 4137, 'geek': 4138, 'wholli': 4139, 'primit': 4140, 'expedit': 4141, 'fenc': 4142, 'tarantino': 4143, 'norm': 4144, 'judgment': 4145, 'anchor': 4146, 'jewel': 4147, 'click': 4148, 'exclus': 4149, '1993': 4150, 'valid': 4151, 'unravel': 4152, 'dee': 4153, 'verbal': 4154, 'tech': 4155, 'deceas': 4156, 'kumar': 4157, 'deniro': 4158, 'reluct': 4159, 'seal': 4160, 'correctli': 4161, 'clash': 4162, 'polici': 4163, 'sid': 4164, 'austin': 4165, 'uncov': 4166, 'antonioni': 4167, 'nathan': 4168, 'fog': 4169, 'accuraci': 4170, 'furiou': 4171, '3000': 4172, 'fabric': 4173, 'sheet': 4174, 'patienc': 4175, 'debt': 4176, '1971': 4177, 'logan': 4178, 'bake': 4179, 'temper': 4180, 'unfair': 4181, 'wang': 4182, 'murray': 4183, 'sustain': 4184, 'trait': 4185, 'fought': 4186, 'slam': 4187, '2008': 4188, 'conduct': 4189, 'tax': 4190, 'nicola': 4191, 'dreck': 4192, 'wretch': 4193, '1995': 4194, 'joel': 4195, 'fart': 
4196, 'roller': 4197, 'sunni': 4198, 'hallucin': 4199, 'behold': 4200, 'pocket': 4201, 'shanghai': 4202, 'clerk': 4203, 'malon': 4204, 'enforc': 4205, 'mode': 4206, 'ritual': 4207, 'alec': 4208, 'vanc': 4209, 'seldom': 4210, 'sand': 4211, 'darn': 4212, 'crippl': 4213, 'preston': 4214, 'pete': 4215, 'stark': 4216, 'fundament': 4217, 'phil': 4218, 'squad': 4219, 'bias': 4220, 'conscious': 4221, 'outlin': 4222, 'soup': 4223, 'despis': 4224, 'exhaust': 4225, 'guitar': 4226, 'legaci': 4227, 'preposter': 4228, 'sweep': 4229, 'shell': 4230, 'divid': 4231, 'critiqu': 4232, 'rita': 4233, 'schedul': 4234, 'grief': 4235, 'robber': 4236, 'penni': 4237, 'stuart': 4238, 'bridget': 4239, 'technicolor': 4240, 'isabel': 4241, 'scriptwrit': 4242, 'clau': 4243, 'runner': 4244, 'helpless': 4245, 'tactic': 4246, 'canyon': 4247, 'boyl': 4248, 'palanc': 4249, 'alley': 4250, 'kansa': 4251, 'russia': 4252, 'sugar': 4253, 'gregori': 4254, 'delv': 4255, 'culmin': 4256, 'inabl': 4257, 'rehash': 4258, 'alicia': 4259, 'downey': 4260, 'newman': 4261, 'restrict': 4262, 'marc': 4263, 'unexpectedli': 4264, 'invad': 4265, 'jacket': 4266, 'consciou': 4267, 'liberti': 4268, 'drove': 4269, 'bloom': 4270, 'jodi': 4271, 'vomit': 4272, 'flair': 4273, 'lacklust': 4274, 'sentinel': 4275, 'passag': 4276, 'rear': 4277, 'agenc': 4278, 'propos': 4279, 'connor': 4280, 'cigarett': 4281, 'sniper': 4282, 'implic': 4283, 'feat': 4284, 'cap': 4285, 'improb': 4286, 'tripe': 4287, 'rod': 4288, 'vet': 4289, 'delet': 4290, 'arrow': 4291, '1936': 4292, 'rampag': 4293, 'aesthet': 4294, 'wrench': 4295, 'asylum': 4296, 'karl': 4297, 'behaviour': 4298, 'rehears': 4299, 'mccoy': 4300, 'awhil': 4301, 'sharon': 4302, 'ladder': 4303, 'pale': 4304, '22': 4305, 'bacon': 4306, 'kolchak': 4307, 'chainsaw': 4308, 'foxx': 4309, 'yeti': 4310, 'tendenc': 4311, 'horn': 4312, 'lush': 4313, 'newcom': 4314, 'amazon': 4315, 'tasteless': 4316, 'lurk': 4317, 'wagner': 4318, 'underneath': 4319, '1978': 4320, 'globe': 4321, 'tomorrow': 4322, 'conscienc': 4323, 'weav': 4324, 'paramount': 4325, 'hackney': 4326, 'rumor': 4327, 'basing': 4328, 'rhythm': 4329, 'suffici': 4330, 'financ': 4331, 'elit': 4332, 'hungri': 4333, '19th': 4334, 'shortcom': 4335, 'filler': 4336, 'visitor': 4337, 'loneli': 4338, 'stream': 4339, 'scoop': 4340, 'aristocrat': 4341, 'prank': 4342, 'coaster': 4343, 'spice': 4344, 'thunderbird': 4345, '1988': 4346, '1920': 4347, 'paradis': 4348, 'hulk': 4349, 'wildli': 4350, 'sung': 4351, 'el': 4352, 'minu': 4353, 'suspicion': 4354, 'fright': 4355, 'rub': 4356, 'iv': 4357, 'grudg': 4358, 'secondari': 4359, 'worship': 4360, 'smell': 4361, 'straightforward': 4362, 'penn': 4363, 'choppi': 4364, 'dirt': 4365, 'cancer': 4366, 'lectur': 4367, 'leigh': 4368, 'counterpart': 4369, 'ingeni': 4370, 'couch': 4371, 'brit': 4372, 'heist': 4373, '75': 4374, '1939': 4375, 'minist': 4376, '1989': 4377, 'beverli': 4378, 'impos': 4379, 'ram': 4380, 'abrupt': 4381, 'atroc': 4382, 'en': 4383, 'curli': 4384, 'immers': 4385, 'quietli': 4386, 'recogniz': 4387, 'literari': 4388, 'inmat': 4389, 'standout': 4390, 'tierney': 4391, 'entranc': 4392, 'naughti': 4393, 'posey': 4394, 'bread': 4395, 'chamberlain': 4396, 'hopkin': 4397, 'chavez': 4398, 'teas': 4399, 'paxton': 4400, 'springer': 4401, 'wwe': 4402, 'attenborough': 4403, 'skeptic': 4404, 'injuri': 4405, 'heartfelt': 4406, 'nemesi': 4407, 'nolan': 4408, 'ace': 4409, 'bernard': 4410, 'morbid': 4411, 'transcend': 4412, 'enthral': 4413, '1986': 4414, 'convert': 4415, 'moreov': 4416, 'quaid': 4417, 'cattl': 4418, 'lindsay': 4419, 
'sassi': 4420, 'clan': 4421, 'missil': 4422, 'yearn': 4423, 'geni': 4424, 'misguid': 4425, 'policeman': 4426, 'variat': 4427, 'net': 4428, 'entitl': 4429, 'duel': 4430, 'watcher': 4431, 'esther': 4432, 'laurenc': 4433, 'sublim': 4434, 'ratso': 4435, 'characteris': 4436, 'steadi': 4437, 'reliabl': 4438, 'vader': 4439, 'kidman': 4440, 'bye': 4441, 'moder': 4442, 'diari': 4443, 'poe': 4444, 'brood': 4445, 'buzz': 4446, 'kitti': 4447, 'spiral': 4448, 'hopelessli': 4449, 'rosemari': 4450, 'graini': 4451, 'tyler': 4452, 'cruelti': 4453, 'youngest': 4454, 'grin': 4455, '1979': 4456, 'puppi': 4457, 'egg': 4458, 'setup': 4459, 'uncut': 4460, 'out': 4461, 'dont': 4462, 'bean': 4463, 'artsi': 4464, 'hk': 4465, 'obstacl': 4466, 'enabl': 4467, 'carlito': 4468, 'unexplain': 4469, 'facil': 4470, 'mytholog': 4471, 'weather': 4472, 'kline': 4473, 'narrow': 4474, 'clueless': 4475, 'christin': 4476, 'disastr': 4477, 'acquaint': 4478, 'bewar': 4479, 'brendan': 4480, 'bronson': 4481, 'baffl': 4482, 'decept': 4483, 'athlet': 4484, 'gina': 4485, 'exterior': 4486, 'oblig': 4487, 'effici': 4488, 'spontan': 4489, 'hammi': 4490, 'underworld': 4491, 'hain': 4492, 'niec': 4493, 'despic': 4494, 'bounc': 4495, 'fuel': 4496, 'sweat': 4497, '1969': 4498, 'heap': 4499, 'gillian': 4500, 'martha': 4501, 'preming': 4502, 'patricia': 4503, 'loath': 4504, 'taboo': 4505, 'tick': 4506, 'rome': 4507, 'goof': 4508, '19': 4509, 'suprem': 4510, 'candl': 4511, 'hepburn': 4512, 'dandi': 4513, 'insipid': 4514, 'outlaw': 4515, 'dilemma': 4516, 'angst': 4517, 'shatter': 4518, 'housewif': 4519, 'viewpoint': 4520, 'mermaid': 4521, 'circu': 4522, 'biker': 4523, 'astound': 4524, 'mayhem': 4525, 'injur': 4526, 'preachi': 4527, 'uh': 4528, 'trigger': 4529, 'lester': 4530, 'sleepwalk': 4531, 'enlist': 4532, 'virtu': 4533, 'fontain': 4534, 'renaiss': 4535, 'headach': 4536, 'loi': 4537, 'harmless': 4538, 'sooner': 4539, 'scar': 4540, 'analyz': 4541, '73': 4542, 'redund': 4543, 'camcord': 4544, 'filth': 4545, 'surgeri': 4546, 'contempt': 4547, 'immatur': 4548, 'scorses': 4549, 'gere': 4550, 'stair': 4551, 'hostag': 4552, 'whore': 4553, 'fluff': 4554, 'intric': 4555, 'dish': 4556, 'amor': 4557, 'hooker': 4558, 'hokey': 4559, 'boston': 4560, 'sox': 4561, 'ariel': 4562, 'guin': 4563, 'steer': 4564, 'spade': 4565, 'tripl': 4566, 'bent': 4567, 'oldest': 4568, 'foolish': 4569, 'zoom': 4570, 'glorifi': 4571, 'claustrophob': 4572, 'phenomenon': 4573, 'salt': 4574, 'stimul': 4575, 'idol': 4576, 'slimi': 4577, 'overlong': 4578, 'ebert': 4579, 'cassavet': 4580, 'dismal': 4581, 'macho': 4582, 'corbett': 4583, 'schlock': 4584, 'astronaut': 4585, 'trivia': 4586, 'cohen': 4587, 'spree': 4588, 'joker': 4589, 'nolt': 4590, 'zane': 4591, 'proport': 4592, 'perman': 4593, 'alvin': 4594, 'fascist': 4595, 'gasp': 4596, 'keen': 4597, 'widescreen': 4598, 'obligatori': 4599, 'mutual': 4600, 'shield': 4601, 'flashi': 4602, 'flirt': 4603, 'gabl': 4604, 'margin': 4605, 'harold': 4606, 'naschi': 4607, '1976': 4608, 'flag': 4609, 'frantic': 4610, 'transplant': 4611, 'cush': 4612, 'rhyme': 4613, '1981': 4614, 'radiat': 4615, 'conquer': 4616, 'corman': 4617, 'down': 4618, 'preced': 4619, 'mount': 4620, 'dwarf': 4621, 'antagonist': 4622, 'shred': 4623, 'messi': 4624, 'strongest': 4625, 'assert': 4626, 'remad': 4627, 'beard': 4628, 'cow': 4629, 'spinal': 4630, 'faint': 4631, 'muscl': 4632, 'vaniti': 4633, 'info': 4634, 'deer': 4635, 'www': 4636, 'departur': 4637, 'mobil': 4638, 'brush': 4639, 'boob': 4640, 'sensual': 4641, 'discern': 4642, 'bachelor': 4643, '1945': 4644, 'danish': 
4645, 'interestingli': 4646, 'hara': 4647, '28': 4648, 'neurot': 4649, 'barn': 4650, 'off': 4651, 'scandal': 4652, 'archiv': 4653, 'raj': 4654, 'inflict': 4655, 'wield': 4656, 'ritchi': 4657, 'persuad': 4658, 'fishburn': 4659, 'divin': 4660, 'flock': 4661, 'resum': 4662, 'someday': 4663, 'triangl': 4664, 'carey': 4665, '95': 4666, 'instruct': 4667, 'bitten': 4668, 'claud': 4669, 'strive': 4670, 'mol': 4671, 'repris': 4672, 'aborigin': 4673, 'frontier': 4674, 'miracul': 4675, 'carlo': 4676, 'proclaim': 4677, 'heartwarm': 4678, 'fragil': 4679, 'senior': 4680, 'undermin': 4681, 'bate': 4682, 'anton': 4683, 'earnest': 4684, 'biblic': 4685, 'hapless': 4686, 'cher': 4687, 'harrison': 4688, 'rot': 4689, 'pixar': 4690, 'melissa': 4691, 'dim': 4692, 'mobster': 4693, 'dylan': 4694, 'europa': 4695, 'recit': 4696, 'cycl': 4697, 'hug': 4698, 'ish': 4699, 'dame': 4700, 'casino': 4701, 'timberlak': 4702, 'luka': 4703, 'prophet': 4704, 'clad': 4705, 'loretta': 4706, 'traffic': 4707, 'cliffhang': 4708, 'banter': 4709, 'hilar': 4710, 'submit': 4711, 'cb': 4712, 'jade': 4713, 'neill': 4714, 'kathryn': 4715, 'pacif': 4716, 'parson': 4717, 'helm': 4718, 'axe': 4719, 'artwork': 4720, 'vibrant': 4721, 'colin': 4722, 'pickford': 4723, 'wendigo': 4724, 'electron': 4725, 'feast': 4726, 'articl': 4727, 'illus': 4728, 'flavor': 4729, 'vile': 4730, 'static': 4731, 'northern': 4732, 'http': 4733, 'isra': 4734, 'pc': 4735, 'lucil': 4736, 'estrang': 4737, 'choke': 4738, 'rooki': 4739, 'vanessa': 4740, 'redneck': 4741, 'cerebr': 4742, 'marlon': 4743, 'trier': 4744, 'token': 4745, 'wardrob': 4746, 'seedi': 4747, 'eli': 4748, 'nope': 4749, 'blatantli': 4750, 'akin': 4751, 'aris': 4752, 'antholog': 4753, 'uma': 4754, 'foil': 4755, 'misfortun': 4756, 'orphan': 4757, 'toronto': 4758, 'mason': 4759, 'mathieu': 4760, 'milo': 4761, 'breakfast': 4762, 'alexandr': 4763, 'lui': 4764, 'venom': 4765, 'shepherd': 4766, 'bikini': 4767, 'razor': 4768, 'legitim': 4769, 'holocaust': 4770, 'bondag': 4771, 'winchest': 4772, 'jordan': 4773, 'sicken': 4774, 'jo': 4775, 'nightclub': 4776, 'charlton': 4777, 'ceremoni': 4778, 'boyer': 4779, 'feminin': 4780, 'peer': 4781, 'glare': 4782, 'ideolog': 4783, 'fifth': 4784, 'deem': 4785, 'audrey': 4786, 'cartoonish': 4787, 'dudley': 4788, 'affleck': 4789, 'huston': 4790, 'magician': 4791, 'clinic': 4792, 'swept': 4793, 'frog': 4794, 'tack': 4795, 'shorter': 4796, 'psych': 4797, 'gunga': 4798, 'linear': 4799, 'retriev': 4800, 'abund': 4801, 'oppon': 4802, 'comprehend': 4803, 'outdat': 4804, 'turd': 4805, 'wrestler': 4806, 'styliz': 4807, 'disregard': 4808, 'gilbert': 4809, 'knightley': 4810, 'highway': 4811, 'howl': 4812, 'smack': 4813, 'leather': 4814, 'durat': 4815, 'newer': 4816, 'corn': 4817, 'evolut': 4818, 'uniformli': 4819, '1991': 4820, 'compris': 4821, 'lighter': 4822, 'greet': 4823, 'einstein': 4824, 'toe': 4825, '1994': 4826, 'deliver': 4827, 'energet': 4828, 'cemeteri': 4829, 'snatch': 4830, 'bogu': 4831, 'sleaz': 4832, 'plate': 4833, 'client': 4834, 'monument': 4835, 'cuban': 4836, 'spawn': 4837, 'chip': 4838, 'boo': 4839, 'summar': 4840, 'collector': 4841, 'tara': 4842, '4th': 4843, 'breakdown': 4844, 'moe': 4845, 'conrad': 4846, 'braveheart': 4847, 'bastard': 4848, 'lavish': 4849, 'senat': 4850, 'spine': 4851, 'mitch': 4852, 'btw': 4853, 'phenomen': 4854, 'lifeless': 4855, 'whack': 4856, 'goldsworthi': 4857, 'potter': 4858, 'salman': 4859, 'inaccuraci': 4860, 'belli': 4861, 'lex': 4862, 'capot': 4863, 'jedi': 4864, 'signal': 4865, 'randolph': 4866, 'bulk': 4867, 'alleg': 4868, 'ol': 4869, 
'eleven': 4870, 'firmli': 4871, 'constitut': 4872, 'pronounc': 4873, 'undertak': 4874, 'appl': 4875, 'nina': 4876, 'historian': 4877, 'wtf': 4878, 'embark': 4879, 'jam': 4880, 'fluid': 4881, 'bori': 4882, 'jule': 4883, 'sorrow': 4884, 'spectacl': 4885, 'neatli': 4886, 'occup': 4887, 'trauma': 4888, 'mcqueen': 4889, 'ie': 4890, 'creek': 4891, 'replay': 4892, '1974': 4893, 'cecil': 4894, 'jare': 4895, 'kazan': 4896, '1977': 4897, 'armstrong': 4898, 'judd': 4899, 'healthi': 4900, 'liu': 4901, 'luxuri': 4902, 'gilliam': 4903, 'clara': 4904, 'outright': 4905, 'undead': 4906, 'kent': 4907, 'evelyn': 4908, 'inaccur': 4909, 'subtli': 4910, 'propheci': 4911, 'decapit': 4912, 'forgiven': 4913, 'truman': 4914, 'antonio': 4915, 'carmen': 4916, 'sidewalk': 4917, 'cape': 4918, 'comb': 4919, 'congratul': 4920, 'miniseri': 4921, 'curtain': 4922, 'mum': 4923, 'groan': 4924, 'vignett': 4925, 'galaxi': 4926, 'vain': 4927, 'knee': 4928, 'comprehens': 4929, 'id': 4930, 'tokyo': 4931, 'relentless': 4932, 'spray': 4933, 'bsg': 4934, 'inclus': 4935, 'pepper': 4936, 'unattract': 4937, 'roar': 4938, 'kiddi': 4939, 'unsuspect': 4940, 'walt': 4941, '1985': 4942, 'poker': 4943, 'porter': 4944, 'palm': 4945, 'genet': 4946, 'conan': 4947, 'abound': 4948, 'miami': 4949, 'fruit': 4950, 'lanc': 4951, 'pioneer': 4952, 'lauren': 4953, 'paula': 4954, 'meal': 4955, 'ash': 4956, 'aussi': 4957, 'blur': 4958, 'basket': 4959, 'goldblum': 4960, 'rosario': 4961, 'sacrif': 4962, 'bait': 4963, 'vastli': 4964, 'profil': 4965, 'hackman': 4966, 'sophi': 4967, 'frontal': 4968, 'drone': 4969, 'reincarn': 4970, 'playboy': 4971, 'victorian': 4972, 'assort': 4973, 'incorrect': 4974, 'monti': 4975, 'handicap': 4976, 'optimist': 4977, 'epitom': 4978, 'verg': 4979, 'hostil': 4980, 'masterson': 4981, 'omin': 4982, 'substanti': 4983, 'detach': 4984, 'bravo': 4985, 'sparkl': 4986, 'ingrid': 4987, 'turtl': 4988, 'scariest': 4989, 'jill': 4990, 'ghetto': 4991, 'weaker': 4992, '21st': 4993, 'evan': 4994, 'growth': 4995, 'motorcycl': 4996, 'rapidli': 4997, 'weari': 4998, 'macabr': 4999}
</code>
### Save `word_dict`
Later on, when we construct an endpoint that processes a submitted review, we will need to make use of the `word_dict` we have created. As such, we will save it to a file now for future use._____no_output_____
<code>
import os      # for path handling (likely already imported earlier in the notebook)
import pickle  # used below to serialize word_dict

data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)_____no_output_____with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)_____no_output_____
</code>
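For reference, when the inference code later needs this vocabulary, it can be restored with a matching `pickle.load` call. The snippet below is a minimal sketch, assuming the same `data_dir` and file name used above; the variable name `restored_word_dict` is purely illustrative._____no_output_____
<code>
import os
import pickle

# Reload the saved vocabulary; this mirrors what the endpoint code will do later.
with open(os.path.join(data_dir, 'word_dict.pkl'), "rb") as f:
    restored_word_dict = pickle.load(f)

print(len(restored_word_dict))  # should match the size of the original word_dict_____no_output_____
</code>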
### Transform the reviews
Now that we have a word dictionary that maps the words appearing in the reviews to integers, it is time to use it to convert each review into its integer-sequence representation, padding or truncating every review to a fixed length, which in our case is `500`._____no_output_____
<code>
import numpy as np  # np.array is used below (likely already imported earlier in the notebook)

def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)_____no_output_____train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)_____no_output_____
</code>
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?_____no_output_____
<code>
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(train_X[100])
print()
print(len(train_X[100]))[ 38 21 1 947 172 68 11 144 35 371 11 124 267 1
47 304 1046 123 2433 2216 761 841 2 17 7 81 32 26
9 346 39 729 17 796 831 354 794 37 729 1 399 355
1 1737 60 188 11 121 440 30 356 947 172 11 22 171
236 226 7 287 761 426 541 1 727 1837 4974 124 922 796
243 9 58 99 60 53 2 106 7 2538 1 1 856 542
399 503 18 346 1431 25 33 2539 1 593 4 4 6 223
12 18 141 1327 59 30 32 108 4 319 1 286 241 973
211 78 62 346 593 1330 207 30 313 152 203 763 124 142
2433 2216 33 1174 327 68 10 11 99 26 2 2216 47 770
218 11 30 56 47 281 2216 2 32 1712 2 4 95 26
60 1 2382 1364 5 32 7 19 152 9 2 563 1948 908
152 2216 354 4445 2318 557 6 172 27 1930 84 208 14 29
611 1 1099 950 34 50 974 168 1 161 9 2216 33 2
27 20 7 2216 32 2 34 683 33 9 7 32 278 210
426 36 38 16 38 129 47 22 47 2 979 1 251 1378
1 189 251 146 236 231 50 936 99 2 99 66 16 558
66 39 1687 1946 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
500
</code>
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why might this be (or not be) a problem?_____no_output_____**Answer:** Passing the training and testing sets through these functions has both advantages and disadvantages. Preprocessing removes noise from the data, which makes the model more accurate and the training process more computationally efficient, since it does not have to deal with useless tokens. However, truncating each review to a fixed length might hurt the model's understanding of its sentiment, since the words that indicate the sentiment may appear at the very end of the review and get cut off._____no_output_____## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review._____no_output_____
<code>
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)_____no_output_____
</code>
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model._____no_output_____
<code>
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()_____no_output_____input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)_____no_output_____
</code>
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory._____no_output_____## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below._____no_output_____
<code>
!pygmentize train/model.py
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """
    This is the simple RNN model we will be using to perform Sentiment Analysis.
    """

    def __init__(self, embedding_dim, hidden_dim, vocab_size):
        """
        Initialize the model by setting up the various layers.
        """
        super(LSTMClassifier, self).__init__()

        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.dense = nn.Linear(in_features=hidden_dim, out_features=1)
        self.sig = nn.Sigmoid()

        self.word_dict = None

    def forward(self, x):
        """
        Perform a forward pass of our model on some input.
        """
        x = x.t()
        lengths = x[0,:]
        reviews = x[1:,:]
        embeds = self.embedding(reviews)
        lstm_out, _ = self.lstm(embeds)
        out = self.dense(lstm_out)
        out = out[lengths - 1, range(len(lengths))]
        return self.sig(out.squeeze())
</code>
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving._____no_output_____
<code>
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)_____no_output_____
</code>
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later._____no_output_____
<code>
def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
            model.zero_grad()                # clear gradients accumulated from the previous batch
            output = model(batch_X)          # forward pass through the LSTM classifier
            loss = loss_fn(output, batch_y)  # BCE loss against the true labels
            loss.backward()                  # backpropagate
            optimizer.step()                 # update the model parameters
            total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))_____no_output_____
</code>
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose._____no_output_____
<code>
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)Epoch: 1, BCELoss: 0.6950652122497558
Epoch: 2, BCELoss: 0.6860353350639343
Epoch: 3, BCELoss: 0.678533959388733
Epoch: 4, BCELoss: 0.6703558683395385
Epoch: 5, BCELoss: 0.6605463266372681
</code>
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run._____no_output_____### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file._____no_output_____
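As a purely illustrative, hedged sketch of that pattern (the argument names, defaults, and environment variables below are assumptions for this sketch and may not match the provided `train/train.py` exactly), hyperparameters typically arrive as command-line flags, while data and model locations come from the training container's environment:

```python
# Hedged sketch: parsing hyperparameters and SageMaker-provided paths in a training script.
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # Hyperparameters passed through the estimator's `hyperparameters` dictionary
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--embedding_dim', type=int, default=32)
    parser.add_argument('--hidden_dim', type=int, default=100)
    parser.add_argument('--vocab_size', type=int, default=5000)

    # Locations provided by the SageMaker training container (assumed environment variables)
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))

    args = parser.parse_args()
    # args.epochs, args.hidden_dim, etc. then drive model construction and training.
    print(args)
```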
<code>
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 9,
'hidden_dim': 200,
})_____no_output_____estimator.fit({'training': input_data})'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
</code>
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`); a minimal sketch of this layout is shown below.
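The sketch below only illustrates that layout; it is not the actual contents of the provided `train.py`, and the `...` placeholders stand in for the real implementation:

```python
# Hedged sketch of the entry-point layout: model_fn is defined at module level so the
# inference container can import it, while training-only code sits under the main guard.
def model_fn(model_dir):
    """Load and return the trained model from model_dir (imported at inference time)."""
    ...


if __name__ == '__main__':
    # Runs only when the script is launched for training, never on import:
    # parse hyperparameters, load the data, build the model, call train(), save artifacts.
    ...
```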
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model._____no_output_____
<code>
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.p2.xlarge')Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
</code>
## Step 7: Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is._____no_output_____
<code>
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)_____no_output_____# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions_____no_output_____predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]_____no_output_____from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)_____no_output_____
</code>
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?_____no_output_____**Answer:** Recalling that the XGBoost model's accuracy was 0.85696, the difference between the two models in terms of accuracy isn't that great. However, I'd prefer the LSTM implementation over the bag-of-words one, since LSTMs keep track of previous inputs._____no_output_____### (TODO) More testing
We now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model._____no_output_____
<code>
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'_____no_output_____
</code>
The question we now need to answer is, how do we send this review to our model?
Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`._____no_output_____
<code>
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_review_words = review_to_words(test_review)
test_review_words, length = convert_and_pad(word_dict, test_review_words)
test_data = np.array([[length] + test_review_words])_____no_output_____
</code>
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review._____no_output_____
<code>
predictor.predict(test_data)_____no_output_____
</code>
Since the return value of our model is close to `1`, we can be fairly confident that the review we submitted is positive._____no_output_____### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it._____no_output_____
<code>
estimator.delete_endpoint()estimator.delete_endpoint() will be deprecated in SageMaker Python SDK v2. Please use the delete_endpoint() function on your predictor instead.
</code>
## Step 6 (again): Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided._____no_output_____
<code>
!pygmentize serve/predict.py
import argparse
import json
import os
import pickle
import sys
import sagemaker_containers
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data

from model import LSTMClassifier

from utils import review_to_words, convert_and_pad

def model_fn(model_dir):
    """Load the PyTorch model from the `model_dir` directory."""
    print("Loading model.")

    # First, load the parameters used to create the model.
    model_info = {}
    model_info_path = os.path.join(model_dir, 'model_info.pth')
    with open(model_info_path, 'rb') as f:
        model_info = torch.load(f)

    print("model_info: {}".format(model_info))

    # Determine the device and construct the model.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = LSTMClassifier(model_info['embedding_dim'], model_info['hidden_dim'], model_info['vocab_size'])

    # Load the stored model parameters.
    model_path = os.path.join(model_dir, 'model.pth')
    with open(model_path, 'rb') as f:
        model.load_state_dict(torch.load(f))

    # Load the saved word_dict.
    word_dict_path = os.path.join(model_dir, 'word_dict.pkl')
    with open(word_dict_path, 'rb') as f:
        model.word_dict = pickle.load(f)

    model.to(device).eval()

    print("Done loading model.")
    return model

def input_fn(serialized_input_data, content_type):
    print('Deserializing the input data.')
    if content_type == 'text/plain':
        data = serialized_input_data.decode('utf-8')
        return data
    raise Exception('Requested unsupported ContentType in content_type: ' + content_type)

def output_fn(prediction_output, accept):
    print('Serializing the generated output.')
    return str(prediction_output)

def predict_fn(input_data, model):
    print('Inferring sentiment of input data.')

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    if model.word_dict is None:
        raise Exception('Model has not been loaded properly, no word_dict.')

    # TODO: Process input_data so that it is ready to be sent to our model.
    #       You should produce two variables:
    #         data_X   - A sequence of length 500 which represents the converted review
    #         data_len - The length of the review
    data_X, data_len = convert_and_pad(model.word_dict, review_to_words(input_data))

    # Using data_X and data_len we construct an appropriate input tensor. Remember
    # that our model expects input data of the form 'len, review[500]'.
    data_pack = np.hstack((data_len, data_X))
    data_pack = data_pack.reshape(1, -1)

    data = torch.from_numpy(data_pack)
    data = data.to(device)

    # Make sure to put the model into evaluation mode
    model.eval()

    # TODO: Compute the result of applying the model to the input data. The variable `result` should
    #       be a numpy array which contains a single integer which is either 1 or 0
    with torch.no_grad():
        output = model.forward(data)

    result = np.round(output.numpy())

    return result
</code>
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code, and the `input_fn` and `output_fn` methods are very simple; your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file._____no_output_____### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data._____no_output_____
<code>
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
</code>
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive._____no_output_____
<code>
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(float(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results_____no_output_____ground, results = test_reviews()Starting pos files
Starting neg files
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)_____no_output_____
</code>
As an additional test, we can try sending the `test_review` that we looked at earlier._____no_output_____
<code>
predictor.predict(test_review)_____no_output_____
</code>
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back._____no_output_____## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send data to and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):
# The SageMaker runtime is what allows us to invoke the endpoint that we've created.
runtime = boto3.Session().client('sagemaker-runtime')
# Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
ContentType = 'text/plain', # The data format that is expected
Body = event['body']) # The actual review
# The response is an HTTP response whose body contains the result of our inference
result = response['Body'].read().decode('utf-8')
return {
'statusCode' : 200,
'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
'body' : result
}
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below._____no_output_____
<code>
predictor.endpoint_____no_output_____
</code>
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**._____no_output_____## Step 8: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission._____no_output_____Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?_____no_output_____**Answer:**
**_Example Review:_** In my opinion the greatest comedy ever made! Daniels and Carrey work magic together in this film that more than 20 years later is finally considered a classic. When it first came out it was labeled as complete garbage and toilet humour. But to those people I say it is the best garbage and toilet humour you could ask for. What do you expect to see when the title of the film is ''Dumb and Dumber''. The quick back and forth between the two lead actors and not so subtle chirping often goes unnoticed because you're laughing so hard from the previous scene. The jokes and dialogue are so good and Carrey and Daniels deliver them with such authenticity. It's because of this reason that I am able to watch this movie countless times and still be entertained, as I listen to funny remarks that I missed on the previous 100 viewings. What's truly great about this film is that even people who say they hate it will always without fail crack a laugh or two when re-watching it. They just don't want to admit that they find this nonsense to be funny...but it is. More than 20 years later and I still have not seen a buddy comedy that comes close to matching it. _[Source: Dumb and Dumber, IMDB]_
**_Predicted Sentiment:_** POSITIVE_____no_output_____### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill._____no_output_____
<code>
predictor.delete_endpoint()_____no_output_____
</code>
|
{
"repository": "naderabdalghani/udacity-deep-learning-nanodegree",
"path": "sagemaker-deployment/Project/solution/SageMaker Project.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 176730,
"hexsha": "cb6394c4416e1730f6dc36a9c2564b063fbcb313",
"max_line_length": 76621,
"avg_line_length": 87.9691388751,
"alphanum_fraction": 0.6135800373
}
|
# Notebook from mchierici/embo2019popgen
Path: notebooks/chierici_practical_part2.ipynb
<div>
<h1 style="text-align: center;">Machine learning from scratch - Part II</h1>
<h2 style="text-align: center;">EMBO practical course on population genomics 2019 @ Procida, Italy</h2>
<div>
---
### Authors: Marco Chierici & Margherita Francescatto
### _FBK/MPBA_
---_____no_output_____**Recap.** We are using a subset of the SEQC neuroblastoma data set [Zhang et al, Genome Biology, 2015] consisting of 272 samples (136 training, 136 test). The data was preprocessed a bit to facilitate the progress of the tutorial._____no_output_____We start by loading the modules we need to process the data._____no_output_____
<code>
import numpy as np
import pylab as pl ## for plotting
import pandas as pd ## for reading text files and manipulating data frames
from sklearn import neighbors ## kNN classifier
from sklearn import svm ## SVM classifier
from sklearn.ensemble import RandomForestClassifier ## RF classifier
from sklearn.model_selection import cross_val_score ## needed to train in CV
%matplotlib inline
np.random.seed(42) ## set random seed just in case_____no_output_____
</code>
Define files to read:_____no_output_____
<code>
## for convenience, define the data directory as a variable
DATA_DIR = "../data/" #"/path/to/your/data"_____no_output_____DATA_TR = DATA_DIR + "MAV-G_272_tr.txt.gz"
DATA_TS = DATA_DIR + "MAV-G_272_ts.txt.gz"
LABS_TR = DATA_DIR + "labels_tr.txt"
LABS_TS = DATA_DIR + "labels_ts.txt"_____no_output_____
</code>
Read in the files as _pandas dataframes_ (they are conceptually like R data frames):_____no_output_____
<code>
data_tr = pd.read_csv(DATA_TR, sep = "\t")
data_ts = pd.read_csv(DATA_TS, sep = "\t")_____no_output_____
</code>
Since we already looked at the data in the first part of the tutorial, we move directly to the juicy stuff._____no_output_____We drop the first column from the train and test expression sets, since it's just the sample IDs..._____no_output_____
<code>
data_tr = data_tr.drop('sampleID', axis=1)
data_ts = data_ts.drop('sampleID', axis=1)_____no_output_____
</code>
...and store the data into Numpy arrays._____no_output_____
<code>
x_tr = data_tr.values
x_ts = data_ts.values_____no_output_____
</code>
Now we read in the files containing labels and select the column with the CLASS target to do our first round of analyses._____no_output_____
<code>
labs_tr = pd.read_csv(LABS_TR, sep = "\t")
labs_ts = pd.read_csv(LABS_TS, sep = "\t")
class_lab_tr = labs_tr[['CLASS']]
class_lab_ts = labs_ts[['CLASS']]
y_tr = class_lab_tr.values.ravel()
y_ts = class_lab_ts.values.ravel()_____no_output_____
</code>
In the previous part of the tutorial, we focused on the k-NN classifier. In the previous lecture, however, we explored theoretical aspects of two other broadly used classifiers: Support Vector Machines (SVMs) and Random Forests (RFs). In this second part of the tutorial, the first thing we want to do is assess how these two alternative classification methods perform on our neuroblastoma dataset._____no_output_____We start with SVM. We first rescale the data, import the relevant model and create an instance of the SVM classifier._____no_output_____
<code>
from sklearn.preprocessing import MinMaxScaler
## first you need to create a "scaler" object
scaler = MinMaxScaler(feature_range=(-1,1))
## then you actually scale data by fitting the scaler object on the data
scaler.fit(x_tr)
x_tr = scaler.transform(x_tr)
x_ts = scaler.transform(x_ts)_____no_output_____## import support vector classifier (SVC) and create an instance
from sklearn.svm import SVC
svc = SVC(random_state=42, verbose=1, kernel='linear')_____no_output_____
</code>
Note that the specification _kernel = 'linear'_ implies that a linear kernel will be used. If you remember from the lecture, this means that a linear function is used to define the decision boundaries of the classifier. Alternatives include _'poly'_ and _'rbf'_ for polynomial or Gaussian kernels, respectively. Scikit-learn also offers an alternative implementation of linear SVMs (`LinearSVC`). You can find more details in the Scikit-learn User Guide._____no_output_____As previously done with the k-NN classifier, we fit the SVM and get the predictions for the test data._____no_output_____
<code>
## fit the model and get the predictions
svc.fit(x_tr, y_tr)
class_pred_ts = svc.predict(x_ts)_____no_output_____
</code>
Now we take a look at the classification metrics introduced in the first part of the tutorial. To access the functions, we need to load the metrics module._____no_output_____
<code>
from sklearn import metrics
print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, class_pred_ts))
print('ACC = ', metrics.accuracy_score(class_lab_ts, class_pred_ts))
print('SENS = ', metrics.recall_score(class_lab_ts, class_pred_ts))_____no_output_____
</code>
We can also give a look at the classification report._____no_output_____
<code>
print(metrics.classification_report(class_lab_ts, class_pred_ts))_____no_output_____
</code>
Exercise: **one-shot Random Forest classification**. _Hint:_ the RF classifier is implemented in the Scikit-learn class RandomForestClassifier, from the _sklearn.ensemble_ module._____no_output_____
<code>
## space for exercise
from sklearn.ensemble import RandomForestClassifier as RFC
clf = RFC(n_estimators=500)
clf.fit(x_tr, y_tr)
y_pred_rfc = clf.predict(x_ts)
print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, y_pred_rfc))
print('ACC = ', metrics.accuracy_score(class_lab_ts, y_pred_rfc))
print('SENS = ', metrics.recall_score(class_lab_ts, y_pred_rfc))
print(metrics.classification_report(class_lab_ts, y_pred_rfc))_____no_output_____
</code>
## Parameter tuning_____no_output_____As mentioned in the lecture, Scikit-learn offers a very useful and flexible tool for parameter tuning called _GridSearchCV_. While the tool is very sophisticated and efficient, it is useful to at least try an example _by hand_ to understand what is happening in the background.
For this example we use a linear SVM and try to tune the C parameter. You might remember from the lectures that the parameter C essentially controls how much we want to avoid misclassifying each training example. Large values of C result in smaller margins, i.e. closer fitting to the training data. As mentioned in the classes, the drawback is over-fitting, resulting in poor generalization._____no_output_____
<code>
## define the sequence of C values we want to use in the search of the best one
C_list = [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1]
for C in C_list:
print('C = ', C)
svc = svm.SVC(kernel='linear', C=C)
svc.fit(x_tr, class_lab_tr.values.ravel())
class_pred_ts = svc.predict(x_ts)
print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, class_pred_ts))
print('ACC = ', metrics.accuracy_score(class_lab_ts, class_pred_ts))
print('SENS = ', metrics.recall_score(class_lab_ts, class_pred_ts), "\n")_____no_output_____
</code>
From C = 1e-03 onwards the classification performance reaches a plateau. C = 1e-04 yields the highest MCC on the test set: when tuning the parameters we would consider this the best choice for the problem._____no_output_____**Exercise:** as you already saw in the lectures, there are many parameters that can be tuned, even when considering a single simple classifier. For example, if you consider an SVM with the 'rbf' kernel, you could check how performance changes with different values of C **and** gamma, for example using two nested loops (a possible sketch is given after the empty exercise cell below)._____no_output_____
<code>
## space for exercise_____no_output_____
</code>
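One possible way to approach the exercise above is sketched below; it is only an illustration (not an official solution), it reuses the `x_tr`, `x_ts`, `y_tr`, `class_lab_ts` arrays defined earlier, and, like the hand-tuning of C above, it evaluates directly on the test set:

```python
# Hedged sketch: nested-loop search over C and gamma for an RBF-kernel SVM.
from sklearn import svm, metrics

C_list = [0.001, 0.01, 0.1, 1, 10]
gamma_list = [0.001, 0.01, 0.1, 1]

best = (None, None, -1)  # (C, gamma, MCC)
for C in C_list:
    for gamma in gamma_list:
        clf = svm.SVC(kernel='rbf', C=C, gamma=gamma)
        clf.fit(x_tr, y_tr)
        pred = clf.predict(x_ts)
        mcc = metrics.matthews_corrcoef(class_lab_ts, pred)
        print('C =', C, 'gamma =', gamma, 'MCC =', round(mcc, 3))
        if mcc > best[2]:
            best = (C, gamma, mcc)

print('Best (C, gamma, MCC):', best)
```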
As we mentioned, Scikit-learn offers a fully automated parameter tuning engine. We illustrate its power with a simple example on our data. We use GridSearchCV to search through a grid of C and gamma parameter options for an SVM with the 'rbf' kernel. To do this we define a small function, `svc_param_selection`, that does the work for us._____no_output_____
<code>
from sklearn.model_selection import GridSearchCV
def svc_param_selection(X, y, nfolds):
Cs = [0.001, 0.01, 0.1, 1, 10]
gammas = [0.001, 0.01, 0.1, 1, 'auto']
param_grid = {'C': Cs, 'gamma' : gammas}
grid_search = GridSearchCV(svm.SVC(kernel='rbf'), param_grid, cv=nfolds, n_jobs=4)
grid_search.fit(X, y)
grid_search.best_params_
return grid_search.best_params_
svc_param_selection(x_tr, y_tr, 5)_____no_output_____
</code>
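Assuming `svc_param_selection` returns the best parameter dictionary (e.g. `{'C': ..., 'gamma': ...}`), here is a short hedged sketch of how the selected parameters could be used to refit and evaluate the classifier on the test set, reusing the arrays defined earlier:

```python
# Hedged sketch: refit an RBF SVM with the parameters selected by the grid search.
best_params = svc_param_selection(x_tr, y_tr, 5)
tuned_svc = svm.SVC(kernel='rbf', **best_params)
tuned_svc.fit(x_tr, y_tr)

class_pred_ts = tuned_svc.predict(x_ts)
print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, class_pred_ts))
print('ACC = ', metrics.accuracy_score(class_lab_ts, class_pred_ts))
```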
## Feature ranking_____no_output_____As mentioned in the lecture, random forests have a built-in tool for feature ranking._____no_output_____
<code>
# Build a forest and compute the feature importances
rf = RandomForestClassifier(n_estimators=250)
rf.fit(x_tr, y_tr)_____no_output_____
</code>
For the sake of completeness, we make the predictions and check the classification performance._____no_output_____
<code>
class_pred_ts = rf.predict(x_ts)
print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, class_pred_ts))
print('ACC = ', metrics.accuracy_score(class_lab_ts, class_pred_ts))
print('SENS = ', metrics.recall_score(class_lab_ts, class_pred_ts))_____no_output_____
</code>
Now extract the feature importances and display the first 10:_____no_output_____
<code>
importances = rf.feature_importances_
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking (top 10 features):")
for f in range(10):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))_____no_output_____
</code>
It would be nice to know which genes they actually correspond to. If you remember, the gene names are the column names of the pandas dataframe containing the training/test data._____no_output_____
<code>
columnsNamesArr = data_tr.columns.values
for i in range(10):
print(columnsNamesArr[indices[i]])_____no_output_____
</code>
## Extra exercises_____no_output_____The classification task considered so far (CLASS) is quite easy, since the classes reflect extreme disease outcomes (favorable vs unfavorable).
A more interesting task could be the prediction of Event-Free Survival (EFS). To do this, an extended version of the dataset is provided in the `/data/marco` directory:_____no_output_____
<code>
DATA_TR = DATA_DIR + "MAV-G_498_tr.txt.gz"
DATA_TS = DATA_DIR + "MAV-G_498_ts.txt.gz"
LABS_TR = DATA_DIR + "labels_full_tr.txt"
LABS_TS = DATA_DIR + "labels_full_ts.txt"_____no_output_____
</code>
Read the data in and prepare the `x_tr`, `x_ts`, `y_tr`, `y_ts` Numpy arrays, as before, using "EFS" as the target variable._____no_output_____Recalling concepts from the first practical, perform an explorative PCA analysis, plotting the first two components._____no_output_____Train a kNN classifier in one-shot mode: fit the model on the training set and predict the labels on the test set. Compute performance metrics using the provided true labels of the test set._____no_output_____Experiment with different classifier(s) and/or different parameters._____no_output_____Try tuning the parameters (e.g. using GridSearchCV) and find the optimal parameter set._____no_output_____Using the optimal parameters, run an (iterated) cross-validation on the training set; compute the average cross-validation metrics._____no_output_____Using the optimal parameters, train a model on the whole training set and predict the labels of the test set. Compute the metrics and compare them with the average cross-validation metrics. What do you expect? Use the trained model to rank the features and inspect the top ones._____no_output_____
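As a hedged starting point for these exercises (one possible sketch, assuming the label files contain an `EFS` column and the expression files have the same `sampleID` layout as before, and reusing the `pd`, `pl`, and `metrics` modules imported earlier in the notebook), the data could be loaded and a quick PCA plus one-shot kNN baseline run like this:

```python
# Hedged sketch for the extra exercises: load the 498-sample data, explore with PCA,
# and fit a one-shot kNN baseline on the EFS target.
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

data_tr = pd.read_csv(DATA_TR, sep="\t").drop('sampleID', axis=1)
data_ts = pd.read_csv(DATA_TS, sep="\t").drop('sampleID', axis=1)
labs_tr = pd.read_csv(LABS_TR, sep="\t")
labs_ts = pd.read_csv(LABS_TS, sep="\t")

x_tr, x_ts = data_tr.values, data_ts.values
y_tr, y_ts = labs_tr['EFS'].values, labs_ts['EFS'].values

# Explorative PCA: plot the first two components, colored by EFS
pcs = PCA(n_components=2).fit_transform(x_tr)
pl.scatter(pcs[:, 0], pcs[:, 1], c=y_tr)
pl.xlabel('PC1')
pl.ylabel('PC2')
pl.show()

# One-shot kNN baseline
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(x_tr, y_tr)
y_pred = knn.predict(x_ts)
print('MCC = ', metrics.matthews_corrcoef(y_ts, y_pred))
print('ACC = ', metrics.accuracy_score(y_ts, y_pred))
```

From here, the same tuning (GridSearchCV), cross-validation, and feature-ranking steps used above for the CLASS target can be repeated for EFS.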
|
{
"repository": "mchierici/embo2019popgen",
"path": "notebooks/chierici_practical_part2.ipynb",
"matched_keywords": [
"genomics",
"biology"
],
"stars": null,
"size": 20766,
"hexsha": "cb9166a423d0dec44280fd6533489c41219342c2",
"max_line_length": 431,
"avg_line_length": 27.7620320856,
"alphanum_fraction": 0.5941442743
}
|
# Notebook from ayush9pandey/sbmlReduce
Path: examples/toggle-switch example.ipynb
<code>
from autoreduce import *
import numpy as np
from sympy import symbols_____no_output_____# Phenomenological model at the RNA level, after applying conservation laws and other approximations
n = 4 # Number of states
nouts = 2 # Number of outputs
# Inputs by user
x_init = np.zeros(n)
n = 4 # Number of states
timepoints_ode = np.linspace(0, 100, 100)
C = [[0, 0, 1, 0], [0, 0, 0, 1]]
nstates_tol = 3
error_tol = 0.3
# System dynamics symbolically
# params = [100, 50, 10, 5, 5, 0.02, 0.02, 0.01, 0.01]
# params = [1, 1, 5, 0.1, 0.2, 1, 1, 100, 100] # Parameter set for which reduction doesn't work
# K,b_t,b_l,d_t,d_l,del_t,del_l,beta_t,beta_l = params
x0 = symbols('x0')
x1 = symbols('x1')
x2 = symbols('x2')
x3 = symbols('x3')
x = [x0, x1, x2, x3]
K = symbols('K')
b_t = symbols('b_t')
b_l = symbols('b_l')
d_t = symbols('d_t')
d_l = symbols('d_l')
del_t = symbols('del_t')
del_l = symbols('del_l')
beta_t = symbols('beta_t')
beta_l = symbols('beta_l')
params = [K,b_t,b_l,d_t,d_l,del_t,del_l,beta_t,beta_l]
f0 = K * b_t**2/(b_t**2 + x[3]**2) - d_t * x[0]
f1 = K * b_l**2/(b_l**2 + x[2]**2) - d_l * x[1]
f2 = beta_t * x[0] - del_t * x[2]
f3 = beta_l * x[1] - del_l * x[3]
f = [f0,f1,f2,f3]
# parameter values
params_values = [100, 50, 10, 5, 5, 0.02, 0.02, 0.01, 0.01]
sys = System(x, f, params = params, params_values = params_values, C = C, x_init = x_init)_____no_output_____from autoreduce.utils import get_ODE
sys_ode = get_ODE(sys, timepoints_ode)
sol = sys_ode.solve_system().T
try:
import matplotlib.pyplot as plt
plt.plot(timepoints_ode, np.transpose(np.array(C)@sol))
plt.xlabel('Time')
plt.ylabel('[Outputs]')
plt.show()
except:
print('Plotting libraries missing.')_____no_output_____from autoreduce.utils import get_SSM
timepoints_ssm = np.linspace(0,100,100)
sys_ssm = get_SSM(sys, timepoints_ssm)
Ss = sys_ssm.compute_SSM() # len(timepoints) x len(params) x len(states)
out_Ss = []
for i in range(len(params)):
out_Ss.append((np.array(C)@(Ss[:,i,:].T)))
out_Ss = np.reshape(np.array(out_Ss), (len(timepoints_ssm), len(params), nouts))SSM Progress: |██████████████████████████████████████████████████| 100.0% Complete
try:
import seaborn as sn
import matplotlib.pyplot as plt
for j in range(nouts):
sn.heatmap(out_Ss[:,:,j].T)
plt.xlabel('Time')
plt.ylabel('Parameters')
plt.title('Sensitivity of output[{0}] with respect to all parameters'.format(j))
plt.show()
except:
print('Plotting libraries missing.')_____no_output_____from autoreduce.utils import get_reducible
timepoints_ssm = np.linspace(0,100,10)
timepoints_ode = np.linspace(0, 100, 100)
sys_reduce = get_reducible(sys, timepoints_ode, timepoints_ssm)
results = sys_reduce.reduce_simple()Successful time-scale separation solution obtained with states: [x2, x3]!
SSM Progress: |██████████████████████████████████████████████████| 100.0% Complete
SSM Progress: |██████████████████████████████████████████████████| 100.0% Complete
list(results.keys())[0].f[1]_____no_output_____reduced_system, collapsed_system = sys_reduce.solve_timescale_separation([x0,x1], fast_states = [x3, x2])Successful time-scale separation solution obtained with states: [x0, x1]!
reduced_system.f[1]_____no_output_____
</code>
|
{
"repository": "ayush9pandey/sbmlReduce",
"path": "examples/toggle-switch example.ipynb",
"matched_keywords": [
"RNA"
],
"stars": 3,
"size": 54836,
"hexsha": "cb91c32fc9f27e1dfc23e8e622e1b0a71a6ed615",
"max_line_length": 15896,
"avg_line_length": 180.9768976898,
"alphanum_fraction": 0.8885586111
}
|
# Notebook from EmmaK0822/web_scraping
Path: mission_to_mars.ipynb
<code>
# Use Splinter to navigate the sites when needed and BeautifulSoup to help find and parse out the necessary data.
from splinter import Browser
from bs4 import BeautifulSoup
import pandas as pd
from selenium import webdriver_____no_output_____
</code>
# NASA Mars News_____no_output_____
<code>
executable_path = {"executable_path": "chromedriver"}
browser = Browser("chrome", **executable_path, headless=False)
url = "https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest"
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, "html.parser")_____no_output_____print(soup.prettify())<!DOCTYPE html>
[... prettified HTML of the NASA Mars News listing page omitted: the <head> of
https://mars.nasa.gov/news/ with the title "News – NASA's Mars Exploration Program",
Open Graph / Twitter meta tags, New Relic and analytics scripts, and several thousand
lines of inline @font-face and component CSS ...]
fi-hearing-aid:before{content:"\f158"}.fi-heart:before{content:"\f159"}.fi-home:before{content:"\f15a"}.fi-html5:before{content:"\f15b"}.fi-indent-less:before{content:"\f15c"}.fi-indent-more:before{content:"\f15d"}.fi-info:before{content:"\f15e"}.fi-italic:before{content:"\f15f"}.fi-key:before{content:"\f160"}.fi-laptop:before{content:"\f161"}.fi-layout:before{content:"\f162"}.fi-lightbulb:before{content:"\f163"}.fi-like:before{content:"\f164"}.fi-link:before{content:"\f165"}.fi-list-bullet:before{content:"\f166"}.fi-list-number:before{content:"\f167"}.fi-list-thumbnails:before{content:"\f168"}.fi-list:before{content:"\f169"}.fi-lock:before{content:"\f16a"}.fi-loop:before{content:"\f16b"}.fi-magnifying-glass:before{content:"\f16c"}.fi-mail:before{content:"\f16d"}.fi-male-female:before{content:"\f16e"}.fi-male-symbol:before{content:"\f16f"}.fi-male:before{content:"\f170"}.fi-map:before{content:"\f171"}.fi-marker:before{content:"\f172"}.fi-megaphone:before{content:"\f173"}.fi-microphone:before{content:"\f174"}.fi-minus-circle:before{content:"\f175"}.fi-minus:before{content:"\f176"}.fi-mobile-signal:before{content:"\f177"}.fi-mobile:before{content:"\f178"}.fi-monitor:before{content:"\f179"}.fi-mountains:before{content:"\f17a"}.fi-music:before{content:"\f17b"}.fi-next:before{content:"\f17c"}.fi-no-dogs:before{content:"\f17d"}.fi-no-smoking:before{content:"\f17e"}.fi-page-add:before{content:"\f17f"}.fi-page-copy:before{content:"\f180"}.fi-page-csv:before{content:"\f181"}.fi-page-delete:before{content:"\f182"}.fi-page-doc:before{content:"\f183"}.fi-page-edit:before{content:"\f184"}.fi-page-export-csv:before{content:"\f185"}.fi-page-export-doc:before{content:"\f186"}.fi-page-export-pdf:before{content:"\f187"}.fi-page-export:before{content:"\f188"}.fi-page-filled:before{content:"\f189"}.fi-page-multiple:before{content:"\f18a"}.fi-page-pdf:before{content:"\f18b"}.fi-page-remove:before{content:"\f18c"}.fi-page-search:before{content:"\f18d"}.fi-page:before{content:"\f18e"}.fi-paint-bucket:before{content:"\f18f"}.fi-paperclip:before{content:"\f190"}.fi-pause:before{content:"\f191"}.fi-paw:before{content:"\f192"}.fi-paypal:before{content:"\f193"}.fi-pencil:before{content:"\f194"}.fi-photo:before{content:"\f195"}.fi-play-circle:before{content:"\f196"}.fi-play-video:before{content:"\f197"}.fi-play:before{content:"\f198"}.fi-plus:before{content:"\f199"}.fi-pound:before{content:"\f19a"}.fi-power:before{content:"\f19b"}.fi-previous:before{content:"\f19c"}.fi-price-tag:before{content:"\f19d"}.fi-pricetag-multiple:before{content:"\f19e"}.fi-print:before{content:"\f19f"}.fi-prohibited:before{content:"\f1a0"}.fi-projection-screen:before{content:"\f1a1"}.fi-puzzle:before{content:"\f1a2"}.fi-quote:before{content:"\f1a3"}.fi-record:before{content:"\f1a4"}.fi-refresh:before{content:"\f1a5"}.fi-results-demographics:before{content:"\f1a6"}.fi-results:before{content:"\f1a7"}.fi-rewind-ten:before{content:"\f1a8"}.fi-rewind:before{content:"\f1a9"}.fi-rss:before{content:"\f1aa"}.fi-safety-cone:before{content:"\f1ab"}.fi-save:before{content:"\f1ac"}.fi-share:before{content:"\f1ad"}.fi-sheriff-badge:before{content:"\f1ae"}.fi-shield:before{content:"\f1af"}.fi-shopping-bag:before{content:"\f1b0"}.fi-shopping-cart:before{content:"\f1b1"}.fi-shuffle:before{content:"\f1b2"}.fi-skull:before{content:"\f1b3"}.fi-social-500px:before{content:"\f1b4"}.fi-social-adobe:before{content:"\f1b5"}.fi-social-amazon:before{content:"\f1b6"}.fi-social-android:before{content:"\f1b7"}.fi-social-apple:before{content:"\f1b8"}.fi-social-behance:befo
re{content:"\f1b9"}.fi-social-bing:before{content:"\f1ba"}.fi-social-blogger:before{content:"\f1bb"}.fi-social-delicious:before{content:"\f1bc"}.fi-social-designer-news:before{content:"\f1bd"}.fi-social-deviant-art:before{content:"\f1be"}.fi-social-digg:before{content:"\f1bf"}.fi-social-dribbble:before{content:"\f1c0"}.fi-social-drive:before{content:"\f1c1"}.fi-social-dropbox:before{content:"\f1c2"}.fi-social-evernote:before{content:"\f1c3"}.fi-social-facebook:before{content:"\f1c4"}.fi-social-flickr:before{content:"\f1c5"}.fi-social-forrst:before{content:"\f1c6"}.fi-social-foursquare:before{content:"\f1c7"}.fi-social-game-center:before{content:"\f1c8"}.fi-social-github:before{content:"\f1c9"}.fi-social-google-plus:before{content:"\f1ca"}.fi-social-hacker-news:before{content:"\f1cb"}.fi-social-hi5:before{content:"\f1cc"}.fi-social-instagram:before{content:"\f1cd"}.fi-social-joomla:before{content:"\f1ce"}.fi-social-lastfm:before{content:"\f1cf"}.fi-social-linkedin:before{content:"\f1d0"}.fi-social-medium:before{content:"\f1d1"}.fi-social-myspace:before{content:"\f1d2"}.fi-social-orkut:before{content:"\f1d3"}.fi-social-path:before{content:"\f1d4"}.fi-social-picasa:before{content:"\f1d5"}.fi-social-pinterest:before{content:"\f1d6"}.fi-social-rdio:before{content:"\f1d7"}.fi-social-reddit:before{content:"\f1d8"}.fi-social-skillshare:before{content:"\f1d9"}.fi-social-skype:before{content:"\f1da"}.fi-social-smashing-mag:before{content:"\f1db"}.fi-social-snapchat:before{content:"\f1dc"}.fi-social-spotify:before{content:"\f1dd"}.fi-social-squidoo:before{content:"\f1de"}.fi-social-stack-overflow:before{content:"\f1df"}.fi-social-steam:before{content:"\f1e0"}.fi-social-stumbleupon:before{content:"\f1e1"}.fi-social-treehouse:before{content:"\f1e2"}.fi-social-tumblr:before{content:"\f1e3"}.fi-social-twitter:before{content:"\f1e4"}.fi-social-vimeo:before{content:"\f1e5"}.fi-social-windows:before{content:"\f1e6"}.fi-social-xbox:before{content:"\f1e7"}.fi-social-yahoo:before{content:"\f1e8"}.fi-social-yelp:before{content:"\f1e9"}.fi-social-youtube:before{content:"\f1ea"}.fi-social-zerply:before{content:"\f1eb"}.fi-social-zurb:before{content:"\f1ec"}.fi-sound:before{content:"\f1ed"}.fi-star:before{content:"\f1ee"}.fi-stop:before{content:"\f1ef"}.fi-strikethrough:before{content:"\f1f0"}.fi-subscript:before{content:"\f1f1"}.fi-superscript:before{content:"\f1f2"}.fi-tablet-landscape:before{content:"\f1f3"}.fi-tablet-portrait:before{content:"\f1f4"}.fi-target-two:before{content:"\f1f5"}.fi-target:before{content:"\f1f6"}.fi-telephone-accessible:before{content:"\f1f7"}.fi-telephone:before{content:"\f1f8"}.fi-text-color:before{content:"\f1f9"}.fi-thumbnails:before{content:"\f1fa"}.fi-ticket:before{content:"\f1fb"}.fi-torso-business:before{content:"\f1fc"}.fi-torso-female:before{content:"\f1fd"}.fi-torso:before{content:"\f1fe"}.fi-torsos-all-female:before{content:"\f1ff"}.fi-torsos-all:before{content:"\f200"}.fi-torsos-female-male:before{content:"\f201"}.fi-torsos-male-female:before{content:"\f202"}.fi-torsos:before{content:"\f203"}.fi-trash:before{content:"\f204"}.fi-trees:before{content:"\f205"}.fi-trophy:before{content:"\f206"}.fi-underline:before{content:"\f207"}.fi-universal-access:before{content:"\f208"}.fi-unlink:before{content:"\f209"}.fi-unlock:before{content:"\f20a"}.fi-upload-cloud:before{content:"\f20b"}.fi-upload:before{content:"\f20c"}.fi-usb:before{content:"\f20d"}.fi-video:before{content:"\f20e"}.fi-volume-none:before{content:"\f20f"}.fi-volume-strike:before{content:"\f210"}.fi-volume:before{content
:"\f211"}.fi-web:before{content:"\f212"}.fi-wheelchair:before{content:"\f213"}.fi-widget:before{content:"\f214"}.fi-wrench:before{content:"\f215"}.fi-x-circle:before{content:"\f216"}.fi-x:before{content:"\f217"}.fi-yen:before{content:"\f218"}.fi-zoom-in:before{content:"\f219"}.fi-zoom-out:before{content:"\f21a"}/*! normalize.css v1.1.2 | MIT License | git.io/normalize */article,aside,details,figcaption,figure,footer,header,hgroup,main,nav,section,summary{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}audio:not([controls]){display:none;height:0}[hidden]{display:none}html{font-size:100%;-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%}html,button,input,select,textarea{font-family:sans-serif}body{margin:0}a:focus{outline:thin dotted}a:active,a:hover{outline:0}h1{font-size:2em;margin:0.67em 0}h2{font-size:1.5em;margin:0.83em 0}h3{font-size:1.17em;margin:1em 0}h4{font-size:1em;margin:1.33em 0}h5{font-size:0.83em;margin:1.67em 0}h6{font-size:0.67em;margin:2.33em 0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:bold}blockquote{margin:1em 40px}dfn{font-style:italic}hr{-moz-box-sizing:content-box;box-sizing:content-box;height:0}mark{background:#ff0;color:#000}p,pre{margin:1em 0}code,kbd,pre,samp{font-family:monospace, serif;_font-family:'courier new', monospace;font-size:1em}pre{white-space:pre;white-space:pre-wrap;word-wrap:break-word}q{quotes:none}q:before,q:after{content:'';content:none}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-0.5em}sub{bottom:-0.25em}dl,menu,ol,ul{margin:1em 0}dd{margin:0 0 0 40px}menu,ol,ul{padding:0 0 0 40px}nav ul,nav ol{list-style:none;list-style-image:none}img{border:0;-ms-interpolation-mode:bicubic}svg:not(:root){overflow:hidden}figure{margin:0}form{margin:0}fieldset{border:1px solid #c0c0c0;margin:0 2px;padding:0.35em 0.625em 0.75em}legend{border:0;padding:0;white-space:normal;*margin-left:-7px}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,select{text-transform:none}button,html input[type="button"],input[type="reset"],input[type="submit"]{-webkit-appearance:button;cursor:pointer;*overflow:visible}button[disabled],html input[disabled]{cursor:default}input[type="checkbox"],input[type="radio"]{box-sizing:border-box;padding:0;*height:13px;*width:13px}input[type="search"]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}input[type="search"]::-webkit-search-cancel-button,input[type="search"]::-webkit-search-decoration{-webkit-appearance:none}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}textarea{overflow:auto;vertical-align:top}table{border-collapse:collapse;border-spacing:0}html,button,input,select,textarea{color:#222}body{font-size:1em;line-height:1.4}::-moz-selection{background:#b3d4fc;text-shadow:none}::selection{background:#b3d4fc;text-shadow:none}hr{display:block;height:1px;border:0;border-top:1px solid #ccc;margin:1em 0;padding:0}audio,canvas,img,video{vertical-align:middle}fieldset{border:0;margin:0;padding:0}textarea{resize:vertical}.browsehappy{margin:0.2em 0;background:#ccc;color:#000;padding:0.2em 0}.ir{background-color:transparent;border:0;overflow:hidden;*text-indent:-9999px}.ir:before{content:"";display:block;width:0;height:150%}.hidden{display:none !important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 
0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.clearfix:before,.body_form fieldset.radio_buttons .option:before,#site_footer .sitemap_directory:before,.main_area_sitemap .sitemap_directory:before,article:before,.grid_gallery.list_view li.slide:before,.main_carousel .slick-nav:before,.main_carousel.module .slick-slider .content_body:before,.advanced_search .filter_bar .search_row:before,.content_page #primary_column:before,#secondary_column aside.list_view_module li:before,.wysiwyg_content .related_content_module ul:before,#secondary_column .related_content_module ul:before,.wysiwyg_content .related_content_module li:before,#secondary_column .related_content_module li:before,blockquote:before,.faq_section ul.q_and_a .text.answer:before,ul.item_list:before,ul.item_list>li:before,ul.item_list .list_content:before,.clearfix:after,.body_form fieldset.radio_buttons .option:after,#site_footer .sitemap_directory:after,.main_area_sitemap .sitemap_directory:after,article:after,.grid_gallery.list_view li.slide:after,.main_carousel .slick-nav:after,.main_carousel.module .slick-slider .content_body:after,.advanced_search .filter_bar .search_row:after,.content_page #primary_column:after,#secondary_column aside.list_view_module li:after,.wysiwyg_content .related_content_module ul:after,#secondary_column .related_content_module ul:after,.wysiwyg_content .related_content_module li:after,#secondary_column .related_content_module li:after,blockquote:after,.faq_section ul.q_and_a .text.answer:after,ul.item_list:after,ul.item_list>li:after,ul.item_list .list_content:after{content:" ";display:table}.clearfix:after,.body_form fieldset.radio_buttons .option:after,#site_footer .sitemap_directory:after,.main_area_sitemap .sitemap_directory:after,article:after,.grid_gallery.list_view li.slide:after,.main_carousel .slick-nav:after,.main_carousel.module .slick-slider .content_body:after,.advanced_search .filter_bar .search_row:after,.content_page #primary_column:after,#secondary_column aside.list_view_module li:after,.wysiwyg_content .related_content_module ul:after,#secondary_column .related_content_module ul:after,.wysiwyg_content .related_content_module li:after,#secondary_column .related_content_module li:after,blockquote:after,.faq_section ul.q_and_a .text.answer:after,ul.item_list:after,ul.item_list>li:after,ul.item_list .list_content:after{clear:both}.clearfix,.body_form fieldset.radio_buttons .option,#site_footer .sitemap_directory,.main_area_sitemap .sitemap_directory,article,.grid_gallery.list_view li.slide,.main_carousel .slick-nav,.main_carousel.module .slick-slider .content_body,.advanced_search .filter_bar .search_row,.content_page #primary_column,#secondary_column aside.list_view_module li,.wysiwyg_content .related_content_module ul,#secondary_column .related_content_module ul,.wysiwyg_content .related_content_module li,#secondary_column .related_content_module li,blockquote,.faq_section ul.q_and_a .text.answer,ul.item_list,ul.item_list>li,ul.item_list .list_content{*zoom:1}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a,a:visited{text-decoration:underline}a[href]:after{content:" (" attr(href) ")"}abbr[title]:after{content:" (" attr(title) ")"}.ir 
a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h2,h3{page-break-after:avoid}}html,button,input,select,textarea{color:#3c3c3c}.browsehappy{background:white;color:#333;padding:1em;position:absolute;top:0;left:0;z-index:9999;width:100%;height:100%}html.touch.-webkit-{-webkit-tap-highlight-color:transparent}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{position:absolute}.site_header_area .brand_area{background:url("https://mars.nasa.gov/assets/[email protected]") no-repeat;background-size:100%;display:inline-block;width:54px;height:54px}.site_header_area .brand_area .brand1{height:100%;float:left;text-indent:-9999px}.site_header_area .brand_area .brand2{display:block;float:left;height:100%;text-indent:-9999px}.site_header_area .brand_area a.top_logo,.site_header_area .brand_area a.sub_logo{width:100%;float:left}.site_header_area .brand_area a.top_logo{height:39%;width:30%}.site_header_area .brand_area a.sub_logo{height:45%}.site_header_area .brand_area a.single_logo{width:100%;float:left;height:82%}.site_header_area .brand_area .nasa_logo{width:100%;height:100%;display:block}*,*:before,*:after{-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}body{margin-left:auto;margin-right:auto;margin-top:0;background-color:white}@media (max-width: 1023px){body.nav_overlay_true{overflow:hidden}}img{width:100%}p{line-height:1.4em;margin-bottom:17px;margin-top:0;font-size:16px;color:#222}@media (min-width: 600px), print{p{font-size:18px}}@media (min-width: 769px), print{p{margin-bottom:20px;font-size:16px}}@media (min-width: 1024px), print{p{font-size:17px}}@media (min-width: 1200px){p{font-size:18px}}a{text-decoration:none;color:#257cdf}a:hover{text-decoration:underline}a[name]{position:relative;display:block;visibility:hidden;margin:0;padding:0}@media (max-width: 1023px){a[name]{top:-58px}}@media (max-width: 1023px) and (min-width: 480px){a[name]{top:-58px}}@media (max-width: 1023px) and (min-width: 600px), print and (max-width: 1023px){a[name]{top:-58px}}@media (max-width: 1023px) and (min-width: 769px), print and (max-width: 1023px){a[name]{top:-70px}}@media (min-width: 1024px){a[name]{top:-47px}}dl,menu,ol,ul{margin:0;padding:0}ul{list-style-type:none}ol{list-style-position:inside}hr,.gradient_line,.related.module .gradient_line_module_top{clear:both;margin:1em 0}.print_only{display:none}@font-face{font-family:'Whitney';src:url("https://mars.nasa.gov/assets/fonts/Whitney-Book.otf")}@font-face{font-family:'Whitney-Bold';src:url("https://mars.nasa.gov/assets/fonts/Whitney-Bold.otf")}@font-face{font-family:'WhitneyCondensed-Bold';src:url("https://mars.nasa.gov/assets/fonts/WhitneyCondensed-Bold.otf")}.button,.outline_button,.primary_media_feature .floating_text_area .button,.banner_header_overlay .button{font-weight:700;display:inline-block;margin-bottom:.5em;margin-left:auto;margin-right:auto;background-color:#3b788b;color:white;line-height:1em;border:0;text-decoration:none;border-radius:4px;cursor:pointer;text-shadow:none;font-size:13px;padding:12px 24px;text-transform:uppercase;white-space:nowrap}@media (min-width: 769px), print{.button,.outline_button,.primary_media_feature .floating_text_area .button,.banner_header_overlay .button{font-size:14px}}.button:hover,.outline_button:hover,.primary_media_feature .floating_text_area 
.button:hover{background-color:#5097ad;text-decoration:none}.outline_button,.primary_media_feature .floating_text_area .button,.primary_media_feature .floating_text_area .outline_button,.banner_header_overlay .button,.banner_header_overlay .outline_button{border-radius:10px;border:2px solid white;background:none;color:#FFF}.outline_button:hover,.primary_media_feature .floating_text_area .button:hover,.primary_media_feature .floating_text_area .outline_button:hover,.banner_header_overlay .button:hover{background-color:#5097ad;border-color:#5097ad}.section_search,.overlay_search{color:white;display:inline-block;position:relative}.section_search .search_field,.overlay_search .search_field{color:white;background-color:#282828;background-color:rgba(255,255,255,0.1);font-weight:500;font-size:16px;border:none;border-radius:4px;height:40px;padding-left:1.1em;padding-right:40px;width:155px}.section_search .search_field.placeholder,.overlay_search .search_field.placeholder{color:rgba(255,255,255,0.8);-webkit-font-smoothing:antialiased;opacity:1 !important;font-family:"Montserrat",Helvetica,Arial,sans-serif}.section_search .search_field:-moz-placeholder,.overlay_search .search_field:-moz-placeholder{color:rgba(255,255,255,0.8);-webkit-font-smoothing:antialiased;opacity:1 !important;font-family:"Montserrat",Helvetica,Arial,sans-serif}.section_search .search_field::-moz-placeholder,.overlay_search .search_field::-moz-placeholder{color:rgba(255,255,255,0.8);-webkit-font-smoothing:antialiased;opacity:1 !important;font-family:"Montserrat",Helvetica,Arial,sans-serif}.section_search .search_field::-webkit-input-placeholder,.overlay_search .search_field::-webkit-input-placeholder{color:rgba(255,255,255,0.8);-webkit-font-smoothing:antialiased;opacity:1 !important;font-family:"Montserrat",Helvetica,Arial,sans-serif}.section_search .search_field:-ms-input-placeholder,.overlay_search .search_field:-ms-input-placeholder{color:rgba(255,255,255,0.8);-webkit-font-smoothing:antialiased;opacity:1 !important;font-family:"Montserrat",Helvetica,Arial,sans-serif}.section_search .search_submit,.overlay_search .search_submit{padding:0;cursor:pointer;width:42px;height:42px;background:url("https://mars.nasa.gov/assets/[email protected]") -127px -5px;background-size:300px;position:absolute;right:-5px;top:-3px;border:none;margin-left:-44px;opacity:.8}.section_search .search_submit:hover,.overlay_search .search_submit:hover,.section_search .search_submit.active,.overlay_search .search_submit.active,.section_search .search_submit.current,.overlay_search .search_submit.current{background-position:-127px -5px}.section_search .search_field{background-color:#F3F4F8;color:#222}.section_search .search_field.placeholder{color:rgba(255,255,255,0.8);opacity:1 !important}.section_search .search_field:-moz-placeholder{color:rgba(255,255,255,0.8);opacity:1 !important}.section_search .search_field::-moz-placeholder{color:rgba(255,255,255,0.8);opacity:1 !important}.section_search .search_field::-webkit-input-placeholder{color:rgba(255,255,255,0.8);opacity:1 !important}.section_search .search_field:-ms-input-placeholder{color:rgba(255,255,255,0.8);opacity:1 !important}.section_search .search_submit{padding:0;cursor:pointer;width:42px;height:42px;background:url("https://mars.nasa.gov/assets/[email protected]") -127px -54px;background-size:300px;opacity:.6}.section_search .search_submit:hover,.section_search .search_submit.active,.section_search .search_submit.current{background-position:-127px -54px}form.nav_search 
.search_field{padding-right:20px;height:34px}form.nav_search input:-webkit-autofill,form.overlay_search input:-webkit-autofill{-webkit-box-shadow:0 0 0px 1000px #989898 inset;-webkit-text-fill-color:white !important}.overlay_search .search_field{color:white;background-color:rgba(255,255,255,0.3)}.overlay_search .search_field.placeholder{color:white}.overlay_search .search_field:-moz-placeholder{color:white}.overlay_search .search_field::-moz-placeholder{color:white}.overlay_search .search_field::-webkit-input-placeholder{color:white}.overlay_search .search_field:-ms-input-placeholder{color:white}.overlay_search label.search_label{display:none}.overlay_search .search_submit{padding:0;cursor:pointer;width:42px;height:42px;background:url("https://mars.nasa.gov/assets/[email protected]") -131px -5px;background-size:300px}.overlay_search .search_submit:hover,.overlay_search .search_submit.active,.overlay_search .search_submit.current{background-position:-131px -5px}.body_form label{display:block;margin-bottom:.3em}.body_form input:not([type="submit"]):not([type="reset"]),.body_form textarea{font-size:16px}.body_form input[type="text"]:not(#recaptcha_response_field),.body_form input[type="tel"],.body_form input[type="email"]{height:40px}.body_form input:not(#recaptcha_response_field):not(.inline_button):not([type="submit"]):not([type="radio"]):not([type="checkbox"]),.body_form textarea{width:100%;border:1px solid #a7a8a8;background-color:white;border-radius:4px;padding:10px 12px}.body_form input,.body_form textarea{margin-bottom:1em}.body_form .button,.body_form .outline_button,.body_form .primary_media_feature .floating_text_area .button,.primary_media_feature .floating_text_area .body_form .button{margin-top:1em}.body_form select{position:relative;padding:.5em 2em .5em 1em;font-size:16px;border:0;height:40px;vertical-align:middle;color:white;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;background:#3b788b url("https://mars.nasa.gov/assets/[email protected]") no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px;font-weight:700;cursor:pointer;width:100%;border-radius:5px;max-width:304px;margin-bottom:1em}.body_form select::-ms-expand{display:none}.body_form select option{padding:0.5em 1em}.body_form label{font-weight:700}.body_form .radio_title{margin-bottom:.5em;font-weight:700}.body_form fieldset.radio_buttons .option{white-space:nowrap;margin-bottom:1em}.body_form fieldset.radio_buttons label{white-space:normal;vertical-align:middle;display:inline}.body_form fieldset.radio_buttons input[type="radio"],.body_form fieldset.radio_buttons input[type="checkbox"]{display:inline-block;margin:0 .5em 0 0;vertical-align:middle}.body_form fieldset.radio_buttons input[type="radio"]+label,.body_form fieldset.radio_buttons input[type="checkbox"]+label{font-weight:400}.body_form .centered{text-align:center}@media (max-width: 480px){#recaptcha_widget_div{overflow:hidden}#recaptcha_widget_div #recaptcha_area{margin:0 auto}}.event_location,.event_date{margin-bottom:1em}.site_header_area{height:58px}@media (min-width: 600px), print{.site_header_area{height:58px}}@media (min-width: 600px), print{.site_header_area{height:58px}}@media (min-width: 769px), print{.site_header_area{height:70px}}@media (min-width: 1024px), print{.site_header_area{height:74px}}@media (min-width: 1200px){.site_header_area{height:82px}}@media (min-width: 1700px){.site_header_area{height:88px}}.site_header_area .brand_area{top:8px;margin-left:8px;height:49px;width:260px;transition:width .3s, 
height .3s}.site_header_area .site_logo_container{top:16px;margin-left:0;width:189px}.site_header_area .menu_button,.site_header_area #modal_close{top:8px;right:8px}@media (min-width: 769px), print{.site_header_area .brand_area{top:10px;margin-left:12px;height:60px;width:313px}.site_header_area .site_logo_container{top:23px;margin-left:13px;width:189px}.site_header_area .menu_button,.site_header_area #modal_close{top:12px;right:12px}}@media (min-width: 1024px), print{.site_header_area .brand_area{top:12px;margin-left:10px;height:54px;width:288px}.site_header_area .site_logo_container{width:120px}}@media (min-width: 1200px){.site_header_area .brand_area{top:14px;margin-left:17px;height:60px;width:338px}.site_header_area .site_logo_container{top:30px;width:210px}}@media (min-width: 1700px){.site_header_area .brand_area{top:14px;margin-left:30px}.site_header_area .site_logo_container{top:30px;width:246px}}@media (min-width: 769px), print{#home:not(.nav_is_fixed) .brand_area{height:69px;width:375px}}@media (min-width: 1024px), print{#home:not(.nav_is_fixed) .brand_area{width:300px;height:55px}}@media (min-width: 1200px){#home:not(.nav_is_fixed) .brand_area{width:368px;height:68px}}@media (min-width: 1700px){#home:not(.nav_is_fixed) .brand_area{width:420px;height:78px}}#home .site_header_area{background-color:transparent}#home.nav_overlay_true .site_header_area,#home.nav_is_fixed .site_header_area{background-color:#5a2017}.site_header_area{background-color:#5a2017;width:100%;position:absolute;z-index:21}.site_header_area.opaque{transition:background-color .5s ease-in-out}.main_feature_present .site_header_area{background-color:transparent}.main_feature_present .site_header_area.opaque{background-color:transparent}#home.nav_is_fixed .site_header_area{z-index:42}.nav_is_fixed .site_header_area{box-shadow:0 4px 4px -2px rgba(0,0,0,0.15);transition:background-color .5s ease-in-out;background-color:#5a2017}.site_header_area .site_header{width:100%;height:100%}.site_header_area .brand_area{position:relative;display:inline-block;z-index:100;background-image:url("https://mars.nasa.gov/assets/[email protected]")}.site_header_area .brand_area .brand1{width:22%}.site_header_area .brand_area .brand2{width:78%}@media (min-width: 769px){.site_header_area .brand_area .brand2{display:block}}.site_header_area .site_logo_container{position:relative;display:inline-block;vertical-align:top;z-index:100}@media (min-width: 769px), print{.site_header_area .site_logo_container:before{content:"";height:110%;background-color:rgba(255,255,255,0.4);width:1px;position:absolute;left:-9px;top:0px}}@media (min-width: 1024px), print{.site_header_area .site_logo_container:before{height:150%;top:1px}}@media (min-width: 1200px){.site_header_area .site_logo_container:before{height:110%;top:-2px}}@media (min-width: 1700px){.site_header_area .site_logo_container:before{top:0px}}.site_header_area .site_logo_container a{display:block}.site_header_area .site_logo_container a:hover{text-decoration:none}.site_header_area .site_logo_container img.site_logo{display:block;position:relative;width:130px;top:3px}@media (min-width: 769px), print{.site_header_area .site_logo_container img.site_logo{width:170px;top:0px}}@media (min-width: 1024px), print{.site_header_area .site_logo_container img.site_logo{width:120px;top:6px}}@media (min-width: 1200px){.site_header_area .site_logo_container img.site_logo{width:188px;top:0}}@media (min-width: 1700px){.site_header_area .site_logo_container img.site_logo{width:215px}}.site_header_area 
.site_logo_container img.site_logo_truncated{display:none}@media (min-width: 1024px), print{.site_header_area .site_logo_container img.site_logo_truncated{display:block}}@media (min-width: 1200px){.site_header_area .site_logo_container img.site_logo_truncated{display:none}}.site_header_area img.site_logo_black{display:none}.site_header_area form.nav_search{display:inline-block;vertical-align:middle;margin-right:1em}.header_mask{display:none}@media (min-width: 1024px){.header_mask{height:58px;display:block}}@media (min-width: 1024px) and (min-width: 600px), print and (min-width: 1024px){.header_mask{height:58px}}@media (min-width: 1024px) and (min-width: 600px), print and (min-width: 1024px){.header_mask{height:58px}}@media (min-width: 1024px) and (min-width: 769px), print and (min-width: 1024px){.header_mask{height:70px}}@media (min-width: 1024px) and (min-width: 1024px), print and (min-width: 1024px){.header_mask{height:74px}}@media (min-width: 1024px) and (min-width: 1200px){.header_mask{height:82px}}@media (min-width: 1024px) and (min-width: 1700px){.header_mask{height:88px}}@media (max-width: 1023px){#sticky_nav_spacer{height:58px}}@media (max-width: 1023px) and (min-width: 480px){#sticky_nav_spacer{height:58px}}@media (max-width: 1023px) and (min-width: 600px), print and (max-width: 1023px){#sticky_nav_spacer{height:58px}}@media (max-width: 1023px) and (min-width: 769px), print and (max-width: 1023px){#sticky_nav_spacer{height:70px}}@media (max-width: 1023px){.main_feature_present #sticky_nav_spacer{display:none}}.site_header_area .menu_icon{color:transparent;font-size:0}@media (max-width: 1023px){.site_header_area{position:fixed}.fixfixed .site_header_area{position:absolute;box-shadow:none}.nav_is_fixed .site_header_area{box-shadow:0 4px 4px -2px rgba(0,0,0,0.15)}.nav_overlay_true .site_header_area{background-color:#5a2017;transition:none}.site_header_area img.grace_logo_black{display:none}.site_header_area .right_header_container{width:300px}.site_header_area .right_header_container .menu_button{position:absolute;vertical-align:middle;padding:10px;text-decoration:none;-webkit-touch-callout:none;-webkit-user-select:none;-khtml-user-select:none;-moz-user-select:-moz-none;-ms-user-select:none;user-select:none}.site_header_area .right_header_container .menu_button .menu_icon{display:block;padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") 0 0;background-size:300px}.site_header_area .right_header_container .menu_button .menu_icon:hover,.site_header_area .right_header_container .menu_button .menu_icon.active,.site_header_area .right_header_container .menu_button .menu_icon.current{background-position:0 0}.site_header_area .right_header_container #modal_close{display:none;position:absolute;padding:10px;text-decoration:none;-webkit-touch-callout:none;-webkit-user-select:none;-khtml-user-select:none;-moz-user-select:-moz-none;-ms-user-select:none;user-select:none}.site_header_area .right_header_container #modal_close .modal_close_icon{display:block;padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") -25px 0;background-size:300px}.site_header_area .right_header_container #modal_close .modal_close_icon:hover,.site_header_area .right_header_container #modal_close .modal_close_icon.active,.site_header_area .right_header_container #modal_close .modal_close_icon.current{background-position:-25px 0}.site_header_area .right_header_container 
form.nav_search{display:none}.site_header_area.menu_open #modal_close{display:inline-block}}@media (min-width: 1024px){.site_header_area{display:block !important}.site_header_area form.nav_search{display:inline-block;max-width:216px}.site_header_area form.nav_search .search_field{width:37px;padding-right:0;padding-left:0;height:34px}.site_header_area form.nav_search .search_open{padding-left:.8em;padding-right:38px}.no-touchevents .nav_is_fixed .site_header_area{bottom:auto;top:0;position:fixed;width:100%;box-shadow:0 4px 4px -2px rgba(0,0,0,0.15);margin-top:0px}}#site_footer{padding:0;background:black;background-size:100%;position:relative;line-height:1.4}@media (min-width: 600px), print{#site_footer{background:#000 url("https://mars.nasa.gov/assets/footer_bg.png") center no-repeat;background-size:cover}}@media (min-width: 1200px){#site_footer{background-position:center 70%}}#site_footer .gradient_line,#site_footer .related.module .gradient_line_module_top,.related.module #site_footer .gradient_line_module_top{margin-left:auto;margin-right:auto;content:" ";width:100%;height:1px;clear:both;background:#a7abd2;background:-moz-linear-gradient(left, rgba(167,171,210,0), #a7abd2, rgba(167,171,210,0));background:-webkit-linear-gradient(left, rgba(167,171,210,0), #a7abd2, rgba(167,171,210,0));background:linear-gradient(left, rgba(167,171,210,0), #a7abd2, rgba(167,171,210,0));width:90%}@media (min-width: 769px), print{#site_footer .gradient_line,#site_footer .related.module .gradient_line_module_top,.related.module #site_footer .gradient_line_module_top{width:50%}}#site_footer .footer_line{display:none;margin-left:auto;margin-right:auto;content:" ";width:85%;height:1px;clear:both;background-color:rgba(255,255,255,0.25)}@media (min-width: 600px), print{#site_footer .footer_line{display:block;width:65%}}.upper_footer{padding:2em 0 0em;width:100%;margin:0 auto}@media (min-width: 600px), print{.upper_footer{padding:4em 0 4em}}@media (min-width: 769px), print{.upper_footer{width:85%}}@media (min-width: 1024px), print{.upper_footer{width:65%}}.upper_footer .share,.upper_footer .footer_newsletter{text-align:center;margin-bottom:2.7em}@media (min-width: 600px), print{.upper_footer .share,.upper_footer .footer_newsletter{margin-bottom:4em;width:100%;float:left}}.upper_footer .share h2,.upper_footer .footer_newsletter h2{font-size:1.8em;font-weight:300;margin-bottom:0.6em;color:#ccdeef;letter-spacing:-.035em}.lower_footer{padding-bottom:4em}@media (min-width: 769px), print{.lower_footer{padding-bottom:9em}}.lower_footer .nav_container{margin:0 auto 1em;position:relative;left:0;width:100%}@media (min-width: 769px), print{.lower_footer .nav_container{padding-top:0.5em}}.lower_footer nav{font-size:1em;text-transform:uppercase;text-align:center;margin-left:auto;margin-right:auto;color:#98c7fc}.lower_footer nav a{padding:0 .4em;font-weight:600;color:#98c7fc;font-size:.85em;text-decoration:none;line-height:2em}@media (min-width: 769px), print{.lower_footer nav a{padding:0 .6em}}.no-touchevents .lower_footer nav a:hover{color:white}.lower_footer nav li{display:inline}.lower_footer nav li:not(:last-child):after{content:"|"}.lower_footer .credits{position:relative;float:none;width:auto;text-align:center}.lower_footer .credits .footer_brands_top,.lower_footer .credits .staff,.lower_footer .credits p{color:#ccdeef;font-weight:700;font-size:1em;text-align:center;line-height:1.3em}@media (min-width: 769px), print{.lower_footer .credits .footer_brands_top,.lower_footer .credits .staff,.lower_footer .credits 
p{font-size:1em}}.lower_footer .credits .footer_brands_top p{font-weight:400}.lower_footer .credits .footer_brands_top p:last-child{margin-bottom:.4em}.lower_footer .credits .footer_brands{color:#ccdeef;margin-bottom:1em}.lower_footer .credits .footer_brands .caltech{font-weight:300}.lower_footer .credits .staff,.lower_footer .credits .staff p{line-height:1.6em;margin:.3em 0;font-weight:400}.lower_footer .credits a{color:#ccdeef;font-weight:700}.no-touchevents .lower_footer .credits a:hover{color:white}@media (max-width: 1023px){.nav_area{display:none;position:fixed;left:0;width:104%;overflow:hidden;height:100%;min-height:100%;background-color:#5a2017;z-index:10000;top:58px}}@media (max-width: 1023px) and (min-width: 480px){.nav_area{top:58px}}@media (max-width: 1023px) and (min-width: 600px), print and (max-width: 1023px){.nav_area{top:58px}}@media (max-width: 1023px) and (min-width: 769px), print and (max-width: 1023px){.nav_area{top:70px}}#site_nav_container .global_subnav_container{display:none}@media (min-width: 1024px){#site_nav_container .global_subnav_container{display:block !important}}@media (max-width: 1023px){#site_nav_container{width:100%;text-align:center;overflow-y:scroll;padding:0 8.8% 150px 4.8%;height:100%;min-height:100%;-webkit-overflow-scrolling:touch}#site_nav_container .site_nav{display:block}#site_nav_container ul.nav{margin-bottom:2em}#site_nav_container ul.nav>li{display:block;padding:1em 0 0}#site_nav_container ul.nav>li .gradient_line,#site_nav_container ul.nav>li .related.module .gradient_line_module_top,.related.module #site_nav_container ul.nav>li .gradient_line_module_top{margin:1em 0 0 0}#site_nav_container ul.nav>li .arrow_box{padding:20px 20px;width:52px;float:right;cursor:pointer;margin:-0.4em -.8em 0 0;display:block;text-align:center}#site_nav_container ul.nav>li .arrow_box.reverse{transform:rotate(180deg);-ms-filter:"progid:DXImageTransform.Microsoft.Matrix(M11=-1, M12=1.2246063538223773e-16, M21=-1.2246063538223773e-16, M22=-1, SizingMethod='auto expand')"}#site_nav_container ul.nav>li .arrow_box .arrow_down{width:0;height:0;border-left:6px solid rgba(255,255,255,0);border-right:6px solid rgba(255,255,255,0);border-top:8px solid #fff;float:right}#site_nav_container .nav_title{margin-bottom:.3em;display:block;line-height:1.4em;font-weight:700;text-align:left;width:80%}#site_nav_container .nav_title a{font-size:1.2em;color:#FFF;display:block;width:100%;height:100%;padding:.4em .4em .4em 0}#site_nav_container .nav_title a:hover{text-decoration:none}#site_nav_container ul.subnav li{text-align:left}#site_nav_container ul.subnav a{color:#84B0DD;font-size:1em;line-height:1.4em;text-decoration:none;display:block;padding:.4em 0;font-weight:600}#site_nav_container ul.nav>li.admin_site_nav_item .arrow_box .arrow_up,#site_nav_container ul.nav>li.admin_site_nav_item .arrow_box .arrow_down{border-top-color:#F45F5F}.no-touchevents #site_nav_container ul.nav>li.admin_site_nav_item .arrow_box:hover .arrow_up,.no-touchevents #site_nav_container ul.nav>li.admin_site_nav_item .arrow_box:hover .arrow_down{border-top-color:white}#site_nav_container ul.nav>li.admin_site_nav_item .nav_title a,#site_nav_container ul.nav>li.admin_site_nav_item ul.subnav a{color:#F45F5F}.no-touchevents #site_nav_container ul.nav>li.admin_site_nav_item .nav_title a:hover,.no-touchevents #site_nav_container ul.nav>li.admin_site_nav_item ul.subnav a:hover{color:white}#site_nav_container .overlay_search{margin-bottom:2em;width:100%;max-width:320px}#site_nav_container .overlay_search 
.search_field{width:100%}#site_nav_container .social_nav{color:white;display:block;background-color:#394862;font-size:1.3em;width:100%;border-radius:4px;padding:1.3em 0 1.7em;max-width:320px;margin:0 auto}#site_nav_container .social_nav .nav_title{margin-bottom:1em;text-align:center;width:auto}}@media (min-width: 1024px){.nav_area{width:100%;height:100%;float:right;bottom:0;right:0;position:absolute;text-align:right;z-index:50;display:block !important}}.no-touchevents .nav_is_fixed .fancybox-wrap #sticky_nav_spacer{display:none}.no-touchevents .nav_is_fixed .fancybox-wrap .nav_area{position:relative}@media (min-width: 1024px){#site_nav_container{padding:0;position:relative;display:inline-block !important;width:100%;height:100%;position:relative;bottom:0}}@media (min-width: 1024px) and (min-width: 1024px), print and (min-width: 1024px){#site_nav_container{bottom:0}}@media (min-width: 1024px) and (min-width: 1200px){#site_nav_container{bottom:0}}@media (min-width: 1024px){#site_nav_container .site_nav{width:100%;padding:0;position:relative;overflow-y:visible;min-height:0;height:100%;top:auto;left:auto;padding-top:22px}}@media (min-width: 1024px) and (min-width: 1200px){#site_nav_container .site_nav{padding-top:27px}}@media (min-width: 1024px) and (min-width: 1700px){#site_nav_container .site_nav{padding-right:0.8em}}@media (min-width: 1024px){#site_nav_container ul.nav{margin-bottom:0;display:inline-block;margin-right:.6em}#site_nav_container ul.nav>li{display:block}}@media (min-width: 1024px) and (min-width: 1024px){#site_nav_container ul.nav>li{display:inline-block;cursor:pointer;border-radius:2px;position:relative}#site_nav_container ul.nav>li:hover{background-color:#9a4739;border-bottom-left-radius:0;border-bottom-right-radius:0}.main_feature_present #site_nav_container ul.nav>li:hover{background-color:rgba(0,0,0,0.5)}.main_feature_present.nav_is_fixed #site_nav_container ul.nav>li:hover{background-color:#9a4739}}@media (min-width: 1024px) and (min-width: 1024px){#site_nav_container ul.nav>li .global_subnav_container{z-index:20;position:relative}}@media (min-width: 1024px){#site_nav_container ul.nav>li:hover .subnav{display:block}#site_nav_container ul.nav>li .gradient_line,#site_nav_container ul.nav>li .related.module .gradient_line_module_top,.related.module #site_nav_container ul.nav>li .gradient_line_module_top{width:60%;margin-top:1.5em;margin-bottom:1.5em}}@media (min-width: 1024px) and (min-width: 1024px){#site_nav_container ul.nav>li .gradient_line,#site_nav_container ul.nav>li .related.module .gradient_line_module_top,.related.module #site_nav_container ul.nav>li .gradient_line_module_top{display:none}}@media (min-width: 1024px){#site_nav_container ul.nav>li:last-child{margin-left:14px}#site_nav_container ul.nav>li:last-child:before{content:"";border:1px solid rgba(124,113,110,0.6);position:absolute;height:20px;left:-8px;top:9px}#site_nav_container ul.nav>li:last-child .global_subnav_container>ul.subnav{border-top-left-radius:2px;right:-52px}#site_nav_container .nav_title{margin-bottom:0;display:block;line-height:1.4em;color:white}#site_nav_container .nav_title a,#site_nav_container .nav_title .main_nav_item{display:block;font-size:.88rem;font-weight:600;padding:.5em 6px;color:white}}@media (min-width: 1024px) and (min-width: 1200px){#site_nav_container .nav_title a,#site_nav_container .nav_title .main_nav_item{padding:.5em 0.8em;font-size:.9rem}}@media (min-width: 1024px) and (min-width: 1700px){#site_nav_container .nav_title a,#site_nav_container .nav_title 
.main_nav_item{padding:.5em 1em}}@media (min-width: 1024px){#site_nav_container .nav_title a:hover,#site_nav_container .nav_title .main_nav_item:hover{text-decoration:none}}@media (min-width: 1024px) and (min-width: 1024px){#site_nav_container ul.subnav{padding:0.4em 0;margin-bottom:0;min-width:190px;display:none;position:absolute;margin:0;border-bottom-left-radius:2px;border-bottom-right-radius:2px;border-top-right-radius:2px;background-color:#9a4739}.main_feature_present #site_nav_container ul.subnav{background-color:rgba(0,0,0,0.5)}.main_feature_present.nav_is_fixed #site_nav_container ul.subnav{background-color:#9a4739}}@media (min-width: 1024px){#site_nav_container ul.subnav li{text-align:center}}@media (min-width: 1024px) and (min-width: 600px), print and (min-width: 1024px){#site_nav_container ul.subnav li{display:inline-block}}@media (min-width: 1024px) and (min-width: 1024px){#site_nav_container ul.subnav li{text-align:left;clear:both;display:block}#site_nav_container ul.subnav li:hover{background-color:rgba(0,0,0,0.3)}}@media (min-width: 1024px){#site_nav_container ul.subnav a{color:#84B0DD;font-size:1em;line-height:1.4em;text-decoration:none;display:block;padding:.4em 0;font-weight:600;white-space:nowrap}}@media (min-width: 1024px) and (min-width: 600px), print and (min-width: 1024px){#site_nav_container ul.subnav a{padding:.4em 1em}}@media (min-width: 1024px) and (min-width: 1024px){#site_nav_container ul.subnav a{white-space:normal;font-size:.85em;color:white;padding:.4em 1.1em}}@media (min-width: 1024px){.no-touchevents #site_nav_container ul.subnav a:hover{color:white}#site_nav_container .social_nav{display:none}#site_nav_container li.admin_site_nav_item{background-color:#D94F34}#site_nav_container li.admin_site_nav_item .nav_title a{color:white}#site_nav_container li.admin_site_nav_item:hover .nav_title,#site_nav_container li.admin_site_nav_item.current .nav_title{background-color:#FF7054 !important}#site_nav_container li.admin_site_nav_item:hover .subnav{display:block !important}#site_nav_container li.admin_site_nav_item ul.subnav{border:none;background-color:#D94F34}#site_nav_container li.admin_site_nav_item ul.subnav a{color:white}#site_nav_container li.admin_site_nav_item ul.subnav li{background-color:#D94F34;border:none}#site_nav_container li.admin_site_nav_item ul.subnav li:hover{background-color:#FF7054}}#site_nav_container .nav_title,#site_nav_container ul.subnav a{font-weight:600}#site_footer .sitemap{font-weight:400;z-index:10;position:relative;margin-bottom:2em}@media (min-width: large){#site_footer .sitemap .grid_layout{width:97%}}#site_footer .sitemap_directory{margin-bottom:2em}#site_footer .sitemap_directory .footer_sitemap_item{margin-bottom:1.8em}@media (min-width: 600px), print{#site_footer .sitemap_directory .footer_sitemap_item{margin-bottom:2em}}@media (min-width: 1024px), print{#site_footer .sitemap_directory .footer_sitemap_item{margin-left:10%}}#site_footer .sitemap_title{font-weight:400;text-transform:capitalize;font-size:1em;margin-bottom:.4em}#site_footer .sitemap_title a,#site_footer .sitemap_title .no_link_nav_item{color:white;text-decoration:none}@media (min-width: 600px), print{#site_footer .sitemap_title{font-size:1.1em;margin-bottom:.4em}}@media (min-width: 1024px), print{#site_footer .sitemap_title{font-size:1.1em}}#site_footer .sitemap_block{text-align:center;width:100%}@media (min-width: 600px), print{#site_footer 
.sitemap_block{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box;width:25%;float:left;padding-left:1.66667%;padding-right:1.66667%;text-align:left}}@media (min-width: 1024px), print{#site_footer .sitemap_block{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box;width:16.66667%;float:left;padding-left:1.66667%;padding-right:1.66667%}}#site_footer ul.subnav{margin-bottom:1em}#site_footer ul.subnav li{padding-left:1em;text-indent:-1em;margin:0 0 .25em 0}#site_footer ul.subnav a{color:#98c7fc;text-decoration:none;font-size:1em}@media (min-width: 600px), print{#site_footer ul.subnav a{font-size:.85em}}@media (min-width: 1024px), print{#site_footer ul.subnav a{font-size:.95em}}.no-touchevents #site_footer ul.subnav a:hover{color:white}@media (min-width: 600px), print{.main_area_sitemap .grid_layout{width:100%}}.main_area_sitemap .sitemap_directory{padding:2em 0 0}.main_area_sitemap .sitemap_directory .footer_sitemap_item{margin-bottom:1.8em}@media (min-width: 600px), print{.main_area_sitemap .sitemap_directory .footer_sitemap_item{margin-bottom:2em}}.main_area_sitemap .sitemap_block{text-align:center;width:100%}@media (min-width: 600px), print{.main_area_sitemap .sitemap_block{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box;width:25%;float:left;padding-left:1.66667%;padding-right:1.66667%;text-align:left}}@media (min-width: 1024px), print{.main_area_sitemap .sitemap_block{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box;width:16.66667%;float:left;padding-left:1.66667%;padding-right:1.66667%}}.main_area_sitemap .sitemap_block a{word-wrap:normal}.main_area_sitemap .sitemap_title{margin-top:0}.main_area_sitemap .sitemap_title a,.main_area_sitemap .sitemap_title .no_link_nav_item{color:#222}.main_area_sitemap .subnav a{display:block}@media (min-width: 600px), print{.main_area_sitemap .subnav a{padding-left:1em;text-indent:-1em;margin:.1em 0}}.social_icons{display:block}.social_icons .icon{width:44px !important;height:44px !important;display:inline-block;overflow:hidden}.social_icons .icon+.icon{margin-left:.7em}@media (min-width: 769px), print{.social_icons .icon+.icon{margin-left:.9em}}.social_icons .icon img{opacity:1 !important;height:100%;max-width:none}.triple_teaser .social_icons{max-width:188px;white-space:nowrap}@media (min-width: 769px), print{.triple_teaser .social_icons{max-width:none}}.triple_teaser .social_icons .icon{width:44px;height:44px}.triple_teaser .social_icons .icon+.icon{margin-left:.7em}@media (min-width: 600px), print{.triple_teaser .social_icons .icon{width:38px;height:38px}.triple_teaser .social_icons .icon+.icon{margin-left:.4em;margin-left:calc((100% - 152px)/3)}}@media (min-width: 769px), print{.triple_teaser .social_icons .icon{width:44px;height:44px}.triple_teaser .social_icons .icon+.icon{margin-left:.8em}}.addthis_default_style .at300b,.addthis_default_style .at300bo,.addthis_default_style .at300m{padding:0 !important;float:none !important}#_atssh{display:none}#at4-share,#at4-soc{top:60%;bottom:auto}html,html a,select,input,button{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}html.no-touchevents{text-rendering:optimizeLegibility}html.no-touchevents html a,html.no-touchevents select,html.no-touchevents input,html.no-touchevents 
no-repeat;background-size:100%}section.vital_signs_menu .image_and_description_container.current .readout .overlay_icon{background:url("https://mars.nasa.gov/assets/dashboard_contract.png") no-repeat;background-size:100%}.no-touchevents section.vital_signs_menu .image_and_description_container.current:hover .readout .overlay_icon{background:url("https://mars.nasa.gov/assets/dashboard_contract_hover.png") no-repeat !important;background-size:100% !important}section.vital_signs_menu .readout .subtitle{color:#4e8fa4;font-size:.78em;font-weight:500;text-transform:uppercase}section.vital_signs_menu .readout{color:white;position:relative;text-align:left;top:11px;margin:0 auto;float:left}section.vital_signs_menu .readout .text_container{float:left;margin-right:10px}section.vital_signs_menu .readout .title{font-size:1.4em;margin-bottom:0;letter-spacing:-.03em}@media (min-width: 480px){section.vital_signs_menu .readout .title{font-size:1.5em}}@media (min-width: 600px), print{section.vital_signs_menu .readout .title{font-size:1.2em}}@media (min-width: 769px), print{section.vital_signs_menu .readout .title{font-size:1.3em}}@media (min-width: 850px){section.vital_signs_menu .readout .title{font-size:1.1em}}@media (min-width: 950px){section.vital_signs_menu .readout .title{font-size:1.2em}}@media (min-width: 1700px){section.vital_signs_menu .readout .title{font-size:1.5em}}section.vital_signs_menu .readout .overlay_icon{position:absolute;width:37px;height:37px;background:url("https://mars.nasa.gov/assets/dashboard_expand.png") no-repeat;background-size:100%;display:inline-block}#vital_signs_modal.visible ~ .vital_signs_menu{z-index:40}div.modal_open{display:none !important}.image_of_the_day{overflow:hidden;z-index:10;padding:0}.image_of_the_day .window{width:100%;position:absolute;overflow:hidden;height:100%;padding:1em}.image_of_the_day .window.mobile{height:100%;min-height:100%}.image_of_the_day #featured_image{z-index:9;top:0;left:0;height:100%;overflow:hidden}.image_of_the_day a.image_day{width:100%;height:100%;position:absolute;top:0;left:0;z-index:11;text-indent:-999px}.image_of_the_day .grid_layout{height:100%;position:relative}@media (min-width: 1024px), print{.image_of_the_day .grid_layout{width:86%}}.image_of_the_day .floating_text_area{position:absolute;bottom:0;width:100%;text-align:left}@media (min-width: 600px), print{.image_of_the_day .floating_text_area{bottom:1.3rem;text-align:left}}.image_of_the_day header{z-index:12;width:100%}.image_of_the_day header .header_link{display:inline-block;width:100%}@media (min-width: 600px), print{.image_of_the_day header .header_link{width:70%}}.image_of_the_day header .header_link:hover{text-decoration:none}.image_of_the_day header .category_title{color:white;font-size:0.9em;margin-bottom:0.3em}@media (min-width: 600px), print{.image_of_the_day header .category_title{margin-bottom:0.7em}}.image_of_the_day header .media_feature_title{font-size:1.75rem;font-weight:200;margin-bottom:.3rem}@media (min-width: 600px), print{.image_of_the_day header .media_feature_title{margin-bottom:0;font-size:3rem}}.image_of_the_day header .multimedia_link{display:inline-block;text-transform:uppercase;font-size:0.9em;color:white;text-align:right;font-weight:600;right:0;position:relative}@media (min-width: 600px), print{.image_of_the_day header .multimedia_link{bottom:10px;width:28%;position:absolute}}.image_of_the_day header .multimedia_link a{color:white}.image_of_the_day .outline_button,.image_of_the_day .primary_media_feature .floating_text_area 
.button,.primary_media_feature .floating_text_area .image_of_the_day .button,.image_of_the_day .primary_media_feature .floating_text_area .outline_button,.primary_media_feature .floating_text_area .image_of_the_day .outline_button,.image_of_the_day .banner_header_overlay .button,.banner_header_overlay .image_of_the_day .button{opacity:1}.filter_bar{z-index:20}.filter_bar .section_search{padding-bottom:1em;max-width:380px;width:100%;margin:0 auto}@media (min-width: 1024px), print{.filter_bar .section_search{width:auto;max-width:none;display:block !important;padding-bottom:0}}.filter_bar .section_search .search_submit{right:0px;top:-1px}.filter_bar.fixed{position:fixed;top:0;left:0;width:100%;-webkit-box-shadow:0 4px 4px -1px rgba(0,0,0,0.15);-moz-box-shadow:0 4px 4px -2px rgba(0,0,0,0.15);box-shadow:0 4px 4px -2px rgba(0,0,0,0.15)}.filter_bar .search_binder{width:100%;max-width:304px;position:relative;margin-left:auto;margin-right:auto;margin:0 auto .7em 0}@media (min-width: 480px){.filter_bar .search_binder{margin:0 0 .7em 0}}@media (min-width: 1024px), print{.filter_bar .search_binder{position:relative;vertical-align:top;display:inline-block;width:35%;margin-right:1%;max-width:350px}}.filter_bar input.search_field{width:100%}.filter_bar input.search_field::-webkit-input-placeholder{color:#bbb !important}.filter_bar input.search_field::-moz-placeholder{color:#bbb !important}.filter_bar input.search_field:-moz-placeholder{color:#bbb !important}.filter_bar input.search_field:-ms-input-placeholder{color:#bbb !important}.filter_bar select{position:relative;padding:.5em 2em .5em 1em;font-size:16px;border:0;height:40px;vertical-align:middle;color:white;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;background:#3b788b url("https://mars.nasa.gov/assets/[email protected]") no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px;font-weight:700;cursor:pointer;width:100%;border-radius:5px;max-width:304px;margin:0 auto .5em;float:none}.filter_bar select::-ms-expand{display:none}.filter_bar select option{padding:0.5em 1em}@media (min-width: 1024px), print{.filter_bar select{margin-bottom:0;width:30%;max-width:284px}.filter_bar select+select{margin-left:1%}}.filter_bar header{display:inline-block;width:100%;text-align:left}@media (min-width: 600px), print{.filter_bar header{text-align:center}}@media (min-width: 1024px), print{.filter_bar header{display:none}}.filter_bar .arrow_box{display:inline-block;position:absolute;padding:4px;cursor:pointer;right:0;bottom:7px;float:none;transition:all .2s}@media (min-width: 600px), print{.filter_bar .arrow_box{text-align:center}}.filter_bar .arrow_box.rotate_up{transform:rotate(180deg)}.filter_bar .arrow_box.rotate_right{transform:rotate(270deg)}.filter_bar .arrow_box.rotate_left{transform:rotate(90deg)}.filter_bar .arrow_box .arrow_down{display:block;border-left:8px solid transparent;border-right:8px solid transparent;border-top:8px solid #8597B1}.advanced_search .filter_bar .section_search{max-width:none}.advanced_search .filter_bar .suggestion_text{margin-top:0}.advanced_search .filter_bar .search_row+.search_row{margin-top:1em}@media (min-width: 600px), print{.advanced_search .filter_bar .search_row+.search_row{margin-top:0}}@media (min-width: 600px), print{.advanced_search .filter_bar .filter1{width:32.20339%;float:left;margin-right:1.69492%}}@media (min-width: 600px), print{.advanced_search .filter_bar .search_binder{width:49.15254%;float:left;margin-right:1.69492%}}@media (min-width: 600px), print{.advanced_search 
.filter_bar .conjunction{width:15.25424%;float:right;margin-right:0}}.advanced_search .filter_bar .search_field{background-color:transparent;border:1px solid #C1C1C1;padding-right:1.1em}.advanced_search footer{margin-top:1.5em}.rollover_description{opacity:0;height:0;z-index:1;overflow:hidden;transition:opacity .4s}.rollover_description .rollover_description_inner{height:100%;overflow:hidden}.slide{position:relative;min-height:100%}.slide .overlay_arrow{display:none}@media (min-width: 769px), print{.no-touchevents .slide:hover .rollover_description{padding:.9rem;position:absolute;opacity:1;height:auto;top:0;right:0;width:100%;height:100%;color:white;background-color:rgba(0,0,0,0.9);cursor:pointer;font-size:.95rem;line-height:1.3}.no-touchevents .slide:hover .rollover_description p{line-height:inherit;font-size:inherit;color:white}.no-touchevents .slide:hover .rollover_description p:first-child{margin-top:0}.no-touchevents .slide:hover .rollover_title{font-size:1.6em;font-weight:700;margin-bottom:.2em}.no-touchevents .slide:hover .overlay_arrow{height:14px;width:14px;position:absolute;right:14px;bottom:14px;display:block}.no-touchevents .slide:hover .overlay_arrow img{display:block}}.list_view .rollover_description{display:none}.fancybox-overlay,#fancybox-lock{background:#000 !important}.fancybox-mb-video.fancybox-wrap,.fancybox-mb-info.fancybox-wrap{background:#000}.fancybox-mb-video.fancybox-wrap .fancybox-prev span,.fancybox-mb-info.fancybox-wrap .fancybox-prev span{background-image:url("https://mars.nasa.gov/assets/arrow_left_darktheme.png") !important;background-position:0 !important}.fancybox-mb-video.fancybox-wrap .fancybox-next span,.fancybox-mb-info.fancybox-wrap .fancybox-next span{background-image:url("https://mars.nasa.gov/assets/arrow_right_darktheme.png") !important;background-position:0 !important}.fancybox-mb-video.fancybox-wrap .fancybox-inner,.fancybox-mb-info.fancybox-wrap .fancybox-inner{border:0}.fancybox-mb-video.fancybox-wrap .fancybox-title-float-wrap,.fancybox-mb-info.fancybox-wrap .fancybox-title-float-wrap{position:relative;right:auto;left:auto}.fancybox-mb-video.fancybox-wrap .fancybox-title-float-wrap .child,.fancybox-mb-info.fancybox-wrap .fancybox-title-float-wrap .child{display:block;margin:auto;white-space:normal;padding:1em 0;line-height:normal}.fancybox-mb-video.fancybox-wrap .fancybox-skin,.fancybox-mb-info.fancybox-wrap .fancybox-skin{background-color:black}.fancybox-mb-video.fancybox-wrap .fancybox-title-inside,.fancybox-mb-info.fancybox-wrap .fancybox-title-inside{text-align:left}.fancybox-mb-video.fancybox-wrap .fancybox-nav{top:-15%}@media (min-width: 1px) and (print: 769px){.fancybox-mb-video.fancybox-wrap,.fancybox-mb-info.fancybox-wrap{margin:0 !important;width:95% !important;margin:0 auto !important}.fancybox-mb-video.fancybox-wrap .fancybox-nav,.fancybox-mb-info.fancybox-wrap .fancybox-nav{display:none}.fancybox-mb-video.fancybox-wrap .fancybox-inner,.fancybox-mb-info.fancybox-wrap .fancybox-inner{width:100% !important;height:auto !important}.fancybox-mb-video.fancybox-wrap .fancybox-image,.fancybox-mb-info.fancybox-wrap .fancybox-image{width:100% !important;height:auto !important;margin:0 auto !important}.fancybox-mb-video.fancybox-wrap{left:0 !important;right:0 !important;position:relative !important;margin:0 auto !important;padding:0 !important;border:none}.fancybox-mb-video.fancybox-wrap .fancybox-inner{margin:0 !important}.fancybox-mb-video.fancybox-wrap .fancybox-iframe{height:600px !important}}#fancybox_video 
.player{min-height:200px;margin-bottom:1.5em}@media (min-width: 480px){#fancybox_video .player{min-height:300px}}@media (min-width: 600px), print{#fancybox_video .player{min-height:400px}}#fancybox_info{margin-top:1.5em}#fancybox_info,#fancybox_video{color:white}#fancybox_info p,#fancybox_info .description,#fancybox_video p,#fancybox_video .description{color:white}#fancybox_info .image_caption,#fancybox_info .image_caption p,#fancybox_video .image_caption,#fancybox_video .image_caption p{color:#aaa;font-size:.9em}#fancybox_info .image_details,#fancybox_video .image_details{overflow:hidden;display:inline-block;width:100%;min-width:inherit}#fancybox_info .image_details .text,#fancybox_video .image_details .text{float:left;text-align:left;width:100%;margin-top:10px}@media (min-width: 769px), print{#fancybox_info .image_details .text,#fancybox_video .image_details .text{margin-top:0}}@media (min-width: 1024px), print{#fancybox_info .image_details .text,#fancybox_video .image_details .text{width:60%}}#fancybox_info .image_details .text .title,#fancybox_video .image_details .text .title{font-size:1.235em;margin-bottom:.1em;line-height:1.3em;font-weight:700}@media (min-width: 600px), print{#fancybox_info .image_details .text .title,#fancybox_video .image_details .text .title{font-size:1.425em;margin-bottom:.18em}}@media (min-width: 769px), print{#fancybox_info .image_details .text .title,#fancybox_video .image_details .text .title{font-size:1.615em;margin-bottom:.26em}}@media (min-width: 1024px), print{#fancybox_info .image_details .text .title,#fancybox_video .image_details .text .title{font-size:1.71em;margin-bottom:.29em}}@media (min-width: 1200px){#fancybox_info .image_details .text .title,#fancybox_video .image_details .text .title{font-size:1.805em;margin-bottom:.32em}}#fancybox_info .image_details .buttons,#fancybox_video .image_details .buttons{width:100%;float:right}@media (min-width: 1024px), print{#fancybox_info .image_details .buttons,#fancybox_video .image_details .buttons{width:40%}}#fancybox_info .image_details .buttons .inner_buttons,#fancybox_video .image_details .buttons .inner_buttons{float:left}@media (min-width: 1024px), print{#fancybox_info .image_details .buttons .inner_buttons,#fancybox_video .image_details .buttons .inner_buttons{float:right}}#fancybox_info .image_details .buttons .addthis_toolbox,#fancybox_video .image_details .buttons .addthis_toolbox{border-radius:4px;overflow:hidden}#fancybox_info .image_details .buttons .addthis_toolbox img,#fancybox_video .image_details .buttons .addthis_toolbox img{height:37px !important;width:auto !important}@media (min-width: 1024px), print{#fancybox_info .image_details .buttons .addthis_toolbox img,#fancybox_video .image_details .buttons .addthis_toolbox img{height:38px !important}}#fancybox_info .image_details .buttons .close_button,#fancybox_video .image_details .buttons .close_button{margin-left:12px}#fancybox_info .image_details .buttons a.button,#fancybox_info .image_details .buttons a.outline_button,#fancybox_video .image_details .buttons a.button,#fancybox_video .image_details .buttons a.outline_button{padding-left:16px;padding-right:16px}#fancybox_info .image_details .buttons .addthis_toolbox,#fancybox_info .image_details .buttons a.button,#fancybox_info .image_details .buttons a.outline_button,#fancybox_video .image_details .buttons .addthis_toolbox,#fancybox_video .image_details .buttons a.button,#fancybox_video .image_details .buttons a.outline_button{float:left;margin-left:0;margin-right:12px}@media (min-width: 
1024px), print{#fancybox_info .image_details .buttons .addthis_toolbox,#fancybox_info .image_details .buttons a.button,#fancybox_info .image_details .buttons a.outline_button,#fancybox_video .image_details .buttons .addthis_toolbox,#fancybox_video .image_details .buttons a.button,#fancybox_video .image_details .buttons a.outline_button{float:right;margin-left:12px;margin-right:0}}@media (min-width: 1024px), print{#fancybox_info .image_details .buttons .addthis_toolbox,#fancybox_info .image_details .buttons a.button,#fancybox_info .image_details .buttons a.outline_button,#fancybox_info .image_details .buttons .close_button,#fancybox_video .image_details .buttons .addthis_toolbox,#fancybox_video .image_details .buttons a.button,#fancybox_video .image_details .buttons a.outline_button,#fancybox_video .image_details .buttons .close_button{margin-bottom:12px}}#fancybox_info .close_button,#fancybox_video .close_button{padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") -25px 0px;background-size:300px;z-index:8060;position:relative;display:block;float:right}#fancybox_info .close_button:hover,#fancybox_info .close_button.active,#fancybox_info .close_button.current,#fancybox_video .close_button:hover,#fancybox_video .close_button.active,#fancybox_video .close_button.current{background-position:-25px 0px}figure{margin-bottom:1em;max-width:100%}@media (min-width: 769px), print{figure{margin-bottom:2em}}figure figcaption,figure figcaption p{margin-top:.8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{figure figcaption,figure figcaption p{font-size:.88em}}.explore_overlay_page figcaption,.explore_overlay_page figcaption p{color:#b0b4b9}@media (max-width: 480px){figure.lede.full_width{width:100%}figure.lede.full_width figcaption{margin-left:auto;margin-right:auto}}#secondary_column aside figure{margin-bottom:1em}#secondary_column aside figure figcaption{margin-bottom:0}.inline_caption{margin-top:.8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.inline_caption{font-size:.88em}}.content_page #page_header{margin-bottom:1.5em}@media (min-width: 769px), print{.content_page #page_header{margin-bottom:2em}}.content_page #page_header .author{margin:.5em 0 1.8em}.content_page .release_date{font-size:1em;color:#222;text-transform:none}.content_page .category_title{color:#222}.content_page .category_title a{color:#257cdf}.content_page .audio_player{margin-bottom:1em}.content_page .main_feature .master-slider,.content_page .jpl_carousel .master-slider{width:100%;height:300px}@media (min-width: 600px), print{.content_page .main_feature .master-slider,.content_page .jpl_carousel .master-slider{height:400px}}.content_page .main_feature .master-slider .gradient_container_bottom,.content_page .jpl_carousel .master-slider .gradient_container_bottom{height:80px}.content_page .main_feature .master-slider .ms-nav-next,.content_page .main_feature .master-slider .ms-nav-prev,.content_page .jpl_carousel .master-slider .ms-nav-next,.content_page .jpl_carousel .master-slider .ms-nav-prev{display:none}@media (min-width: 769px), print{.content_page .main_feature .master-slider .ms-nav-next,.content_page .main_feature .master-slider .ms-nav-prev,.content_page .jpl_carousel .master-slider .ms-nav-next,.content_page .jpl_carousel .master-slider .ms-nav-prev{display:block}}.content_page .main_feature .master-slider .ms-bullets,.content_page .jpl_carousel .master-slider .ms-bullets{bottom:30px}.content_page .main_feature .master-slider 
.ms-bullets-count,.content_page .jpl_carousel .master-slider .ms-bullets-count{right:-50%;position:absolute}.content_page .main_feature .master-slider .ms-bullet,.content_page .jpl_carousel .master-slider .ms-bullet{background-color:white;background-image:none;border-radius:50% 50% 50% 50%;height:10px;width:10px;opacity:0.5;margin:0 10px}.content_page .main_feature .master-slider .ms-bullet:hover,.content_page .main_feature .master-slider .ms-bullet.ms-bullet-selected,.content_page .jpl_carousel .master-slider .ms-bullet:hover,.content_page .jpl_carousel .master-slider .ms-bullet.ms-bullet-selected{opacity:1.0}.content_page #primary_column{margin-bottom:5.26316%}@media (min-width: 600px), print{.content_page #primary_column{width:61.53846%;float:left;margin-right:2.5641%;margin-bottom:0}}@media (min-width: 769px), print{.content_page #primary_column{width:64.40678%;float:left;margin-right:1.69492%}}@media (min-width: 1024px), print{.content_page #primary_column{width:61.86441%;float:left;margin-right:1.69492%}}@media (min-width: 1200px){.content_page #primary_column{width:59.32203%;float:left;margin-right:1.69492%}}@media (min-width: 600px), print{.content_page #secondary_column{width:35.89744%;float:right;margin-right:0}}@media (min-width: 769px), print{.content_page #secondary_column{width:32.20339%;float:right;margin-right:0}}.content_page.full_width #primary_column,.content_page.full_width #secondary_column{width:100%}.content_page.feature{padding:2em 0 5.3em}.content_page.feature #secondary_column{display:none}.content_page.feature #primary_column{width:64.40678%;float:left;margin-right:1.69492%;margin:auto;padding:1em 0 5.3em;float:none}#secondary_column>:first-child{margin-top:0}#secondary_column{word-wrap:break-word}#secondary_column aside{margin-bottom:7.14286%}#secondary_column aside:last-child{margin-bottom:0}#secondary_column aside.boxed{border:1px solid #C1C1C1;padding:5.26316%}#secondary_column aside.none{border:0;padding:0}#secondary_column aside>:last-child{margin-bottom:0}#secondary_column aside.links_module li{margin-bottom:.5em}#secondary_column aside.downloads_module .download{margin-bottom:1em}#secondary_column aside.downloads_module .download:last-of-type{margin-bottom:0}#secondary_column aside.downloads_module .button,#secondary_column aside.downloads_module .outline_button{margin-top:1em}#secondary_column aside.list_view_module a{text-decoration:none}#secondary_column aside.list_view_module ul{margin-bottom:1.5em}#secondary_column aside.list_view_module li{padding:.6em 0}#secondary_column aside.list_view_module li:last-child{padding-bottom:0}#secondary_column aside.list_view_module .list_image{float:right;margin-left:4%;margin-bottom:.5em;width:32%}@media (min-width: 600px), print{#secondary_column aside.list_view_module .list_image{margin-left:0;margin-bottom:0;float:left;width:31.03448%;float:left;margin-right:3.44828%}}@media (min-width: 600px), print{#secondary_column aside.list_view_module .list_text{width:65.51724%;float:right;margin-right:0}}#secondary_column aside.list_view_module .list_title{letter-spacing:-.01em;font-weight:700}#secondary_column aside.list_view_module .list_title:hover{color:#222}#secondary_column aside.sig_events_module h4{margin-bottom:1em}#secondary_column aside.sig_events_module h4:last-child{margin-bottom:0}#secondary_column aside.sig_events_module ul{margin-bottom:0}#secondary_column aside.sig_events_module ul li{margin-bottom:.5em}#secondary_column .inline_image{margin-bottom:7.14286%}#secondary_column .inline_image 
.inline_caption{display:block;margin-top:.8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{#secondary_column .inline_image .inline_caption{font-size:.88em}}#secondary_column .related_content_module{margin:0 0 7.14286% 0;padding:5.26316%;width:100%;border:1px solid #bebebe}#secondary_column .related_content_module li{width:100%;border-bottom:1px solid #bebebe}#secondary_column .related_content_module li:last-child{border-bottom:none;padding-bottom:0}#secondary_column .related_content_module li:first-child{border-top:none}#secondary_column .related_content_module>:last-child{margin-bottom:0}a.main_image_enlarge,a.inline_image_enlarge{display:block;position:relative;height:100%}a.main_image_enlarge .enlarge_icon,a.inline_image_enlarge .enlarge_icon{position:absolute;border-radius:6px;border:1px solid rgba(200,200,200,0.8);left:15px;bottom:15px;width:40px;height:40px;background-color:rgba(0,0,0,0.5);background-image:url("https://mars.nasa.gov/assets/zoom_icon.png");background-size:50%;background-repeat:no-repeat;background-position:50%;opacity:0;transition:opacity 0.2s ease-in}a.main_image_enlarge:hover .enlarge_icon,a.inline_image_enlarge:hover .enlarge_icon{opacity:0.8}body #fancybox-lock{z-index:200000}.article_nav{display:none}@media (min-width: 1024px), print{.article_nav{display:block;position:relative;z-index:11}.article_nav .article_nav_block{position:fixed;height:86px;display:inline-block;top:42.5%}.article_nav .article_nav_block .link_box{width:40px;background-color:#e4e9ef;display:inline;height:100%}.article_nav .article_nav_block .article_details{display:inline;width:250px;background-color:#FFF;text-decoration:none;color:#000;padding:10px;background-color:#e4e9ef}.article_nav .article_nav_block .article_details .img{margin-bottom:6px}.article_nav .article_nav_block .article_details .title{font-weight:700;font-size:.9em}.article_nav .article_nav_block.prev{left:0}.article_nav .article_nav_block.prev .link_box{float:left}.article_nav .article_nav_block.prev .article_details{float:left;display:none}.article_nav .article_nav_block.next{right:0}.article_nav .article_nav_block.next .link_box{float:right}.article_nav .article_nav_block.next .article_details{display:none;float:right}.no-touchevents .article_nav .article_nav_block:hover .article_details{display:block}}.feature_pages{padding:1em 0 3.8em}.feature_pages #page_header{width:94%;max-width:600px;margin-left:auto;margin-right:auto}@media (min-width: 769px), print{.feature_pages #page_header{width:80%}}@media (min-width: 1200px){.feature_pages #page_header{width:55%}}.feature_pages #primary_column{width:100%;margin:0}.feature_pages .wysiwyg_content>*{width:94%;max-width:600px;margin-left:auto;margin-right:auto}@media (min-width: 769px), print{.feature_pages .wysiwyg_content>*{width:80%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content>*{width:55%}}.feature_pages .wysiwyg_content p{font-size:18px;line-height:28px}@media (min-width: 769px), print{.feature_pages .wysiwyg_content p{font-size:19px;line-height:30px}}.feature_pages .wysiwyg_content>ul:not(.item_list_module):not(.item_grid_module),.feature_pages .wysiwyg_content>ol{list-style-position:outside;padding-left:1em}.feature_pages .wysiwyg_content>ul:not(.item_list_module):not(.item_grid_module) ul,.feature_pages .wysiwyg_content>ul:not(.item_list_module):not(.item_grid_module) ol,.feature_pages .wysiwyg_content>ol ul,.feature_pages .wysiwyg_content>ol ol{list-style-position:outside}.top_feature_area{text-align:center;position:relative}.top_feature_area 
.header_overlay{position:absolute;width:100%;padding:0 1%;top:46%;color:white;transform:translateY(-50%)}@media (min-width: 769px), print{.top_feature_area .header_overlay{padding:0 4%}}.top_feature_area .header_overlay .article_title{font-size:1.2em;margin-bottom:0}@media (min-width: 480px){.top_feature_area .header_overlay .article_title{font-size:1.6em}}@media (min-width: 600px), print{.top_feature_area .header_overlay .article_title{font-size:1.9em}}@media (min-width: 769px), print{.top_feature_area .header_overlay .article_title{font-size:2.2em}}@media (min-width: 1024px), print{.top_feature_area .header_overlay .article_title{font-size:2.8em}}@media (min-width: 1200px){.top_feature_area .header_overlay .article_title{font-size:3.2em}}@media (min-width: 1700px){.top_feature_area .header_overlay .article_title{font-size:3.4em}}.top_feature_area .header_overlay .sub_title{font-size:1.2em}@media (min-width: 480px){.top_feature_area .header_overlay .sub_title{font-size:1.5em}}@media (min-width: 769px), print{.top_feature_area .header_overlay .sub_title{font-size:1.9em}}.top_feature_area .header_overlay .author{padding:0.2em 0.5em 0.5em 0.6em;background-color:rgba(0,0,0,0.5);margin:0.2em auto 1em;display:inline-block}@media (min-width: 480px){.top_feature_area .header_overlay .author{margin-top:0.4em}}@media (min-width: 600px), print{.top_feature_area .header_overlay .author{margin-top:0.7em}}@media (min-width: 769px), print{.top_feature_area .header_overlay .author{padding:0.3em 0.5em 0.5em 0.7em;max-width:360px;margin-top:1em}}@media (min-width: 1024px), print{.top_feature_area .header_overlay .author{padding:0.4em 0.6em 0.6em 0.8em;max-width:400px;margin-top:1.5em}}@media (min-width: 1200px){.top_feature_area .header_overlay .author{margin-top:1.8em}}.top_feature_area .header_overlay .author p{color:white;margin:0;font-size:0.8em}@media (min-width: 769px), print{.top_feature_area .header_overlay .author p{font-size:0.95em}}.top_feature_area .article_title{margin-bottom:0.9em}.top_feature_area .category_title{color:#707070;margin-bottom:1em}.top_feature_area a.category_title{color:#257cdf}.top_feature_area .feature_header:first-child{width:94%;max-width:600px;margin-left:auto;margin-right:auto;padding:3em 0 1.7em}@media (min-width: 769px), print{.top_feature_area .feature_header:first-child{width:80%}}@media (min-width: 1200px){.top_feature_area .feature_header:first-child{width:55%}}@media (min-width: 480px){.top_feature_area .feature_header:first-child{padding:4em 0 2em}}.top_feature_area .feature_header:first-child:after{content:"";display:block;height:1px;width:56%;border-bottom:5px solid;max-width:200px;margin:1.7em auto 0.3em}.top_feature_area .feature_header.no_main_image{padding-bottom:.9em}.top_feature_area .release_date{text-transform:none;color:#222;margin-left:0.1em}.top_feature_area .header_overlay+.feature_header{width:94%;max-width:600px;margin-left:auto;margin-right:auto;text-align:left}@media (min-width: 769px), print{.top_feature_area .header_overlay+.feature_header{width:80%}}@media (min-width: 1200px){.top_feature_area .header_overlay+.feature_header{width:55%}}.top_feature_area figure.lede.full_width figcaption{margin:.8em;text-align:left;font-size:1em}.feature_pages .wysiwyg_content .mb_expand{width:100%;max-width:none}.feature_pages .wysiwyg_content .mb_expand .expandable_element_link{width:94%;max-width:600px;margin-left:auto;margin-right:auto;display:block}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .mb_expand 
.expandable_element_link{width:80%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .mb_expand .expandable_element_link{width:55%}}.feature_pages .wysiwyg_content .mb_expand .expandable_element{display:none;max-width:none;width:100%}.feature_pages .wysiwyg_content .mb_expand .expandable_element>*{width:94%;max-width:600px;margin-left:auto;margin-right:auto}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .mb_expand .expandable_element>*{width:80%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .mb_expand .expandable_element>*{width:55%}}.countdown .unit{font-weight:300;display:inline-block;position:relative;padding:0 9px 0 0;vertical-align:middle;text-align:center}.countdown .unit+.unit:before{content:" : ";position:absolute;left:-4px}.countdown .unit:first-of-type{padding-left:0}.countdown .unit:last-of-type{padding-right:0}.countdown .unit span{font-weight:600;padding:0 1em;clear:both;display:block;font-size:11px;text-transform:uppercase;text-align:center;margin-bottom:.3rem}.countdown .completed{font-size:1.8em;font-weight:300;margin-top:3px}.feature_pages .countdown,.content_page .countdown{border-top:1px solid #E8E8E8;border-bottom:1px solid #E8E8E8;text-align:center;padding:0.7em 0 0.9em;margin-top:2.7em;margin-bottom:2.7em}.feature_pages .countdown>div,.content_page .countdown>div{display:inline-block;vertical-align:middle}.feature_pages .countdown_title,.content_page .countdown_title{width:100%}@media (min-width: 600px), print{.feature_pages .countdown_title,.content_page .countdown_title{width:auto;margin-right:.8em}}#secondary_column .countdown{margin-bottom:7.14286%;text-align:left}#secondary_column .countdown .countdown_title{margin-right:0;width:100%}#explore_overlay .countdown{border-top-color:#212121;border-bottom-color:#212121}.curtain_module{margin:3em 0;clear:both}.curtain_module .curtain_caption_container{background-color:#eee;padding:1em}.curtain_module .curtain_title{margin:0 0 .5em 0}.curtain_module .curtain_subtitle{color:#777777;margin:0 0 0.5em 0}.feature_pages .curtain_module{width:94%;max-width:100%;margin:3em auto;float:none}@media (min-width: 600px), print{.feature_pages .curtain_module{max-width:600px}}.feature_pages .curtain_module.full-bleed,.feature_pages .curtain_module.full_width,.feature_pages .curtain_module.wide,.feature_pages .curtain_module.parallax{clear:both}@media (min-width: 600px), print{.feature_pages .curtain_module.full-bleed,.feature_pages .curtain_module.full_width,.feature_pages .curtain_module.wide,.feature_pages .curtain_module.parallax{margin-top:5em;margin-bottom:5em}}.feature_pages .curtain_module.column-width{max-width:94%;margin-top:3em;margin-bottom:3em;clear:both}@media (min-width: 600px), print{.feature_pages .curtain_module.column-width{max-width:600px}}.feature_pages .curtain_module.full-bleed{width:100%;max-width:none}.feature_pages .curtain_module.full-bleed figcaption{margin:.8em .8em 0 .8em}.feature_pages .curtain_module.full_width{clear:both}@media (min-width: 769px), print{.feature_pages .curtain_module.full_width{width:94%;max-width:600px;margin-left:auto;margin-right:auto}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px), print and (min-width: 769px), print{.feature_pages .curtain_module.full_width{width:80%}}@media (min-width: 769px) and (min-width: 1200px), print and (min-width: 1200px){.feature_pages .curtain_module.full_width{width:55%}}.feature_pages .curtain_module.wide{width:98%;max-width:none}@media (min-width: 769px), print{.feature_pages 
.curtain_module.wide{width:95%}}@media (min-width: 769px), print{.feature_pages .curtain_module.column-width{max-width:calc(600px + 6%)}}@media (min-width: 1024px), print{.feature_pages .curtain_module.column-width{max-width:calc(600px + 10%)}}@media (min-width: 1200px){.feature_pages .curtain_module.column-width{max-width:calc(600px + 15%)}}.feature_pages .curtain_module.left,.feature_pages .curtain_module.right{max-width:94%}@media (min-width: 600px), print{.feature_pages .curtain_module.left,.feature_pages .curtain_module.right{width:50%;max-width:50%}}@media (min-width: 769px), print{.feature_pages .curtain_module.left,.feature_pages .curtain_module.right{width:27%;max-width:27%}}@media (min-width: 1700px){.feature_pages .curtain_module.left,.feature_pages .curtain_module.right{width:25%;max-width:25%}}@media (min-width: 600px), print{.feature_pages .curtain_module.left{float:left;margin:1em 2.5em 1.5em 0;margin-left:3%}}@media (min-width: 1200px){.feature_pages .curtain_module.left{margin-left:15%}}@media (min-width: 1700px){.feature_pages .curtain_module.left{margin-left:20%}}@media (min-width: 480px){.feature_pages .curtain_module.right{float:right;margin:1em 0 1.5em 2.5em;margin-right:3%}}@media (min-width: 1200px){.feature_pages .curtain_module.right{margin-right:15%}}@media (min-width: 1700px){.feature_pages .curtain_module.right{margin-right:20%}}.feature_pages .curtain_module.parallax_module{position:relative;overflow:hidden;z-index:10;padding-bottom:0;width:100%;max-width:none}.feature_pages .curtain_module.parallax_module .caption{margin:.8em .8em 0 .8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.feature_pages .curtain_module.parallax_module .caption{font-size:.88em}}.feature_pages .curtain_module.parallax_module img{height:auto !important}.feature_pages .curtain_module.parallax_module .window{width:100%;height:auto;position:absolute;overflow:hidden;padding:2em}.feature_pages .curtain_module.parallax_module .window.mobile{height:auto;min-height:100%}.feature_pages .curtain_module.parallax_module .window .featured_image{z-index:9;top:0;left:0;height:100%;overflow:hidden}@media (min-width: 769px), print{.feature_pages .curtain_module.parallax_module .window .featured_image{position:absolute}}.explore_overlay_page .curtain_module .curtain_caption_container{background-color:#232323}.wysiwyg_content .related_content_module,#secondary_column .related_content_module{font-weight:700}.wysiwyg_content .related_content_module ul,#secondary_column .related_content_module ul{margin:0}.wysiwyg_content .related_content_module li,#secondary_column .related_content_module li{padding:1em 0;border-bottom:1px solid #E5E5E5}.wysiwyg_content .related_content_module li:first-child,#secondary_column .related_content_module li:first-child{border-top:1px solid #E5E5E5}.wysiwyg_content .related_content_module .module_title,.wysiwyg_content .related_content_module .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header .wysiwyg_content .related_content_module .carousel_title,#secondary_column .related_content_module .module_title,#secondary_column .related_content_module .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header #secondary_column .related_content_module .carousel_title{font-size:1.2em;text-align:left;margin-top:0}.wysiwyg_content .related_content_module .list_image,#secondary_column .related_content_module .list_image{width:25%;display:inline-block}.wysiwyg_content .related_content_module 
.list_image+.list_text,#secondary_column .related_content_module .list_image+.list_text{width:67%;position:relative;display:inline-block;margin-left:4%;vertical-align:middle}.wysiwyg_content .related_content_module{max-width:100%;margin-top:1.4em;margin-bottom:1.4em}@media (min-width: 769px), print{.wysiwyg_content .related_content_module{margin-top:2em;margin-bottom:2em}}.wysiwyg_content .related_content_module.left,.wysiwyg_content .related_content_module.right{float:none}@media (min-width: 480px){.wysiwyg_content .related_content_module.left,.wysiwyg_content .related_content_module.right{max-width:50%}}@media (min-width: 1200px){.wysiwyg_content .related_content_module.left,.wysiwyg_content .related_content_module.right{max-width:40%}}@media (min-width: 480px){.wysiwyg_content .related_content_module.left{float:left;margin:1em 2.5em 1.5em 0}}@media (min-width: 480px){.wysiwyg_content .related_content_module.right{float:right;margin:1em 0 1.5em 2.5em}}.wysiwyg_content .related_content_module.full-bleed,.wysiwyg_content .related_content_module.full_width,.wysiwyg_content .related_content_module.wide,.wysiwyg_content .related_content_module.parallax,.wysiwyg_content .related_content_module.column-width{clear:both}.wysiwyg_content .related_content_module.parallax_module{width:100%;position:relative}.wysiwyg_content .related_content_module.parallax_module .caption{margin:.8em .8em 0 .8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.wysiwyg_content .related_content_module.parallax_module .caption{font-size:.88em}}.explore_overlay_page .wysiwyg_content .related_content_module.parallax_module .caption{color:#b0b4b9}.wysiwyg_content .related_content_module .sidebar_title,.wysiwyg_content #secondary_column .related_content_module .module_title,#secondary_column .wysiwyg_content .related_content_module .module_title,.wysiwyg_content #secondary_column .related_content_module .main_carousel.module .carousel_header .carousel_title,#secondary_column .wysiwyg_content .related_content_module .main_carousel.module .carousel_header .carousel_title,.wysiwyg_content .main_carousel.module .carousel_header #secondary_column .related_content_module .carousel_title,.main_carousel.module .carousel_header #secondary_column .wysiwyg_content .related_content_module .carousel_title,.wysiwyg_content .right_col .related_content_module .module_title,.right_col .wysiwyg_content .related_content_module .module_title,.wysiwyg_content .right_col .related_content_module .main_carousel.module .carousel_header .carousel_title,.right_col .wysiwyg_content .related_content_module .main_carousel.module .carousel_header .carousel_title,.wysiwyg_content .main_carousel.module .carousel_header .right_col .related_content_module .carousel_title,.main_carousel.module .carousel_header .right_col .wysiwyg_content .related_content_module .carousel_title{margin-top:0;font-size:1.5em}.wysiwyg_content .related_content_module.full_width{border:1px solid #D2D2D2;padding:5.26316%}@media (min-width: 600px), print{.wysiwyg_content .related_content_module.full_width{padding:2em}}.wysiwyg_content .related_content_module.full_width li{width:100%}.wysiwyg_content .related_content_module.full_width li:first-child{border-top:none}.wysiwyg_content .related_content_module.full_width li:last-child{border-bottom:transparent 0;padding-bottom:0}.wysiwyg_content .related_content_module.full_width .module_title,.wysiwyg_content .related_content_module.full_width .main_carousel.module .carousel_header .carousel_title,.main_carousel.module 
.carousel_header .wysiwyg_content .related_content_module.full_width .carousel_title{margin-top:0;font-size:1.5em}.feature_pages .wysiwyg_content .related_content_module{width:94%;max-width:100%;margin:3em auto;float:none}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module{max-width:600px}}.feature_pages .wysiwyg_content .related_content_module.full-bleed,.feature_pages .wysiwyg_content .related_content_module.full_width,.feature_pages .wysiwyg_content .related_content_module.wide,.feature_pages .wysiwyg_content .related_content_module.parallax{clear:both}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module.full-bleed,.feature_pages .wysiwyg_content .related_content_module.full_width,.feature_pages .wysiwyg_content .related_content_module.wide,.feature_pages .wysiwyg_content .related_content_module.parallax{margin-top:5em;margin-bottom:5em}}.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:94%;margin-top:3em;margin-bottom:3em;clear:both}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:600px}}.feature_pages .wysiwyg_content .related_content_module.full-bleed{width:100%;max-width:none}.feature_pages .wysiwyg_content .related_content_module.full-bleed figcaption{margin:.8em .8em 0 .8em}.feature_pages .wysiwyg_content .related_content_module.full_width{clear:both}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.full_width{width:94%;max-width:600px;margin-left:auto;margin-right:auto}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px), print and (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.full_width{width:80%}}@media (min-width: 769px) and (min-width: 1200px), print and (min-width: 1200px){.feature_pages .wysiwyg_content .related_content_module.full_width{width:55%}}.feature_pages .wysiwyg_content .related_content_module.wide{width:98%;max-width:none}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.wide{width:95%}}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:calc(600px + 6%)}}@media (min-width: 1024px), print{.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:calc(600px + 10%)}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:calc(600px + 15%)}}.feature_pages .wysiwyg_content .related_content_module.left,.feature_pages .wysiwyg_content .related_content_module.right{max-width:94%}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module.left,.feature_pages .wysiwyg_content .related_content_module.right{width:50%;max-width:50%}}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.left,.feature_pages .wysiwyg_content .related_content_module.right{width:27%;max-width:27%}}@media (min-width: 1700px){.feature_pages .wysiwyg_content .related_content_module.left,.feature_pages .wysiwyg_content .related_content_module.right{width:25%;max-width:25%}}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module.left{float:left;margin:1em 2.5em 1.5em 0;margin-left:3%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .related_content_module.left{margin-left:15%}}@media (min-width: 1700px){.feature_pages .wysiwyg_content 
.related_content_module.left{margin-left:20%}}@media (min-width: 480px){.feature_pages .wysiwyg_content .related_content_module.right{float:right;margin:1em 0 1.5em 2.5em;margin-right:3%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .related_content_module.right{margin-right:15%}}@media (min-width: 1700px){.feature_pages .wysiwyg_content .related_content_module.right{margin-right:20%}}.feature_pages .wysiwyg_content .related_content_module.parallax_module{position:relative;overflow:hidden;z-index:10;padding-bottom:0;width:100%;max-width:none}.feature_pages .wysiwyg_content .related_content_module.parallax_module .caption{margin:.8em .8em 0 .8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.parallax_module .caption{font-size:.88em}}.feature_pages .wysiwyg_content .related_content_module.parallax_module img{height:auto !important}.feature_pages .wysiwyg_content .related_content_module.parallax_module .window{width:100%;height:auto;position:absolute;overflow:hidden;padding:2em}.feature_pages .wysiwyg_content .related_content_module.parallax_module .window.mobile{height:auto;min-height:100%}.feature_pages .wysiwyg_content .related_content_module.parallax_module .window .featured_image{z-index:9;top:0;left:0;height:100%;overflow:hidden}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.parallax_module .window .featured_image{position:absolute}}.feature_pages .wysiwyg_content .related_content_module .module_title,.feature_pages .wysiwyg_content .related_content_module .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header .feature_pages .wysiwyg_content .related_content_module .carousel_title{margin-bottom:0.8em}.vital_signs .related_content_module{font-weight:normal;margin-bottom:1em}.vital_signs .related_content_module li{border:none;padding:0.4em 0;font-size:0.9em}.vital_signs .related_content_module .module_title,.vital_signs .related_content_module .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header .vital_signs .related_content_module .carousel_title{font-size:1.2em;margin-bottom:.4em}.vital_signs .related_content_module .list_image{display:none}.vital_signs .related_content_module .list_text{width:100%;float:none;margin-left:10px}.vital_signs .related_content_module .list_text:before{content:"›";color:#42a0f2;margin-left:-10px}.explore_overlay_page .related_content_module.full_width{border-color:#353535}.explore_overlay_page .related_content_module ul li{border-bottom:1px solid #212121}.explore_overlay_page .related_content_module ul li:first-child{border-top:1px solid #212121}.wysiwyg_content .carousel_module{max-width:100%;margin-top:1.4em;margin-bottom:1.4em;overflow:hidden;clear:both;height:275px}@media (min-width: 769px), print{.wysiwyg_content .carousel_module{margin-top:2em;margin-bottom:2em}}.wysiwyg_content .carousel_module.left,.wysiwyg_content .carousel_module.right{float:none}@media (min-width: 480px){.wysiwyg_content .carousel_module.left,.wysiwyg_content .carousel_module.right{max-width:50%}}@media (min-width: 1200px){.wysiwyg_content .carousel_module.left,.wysiwyg_content .carousel_module.right{max-width:40%}}@media (min-width: 480px){.wysiwyg_content .carousel_module.left{float:left;margin:1em 2.5em 1.5em 0}}@media (min-width: 480px){.wysiwyg_content .carousel_module.right{float:right;margin:1em 0 1.5em 2.5em}}.wysiwyg_content .carousel_module.full-bleed,.wysiwyg_content 
!important;border-left:8px solid #b2b2b2;display:inline-block;transform:rotate(90deg)}.shareline .share_container .arrow_down:hover{border-color:black}.shareline .share_options{display:none;position:absolute;top:47px;left:0;z-index:2;background-color:#FFF;border:1px solid #BEBEBE;padding:5px 6px 0 6px}.shareline .share_options .share_btn{width:40px;height:40px;font-size:40px;display:inline;margin:0.1em;cursor:pointer}.shareline a.fi-social-twitter,.shareline a.fi-social-facebook{color:#2b2b2b;text-decoration:none}.shareline .share_text{display:inline-block;vertical-align:middle;width:calc(100% - 75px);color:#555;font-size:95%}#explore_overlay .shareline a.fi-social-twitter,#explore_overlay .shareline a.fi-social-facebook,#explore_overlay .shareline .share_text,.explore_overlay_page .shareline a.fi-social-twitter,.explore_overlay_page .shareline a.fi-social-facebook,.explore_overlay_page .shareline .share_text{color:#FFF}#explore_overlay .shareline .share_options a.fi-social-twitter,#explore_overlay .shareline .share_options a.fi-social-facebook,.explore_overlay_page .shareline .share_options a.fi-social-twitter,.explore_overlay_page .shareline .share_options a.fi-social-facebook{color:#2b2b2b}#explore_overlay .shareline .arrow_down:hover,.explore_overlay_page .shareline .arrow_down:hover{border-color:white}#explore_overlay .shareline article,.explore_overlay_page .shareline article{border-color:#6d6b6b}.keypoint .share_container{width:28px;margin-top:-1px;margin-left:1em}.keypoint .keypoint_icon{font-size:1.2em}section.missions_teaser{background:#edecec;z-index:10}section.missions_teaser header{margin-bottom:2em}section.missions_teaser ul.missions_circles{text-align:center;margin-bottom:2em}section.missions_teaser ul.missions_circles li.mission_item{display:block;margin-left:auto;margin-right:auto;width:300px;text-decoration:none;margin-bottom:3%;border-radius:150px;overflow:hidden}@media (max-width: 480px){section.missions_teaser ul.missions_circles li.mission_item{width:290px;border-radius:145px}}@media (min-width: 769px), print{section.missions_teaser ul.missions_circles li.mission_item{width:210px;border-radius:105px;margin-right:2%;display:inline-block;margin-bottom:0}section.missions_teaser ul.missions_circles li.mission_item:last-child{margin-right:0}}@media (min-width: 1024px), print{section.missions_teaser ul.missions_circles li.mission_item{margin-right:3%;width:300px;border-radius:150px}}@media (min-width: 1200px){section.missions_teaser ul.missions_circles li.mission_item{margin-right:5%}}section.missions_teaser ul.missions_circles li.mission_item:first-child{border:5px solid #3b788b}section.missions_teaser ul.missions_circles li.mission_item:nth-child(2){border:5px solid #c25b28}section.missions_teaser ul.missions_circles li.mission_item:nth-child(3){border:5px solid #fda43c}@media (min-width: 769px), print{section.missions_teaser ul.missions_circles li.mission_item .rollover_description{transition:opacity .4s}}@media (min-width: 769px), print{section.missions_teaser ul.missions_circles li.mission_item .rollover_description .rollover_description_inner{font-size:0.9em}}@media (min-width: 1024px), print{section.missions_teaser ul.missions_circles li.mission_item .rollover_description .rollover_description_inner{font-size:1em}}section.missions_teaser ul.missions_circles li.mission_item .rollover_description *{color:white}section.missions_teaser ul.missions_circles li.mission_item:hover .rollover_description{display:none}@media (min-width: 769px), print{section.missions_teaser 
ul.missions_circles li.mission_item:hover .rollover_description{display:block;opacity:1;height:100%;width:100%;z-index:1;top:0;right:0;overflow:hidden;position:absolute;background-color:rgba(0,0,0,0.6);border-radius:50%;padding:2em;color:white;font-weight:500;font-size:1.1em}}@media (min-width: 1024px), print{section.missions_teaser ul.missions_circles li.mission_item:hover .rollover_description{padding:6em 1.5em}}@media (min-width: 769px), print{section.missions_teaser ul.missions_circles li.mission_item:hover li.mission_item .title{opacity:0}}section.missions_teaser ul.missions_circles a{height:300px;display:block;position:relative;text-decoration:none}@media (max-width: 480px){section.missions_teaser ul.missions_circles a{height:290px}}@media (min-width: 769px), print{section.missions_teaser ul.missions_circles a{height:210px}}@media (min-width: 1024px), print{section.missions_teaser ul.missions_circles a{height:300px}}section.missions_teaser ul.missions_circles a .title{text-align:center;position:relative;display:block;top:80%;color:white;text-transform:uppercase;font-size:1.2em;font-weight:500;transition:opacity .4s}@media (min-width: 769px), print{section.missions_teaser ul.missions_circles a .title{top:72%}}@media (min-width: 1024px), print{section.missions_teaser ul.missions_circles a .title{top:80%}}section.missions_teaser footer .detail_link{display:block;margin:0.5em 1em 0 0;white-space:nowrap}@media (min-width: 769px), print{section.missions_teaser footer{float:right;text-align:right}}section.missions_teaser footer a{color:#4e8fa4}.wysiwyg_content .footnotes li{position:relative;font-size:.85em;margin-bottom:1em}.wysiwyg_content .footnotes li .footnote h2{margin:0;color:#222;font-size:1.2em}.wysiwyg_content .footnotes li p{margin:0.4em 0 0.6em}.wysiwyg_content .footnote{font-size:.8em}#secondary_column .footnote{font-size:.8em}.primary_media_feature{margin-bottom:0}@media (min-width: 769px), print{.primary_media_feature{padding:0}}.primary_media_feature.single{position:relative;margin-bottom:0;overflow:hidden}.primary_media_feature.single .feature_container{height:300px;background-size:cover;position:relative;z-index:3;background-position:center}@media (min-width: 769px), print{.primary_media_feature.single .feature_container{height:700px}}.primary_media_feature.single.video .play{display:none;position:absolute;top:47%;left:47%;top:calc(50%- 30px);left:calc(50%- 30px);top:-webkit-calc(50% - 30px);left:-webkit-calc(50% - 30px);width:60px;height:60px;padding-top:0;cursor:pointer;background:url("https://mars.nasa.gov/assets/play-button.png") 0 0 no-repeat;z-index:10}.primary_media_feature.single.video .player{width:100%;height:100%;position:absolute;top:0;left:0;z-index:2}.primary_media_feature.single .video_header_overlay{position:absolute;bottom:2em;margin:0 auto;left:0;right:0;width:auto;text-align:center;color:white;z-index:5}.primary_media_feature.single .video_header_overlay .media_feature_title{font-size:3em}.custom_banner_container{position:relative}.faq_section h2{margin-top:0}.faq_section ul.q_and_a{margin-bottom:1em}.faq_section ul.q_and_a .question{margin-bottom:1em}.faq_section ul.q_and_a .question:last-child{margin-bottom:0.6em}.faq_section ul.q_and_a .title_container{cursor:pointer}.faq_section ul.q_and_a .title{font-weight:700;font-size:1.1em}.faq_section ul.q_and_a .text.answer{visibility:hidden;position:absolute;left:-9999px}.faq_section ul.q_and_a .text.answer.open{visibility:visible;position:relative;left:0}.faq_section 
hr:last-child{display:none}.fullscreen_element{position:absolute;top:7px;right:7px;cursor:pointer;background-color:rgba(0,0,0,0.5);width:50px;height:50px;border-radius:5px;z-index:10}@media (min-width: 769px), print{.fullscreen_element{top:20px;right:20px}}.fullscreen_element .fullscreen-icon{height:25px;width:25px;background:url("https://mars.nasa.gov/assets/[email protected]") 1px -25px;background-size:25px;margin:13px 0 0 13px}.fullscreen_element:hover .fullscreen-icon{background:url("https://mars.nasa.gov/assets/[email protected]") 1px 0px;background-size:25px}.fullscreen_element.fullscreen-mode .fullscreen-icon{background:url("https://mars.nasa.gov/assets/[email protected]") 1px -74px;background-size:25px}.fullscreen_element.fullscreen-mode:hover .fullscreen-icon{background:url("https://mars.nasa.gov/assets/[email protected]") 1px -49px;background-size:25px}#timeline-embed:fullscreen{height:100%;width:100%;min-height:none;max-height:none}.triple_teaser{background-color:white;z-index:11}.triple_teaser .column{width:100%}@media (min-width: 769px), print{.triple_teaser .column{width:31.03448%;float:left}.triple_teaser .column:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.triple_teaser .column:nth-child(3n+2){margin-left:34.48276%;margin-right:-100%;clear:none}.triple_teaser .column:nth-child(3n+3){margin-left:68.96552%;margin-right:-100%;clear:none}}@media (min-width: 1024px), print{.triple_teaser .column{width:28.57143%;float:left}.triple_teaser .column:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.triple_teaser .column:nth-child(3n+2){margin-left:35.71429%;margin-right:-100%;clear:none}.triple_teaser .column:nth-child(3n+3){margin-left:71.42857%;margin-right:-100%;clear:none}}.triple_teaser .column:last-child{margin-bottom:1em}.triple_teaser .column+.column{margin-top:3em}@media (min-width: 769px), print{.triple_teaser .column+.column{margin-top:0}}.triple_teaser header{margin-bottom:1.3em}.triple_teaser .module_title,.triple_teaser .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header .triple_teaser .carousel_title{text-align:left}@media (min-width: 600px), print{.triple_teaser .module_title,.triple_teaser .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header .triple_teaser .carousel_title{font-size:2em}}@media (min-width: 600px), print{.triple_teaser footer{text-align:left}}.triple_teaser footer .detail_link{float:left;clear:both;text-align:left;white-space:nowrap}.triple_teaser .img_area{margin-bottom:1em;float:none;width:100%}.triple_teaser .item_list{margin-bottom:1em}.triple_teaser .item_list li{border-bottom:1px solid #BEBEBE;padding:3.44828% 0}.triple_teaser .item_list li:first-child{padding-top:0}.triple_teaser .item_list li:last-child{border-bottom:none}.triple_teaser .item_list .list_image{width:39.65517%;float:left;margin-right:3.44828%;margin-left:0}@media (min-width: 600px), print{.triple_teaser .item_list .list_image{width:31.03448%;float:left;margin-right:3.44828%}}.triple_teaser .item_list .list_text{width:56.89655%;float:right;margin-right:0}@media (min-width: 600px), print{.triple_teaser .item_list .list_text{width:65.51724%;float:right;margin-right:0}}.triple_teaser .item_list .list_text .date{color:#707070;font-size:0.8em;font-weight:500;margin-bottom:0.3em}.triple_teaser .item_list .list_text .date span:before{content:" \2022 "}.triple_teaser .item_list .list_text .title{font-size:1em;font-weight:500}.triple_teaser 
.upcoming_events .item_list li{padding:3.44828% 0}.triple_teaser .upcoming_events .item_list li:first-child{padding-top:0}.triple_teaser .upcoming_events .item_list .list_text .date{margin-bottom:.7em}.triple_teaser .follow_teaser .text_area{font-weight:300;font-size:1rem}.triple_teaser .follow_teaser .text_area footer{margin-top:2em}ul.item_list{margin-bottom:2em}ul.item_list .list_title{font-size:1.3em;font-weight:700;margin-bottom:.5em}ul.item_list .list_title a{color:#222}ul.item_list .text_only .list_text{width:100%;padding:0}ul.item_list>li hr{margin:0}ul.item_list .list_image{width:37.5%;float:right;margin-right:0;margin-left:4.16667%;margin-bottom:.5em}@media (min-width: 600px), print{ul.item_list .list_image{margin-left:0;margin-bottom:0;width:35.89744%;float:left;margin-right:2.5641%}}@media (min-width: 769px), print{ul.item_list .list_image{width:22.41379%;float:left;margin-right:3.44828%}}@media (min-width: 1024px), print{ul.item_list .list_image{width:31.03448%;float:left;margin-right:3.44828%}}@media (min-width: 600px), print{ul.item_list .list_text{width:61.53846%;float:right;margin-right:0}}@media (min-width: 769px), print{ul.item_list .list_text{width:74.13793%;float:right;margin-right:0}}@media (min-width: 1024px), print{ul.item_list .list_text{width:65.51724%;float:right;margin-right:0}}ul.item_list .list_text h2,ul.item_list .list_text h3,ul.item_list .list_text h4{margin-top:0}ul.item_list .list_content{padding:1em 0}ul.item_list .list_description{margin-top:0}ul.item_list .description .long{display:none}ul.item_list .description .long p:first-of-type{margin-top:0}ul.people.item_list li.person{padding:4.16667% 0}ul.people.item_list li.person:first-child{padding-top:0}ul.people.item_list .person_header{margin-bottom:1.2em}ul.people.item_list .list_title.list_name{padding-top:7%}@media (min-width: 600px), print{ul.people.item_list .list_title.list_name{padding:0}}ul.people.item_list .person_title{font-weight:300}ul.people.item_list .description{clear:both}@media (min-width: 600px), print{ul.people.item_list .description{clear:none}}ul.people.item_list .person+.person{border-top:1px solid #BEBEBE}ul.item_list.text_item_list .list_text{width:100%}ul.item_list.text_item_list .list_text .date{margin-bottom:.3em}ul.item_list.text_item_list a{color:#257cdf}ul.item_list.text_item_list a:hover{text-decoration:underline}ul.item_list.text_item_list .publication_authors{margin-bottom:.4em}ul.item_list.text_item_list .citation{font-size:.85em;margin-bottom:.4em;font-weight:300}ul.item_list.text_item_list .publication_title{font-size:1.1em;font-weight:700;margin-bottom:.4em}ul.item_list.text_item_list .publication_title a{color:#222}ul.item_list.text_item_list .list_title a{color:#222}.explore_overlay_page .feature_pages .wysiwyg_content .item_list_module{margin-left:auto}@media (min-width: 1024px){.secondary_nav_desktop{overflow-x:auto}}@media (min-width: 1024px){.custom_banner_container .secondary_nav_desktop{overflow-x:visible}}@media (min-width: 1024px){.custom_banner_container .fixed_secondary_nav{overflow-x:auto}}nav.secondary_nav{font-weight:400}nav.secondary_nav .grid_layout{width:100%;padding-left:10px;padding-right:10px;max-width:none}@media (min-width: 600px), print{nav.secondary_nav .grid_layout{padding-left:17px;padding-right:17px}}nav.secondary_nav.secondary_nav_mobile{display:block;width:100%}nav.secondary_nav.secondary_nav_mobile select{position:relative;padding:.5em 2em .5em 
1em;font-size:16px;border:0;height:40px;vertical-align:middle;color:white;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;background:#3b788b url("https://mars.nasa.gov/assets/[email protected]") no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px;font-weight:700;cursor:pointer;width:100%;border-radius:5px;max-width:304px;margin:0;background-color:#AFB3B9;width:100%;max-width:none;border-radius:0}nav.secondary_nav.secondary_nav_mobile select::-ms-expand{display:none}nav.secondary_nav.secondary_nav_mobile select option{padding:0.5em 1em}@media (min-width: 1024px){nav.secondary_nav.secondary_nav_mobile{display:none}}nav.secondary_nav.secondary_nav_desktop{display:none}@media (min-width: 1024px){nav.secondary_nav.secondary_nav_desktop{padding:0.8em 0 0.9em;display:block;margin:0;background-color:#eee;text-align:center}}nav.secondary_nav.secondary_nav_desktop .section_title{display:none}nav.secondary_nav.secondary_nav_desktop .section_title a{padding-left:0;text-decoration:none}nav.secondary_nav.secondary_nav_desktop li{display:inline-block;position:relative}nav.secondary_nav.secondary_nav_desktop a{color:#777;font-size:1em;font-weight:600;display:block;padding:.3em .3em}@media (min-width: 769px), print{nav.secondary_nav.secondary_nav_desktop a{padding:.3em .6em}}@media (min-width: 1200px){nav.secondary_nav.secondary_nav_desktop a{padding:.3em .9em}}@media (min-width: 1700px){nav.secondary_nav.secondary_nav_desktop a{font-size:1.1em}}.custom_banner_container nav.secondary_nav.secondary_nav_desktop a{color:white}nav.secondary_nav.secondary_nav_desktop ul{white-space:nowrap}nav.secondary_nav.secondary_nav_desktop li.current a,nav.secondary_nav.secondary_nav_desktop li:hover a{text-decoration:none;color:#222}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .grid_layout{display:flex;flex-wrap:nowrap;justify-content:space-between}@media (min-width: 1024px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav{position:fixed;width:100%;top:0;left:0;z-index:100;box-shadow:0 4px 4px -2px rgba(0,0,0,0.15)}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav.secondary_nav_desktop{padding:1em 0 0.8em;white-space:nowrap}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .section_title{display:inline-block;margin-top:3px;margin-right:1.6em;font-size:1.2em;flex-shrink:0}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .section_title a{padding:0;color:#2B2B2B}}@media (min-width: 1024px) and (min-width: 1700px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .section_title{margin-top:6px}}@media (min-width: 1024px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav ul{display:inline-block;text-align:right;width:100%}}@media (min-width: 1024px) and (min-width: 1024px), print and (min-width: 1024px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a{padding:.3em .5em;font-size:0.9em}}@media (min-width: 1024px) and (min-width: 1200px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a{padding:.3em .6em;font-size:0.95em}}@media (min-width: 1024px) and (min-width: 1700px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a{padding:.3em .8em;font-size:1em}}@media (min-width: 1024px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li.current a,nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav a:hover,nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .section_title 
a:hover{text-decoration:none;color:#2B2B2B}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li:last-of-type a{padding-right:0}}.custom_banner_container nav.secondary_nav.secondary_nav_desktop,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop{text-align:center;margin:0;background-color:transparent}.custom_banner_container nav.secondary_nav.secondary_nav_desktop li,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop li{margin-bottom:6px}.custom_banner_container nav.secondary_nav.secondary_nav_desktop li a,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop li a{color:white}.custom_banner_container nav.secondary_nav.secondary_nav_desktop li.current:after,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop li.current:after{bottom:-60px;left:50%;border:solid transparent;content:" ";height:0;width:0;position:absolute;pointer-events:none;border-top-color:black;border-width:20px;margin-left:-20px}.custom_banner_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav{background-color:#e4e7ec}.custom_banner_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li.current:after,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li.current:after{content:none}.custom_banner_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a{color:#fff}.custom_banner_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a:hover,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a:hover{color:#2B2B2B}.custom_banner_container nav.secondary_nav.secondary_nav_mobile,.homepage_feature_container nav.secondary_nav.secondary_nav_mobile{text-align:center;padding:0 2.5%}.custom_banner_container nav.secondary_nav.secondary_nav_mobile select,.homepage_feature_container nav.secondary_nav.secondary_nav_mobile select{position:relative;padding:.5em 2em .5em 1em;font-size:16px;border:0;height:40px;vertical-align:middle;color:white;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;background:#3b788b url("https://mars.nasa.gov/assets/[email protected]") no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px;font-weight:700;cursor:pointer;width:100%;border-radius:5px;max-width:304px;margin:0.9em 0 1.1em}.custom_banner_container nav.secondary_nav.secondary_nav_mobile select::-ms-expand,.homepage_feature_container nav.secondary_nav.secondary_nav_mobile select::-ms-expand{display:none}.custom_banner_container nav.secondary_nav.secondary_nav_mobile select option,.homepage_feature_container nav.secondary_nav.secondary_nav_mobile select option{padding:0.5em 1em}.homepage_feature_container nav.secondary_nav{position:absolute;bottom:0;z-index:2}.homepage_feature_container nav.secondary_nav.secondary_nav_desktop{width:100%}.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav{bottom:auto}.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li.current:after{content:none}#explore_overlay nav.secondary_nav{display:none}nav.tertiary_nav{position:relative;margin-top:1.2em;font-weight:400}nav.tertiary_nav ul li{position:relative;display:inline-block;vertical-align:middle}nav.tertiary_nav ul li:hover a,nav.tertiary_nav ul li.current 
a{color:black}nav.tertiary_nav ul li+li:before{content:" | ";padding:0 .5em;vertical-align:middle;font-weight:100;color:#BEBEBE}nav.tertiary_nav ul a{display:inline-block;vertical-align:middle;color:#909090}nav.tertiary_nav ul a:hover{text-decoration:none}@media (min-width: 769px), print{nav.tertiary_nav ul a{font-size:1.2em}}@media (min-width: 1024px), print{nav.tertiary_nav ul a{font-size:1.3em}}@media (min-width: 1200px){nav.tertiary_nav ul a{font-size:1.4em}}.tertiary_nav_mobile{display:block;text-align:center}@media (min-width: 600px), print{.tertiary_nav_mobile{text-align:left}}.tertiary_nav_mobile select{position:relative;padding:.5em 2em .5em 1em;font-size:16px;border:0;height:40px;vertical-align:middle;color:white;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;background:#3b788b url("https://mars.nasa.gov/assets/[email protected]") no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px;font-weight:700;cursor:pointer;width:100%;border-radius:5px;max-width:304px;margin:0 auto}.tertiary_nav_mobile select::-ms-expand{display:none}.tertiary_nav_mobile select option{padding:0.5em 1em}@media (min-width: 769px), print{.tertiary_nav_mobile{display:none}}.tertiary_nav_desktop{display:none}@media (min-width: 769px), print{.tertiary_nav_desktop{display:block}}.condense_control{color:#257cdf}.condense_control:hover{text-decoration:underline}.condense_control:before{content:' › ';white-space:nowrap}section.intro{display:block;background-color:#000;position:absolute;top:0;left:0;z-index:99;width:100%;overflow:hidden;height:100vh}section.intro .brand_area{background:url("https://mars.nasa.gov/assets/[email protected]") no-repeat;background-size:100%;z-index:301;position:absolute;top:2%;left:4%;width:60%;height:80%;max-width:500px}@media (min-width: 769px), print{section.intro .brand_area{width:45%;left:2%}}section.intro img{object-fit:cover;height:100%;width:100%}@media (min-width: 480px){section.intro img{margin-top:0}}body.intro_screen_visible .vital_signs_menu,body.intro_screen_visible .more_bar{z-index:100}body.intro_screen_visible .vital_signs_menu .overlay_icon{display:none}body.intro_screen_visible section.more_bar .title,body.intro_screen_visible section.more_bar .arrow_down{display:none}body.intro_screen_visible section.more_bar:after{content:"loading...";display:inline-block;padding:0.6em 0;font-size:.9em}html.explore_overlay_open,body.explore_overlay_open{overflow:hidden;width:100%;height:100%;position:fixed;-ms-overflow-style:-ms-autohiding-scrollbar}#explore_overlay{position:fixed;top:0;left:0;height:100vh;width:100%;z-index:1000001;overflow-x:hidden;visibility:hidden;opacity:0;padding:0 0 6px;background-color:rgba(0,0,0,0.9)}.overlay_loaded #explore_overlay{background-color:#000}#explore_overlay.visible{visibility:visible}#explore_overlay .content{position:relative;height:100%;visibility:hidden;opacity:0}#explore_overlay .content.visible{visibility:visible}#explore_overlay .content>iframe{top:0;left:0;position:absolute}#explore_overlay .loading{position:absolute;left:50%;top:42vh;transform:translateX(-50%);width:auto;text-align:center;display:none}#explore_overlay .loading img{width:44px;height:44px}#explore_overlay .loading p{font-family:Whitney, Helvetica, Arial, sans-serif;position:relative;color:#76aee6;font-size:14px;letter-spacing:0.1em}#explore_overlay .loading .spinner div{background:#ccc !important}#explore_overlay 
.background_area{-webkit-tap-highlight-color:transparent;-webkit-tap-highlight-color:transparent;width:100%;height:100%;position:absolute;top:0;left:0;cursor:pointer;z-index:-1}#explore_overlay.lightbox_overlay{background-color:rgba(0,0,0,0.75);text-align:center;padding:0}#explore_overlay.lightbox_overlay .content{position:fixed;background-color:#1f1f1f;width:95%;height:92% !important;max-width:1400px;margin:2em auto 0;border-radius:4px;left:auto;right:calc(50vw - 47.5%)}@media only screen and (min-width: 1480px){#explore_overlay.lightbox_overlay .content{right:calc(50vw - 697px)}}@media only screen and (max-device-width: 1024px) and (-webkit-min-device-pixel-ratio: 1){#explore_overlay.lightbox_overlay .content{overflow-y:scroll;-webkit-overflow-scrolling:touch}}.overlay_close_button{position:absolute;top:1em;right:1.1em;z-index:1000003;width:40px;height:40px;position:fixed;background:#000;text-decoration:none;text-align:center;line-height:1em;transition:.3s opacity;visibility:hidden;opacity:0}.overlay_close_button.visible{visibility:visible}.no-touchevents .overlay_close_button:hover{opacity:1}.overlay_close_button .close_icon{display:block;height:100%;position:relative}.overlay_close_button .close_icon:before{transform:rotate(-45deg);content:'';position:absolute;height:1px;width:100%;top:calc(50% - .5px);left:0;background:#fff;opacity:.8}.overlay_close_button .close_icon:after{transform:rotate(45deg);content:'';position:absolute;height:1px;width:100%;top:calc(50% - .5px);left:0;background:#fff;opacity:.8}@media (min-width: 769px), print{.overlay_close_button{width:60px;height:60px;top:1.1em;right:1.1em}}@media (min-width: 1700px){.overlay_close_button{width:70px;height:70px;top:1.2em;right:1.2em}}.overlay_close_button.lightbox_overlay{background-color:#1f1f1f;top:2.5em;right:calc(50vw - 45.5%)}@media (min-width: 600px), print{.overlay_close_button.lightbox_overlay{right:calc(50vw - 45%)}}@media (min-width: 769px), print{.overlay_close_button.lightbox_overlay{top:2.6em;width:50px;height:50px}}@media (min-width: 1024px), print{.overlay_close_button.lightbox_overlay{right:calc(50vw - 45.5%)}}@media only screen and (min-width: 1480px){.overlay_close_button.lightbox_overlay{right:calc(50vw - 680px)}}@media (min-width: 1700px){.overlay_close_button.lightbox_overlay{top:2.7em;width:60px;height:60px}}#iframe_overlay,#iframe_overlay body{height:100%;overflow-y:auto;-webkit-overflow-scrolling:touch;font-weight:400}#iframe_overlay{width:1px;min-width:100%;word-wrap:break-word;color:#e4e3e3}#iframe_overlay p,#iframe_overlay .release_date{color:#e4e3e3}#iframe_overlay hr{border-color:#3c3c3c}#iframe_overlay a{color:#42a0f2}#iframe_overlay .header_mask{display:none}#iframe_overlay .explore_overlay_page{padding-bottom:4em}#iframe_overlay .done_btn{text-align:center}#iframe_overlay .done_btn button{color:#6bbed8;margin:2em 0;padding:.3em .7em .4em;background:none;cursor:pointer;letter-spacing:1px;font-weight:300;outline:none;position:relative;font-size:1.8em;border:1px solid #6bbed8;transition:color 200ms, border-color 200ms}#iframe_overlay .done_btn button::after{content:'Close'}#iframe_overlay .done_btn button:hover{color:#82ddf9;border-color:#82ddf9}#iframe_overlay .left_col,#iframe_overlay .right_col{position:relative;float:left}#iframe_overlay .left_col{width:100%}@media (min-width: 769px), print{#iframe_overlay .left_col{width:65%;border-right:1px solid #BEBEBE;padding-right:1em}}@media (min-width: 1200px){#iframe_overlay .left_col{padding-right:3em}}#iframe_overlay 
.right_col{width:100%}#iframe_overlay .right_col p{color:#868686}#iframe_overlay .right_col p b{color:#222}@media (min-width: 769px), print{#iframe_overlay .right_col{width:35%;padding-left:1em;left:-1px}}@media (min-width: 1200px){#iframe_overlay .right_col{padding-left:3em}}#iframe_overlay .suggested_features{display:none}#iframe_overlay #secondary_column aside.boxed{border-color:#6d6b6b}#iframe_overlay #secondary_column .related_content_module{border-color:#5a5a5a}#iframe_overlay #secondary_column .related_content_module li{border-color:#3c3c3c;padding:.8em 0}#iframe_overlay .article_nav{display:none}.info_tabs_module{position:relative;color:black;background:url("https://mars.nasa.gov/assets/mars_landscape.jpg") center top no-repeat;background-size:cover}@media (min-width: 769px), print{.info_tabs_module{height:620px;background-position:center bottom}}.info_tabs_module div[data-react-class="InfoTabs"]{height:100%}.info_tabs_module .grid_layout{padding-bottom:10em}@media (min-width: 769px), print{.info_tabs_module .grid_layout{padding-bottom:0}}.info_tabs_module .gradient_container_bottom{display:none}.info_tabs_module .info_tabs{padding:2.7em 0 5em;height:100%}@media (min-width: 769px), print{.info_tabs_module .info_tabs{padding:5.3em 0 5em}}.info_tabs_module .col1,.info_tabs_module .col2{width:100%}@media (min-width: 769px), print{.info_tabs_module .col1,.info_tabs_module .col2{width:48%}}.info_tabs_module .col2{display:none;float:right;margin-top:2rem}@media (min-width: 769px), print{.info_tabs_module .col2{display:block;margin-top:0}}.info_tabs_module .col1{float:left}.info_tabs_module .info_tabs_header{width:100%;margin-bottom:1.8em;display:inline-block;text-align:center}@media (min-width: 769px), print{.info_tabs_module .info_tabs_header{text-align:left;margin-bottom:3em}}.info_tabs_module .info_tabs_header h2{font-size:1.69em;margin-bottom:0em;font-weight:300}@media (min-width: 600px), print{.info_tabs_module .info_tabs_header h2{font-size:1.95em;margin-bottom:0em}}@media (min-width: 769px), print{.info_tabs_module .info_tabs_header h2{font-size:2.21em;margin-bottom:0em}}@media (min-width: 1024px), print{.info_tabs_module .info_tabs_header h2{font-size:2.34em;margin-bottom:0em}}@media (min-width: 1200px){.info_tabs_module .info_tabs_header h2{font-size:2.47em;margin-bottom:0em}}.info_tabs_module .info_tabs_links{padding-left:2rem;position:relative;z-index:2;font-size:1.3rem;font-weight:300;width:90%}@media (min-width: 769px), print{.info_tabs_module .info_tabs_links{width:auto}}.info_tabs_module .info_tabs_links li{cursor:pointer;position:relative;margin-bottom:0.3em}.info_tabs_module .info_tabs_links .tab_title{margin-bottom:0.2em;letter-spacing:-0.02em}.info_tabs_module .info_tabs_links .info_tabs_link:before{content:'';width:0;height:0;border-top:7px solid transparent !important;border-bottom:7px solid transparent !important;border-left:11px solid #943b2b;display:inline-block;transform:none;position:absolute;left:-20px;top:7px;opacity:0;transition:all 200ms}.no-touchevents .info_tabs_module .info_tabs_links .info_tabs_link:not(.active):hover:before,.info_tabs_module .info_tabs_links .active:before{opacity:1;left:-28px}.no-touchevents .info_tabs_module .info_tabs_links .info_tabs_link:not(.active):hover:before{opacity:.7}.info_tabs_module .info_tabs_detail,.info_tabs_module .active .mobile_tab_detail{float:right;max-height:320px;overflow-y:auto;padding-right:1em;position:relative;z-index:2;font-weight:300;width:100%;-webkit-overflow-scrolling:touch}.info_tabs_module 
.info_tabs_detail::-webkit-scrollbar,.info_tabs_module .active .mobile_tab_detail::-webkit-scrollbar{width:5px}.info_tabs_module .info_tabs_detail::-webkit-scrollbar-thumb,.info_tabs_module .active .mobile_tab_detail::-webkit-scrollbar-thumb{background-color:rgba(107,107,107,0.6)}.info_tabs_module .info_tabs_detail::-webkit-scrollbar-track,.info_tabs_module .active .mobile_tab_detail::-webkit-scrollbar-track{background-color:rgba(157,157,157,0.4)}@media (min-width: 769px), print{.info_tabs_module .info_tabs_detail,.info_tabs_module .active .mobile_tab_detail{padding-right:3em;top:0.8em}}.info_tabs_module .info_tabs_detail .info_tabs_title,.info_tabs_module .active .mobile_tab_detail .info_tabs_title{display:none}@media (min-width: 769px), print{.info_tabs_module .info_tabs_detail .info_tabs_title,.info_tabs_module .active .mobile_tab_detail .info_tabs_title{display:block;font-size:1.1em;margin-bottom:1em;margin-top:0;text-transform:uppercase}}.info_tabs_module .mobile_tab_detail{display:none}.info_tabs_module .active .mobile_tab_detail{font-size:0.95rem;float:none;display:block}@media (min-width: 769px), print{.info_tabs_module .active .mobile_tab_detail{display:none}}.info_tabs_module .active .tab_title{margin-bottom:.5em}@media (min-width: 769px), print{.info_tabs_module .active .tab_title{margin-bottom:.2em}}.info_tabs_module .info_tabs_content *:first-child,.info_tabs_module .active .mobile_tab_detail *:first-child{margin-top:0}.info_tabs_module .info_tabs_content>h2,.info_tabs_module .info_tabs_content>h3,.info_tabs_module .info_tabs_content>h4,.info_tabs_module .info_tabs_content>p,.info_tabs_module .active .mobile_tab_detail>h2,.info_tabs_module .active .mobile_tab_detail>h3,.info_tabs_module .active .mobile_tab_detail>h4,.info_tabs_module .active .mobile_tab_detail>p{margin:1em 0}.info_tabs_module .info_tabs_content>h2,.info_tabs_module .info_tabs_content>h3,.info_tabs_module .info_tabs_content>h4,.info_tabs_module .active .mobile_tab_detail>h2,.info_tabs_module .active .mobile_tab_detail>h3,.info_tabs_module .active .mobile_tab_detail>h4{font-size:1.1em;margin-top:2em}@media (min-width: 769px), print{.info_tabs_module .info_tabs_content>h2,.info_tabs_module .info_tabs_content>h3,.info_tabs_module .info_tabs_content>h4,.info_tabs_module .active .mobile_tab_detail>h2,.info_tabs_module .active .mobile_tab_detail>h3,.info_tabs_module .active .mobile_tab_detail>h4{margin-top:0}}.info_tabs_module .info_tabs_content>h3,.info_tabs_module .info_tabs_content>h4,.info_tabs_module .active .mobile_tab_detail>h3,.info_tabs_module .active .mobile_tab_detail>h4{margin-top:1.5em}.info_tabs_module .info_tabs_content>p:last-child{margin-bottom:1em}.info_tabs_module .info_tabs_content ol{margin-bottom:0}.info_tabs_module .active .mobile_tab_detail>p:last-child{margin-bottom:0}.info_tabs_module .less_option,.info_tabs_module .more_option{display:inline-block;font-size:.95rem;margin-bottom:0.5em}@media (min-width: 769px), print{.info_tabs_module .less_option,.info_tabs_module .more_option{display:none}}.info_tabs_module .less_option:after{content:"- less";display:block}.info_tabs_module .more_option:after{content:"+ more";display:block}.info_tabs_module footer{width:90%;max-width:1330px;position:absolute;bottom:50px;right:0;left:0;margin:auto;text-align:right}.info_tabs_module .more_link{font-size:1rem;font-weight:500;text-transform:uppercase;color:white;cursor:pointer}@media (min-width: 769px){.parallax_categorized_teaser .bubble_container{max-height:788px}}@media (min-width: 769px) and (min-width: 
769px), print and (min-width: 769px){.parallax_categorized_teaser .bubble_container .oculus{transform:scale(0.8)}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(1){top:-30px;left:-20px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(2){top:-30px;left:224px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(3){top:214px;left:-20px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(4){top:214px;left:224px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .bubble_container .oculus{transform:scale(0.9)}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(1){top:0;left:119px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(2){top:70px;left:408px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(3){left:0;top:269px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(4){top:347px;left:290px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .bubble_container .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(1){top:0;left:179px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(2){top:104px;left:495px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(3){top:282px;left:0}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(4){top:390px;left:324px}}@media (min-width: 769px) and (min-width: 1700px){.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(1){top:0;left:229px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(2){top:134px;left:565px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(3){top:272px;left:0}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(4){top:420px;left:354px}}@media (min-width: 769px){.parallax_categorized_teaser .bubble_container{height:75vh}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.two .oculus{transform:scale(0.8)}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(1){top:-30px;left:-20px}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(2){top:-30px;left:224px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.two .oculus{transform:scale(0.9)}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(1){top:30px;left:70px}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(2){top:30px;left:378px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .definition_teasers.two .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(1){top:40px;left:70px}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(2){top:40px;left:422px}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.three .oculus{transform:scale(0.8)}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(1){top:-30px;left:100px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(2){top:185px;left:-20px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(3){top:185px;left:224px}}@media (min-width: 769px) and (min-width: 1024px), print and 
(min-width: 769px){.parallax_categorized_teaser .definition_teasers.three .oculus{transform:scale(0.9)}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(1){top:0;left:204px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(2){top:280px;left:30px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(3){top:280px;left:378px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .definition_teasers.three .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(1){top:20px;left:226px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(2){top:310px;left:30px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(3){top:310px;left:432px}}@media (min-width: 769px){.parallax_categorized_teaser .definition_teasers.four{max-height:788px}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.four .oculus{transform:scale(0.8)}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(1){top:-30px;left:-20px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(2){top:-30px;left:224px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(3){top:214px;left:-20px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(4){top:214px;left:224px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.four .oculus{transform:scale(0.9)}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(1){top:0;left:119px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(2){top:70px;left:408px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(3){left:0;top:269px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(4){top:347px;left:290px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .definition_teasers.four .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(1){top:0;left:179px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(2){top:104px;left:495px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(3){top:282px;left:0}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(4){top:390px;left:324px}}@media (min-width: 769px) and (min-width: 1700px){.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(1){top:0;left:229px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(2){top:134px;left:565px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(3){top:272px;left:0}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(4){top:420px;left:354px}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.five .oculus{transform:scale(0.8)}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(1){top:-30px;left:-20px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(2){top:-30px;left:224px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(3){top:214px;left:-20px}.parallax_categorized_teaser .definition_teasers.five 
.oculus:nth-of-type(4){top:214px;left:224px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(5){top:432px;left:102px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.five .oculus{transform:scale(0.9)}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(1){top:0;left:0}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(2){top:0;left:408px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(3){top:200px;left:204px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(4){top:400px;left:0}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(5){top:400px;left:408px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .definition_teasers.five .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(1){top:0;left:0}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(2){top:0;left:452px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(3){top:200px;left:226px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(4){top:400px;left:0}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(5){top:400px;left:452px}}.parallax_categorized_teaser{height:auto;background:url("https://mars.nasa.gov/assets/red_planet_bg.png") center no-repeat;background-size:cover;color:#eeaaa1;overflow:hidden;position:relative;padding-top:3em}@media (min-width: 769px){.parallax_categorized_teaser{height:calc(100vh - 74px);padding-top:4em;min-height:860px}}@media (min-width: 1200px){.parallax_categorized_teaser{padding-top:5em}}@media (min-width: 769px){.parallax_categorized_teaser .module_content{height:calc(100% - 36px)}}.parallax_categorized_teaser .module_content.fixed{position:fixed;top:82px;left:0}.parallax_categorized_teaser .module_title,.parallax_categorized_teaser .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header .parallax_categorized_teaser .carousel_title{font-size:2em;font-weight:200;text-align:center;margin-bottom:0.85em}@media (min-width: 769px){.parallax_categorized_teaser .module_title,.parallax_categorized_teaser .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header .parallax_categorized_teaser .carousel_title{text-align:left;margin-bottom:1em}}@media (min-width: 1200px){.parallax_categorized_teaser .module_title,.parallax_categorized_teaser .main_carousel.module .carousel_header .carousel_title,.main_carousel.module .carousel_header .parallax_categorized_teaser .carousel_title{font-size:2.2em}}.parallax_categorized_teaser .mobile_only{display:block}@media (min-width: 769px){.parallax_categorized_teaser .mobile_only{display:none}}.parallax_categorized_teaser .categorized_content{width:100%;top:0;left:0;right:0;bottom:0;margin-bottom:0}@media (min-width: 769px){.parallax_categorized_teaser .categorized_content{width:55%;max-width:795px;position:absolute;top:80px;left:38%}}@media (min-width: 769px) and (min-width: 600px), print and (min-width: 769px){.parallax_categorized_teaser .categorized_content{width:58%}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .categorized_content{left:33.5%;width:61.2%}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser 
sWUxH3WvB8JdCKURdx1533XWVvpA02DfddFPi9uM/gIrfO3fuPA501W/y9NNPt1NFAeWlWgD+vEsrb3pN9Y4lgIiPVVXfh+6uA2w8HJWgPtIeKYDFufPnzxebN2+uXNdkk+eV0VDmF6oO+K5du16nbLup4TSpG/0/gKHsbJhyYPkf//jHxAkCgKpvRN0D7rru5uN/OD09e/ZM2A4nSAdb2fUcEmTVIbVvB9X9DzVxfKgg6zl63MZyMFN33xXoYK7e66d7kGmPFN0A0O3f+ta3jgPbld0W/f1sEcyKjtrjG5lNpq8O0p0/pgFc6OasAeG8jiioFTXGmSVZYBcYsVfpi8TwX9XKtQeVWjhemm2KgmrYAGrajcdDyer1Ux1eWYMLrmAzPc0WWeeauPbQ41SeDx9Kq5Ws31y7F6BDXUAXo8FE3qtXr1y1ALChuwEwjkfeu3fvXLA5ZcirD+paZE5KqM4rfE79J46VwmWWGqEBq9MjNXmijs96MLY3jWml/EnWVdQa8FYq2LdNLOeoFpsTwkkc5trubWD3/cTu1poDLgv3bUpvm9SJieFZDVSIcvncO0etoG7317p7VpeDVLjGkAy3Mdf2JgRmeCOx+0CHAFwr4CpKvzKx1ZfhPirFheGWc34l6yY6EsNROAg+Ozlkq4hLhYsw3OeBp8p0CHUqapmUxfDEUqTtK23MtTHU0HCxGM49n1HGKwnsXSF7vUIyXBX2acrv4XpwTAuBNUDAsZRs3qS2fS9h/bQILJ1CgZ3aRgP6PRcdyrFUuOagq/7OOP57tDlXlCCdQ4ANx0QVWG5/SPkVlP0D7Zqg/687NznnHred1dll8x45lkoO8Otp84qQers0hqcqgSmxkyh/wvQ6c/W8pcvU+Xo5ZcIKcJMI7PdFSdIpJNg5oF9G+X0cXeqyz/c8QxnuQ1nLBDuISkmrlRz1chNlW2jXI5R6ZamILPXgu4Kmo+PzLuUzymggq8bwHB0J6+Uc5RxxGkyu+edq7mk5yvK5aoFdig636Gp8dnceJTSo7/ioDR87PeOYd2QZzgttZ1fVDmfm2HicstMoLaB0wEdn++h0ea+FuLcsQ9WX5gymUhyZjnSQ/rqD8npKN+vzXHxUiOUhbZX3qKfDbse9Q/VKukpd0ZsOHjy4XQOnb3su9ns5/Z4o2j5CPe4c7sMngfr6KeWrhVzsN6v/Zs+ePVUFvHNolWJzaCyWyW+xmCTtv0H8fTnrMyiNEH9fzroPJcyTwLSsw3IkBgnLWWPaxiuibc7MPo7FUm0J5mlaPM/j1u1meJf76PcPKP9BEbOwo4FeF8M7nmCNZpQIeAQ8SgQ8Ah4BjxIBj4BHiYBHwKNEwCPgEfAoEfAIeJQIeAQ8SgQ8Ah4BjxIBj4BHiYBHwKNkSOGZV1iNHmJbQpS77HXoQNzcRdBsqxnlzdh64403qgt4GiRX4G0Auz6ArDmM6f/TU/P040y/8/6rKuBcsB0Zj+Wi/0m0TeIcKXNM7sRXcZjIidWN8S0OJnMelPlblBBTCCv+YlLn/wi5oGNWECUOsDagfd7G4Aw3AW0AGSD+K6VxlP6F0mfr7LXBwih9hRYli+RLGnOBCpa7e44utZHy/6Z0NAusrMmmLg+lQzCcATQa7PGiLXTixfS7e0iVIh/YWTLdQPuxHupa2r2K8p/T7w+yWG8DvijoQUP0crYpIVge5n/vonw9pSk62JwYmS7HaPu7y3utl/e+QZbFmzw+KqXMMOvpwmHxwJsp7aHtpZTqs0DiAst9EDng16MMKAulubJsxjoUBToY4MxApFNoc5sMctrPBjIXUG70b8Pyfv0QnFSWbQo38GkR0MtmOKKBPEv5k5Q+nccgrlrglINzzYxyfBplpE2ouCFFQlLWkuGXwUrQY5dx4x67MtsW/dsWL1nbf6G0bC7jxuCsKcPldlfKHqW0mrZ72XQhB6QQQJviJafKhzLjy7fHZF1yiVVTHS63P0HZRizd4RDK3NpoFlEpnMDUOWWdJuvyCRPTa2kWnkrZZsrHcFid9xDKaDS59844Zoys06mhmB5KpSA2GRb9HcYxEzkMDNlo2hYftjSSw2TdRoUAPYRKwRfDGygfxDUTTV0CZTSavp6xlg+SdXT+Ojoo4PX19f2l2TfYBjZXlXDCprsc46JaLKAPlnXtXxOzkMDuIvsmGjhgcxosjivP0e8mE9DHudHyBlnnk2phpaygG492KKyV+RyGu+pyhu3Ncm60HHV+sKoqhdgNp2a6C9guFfexUFwsFdt+Rn2mp52j0gAnsIekn7BPwTmNJ+cBcNjt8tAd3PgHhUcgUx+GP6R7kCY3n6MjOVaF7wOwXd8V9NR+YPBQqYATu6fI/gZrEA3X3jauHvdhOMc05ICekV9ImHzdqX3hjlrQhTEMtjWv16+IpcIYgnMe8cnbx11iL2tfzipxiOkzau/evUdDM/xaHexQfeVcJ8jVQinq7DjUDZhcG5ThxO4ecqSmn8PrFnzYKu+/ImvQurDawPb/BUzE8sOhGD5NjdRwGksXNnFUissQm6U304kIDnXD9I1pQRhO7MZDaUGPmS+7fYarivY7+4Qr82G3xvK9tDmMWP5BUYaPN3VPlmGtcPtROMeWZJ1knVtP2QUhVMpUl0EIF8ZyzwulUkKXzQsrk0qRpuBbck6HsffPwz32rqAKuYvgdwiYh9htCCmGKFUqUCl3cqavasnZh8lGA0itHPFl+AXpSTpcS8LF5OKCjf0I5Yjwj4ijCcCxDw8Av7FfxT52uaaPiZhzHrAaX0SlTPCxtX1tcZtaAZhZ4R/1tw2he1UsZR+ryaeOqf8mFAF8nI/e49rSHLWiA6lHBh8+fLhYtWqV2LRpk1i7dq04//zzK8eD7S5uvSvIlnqO89LhpL8xWr0/a/TEVYf7eppKEKRUjwA+ZswYsXDhwkR/IyHEI8JAjhs3rnIuQkIWde89dLj6DT3+livDPxf6awRXsKGjW1tb24E9adIkceeddyYsRoL+xvGHDh0q7N4HrOM5ef+bpiuf7eJih5ooo85X4Xr18LvXXHONmDx5ctI4qgYTQUuxjXj3lUpJS6XIPG79fIfVoZVgivSzroA3FAHQR60oAaOhRnQAEVx67NixCeMVqxG09MCBA0nUb/1bG+w3AZi3bQLY5QEJbe1zF8BP5aiEkK+kavDAbCUIv9vc3CwaGhoqjIfdDZ3d0tIi5s6dmzBeVRZRY/UYyUVZbmN8zvWH+gA+qCw9lycw+3RLZMCAAWLZsmWib9++yX5UDGAD1Oeff14sXry4YndDEEtZqZNqSwr4T/oA3resgmVtQw/rYI8YMULce++9CYAAFZVRUb7XrFkjHn744QrbIXgIeV6mT/ixgtLXB/CTi4DpqnZ0po4ePVosWrQoaTiVTQ0wEdF75cqVYt26dQnQeABKl2eFWg/RQHo+lG4+gPes5isJcJUosHWPEW/AXXfdJX7961+3i3uP/hP9dweRHj6e5vuiA0q12pOyxAR4azULojd28+fPT37DxlY6GSrj1ltvFRdddFE7z1B1YnUwOVwVwG1doC
bRbectW7aIWbNmJUBCZYDVUDGwzWfMmCGuuOKK5JpoNNV+XSX5SF5ZPRvVox2G4Xn9GGAwGkUl27dvT8A9ePBg8jCUHofNPXHiRHHLLbckuhugI2E/dL7vNInAcsAH8LerULDjWA7zTsn+/fvFVVddJXbv3p08DAUwHKDPf/7zidmoH6/6yWshKYze9AF8D4cpIR8GrgUVAlu78pqRKw/1AkcH4CrQYbMPHTo0sccHDhxYKQcehq7TQ5eP+Zbs8gG8pUiBOUNYeddHYwmvUZl70M/oS/nhD3+YuPpQPzgPtvspp5yS2OboH8+y6V1US1ESaee1+AD+IrdxDMl4dT6sFDBdd2hWrFghli5dWvEqlZWCY5qamo6z6UOVhbvIjSYv+QD+u9C6m8ty9RtAgtHKPITArb/tttsS1aOcHhzfu3fvdufXKh6bvN5vnAGXIxYtHP2VV2gX1psAAtN1sxH6fObMmUmj2qdPn0RnL1iwoJ3F42raFalLat/OvNEem2sPwcIuDelli9RN8vZl/Zd1Xl6kwnRFsE9ZKaqDa8eOHWLq1KmZFQfzOQ+SMyfRQ31u9PU0Ic+WwQyOOknvQ1JmY9aDVufhwegeqk+j6fOmav+tN3rUFjyxcs4RNTfFhcF5MTSzzrMxXd8PMKEyik4EKqIqDedhmOrn3gyXM4jWcVtqLlu55+ZVGACDyTAdYRZihF63XFxnXYWqE7AyzbriqBTIqiINnss+kzqxsS3vmDLL5oMVB/ANciqu0Umw5VxPzQSuLeq37TplTVeW268Bq8KAy/nOS3z0oktFXYC3vW22a3EfvGPdltrmhnMZDnlcflbhpBM5VoHNTAupUrJmTHGBt9QN2DzKAZIFuPx2pTmUSrFtl6FSyvq+R0oz5/seF4ZDlstP5Nivui/oZaiUora4oW7AZDm3fGzA5XeI8zg9gi76Mv2a5wFvUhl5/5uuXbQB1eQW7jeargwH6E/SDX9WlnXC1cnc/4s04sx8A2HyhAuGPvMLrqYbvctpYFytkzIZ7mqt2BpLicHVruA5A05PFDb5tT6d+VzrJDTDXa0VZtuDr4/3lA64BH21MoN8Cs61xU2WCddiMbHahyhSHqO0utqrK8+kG25xAZ1bcV8LxabHuQSw1GOLrLtX2bwBJ5Zj4PBiunFLtSwU7htQoqXSIut8zBe3QpPyCHRMpfgyFWCfi962sc/F+eEwnHM/Btj7ZF0LTR8pPAtSPvXxlL8e2tlx1d9cq8fD43xd1rGlCNhBAJc330rZWMpf5bDaxmgOi13Yb2K8je2yTmNlHb280uCAayw4j/LNLo1USEuFY6G4NOKyLuept7co2KFUir6NOXXjKF/OtVJMDAzBcJvnafjmcrmsywGuU1RNlaJvw3qZRemrtH2Qw3aXfS66m7Mvo3wo8yWog6xLEGYHZ3hGRX5M6WzV92Jie1GgXYC3EABlRcCmH7n4FDVjeMarjGGnCZRfqnft2kzA0I0mo+H8A8ooy7rX9ol3rc1CjnODgBYjKW+k9I5Nb4duNPPugbJQmifL9qSPU9TRGK4f854cNRpK27PTA9OmRtNliI3ZaILFs0XbB6xNsmzOTlHNAHccPmultIx+DqMcr/D35QQaJ7Xgqo5wD3mvCfLey2RZvJwiX+A7hwCaG7MsNbsKI9w/o31oqLrRbwS/Q0Lwu8+o4HcF5mrjxN9Tek60Bb5rF/zOlSyhWN45JLtdgsWljsUQFRZCXyv/7k/7YS1gln06vGN3mSBHZMKoOfo6ENYR4R13iLbwjm/ngeMCZqAPrcphuClsYt5DyagIgMKkmg0h1p51Aa/Ig+BIXbU+mIoSuNGMEgGPgEeJgEfAI+BRIuAR8CgR8Ah4lAh4BDwCHiUCHgGPEgHvwFJ4xGfQoLZFmH3CxJQRoSotob6tzxv50dctrwrgaZBcgbcBXHSIzbawjm3ozzQYXpNRe9dAoUx2I9AeBpExeJweRMZChVhMFwsU4uvfgzLHskfb6RoYRMZgMgaR30kD4wKsDWiftzE4w01AG0AGiJgigTAlmCbx2Tp7bbDOUl/Rfq3uL2nMBSpYXe05uhSWRWo3TcI02O36UDoEwxlAo8FGVKeplC7WI2KFUCnygZ0l0w1yshGmYqwSbSsdfZDFehvwRUEPZqVwgafUg9IN9HMX5espTUmHH7MFlXY5RtvfXd5rvbz3DbIs3uSpSph1E9jpYEqpwmEl35tFW+TZpTIc4nEgcYHlPogc8OtRBtH2YetcWTZjHULF/ukUEmwDq6fQ5jbKm1XkWQ4wNkBdI4Bn/EbI4SZZtikusdp8QS+b4UNo81nKn8yLFu4Ty57LdA74cvvTKCNtQsUN4YQ/q5lKMRTkMlgJtD3B9GDyAHFhtkFvG++VUe4LpWVzmS0WaIdguNzuKtq+w19N271supADUgig8/ZllA9lxloCj8m65BKrpjpcbiNC4UbKr7CxwgRGSJViAtqkNihNk3X5hInptTQLEUpsM+VjOKzOewhlNJrce2ccM0bW6dRQTA+lUkZRtonyYRwzkcPAkI2mpcG0xQgdJus2KgToIVQKIuthHvcgrplo6hIoo9H09Yy1fJCsY0NNzcL6+vr+0uwbbAObq0q48eq5x7ioFgvog2Vd+9fELCSwu8i+iQYO2JwGi+PKc/S7yQT0cW60vEHW+aRaWCkr6MajHQprZT6H4a66nGF7s5wbLUedH6yqSiF2w6mZ7gK2S8V9LBQXS8W2n1Gf6WnnqDTACewh6SfsU3BO48l5ABx2uzx0Bzf+QUpDqsHwh3QP0uTmc3Qkx6rwfQC267uCntoPDB4qFXBi9xTZ3+AVy95lOC4kwzmmIQf0jPxCwuTrTu0Ld9SCLoxhsK15vX5FLBXGEJzziE/evlDrz6YW+x3FXX/WheHX6mCH6ivnOkGuFkpRZ8ehbsDk2qAMJ3b3kCM1/Rxet+DDVqbIKhzGl7WGuFywvZ6zhjiX4dPUSA2nsXRhE0eluAyxWXoznYjgUDdM35gWhOHEbjyUFvSY+bLbZ7iqaL8zV5eHWCFfsnwvbQ6zxYHgMHy8qXuyDGuF24/CObYk6yTr3HrKLgihUqa6DEK4MJZ7XiiVErpsXliZVIo0Bd+SczqMvX8e7jGrggj9hZiZyBEpFtGoEBoMcdlM8exdo1W5qpacfZhsNMAUrcrG8AvSk3S4loSLyZUHNuKtqTjHAFsFosZvFXTa9Zohyms4D1iNL6JSJvjY2r62uP4bYKpQjlnqAoxC/GM8FB914kMIpsk6oQjg43z0HteWNgECFisgzz//fLF27VqxadMmsWrVqnbxj/FQTA/GZUYvty6Weo7z0uGkvzFavT9r9MRVh/s4PO+++25FP27cuFEcO3YsUSfQ4UgISo3IsUoQ9lGPFh5oGWtXHa5+D8iLGmti+OdCf43g69YfOnQo+Q09DuYj3XnnnWLSpEmVY6CCEJJdhVgvMoU6QB3P8VEpZ7u42KEmyqjzVSBSSGNjYwIkApTif1gs0N+zZ88W11xzTeU4HHPkyJFK1O9QZfFwzM7yAbyhSKF91
IouetBpxD++8sorxcGDB5P9yloBoy+55BKxcOHCygPCW6AsGNc3rChptPMafAA/lVPgkK+kfi3Y2ogCq/a/+eab4qqrrhK7d++uBKQGuGD6mDFjxAMPPJCEZFcCpquGN3T5GIQZ6gP4oLL0HFegQhCGVwmYe/3114sXXngheRgKdFgpQ4cOFQ8//LAYMGBA5XhlNlZDUhh90gfwvmUXjNOBBesDoCuvEqpk8eLF4ic/+UmyH28CLAMAi/jIjzzyiBgxYkQ7s1E1pGU3mBzsTICfXPApB1M70M8w+ZADeKRHH31UrFy5sl3waagQPACol9GjR1fO57Cc2+XAlG4+gPcUHUiUTtdBf+aZZ8Rdd91VcfuVeQgrZdGiRZVzldVSRenhA/j7IkpwMX022CraPkTtEAI9DG8TTFb6fOLEiWLGjBlJ46hYDJ2Pt2D+/PntVFKV5XBVAM9bXTm931WUo4PrqEZy+vTp4itf+Upi/uFhqAYWx86ZM0ds27Yt06a3decGWrr6qC/gwcX08WlWjyAABKi6Lr/tttvEueeem5iJSn8D1AMHDiRg6wsOwGbHObZ+lMBywAfwt/OYWy2B1QFmq3vDDGxubhYNDQ3JfoAN9dK1a1exa9cuMXfu3KTTq2IqUCMLW74aS3an7vGmT6O5x3TRUIGE8q4PNQFQ1f6BAwcmjg0cHNjWCmyACkdo1qxZ7cCGGRkabIcAHLt8GN5ShN1p/e2iStK2M/q/lyxZkqgG7Fe6HIx/6qmnxIoVK/7OIPkQshpKn7DCnoRp8QH8RW7jGLKBVOfrtnNTU1MCMFSMsjqgm5cuXSrWrFlznK2udHZRdpsegOXaL/kA/rvQupvL8nRImN69eycmIdirBiDQcIYegAiodn7jrMPliEULR3+ZIrdyK5U+RrcsFixYkOj0Pn36iP3794uZM2e2AxsWChdsHzY7sn1n3miPjeEQLOzSYFIhrmqFGxUFDZ7q0/7FL36RpMwOH1Itys4uGj7GFeCc8zeaALUNIj9bBjM4oVqgIgBmlopRnUrQ1wDbFIvNp9H0eVO1/9b7uvYQrJxzRM1NcWFwFpPzzstjOsBUjWXeRCAbSDY2+6hKw3nw0H7uzXA5g2idz+vq+jrmxVQDwGByr169kv5umIJqxCcPCNdZV6HqBKxMs644KgWyyrXBC7VWoA4gN9pgNcvmgxUH8A3pUIw+OddTc4lpzznWxVMuWDcEa91QGHA533mJj150qagL8La3zXYt7oN3rNtS29xwLsMhj8vPKpx0IscqsJlpIVUKJyCpZ92AzaMcIFmAy29XmkOpFNt2GSqlrO97pDRzvu9xYThkuR5IOkTvmq8u91EpRW1xQ92AyXJu+diAy+8Q55kA9WkoTXGUs1htY30e+BzGu7Bdk1u432i6MhygP6ni1JdhnXB1skukb99GnJlvIEyecMHQ51v7q+lG73IaGFfrpEyGu1ortsZSYnC1K3jOgNMThU1+rU9nPtc6Cc1wV2uF2fbg6+M9pQMuQV+tzCCfgnNtcZNlwrVYTKz2IYqUxyitrvbqyjPphltcQOdW3NdCselxLgEs9dgi6+5VNm/AieUYdLyYbtxSLQuF+waUaKm0yDof88Wt0KpuBDqmUnyZCrDPRW/b2Ofi/HAYzrkfA+x9sq5vO3ZohQNce+rjKX89tLPjqr+5Vo+Hx/m6rGNLEbCDAC5vvpWysZS/ymG1jdEcFruw38R4G9tlncbKOnp5pcEB11hwHuWbXRqpkJYKx0JxacRlXc5Tb29RsEOpFH0bc+rGUb6ca6WYGBiC4TbP0/DN5XJZlwNcp6iaKkXfhvUyi9JXafsgh+0u+1x0N2dfRvlQ5ktQB1mXIMwOzvCMivyY0tmq78XE9qJAuwBvIQDKioBNP3LxKWrG8IxXGcNOEyi/VO/atZmAoRtNRsP5B5RRlnWv7RPvWpuFHOcGAS1GUt5I6R2b3g7daObdA2WhNE+W7Ukfp6ijMVw/5j05ajSUtmenB6ZNjabLEBuz0QSLZ4u2D1ibZNmcnaKaAe44fNZKaRn9HEY5XuHvywk0TmrBVR3hHvJeE+S9l8myeDlFvsB3DgE0N2ZZanYVRrh/RvvQUHWj3wh+h4Tgd59Rwe8KzNXGib+n9JxoC3zXLvidK1lCsbxzSHa7BItLHYshKiyEvlb+3Z/2w1rASjTp8I7dZYIckQmj5ujrQFhHfE21Q7SFd3w7DxwXMAN9aFUOw01hE/MeSkZFABQm1WwIsfasC3hFHgRH6qrxwVGUEhrNKBHwCHiUCHgEPAIeJQIeAY8SAY+AR4mAR8Aj4FEi4BHwKBHwCHiUCHgEPAIeJQIeAY8SAY+AR4mAR8Aj4FEi4BHwKBHwCHiUCHgEPAIeJQIeAY8SAY+AR4mAR8D/n0npkT3r6uomys0zHU9tMv1p+6CX7tvoeL+X5XV/GhkeGe4kZ0rm3M18I+aFvLnHfSPDI8PLlZc72HUiwyPD/ayXeSadb7NquOuo5N2Hq9sjwyPDq2NFVOHNigyPrn2UCHjU4QV1Jx33bxwrhWuHUxuxriPp8sjwjyDD83r9Gk12ODGza8E36kyLddQUGR4bzSgR8Ah4lAh4BDxKBDwCHgGPcoJ6ml7zQ2yeIvc6Bk+0sRYeaGT4R5Dhijl3pxiWN9a4riOVJzI8NppRIuBRhxuthXmR4VE+egzPm4+iMT7I/PBazbCKDI+AR8CjfER0+MuO1sjLJ/h9I8M7ksTwjlGHR8CjRMAj4FEi4BHwKBHwCHgEPEoEPAIeJQIeAY8SAY+AR8CjRMAj4FEi4BHwKBHwCHgEPEoEPAIeJQIeAY8SAY+AR8CjRMAj4FEi4B1f/k+AAQDJjrwQhWD6twAAAABJRU5ErkJggg==);background-size:46px auto}}.fancybox-light a.fancybox-close,.fancybox-light a.fancybox-expand,.fancybox-light a.fancybox-nav 
span{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAC4AAADICAYAAACXpNOoAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA2ZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOk9yaWdpbmFsRG9jdW1lbnRJRD0ieG1wLmRpZDpGNzRGRjc2NzEwNERFMjExQTc0M0U0NzZGQkE0MTM5RSIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo1NjIzNzFGMDZBNTUxMUUyQkVBRUY3ODU0RDc4OTlCQyIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo1NjIzNzFFRjZBNTUxMUUyQkVBRUY3ODU0RDc4OTlCQyIgeG1wOkNyZWF0b3JUb29sPSJBZG9iZSBQaG90b3Nob3AgQ1M2IChXaW5kb3dzKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjE5QzZBQjVDNEU2QUUyMTE5NTdDREVCQjFFNDc0RjQzIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkY3NEZGNzY3MTA0REUyMTFBNzQzRTQ3NkZCQTQxMzlFIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+xE3ZhQAAC3lJREFUeNrsnXtMVNkZwO+8gEFEXFBBXSuLrAZHirrZbf9oGvFZqKQrfygBxCBs0ljTl6mb3W2axqai7rb43iauViVREv7gJT6ID4gSaxYrgom6rLLVqgjDMAwww2Pm9vvgXPdye+fOzH0MoOckJ4N37uN3v/u9znfOXHUsyzKTsemZSdooOAV/3cGNwg06nW7CQYp5PqMK58U7NZCnx31yd49X9EB38z5ZTSQeoJrh8SHQQ0k3kW2cCiLsMPQh6AOkD+I2kKJHyRM2ypQwHhcGPRx6hMlkmnrgwIEfQ/vp3LlzLREREXNgWwTuPDw87Ojv739itVr/ff/+/QsbN2681gcNgF3kpljZ+sPvfkgZgaOhJ5jN5verqqo+dzgcz1g/m9Pp/M+dO3c+jo2NnUHOpQ+UcYQzAHADkXAs9MV5eXlZHR0d37AyW29vb/OxY8feJ+c0aAXOQc+GnrJ3794/uFyuHlZhGxoastXV1W3wBS8XHB+lmUh6aXFx8Wdut3uIVal5PJ7B27dvbyLX0KsJbiI6bcnPzy8cHBx0sio3EETfuXPnPiDXUgUcJTAVDXHKlCkr29vbv2U1auB5mlJSUqLEpC4HHH30LOjvnT179rAfknPL+Y5rjx8//i25piJw9NfoixOMRuNqu93+wtsFQX3YnJyclwsXLmxvbm62C7+/d++eDb7ryMrK6sR9vbWBgYG2mJiYqbzIKwvcSHT7h7t27fpUSlKbN29+sWTJEtZisbCLFy+2tbS0vIJ/8OCBFVSgMykpiU1ISGA3bNjQJXWupqam9cLAKAau9xFsMIyHr1ix4gMJV+UGKTMQKZnQ0FAmJCQkKjc31w1R0tba2mqFvz16vT7aYDAw8MmAB2HAk3i8nS8uLm6dP0HJ6CPgoJWb4+PjF3qN/zqd4cyZMyaQeif8HYO5B9zM9K1bt1rhb5a3Dfe1V1RUGPR4B14apAvv+QpIvvJxLokyRUVFxUqdBPT3rZKSEh3YQid0lDoDEo7moFHAAN5dVlamS05OjpQ6Fxz7jj8S1/tIpkZS1bCwsHBfJ0pMTIw+efIkwncgPHZO0tBspaWlerCDSJ9Aev00oXEGYwQUtFqH3gfESPIPeUm/rxOhIULihWnsDOj4yek1fj0d0lkPeJseX+cBtbL7IwAp8FeDgO7u7hdSJ3n48GFXdnY2C7AxCAy+GsGtAN454rrAFuEGojIzM1nwQJLwcOwjcm3Z4G4ycnG2tbU9lHKHmzZtGuagIYjgxW3Hjx/XnzhxQsfBE32flpGR4ZZyh5Dufk2urUjiONTqv3Llyk0pd4jBB1JUDrr79OnThkWLFk1Hgz116pQeOK0Q8Ue8y7Jlyxgpd/j8+fML/kg8qCEfnkyHr5C/YMGCSKUhX5hkHZksSZZYWvtIq7QWxqLNaqa1YwYSBQUFH8GjdmkxkKipqfmRmgMJ/tAtDoduBw8e/JPaQ7fGxsZsLYZu/MHyHIQ/fPjwH8GLONWQ9K1bt3K0GiwL4VHylm3btuV1dHTIHsqBv74HWeVPtC5P8OHNIgWh5wEY4RNeQcjsTwqrBjin8ybibdBVJoSHhy89evToL+/evVva2dl5D3IbG9oBdvDNXTab7e6jR4/+WV1dvQmGZnHkWJO/SZ4YuE4IG0ARMmhFTzGBKgEPWplZq/o4S6RKp1Jk1ccny4QtnXWj4BScglNwCk7BKTgFp+AUfAINJGSOOzVvwgHO6yVxGaN8rlTBFXfcZADtYX2MBYO5JkusNIEVKZwE4KYVndAd+AlgblaDgaxeBegp0GdUV1cXuFyur4eGhpobGxs/gW1Y6w7RVOlllOG4lXA4mbqgqqpqt8fjGTPjsH///g/J93qxawRwLVFOo1JJA/RH6enpvweVGPP0Zs2aFc34MUMcLInzJZ0I6rEPp/6EFdnu7u7HycnJFma0uKnTQuKBnGwMdE1NzedYuBSBfpqTk7Ma9plJdHxcwTloNLjE8+fP/80bdFZWFq43mU08jaR+aw0+BvrChQvFYtA2m+2/mZmZabDP28Q1mogt+Nv1zNhKryLwMdCXLl3aLwZttVqfrVy5EiegUqD/gBldozgzgI6zE2jMODkbRm5EcpJWJ1o0/z6acfM+My9fvvzr1NTUXwlPCL7buW/fvq+6urq+i4yMdISFhQ3iyqCAgolezxqNRnd9ff03FRUVT2BTDwlibm+5ii+J4yxDHEAXiUlag/nOoWvXrv2ZPLF
QJaqC0n6nv7+/gw1SGxgYwPUq8bz0QRTcr5BvMBiMTPCaW41cBU/irKurO8IEYdkSxLJh0PMvhPot6p/9Nc7a2tptq1at+o3QOHGFzaFDh061tbXdR+M0m80DUsaJhmi32509PT2DQuO8efPm04aGhnZinC4lxjkmWnoLPABhy87OLoR9lkKfL+EOZ/B6DK+jK3yLpAh+ucNAApBkqO/t7e0sLCzMhH3mMd9PwPoTcPSC4KPTIuRLJlcA/xLgMyZKyPeWg+8V5uDYHA5He0FBQRpRjdCJkB3+H3xlZWWRGDwY33cTKa31Br9bDL60tLSQ5DiajIDkjDlZ4qb6oHdkZGR8BWqzh1tUQC7iaWpqatPU9yuQAl/y8eXl5Z+AcT6F9KAdgshfmdHVRGatVMVXAPJ3/BlGcnAzrzzRi5+YO6lRVxFyKs1BOLVB0EFeQYhbp+LRSlOUSpzWDt/sMvNkWLNCVYWCU3AKTsEpOAWn4BScglNwCv6GD93oKD/YElciCd5T82vl0HitEJJqXGkOS81caQ5/Jo+lOZfSlUOaqAqRNk6lRDU0NPxucHDwbn9//7/Kysq2MKPzP7jWxaBTYkxKKqgSx6NAphUVFaXz54uwjl5RUYGV3AXMaJUXn7hOznW0NE7d/Pnz4/hguIpo/fr1OysrK7fyJc/IWUmkkcQRZOry5cuXOByOpyI/93XjNEwgkldjKsVfcJy4mpmfn/+znp6eZ2LwOAHmL3xQwHl6jt5kdm5ubpoXeA9OPcI+ib7glcxzvnrfSgCd+6XtvKysrPV2u13snXEenPT1BS9nhRDnj3H1TjQxqkBW/+D0OK4aSklLS8u12Wxir4Lw4HS7FLycFUIj0KtXr56zZs2ad+HpmqDrAlQnncvlCgF1mRoTExO/Y8eO/NDQ0DDhbhcvXvz7unXrcKXGSxKsFK0Qiq2trf1seHh4IAjrbDwA/xdm9Cf0ilYIoXHFO53OziCuELLBNfEFSOHjFYBkBy5GhZcf4XSfs76+vhj0eigYafeNGzf+IdRv0bvz1zjXrl37NhhootvtNsoxTlC3UDTOefPmJW7fvn2LCV9lJtjt6tWrR1JTU4vh73alxqmmO1yamZmZD/BWMaMEB3CQuMMof9xh0ALQli1bfgF5y0sxaFxByoM2qRGAVAn5eXl56SBp0cCDa3WlJD0eSRYuwZ4Jkl4H0M99RMuoCZFkEYCIpKSkxRDiv/UC/YU/yVWwwVFNokpKSvLlJlXjOQJiW1tbnwpWDrEIDcnWlyQf6WPkvmhaQ1VBw5xz/fr13RjG+/r6XpaXl39KwnlA40052aGSugr34mnhyqE+8jlSnpC7QkhLcGFBiHtJjKyCkOrgtHb4RpeZ6QohCk7BKTgFp+AUnIJTcApOwSk4BafgFJyCU/DXFzzgxWQ6nS4dPixevt7D/4fIm7Z3ejmuBfY9FxCIr5+fi9S+d4pNZeN2X+eWOjZQDjVVpUWlfbRRFcGj/5j3T4tQhcTK1fxjQHJF4wKu5OKCm6ZeJWBpwbafS3gbvveoVkPqcsD5Lo/vSSwAFebjhi0CNdtDIycFp+DqGafXfENofL4iJz93CdRQZblDLugIomC1GuehqqJ16FaSa7zxae2EV5UWL+rRovGxY1V0svxvNDRyUnAKTsEpOAWn4BScglNwCj5x2v8EGAAYJEdp3vkt5wAAAABJRU5ErkJggg==)}.fancybox-light-skin-open{box-shadow:0 10px 25px rgba(0,0,0,0.5)}@media only screen and (-webkit-min-device-pixel-ratio: 2), only screen and (-moz-min-device-pixel-ratio: 2), only screen and (-o-min-device-pixel-ratio: 2 / 1), only screen and (min-device-pixel-ratio: 2), only screen and (min-resolution: 2dppx){ .fancybox-light 
a.fancybox-close{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFwAAAGQCAYAAAAjsgcjAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA2ZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOk9yaWdpbmFsRG9jdW1lbnRJRD0ieG1wLmRpZDpGNzRGRjc2NzEwNERFMjExQTc0M0U0NzZGQkE0MTM5RSIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDpEMEQwOUQ1MjZBNEUxMUUyQjJGNkY3NDBEMEE5NDY5NyIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDpEMEQwOUQ1MTZBNEUxMUUyQjJGNkY3NDBEMEE5NDY5NyIgeG1wOkNyZWF0b3JUb29sPSJBZG9iZSBQaG90b3Nob3AgQ1M2IChXaW5kb3dzKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjE0QzZBQjVDNEU2QUUyMTE5NTdDREVCQjFFNDc0RjQzIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkY3NEZGNzY3MTA0REUyMTFBNzQzRTQ3NkZCQTQxMzlFIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+z3OoagAAHXpJREFUeNrsnQl4VEW2gG93J510OkASQzCQjMQl8IZN1iCjAREHCMoDGRHQECBsEuAhIomCTxAElcGERRg/UBwdBgOouMAoH09kcWNk2CKrGBCSEEIWyNadrd+pTlVSuXQnfZf0lnO+r75Op/v2vffv06fOOVV1SmOxWAQU54kGgSNwBI6CwBE4CgJH4CgIHIEjcBQEjsBRELhHA9doNEhKJHIV1Z2As5No6d/i1uB+bbQa7jUE3ghg8qijoHWiphV9AYIIMGnVosb+Z2nOL8BlwKWeWFP7YRoOsA80X66x5zx0e8AZ5EpoVfSxknvOvgCLxc6FylUmtwfOgWYg9bT50eZLn1uht2rVynfRokV/jI2N7REeHn5vmzZtIgwGQ1tfX1+jTqczks+srq4uraqqKqmoqMgtLS29XFhYeCYjI+PHefPmHc3Ozi6Dt1TQxr6Y28A7G7j1wKaakuM5bSaa6w+tNbS20CKhRUPrBq0PtAHt27cfkp6evuzy5cuHKysrSywyBb6E4oKCgq+PHz8+e/DgwR3oOf3pNWiZoinx0BzhZpNHcwLnNJpobiC0UAq6E7T7ofWHNnDAgAFjfvjhh61ms/mWRWUB+IWg7WlLliyJptegp9ek8SrgFDbRKAO0IGjtod0LrQe0B6A9HBUVNRJAp4M2l1uaWYj5uXbt2prp06eH02vyseH9eCZw+rMltthItfouaF2g9YM2CNrQjRs3vlpSUnLd4mSBLzfr7NmzT9Nr86XX6pnAOXvNTEgYtHuo+fgTtCHQAY4+duzYbouLpaio6F3Q9lB6rVqp2u4uwBnsVtDupJ1ib2KnoQ175JFHEvLz83+zuIlAn3Fq586d93HQPQe4CHY47Rj7ElsNLe7pp5+eBSYk1+JmAp3q799//31PqdBdCpz+HH2pGbmTwib2ejC0EU899dQsk8lUZHFTgQ4178cff+xO70Hj1sA5b8RIbXY01Wwr7KFDh04Fzc6zuLmApl/dt29ftKPei6uAMz/bQL2Re6jNJmZkRNu2bce6k81uSiBiPQlRahDz05sDuNLQnrl/BHgbaMH0kWi7H3gjM++///4/e1LatbS09N3AwMAkmo+pUTu01yq4Ng0HPIDabyOFr1+3bl1/T4NNxGg0JkKANEqOq+gM4Cw/YuRhd+rUKXDKlCnTPHVwAUxh2gcffBDiTsD5HImBangAzfr5bN68+YmAgIBgTwWu1WrvHDVq1EJHbLmzgfuJYPv26tWrTf/+/YcrvTAIwctccSwTsOMz9+7d29ZdgLMgx59quD8bNEhNTR3h4+PjL/eCCgsLBQiSTF27dtXPmDHjkpTOibw3KSnpUrdu3bTg+5sKCgrk20uNxgiKM1eh2VUFuNic+NO/fYKDg31jYmL+rAR2YmJixblz5/zBJPkcPny4Y0JCQpYj0Ml7Zs6cefWnn37qCMf6nz592n/cuHEVSqBDBzpt4cKFBjW1XC5wH26kho3WaJcuXdrLz8+vlVzY06ZNq8jNzdX7+/sLBoPB2v7zn/90mDhx4tXGoFPN/v3UqVMR7FjymJOTowdNlw0dbHnI/Pnz41wNXMcNh/lxCX3t8OHDY+VeCAQc5QSQr6+voNfrrQ2+PCs88Ocj7EFnZuTEiRN/YMexzyCPV65c0U+ePNkk97qCgoIm0PtzCXDe92awrcNWoaGh+o4dO/aQcxFkXPKXX37RkQALtMradDqdFRiDfvz48YhJkyY1gE7+nj17dubJkyc7kveSY9jx7LPIIxyrgShSVkcKX9wjS5YsUc2saGXab36U3ardYA7uldtZwnGBAwcOvFhTUyOQZg2BARQBCK9ZoRMTQaCDjb/CwuQ5c+ZkgmZHkfeQ95JjyP/Z51RXV1sfhwwZkgPgAuR2nvAL6asWcKmhvY66gME0dxJMI0zfL7/88r9HjBjxjJIhKwB4AczHfUxbGUACDn4F1kb+7tu37yWAXAP2/W4xbPI6uIWC2WwWysvLBYh2r/7jH/+IUDK35ubNm4vAtLwh1I78OzW05zXchwsMNJGRkeGKvnkAsm7duvvAjz/PwPKazswL0XToHDuCZt9t74sB8yGYTCYBPksxbCJwnmhXmxS+WYFDONxB8c8NwKxduza6Z8+eNqETbSaQSbOl2eQYptnwGVchPI9QY9YYnKuTK4FrBRtT0Fq3bh2qio0TQSc2mP2fdYh8x0iEvIfBJpqtJmwKPMLVwMX
z/khvblTLdWLQ+/Tpc451ovxrPEjWSTJTojZses4gVwK3OaMVfuYGNUNgAiw1NbVTv379ztobCGH/Zx5J7969r/7973+PUHvyKXxeoKuA24Pu1RPIyXQWVyevbAUv5SrfpABh9fkjR450FpsRsXlhgdLRo0dvC45UupZSVwIXT4S3/g86rFI1Yc+dO/fXf//739F858ibER46eQ/xWkg4D755RFO5FxnXU+Iq4LZgW+XWrVs31IINAdBvEACRyNUKk+8c+cagMuDMT6e5lyy1oJPpcYJKE/vlAGcT4dkKBOuFXL9+PVst2BDC323PzyZRJGn2/HQu4dVBLehwvvOuBs7Dtl7I1atXs5XCJokoANVouE787B49elyAkP08+Z8t6MS0EE0nqd34+HjF0OG8F1wJXLymxgr98OHDvyq5kAULFpwH2FH2wnUWQXbv3j2TpADS0tLsRqR8ihf6gQ5Tp069rOTacnNzj7gaOL+Gxgp806ZNF+HmZeWdSXr2wIEDHfkIUgybRpC/b968OYp5J/bSAHyKlzzu27fvTrnpWeKhvPbaa0ddDbxKDD0/P78iMzPzpNz0bJcuXarEqVVR1u/Kli1b/sB7LBz0C8y88B0qe4RjLXLTs2VlZd9u27atzB00nLW6lWJ79uw5JPdCwEQEhIeHmwk4EqKTJkpERdrzxQH6fQD1IjuOfQZ5jIyMrIAvSvagdlZWVrrQcBmisqhVxlQ3trIhhDaSEyfa4xscHOyXk5PzN/AUAuVcDBnXnDJlivnKlSt+RFsJNJJidSQ3QgeRL0Hw05FoNoENX2DFRx99pA8JCZEFB66hMC4urvPevXsLqXI1OJ9s70DiZE4tBU7SsWQFGpls/xi0J6CN/fbbbz9RMqGyoKDAMn78+LLo6OjyadOm/QbwHD6WvHfWrFkXO3XqVP7kk0+WgZlTNLnz2rVrafRetXK4qTV7lqgaGc8k6ViSmCcr0YZCI/Px/gI/7alqLJIC7S5xxbH8IqwVK1ZE03vVqAVc7uxZH2pGgjizYqQXpzt48OCEhx566L89OWEF2r0BTNKLpN+kzoGghkmRm7winSRZ4VtOm4nvQBMTE3dB717oqbDBzbz+6quvrqb3WKPmZ8sFzrwVM9WAMvq3dU71hQsXysAzeM9TgZ88eXLxxo0bb6jpnSjxUvgviy0PDBbqJ+MbqCejO3Xq1LNdu3Yd5Emw8/LyPg4LC5sJf5Y0puGumJBvobaNmJNS2lhBAatpGTZs2BbwFC57Cmzw+c8lJCQk03uqEpqh9IdS4DXUjJRRjSilNt0KHYKGsnHjxr1RWlpa4O6wwbPKffPNNyf961//yhfql5uoX2ulORdVQRsN7ckxY8b8j8lkuumui6kAdsGqVasG03swCM24qErtZYPtBNGyQQr9LxMmTHiuuLj4urvBBjOS/dZbbw2j124U3HzZoLiYAVvy3WBhLIMeGxs748aNG5nuAhsU4CxEpg/Sa24lSCh24Kplg3VfCLyHFaHxo55La9oC6c/UOq0ZPAD/PXv2xPfu3XuoK212ZmbmjpEjRy7PyMjIo/2PmXaUNY4Cd7ZbeNuJRdCNFHgrEXTr9ObU1NS+06ZNm2I0GkOdCRr6klzoGN944okndsPTW7SjlwTbnYBrOJuup+F/K9rYskK2YkIXERHhv23btjFkEZaSdUEOZv7Kz5w5s2PixIl/O3bs2HX4VzHnxjIX0OJRwEWazq8DMgr1C2cDOOhWbScr39LS0kbExMQM1ev1gWqCBg+kGMzGrpSUlA/37t2bbct9lRO+u11VN6rtbGozW17IgJPGFmPVTeoPCQnRk3VCcXFxAyMjI7vLnT5XVVVVlp2dfezAgQNfJScnH8zJyblJtblUlIaQHbq7ZRk9Cp3XdrbMMECoXwHHL12pmwJtMBh0SUlJ9w4ePPiPUVFRd8GXER4YGBgCv4BWYH78ampqqomZIBOQwLUrKiwszIZA67dDhw6dAJ86o6SkhCXVyilk9rxOq4mn4nV1C0WFIZlt9+Pg+3HQfcXgOTdNXCjSIjScRcDGWSuoBps5yGbOVjcoHOlxwKWcS7BfKFLPNR66o6VQedgVHPTbCkWqFa57Uu1ZJaVQLaL0MAPeZClUtfMinlpdmdf65ij222yVlrGctZ1JpgjcSwSBexNwFBXtJgJH4AgcBYEjcBQEjsBREDgCR+AoCByBoyBwBI6CwBE4AkdB4AgcBYEjcBQEjsAROAoCR+AoCByBoyBwBI7AURA4AkdB4AgcRSFwXGB1u0hhiMBbOHB7C2JtbmWDwJWD5pd/swIHRPgiBqoXKvAo4Eo7XVHZJtJYDRU/ob6kHatzKy7FUVeGQ8H5Ww5wEWwCmNRPCeSanr5OAJMiM8VCfa0qa2EwUqfdk4BrXfUztAGbACa1yO8cN25czOnTp/9qMpm+qKqq+jonJ+f9jRs3joHXwoTamuWsoLBW42m9uOTKkgqPZ4V4hPrqzKTiW3tof4Q2cN68eS8C6FJbhR337t37AbznAWh30y+HHK+VW7RRrRhE9cqcagK3ATucwn541qxZiwB2eSN7PNTMnDlzGry3q1BbujQAgcuEPX369EXl5eWlTZUvPXTo0Mfw/j7QyLa5Rk8DrnWxzSZF3tslJib2T0tLe9Hf37/JzY2Cg4PbCg1LM3mUaF0NOyEhoe/atWsXGQwGh/ZUvnnzZr5QX35JQOBNwzZS2GHPPPNMnw0bNrwcEBDg6AbWlo8//ni/IKNWbIvwUkQ2m2j1ndRmDxo7duzzxcXFt6SUnz548CAp0BsL7V7qpfhhp9k47P8irt+YMWPm37p1S1LF/J9//vmQTqcbDsd3p25kIItEWzxwG7DbMdijRo2aB3a4SCLsH/z8/EYKtdsd/EGo3U1FsQ/uFcA52L4c7M7EFDz++ONzAXahFNjHjh07AjaebDtGKu5HUVPiTz0VTYsG3hjs4cOHzy4qKiqQAvvkyZNHwXsZTWGz6NKghinxFuAMtpGH/eijjyYVFhbekAI7IyPjWGBgIMmfxFDYd6gN26OBi2CTJBPZeCN2yJAhzxYUFORJgX3mzJkTISEhT4pgB6gN22OB24H90KBBg2beuHHjukTYZ4KCguKF2n06iQvZQajfzZDVHG/Oxld79hEa1r9VDNxHhcCGL8hupOnTsAEDBnTesWPH0jvuuKOto591+vTpiw8++ODbYOvJZhiV9Cb19GX/RkZ5NA7+r1FF5UaX+L2fq4SGG24rCraUAteKtNsKu1+/fp127dq1LDQ0NMzRDzp16tTVhx9+eBvY+kp6XQZ6g/4OhPIaBaDF0PkS2SausVEmQQl0JcBZyO5L4ZAtZEJjYmKiv/jii1fbtm3bztEPAm8kb+DAgV+DZrPtf6vpT9ssNF3/2xZsjQLgTLNJdf1SbpSpRKjfN1QjdzxVyRCbltNEEoi069OnT+fdu3evDAsL6yDlM00mU1V1dXUNfDbZk4FtFSD5hnx9ffU6spm9QiH7S0AknH/ixIkDCQkJWy5fvpwF/yYb+N0S6ncirJE1zKig09RSz4GYjS
5RUVEjs7Ozf7N4meTk5Fy45557htNIOYwNerjCS/GhZoSE2v3279+/w+KlQgc9+tJ7bS02xc4agOCjSkPPnj1jBS+V7t27x/ID10o6ZqX58LpdS8geO94KHO4tWKifmKRolEkp8Do3qqysrMhbgZeUlBSKfHGXAGewiX9qAtfuO28FnpGRcZhzCZVt3auCl0L87a4gT+Tl5V31tg7zxo0bl+HeHqNpBpd6KWyAgUSXdxFPZdCgQYn5+fm5ngqXzHspLy8vIxORbt68ef3777/f1atXr8fpKFME9VD0SoArCXz4UZ26geHBgwd327lz57Lg4GCHNybNysoqgS/r819//fUqPCWj8jdplFehht2UYCL5nczZ9vElQv1GpyzEb2BWnDmZU5y4sk59GDJkSJft27cT6Hc4+rlXrlwpgOO2nD9/PhOekm1zi+hNVnI5DIsToNdwCSs+l1LJ2fAaGxlTpwAXQ2fzTcKGDh3aPT09fWmbNm1CJEC/NnDgwLcyMzN/46CXCAr3vpSQKZSVLXRWaG8rAGowrBYXFze3qKhI0hjm77//fjkqKipRqN0xvBu0SDq0FijUb3zqsflwtQYg7I3Sx8oZpb906dKFyMjI8ULtTNl7SBaSG/FRPFLvLUNsduehjB49+rlbt25Jgn7hwoUzISEhY+H4/hR6swyzecuo/W3QyUwrgC5pptWZM2dOAvS/cGObbNReJ+CofdPT28aPH/9CMYiCqRI4L8XBeeB10OPj4xeWlpaWSIF+9OjRH3Q6Hc68auxkQiOT7ydPnvyiVOhHjhw54OPjM4x6LuHU78e5hY5Cnzp16kuOrHjg5ZtvvtlFpl/QThTX+EiEPnjGjBmLTSZTmZRcRxIIzW3gGh+Jq9a6EOjA738bW0hlQ8s/EmrX+HQQcI2P3QtiiaEqOvWBhOskqZ/79ttvH05OTn7TbDabHPmsoKCgEEeiP3cVH2ediECneRiWFKpLDaxZs+YAeCGalStXLtTr9X6NfU5eXl4ON03B84q9uHidZmtqXsi6y8EpKSmvVlRUmO2Zk+rq6sqnnnpqIjVHYTQIQhsuEzqBOOjll19eSsyLrT5z165dm2jUGUWzknp0C5VBJ2mAAZMmTZp57ty5H8nIS2VlZUVWVtaZ1NTUV8hr0KK5oS6P88PdtZpEK+qBsGoSpKMtpyMvxfRvj6wm4a71Uvy5fLSGdpIVAtZLUU3EFYHYpBusCIQ1r7wLuEcKAkfgCFw2cBSFnRQCR+AIHAWBI3AUBI7AURA4AkfgKAgcgaMgcASOgsAROAJHQeAIHAWBI3AUBI7AETgKAkfgKAgcgaMgcASOwFEQOAJHQeAIHAWBI3AEjoLAETgKAkfgKMqB41r72wWLGyDw5vl1co9NFbSxIHDlsOv2gRPqSzaxSqOsTBMr2VTTnOA9qsiYZNINi5KRQmRsoww/Cp0vuUoa23DUZlGyFlfVTQZwtlMtAc22Bm5F/9ZT4KTqG9tel9S5rSu7J4bubOBawYOEanfdHp4C3fR68+bNY69du/Z+VVXV1yaT6Ytffvnlzfj4eFLBk1Tmv4N+If70i9JoXGkXXVWZU2YVTVbJk1RYJptwPLB///6PbNWpNZvNZSkpKaSa58NCbZnV9kLDvdQ0LaoUqgLgbIfDbnPmzHmWFHG3VxwYoJuTk5OXCfWbMN0GHYE3DZyUSCU7//X57rvvPm2qwDup1gzQX4P3P9Jc0KXcg4/gecLsuJYWb29UyNbry5Yte4HUJ1+xYsU3opeZByMITio86YnA2QZ0NXl5ebmOHADQfZcsWbJAC7J8+fJ9osDJudA90KQYmA2fMGFCIngmVY7uHwHmpWLx4sWvw7FDqHnpoIZ58XYbTuAEUy/lTzt27HiXlLSWAv2VV155g0LvrgZ0bwbOIkwj1XKyPc2Q7du3b5UKfenSpW/CsY+qAd1rgYu0vDX1VnpAG5aenv5PKdBJ5X2A/lc1oHs7cOal+NFIk+zN1hPa8G3btkmFXgkejGLoXg2cg66j4TqD3otA/yeIFOjQ5yqG7vXAG4Fu1fStIFKhg7u4moPOtlT3cwS6qzcwrQtMhPotBtRu/NYFehrukyQV2W6GbBs24sMPP/xnI1G/LehVEBilwrF/lgrdqZEml5/WiCBrnZiNZF9ENU3FFsfHx+8CZ0Q/efLkMY4kByES1S1cuHAOxEaalJSUPdxLxfSxUo3gSClwfuTFh6ZN9fTRtxmgW5oI95nGE6lKTEz8v/LycsOsWbPiHIW+YMGC2SQN8MILL+wWndeiRkSqBnAG2p/6xwZuMMBXxaycRQJ0A70e3ezZs38uLS0NBICxjkJ/7rnnksgvF+B/Qf9dI0oDuAQ4f4MB1N61CQsLC503b16/yMjICPh56sGOkr3SNArNlqWsrKzSZDJVsOd2bKn1F1ddXa0j5sRsNhugBeTm5lYfOXIkNyYmpp1D9gkE7mEWXL9l/vz5nwn1m+0pHx9V0GkyX5gNBvRbvXr1yyUlJYUWLxHSka5bty5NqN2l9l56r35iM+ksL0VHTQjJL3dftGjRPLIboMXLhEB//vnnn4N7vJ/eq1Go39DJqcB9qBkhrtiAS5cuHbd4qVy8ePGYULvxXhS9Zx+5wLUKbThzx/Tt27fvLHipREREdKamhM1/kd0nOcNl83hRc5RfqxAw67krsrOzz3krcLi3C0L9DoeKtgZWCpzNcCp9//33P1C6t6VbjufBPcG9bYU/ywRuMpErhti0NLhgbmHMpk2bVkJkV+wtnSXEENXvvffeBri3WGj30XyNvxK3UMlUN9Zp+lFXicxuCrrrrrvCkpKS+oaHh7dXK/CRIuR8JPCprKz0haDHnwY/xsmTJ3fr0KGDUYpmb968+d3p06dvh6dkp/F8oXba3G1a7sy5hVoutDfQiDOAC+19BOftP8/ndMj529BfX1h6evpjY8eO7SYF9jvvvPPus88+uxOeXqOwi7nQvkas4c7KpVi4b5tNoiwTwda6CDb55VV/8skng0aPHi0J9oYNGzbPnj37E3hKpmEU2tNsVySv2MlZjqGSXpiz0rMaG0krdk8+n3322UiQB6TAXr9+/aa5c+cy2AVC7cbXJnpvyueYe+gAhE40AEFscyjL6UAb+SWI1A5y7dq1G+gABBmYjuR+KaoNQHjLEFswN64Z9/nnn++SAhs62eo1a9bIgt3SB5HjPv30050SYVelpaW9zcGOkAK7JU+TiNuxY8d2qZnA1NTU9UpgtwTg/ESgDkL9RKBtUmGvXr16nWjQWDJsrwbODemxSflkqtsjAPtDidMiVIPdEoDzkzkHbN269R2psFetWrVGLdjeDpxNVw6D1hUCmonAr0IK7Ndffz1NTdgtAbiR2u7eX3311QfOmOijJnCPWjYoyuHo2oE48mbi+oFmr3nppZf20NwIiyDNqkWQTgrtXSUkjVBVWFiY7yDstYsXL/6KhuviRJTTYKse2jvZhndJSEiYTKLExqYjL1++fJWg4mqHluylkBH0mN27d2+x5aWQ5YIrV65cIdSv0bwNNi6MleaHEy2PJ
q7h+vXrl2ZlZZ0lqxrMZnPp2bNnf5o6dWoSvPYn6qvbXIXsCuCeXNzAj4JvRVuAcHtxgxKhvrhBnc12ZXEDbyjf4SfUl/BgM6IqKXQT54l4R/kOF0nLLVDjYug8fCzB1FIEgSNwBC4bOIrCjgeBI3AEjoLAETgKAkfgKAgcgSNwFASOwFEQOAJHQeAIHIGjIHAEjoLAETgKAkfgCBwFgSNwFASOwFEQOAJH4CgIHIGjIHAEjoLAETgCR0HgCBwFgSNwFASOwBE4CgJH4CgIHIGjIHAEjsBRPB24RqMZQf/sKvHQNxp7sanrhvMmSzxfBv3c3c3JQ4s651xxRoX8rlRzXnfwF5Gi5sllnBc1HDW8eSXDzT4HNRw1XJ73ktKYzW/Kq3G0Bq698zhq21HDUcOd40U44ZeFGu6NgsARONpwp9pOeN9jjngpjvrh0Ed86U62HDXcCzXcXtYvuTE/HDTTX+EvqmsT3tEbqOHYaaIgcASOgsAROAoCR+AIHMVDI01Z80OaihQd/ZxGItFkV0SgqOFeqOFMc14XaZi9scYv3el6UMOx00RB4GjDG/UWUlDDUbxPw+3NR+E0XpX54a6aYYUajsAROIqX2PAMid5IhoefFzXcnQSXDSJwBI6CwBE4CgJH4CgIHIEjcBQEjsBREDgCR0HgCByBoyBwBI6CwBE4CgJH4AgcBYEjcBQEjsBREDgCR+AoCByBoyBw95f/F2AAPX2XGJHD060AAAAASUVORK5CYII=);background-size:46px auto}.fancybox-light a.fancybox-expand,.fancybox-light a.fancybox-nav span{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFwAAAGQCAYAAAAjsgcjAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA2ZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOk9yaWdpbmFsRG9jdW1lbnRJRD0ieG1wLmRpZDpGNzRGRjc2NzEwNERFMjExQTc0M0U0NzZGQkE0MTM5RSIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDpEMEQwOUQ1MjZBNEUxMUUyQjJGNkY3NDBEMEE5NDY5NyIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDpEMEQwOUQ1MTZBNEUxMUUyQjJGNkY3NDBEMEE5NDY5NyIgeG1wOkNyZWF0b3JUb29sPSJBZG9iZSBQaG90b3Nob3AgQ1M2IChXaW5kb3dzKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjE0QzZBQjVDNEU2QUUyMTE5NTdDREVCQjFFNDc0RjQzIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkY3NEZGNzY3MTA0REUyMTFBNzQzRTQ3NkZCQTQxMzlFIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+z3OoagAAHXpJREFUeNrsnQl4VEW2gG93J510OkASQzCQjMQl8IZN1iCjAREHCMoDGRHQECBsEuAhIomCTxAElcGERRg/UBwdBgOouMAoH09kcWNk2CKrGBCSEEIWyNadrd+pTlVSuXQnfZf0lnO+r75Op/v2vffv06fOOVV1SmOxWAQU54kGgSNwBI6CwBE4CgJH4CgIHIEjcBQEjsBRELhHA9doNEhKJHIV1Z2As5No6d/i1uB+bbQa7jUE3ghg8qijoHWiphV9AYIIMGnVosb+Z2nOL8BlwKWeWFP7YRoOsA80X66x5zx0e8AZ5EpoVfSxknvOvgCLxc6FylUmtwfOgWYg9bT50eZLn1uht2rVynfRokV/jI2N7REeHn5vmzZtIgwGQ1tfX1+jTqczks+srq4uraqqKqmoqMgtLS29XFhYeCYjI+PHefPmHc3Ozi6Dt1TQxr6Y28A7G7j1wKaakuM5bSaa6w+tNbS20CKhRUPrBq0PtAHt27cfkp6evuzy5cuHKysrSywyBb6E4oKCgq+PHz8+e/DgwR3oOf3pNWiZoinx0BzhZpNHcwLnNJpobiC0UAq6E7T7ofWHNnDAgAFjfvjhh61ms/mWRWUB+IWg7WlLliyJptegp9ek8SrgFDbRKAO0IGjtod0LrQe0B6A9HBUVNRJAp4M2l1uaWYj5uXbt2prp06eH02vyseH9eCZw+rMltthItfouaF2g9YM2CNrQjRs3vlpSUnLd4mSBLzfr7NmzT9Nr86XX6pnAOXvNTEgYtHuo+fgTtCHQAY4+duzYbouLpaio6F3Q9lB6rVqp2u4uwBnsVtDupJ1ib2KnoQ175JFHEvLz83+zuIlAn3Fq586d93HQPQe4CHY47Rj7ElsNLe7pp5+eBSYk1+JmAp3q799//31PqdBdCpz+HH2pGbmTwib2ejC0EU899dQsk8lUZHFTgQ4178cff+xO70Hj1sA5b8RIbXY01Wwr7KFDh04Fzc6zuLmApl/dt29ftKPei6uAMz/bQL2Re6jNJmZkRNu2bce6k81uSiBiPQlRahDz05sDuNLQnrl/BHgbaMH0kWi7H3gjM++///4/e1LatbS09N3AwMAkmo+pUTu01yq4Ng0HPIDabyOFr1+3bl1/T4NNxGg0JkKANEqOq+gM4Cw/YuRhd+rUKXDKlCnTPHVwAUxh2gcffBDiTsD5HImBangAzfr5bN68+YmAgIBgTwWu1WrvHDVq1EJHbLmzgfuJYPv26tWrTf/+/YcrvTAIwctccSwTsOMz9+7d29ZdgLMgx59quD8bNEhNTR3h4+PjL/eCCgsLBQiSTF27dtXPmDHjkpTOibw3KSnpUrdu3bTg+5sKCgrk20uNxgiKM1eh2VUFuNic+NO
/fYKDg31jYmL+rAR2YmJixblz5/zBJPkcPny4Y0JCQpYj0Ml7Zs6cefWnn37qCMf6nz592n/cuHEVSqBDBzpt4cKFBjW1XC5wH26kho3WaJcuXdrLz8+vlVzY06ZNq8jNzdX7+/sLBoPB2v7zn/90mDhx4tXGoFPN/v3UqVMR7FjymJOTowdNlw0dbHnI/Pnz41wNXMcNh/lxCX3t8OHDY+VeCAQc5QSQr6+voNfrrQ2+PCs88Ocj7EFnZuTEiRN/YMexzyCPV65c0U+ePNkk97qCgoIm0PtzCXDe92awrcNWoaGh+o4dO/aQcxFkXPKXX37RkQALtMradDqdFRiDfvz48YhJkyY1gE7+nj17dubJkyc7kveSY9jx7LPIIxyrgShSVkcKX9wjS5YsUc2saGXab36U3ardYA7uldtZwnGBAwcOvFhTUyOQZg2BARQBCK9ZoRMTQaCDjb/CwuQ5c+ZkgmZHkfeQ95JjyP/Z51RXV1sfhwwZkgPgAuR2nvAL6asWcKmhvY66gME0dxJMI0zfL7/88r9HjBjxjJIhKwB4AczHfUxbGUACDn4F1kb+7tu37yWAXAP2/W4xbPI6uIWC2WwWysvLBYh2r/7jH/+IUDK35ubNm4vAtLwh1I78OzW05zXchwsMNJGRkeGKvnkAsm7duvvAjz/PwPKazswL0XToHDuCZt9t74sB8yGYTCYBPksxbCJwnmhXmxS+WYFDONxB8c8NwKxduza6Z8+eNqETbSaQSbOl2eQYptnwGVchPI9QY9YYnKuTK4FrBRtT0Fq3bh2qio0TQSc2mP2fdYh8x0iEvIfBJpqtJmwKPMLVwMXz/khvblTLdWLQ+/Tpc451ovxrPEjWSTJTojZses4gVwK3OaMVfuYGNUNgAiw1NbVTv379ztobCGH/Zx5J7969r/7973+PUHvyKXxeoKuA24Pu1RPIyXQWVyevbAUv5SrfpABh9fkjR450FpsRsXlhgdLRo0dvC45UupZSVwIXT4S3/g86rFI1Yc+dO/fXf//739F858ibER46eQ/xWkg4D755RFO5FxnXU+Iq4LZgW+XWrVs31IINAdBvEACRyNUKk+8c+cagMuDMT6e5lyy1oJPpcYJKE/vlAGcT4dkKBOuFXL9+PVst2BDC323PzyZRJGn2/HQu4dVBLehwvvOuBs7Dtl7I1atXs5XCJokoANVouE787B49elyAkP08+Z8t6MS0EE0nqd34+HjF0OG8F1wJXLymxgr98OHDvyq5kAULFpwH2FH2wnUWQXbv3j2TpADS0tLsRqR8ihf6gQ5Tp069rOTacnNzj7gaOL+Gxgp806ZNF+HmZeWdSXr2wIEDHfkIUgybRpC/b968OYp5J/bSAHyKlzzu27fvTrnpWeKhvPbaa0ddDbxKDD0/P78iMzPzpNz0bJcuXarEqVVR1u/Kli1b/sB7LBz0C8y88B0qe4RjLXLTs2VlZd9u27atzB00nLW6lWJ79uw5JPdCwEQEhIeHmwk4EqKTJkpERdrzxQH6fQD1IjuOfQZ5jIyMrIAvSvagdlZWVrrQcBmisqhVxlQ3trIhhDaSEyfa4xscHOyXk5PzN/AUAuVcDBnXnDJlivnKlSt+RFsJNJJidSQ3QgeRL0Hw05FoNoENX2DFRx99pA8JCZEFB66hMC4urvPevXsLqXI1OJ9s70DiZE4tBU7SsWQFGpls/xi0J6CN/fbbbz9RMqGyoKDAMn78+LLo6OjyadOm/QbwHD6WvHfWrFkXO3XqVP7kk0+WgZlTNLnz2rVrafRetXK4qTV7lqgaGc8k6ViSmCcr0YZCI/Px/gI/7alqLJIC7S5xxbH8IqwVK1ZE03vVqAVc7uxZH2pGgjizYqQXpzt48OCEhx566L89OWEF2r0BTNKLpN+kzoGghkmRm7winSRZ4VtOm4nvQBMTE3dB717oqbDBzbz+6quvrqb3WKPmZ8sFzrwVM9WAMvq3dU71hQsXysAzeM9TgZ88eXLxxo0bb6jpnSjxUvgviy0PDBbqJ+MbqCejO3Xq1LNdu3Yd5Emw8/LyPg4LC5sJf5Y0puGumJBvobaNmJNS2lhBAatpGTZs2BbwFC57Cmzw+c8lJCQk03uqEpqh9IdS4DXUjJRRjSilNt0KHYKGsnHjxr1RWlpa4O6wwbPKffPNNyf961//yhfql5uoX2ulORdVQRsN7ckxY8b8j8lkuumui6kAdsGqVasG03swCM24qErtZYPtBNGyQQr9LxMmTHiuuLj4urvBBjOS/dZbbw2j124U3HzZoLiYAVvy3WBhLIMeGxs748aNG5nuAhsU4CxEpg/Sa24lSCh24Kplg3VfCLyHFaHxo55La9oC6c/UOq0ZPAD/PXv2xPfu3XuoK212ZmbmjpEjRy7PyMjIo/2PmXaUNY4Cd7ZbeNuJRdCNFHgrEXTr9ObU1NS+06ZNm2I0GkOdCRr6klzoGN944okndsPTW7SjlwTbnYBrOJuup+F/K9rYskK2YkIXERHhv23btjFkEZaSdUEOZv7Kz5w5s2PixIl/O3bs2HX4VzHnxjIX0OJRwEWazq8DMgr1C2cDOOhWbScr39LS0kbExMQM1ev1gWqCBg+kGMzGrpSUlA/37t2bbct9lRO+u11VN6rtbGozW17IgJPGFmPVTeoPCQnRk3VCcXFxAyMjI7vLnT5XVVVVlp2dfezAgQNfJScnH8zJyblJtblUlIaQHbq7ZRk9Cp3XdrbMMECoXwHHL12pmwJtMBh0SUlJ9w4ePPiPUVFRd8GXER4YGBgCv4BWYH78ampqqomZIBOQwLUrKiwszIZA67dDhw6dAJ86o6SkhCXVyilk9rxOq4mn4nV1C0WFIZlt9+Pg+3HQfcXgOTdNXCjSIjScRcDGWSuoBps5yGbOVjcoHOlxwKWcS7BfKFLPNR66o6VQedgVHPTbCkWqFa57Uu1ZJaVQLaL0MAPeZClUtfMinlpdmdf65ij222yVlrGctZ1JpgjcSwSBexNwFBXtJgJH4AgcBYEjcBQEjsBREDgCR+AoCByBoyBwBI6CwBE4AkdB4AgcBYEjcBQEjsAROAoCR+AoCByBoyBwBI7AURA4AkdB4AgcRSFwXGB1u0hhiMBbOHB7C2JtbmWDwJWD5pd/swIHRPgiBqoXKvAo4Eo7XVHZJtJYDRU/ob6kHatzKy7FUVeGQ8H5Ww5wEWwCmNRPCeSanr5OAJMiM8VCfa0qa2EwUqfdk4BrXfUztAGbACa1yO8cN25czOnTp/9qMpm+qKqq+jonJ+f9jRs3joHXwoTamuWsoLBW42m9uOTKkgqPZ4V4hPrqzKTiW3tof4Q2cN68eS8C6FJbhR337t37AbznAWh30y+HHK+VW7RRrRhE9cqcagK3ATucwn541qxZiwB2eSN7PNTMnDlzGry3q1BbujQAgcuEPX369EXl5eWlTZUvPXTo0Mfw/j7QyLa5Rk8DrnWxzSZF3tslJib2T0tLe9Hf37/JzY2Cg4PbCg1LM3mUaF0NOyEhoe/atWsXGQwGh/ZUvnnzZr5QX35JQOBNwz
ZS2GHPPPNMnw0bNrwcEBDg6AbWlo8//ni/IKNWbIvwUkQ2m2j1ndRmDxo7duzzxcXFt6SUnz548CAp0BsL7V7qpfhhp9k47P8irt+YMWPm37p1S1LF/J9//vmQTqcbDsd3p25kIItEWzxwG7DbMdijRo2aB3a4SCLsH/z8/EYKtdsd/EGo3U1FsQ/uFcA52L4c7M7EFDz++ONzAXahFNjHjh07AjaebDtGKu5HUVPiTz0VTYsG3hjs4cOHzy4qKiqQAvvkyZNHwXsZTWGz6NKghinxFuAMtpGH/eijjyYVFhbekAI7IyPjWGBgIMmfxFDYd6gN26OBi2CTJBPZeCN2yJAhzxYUFORJgX3mzJkTISEhT4pgB6gN22OB24H90KBBg2beuHHjukTYZ4KCguKF2n06iQvZQajfzZDVHG/Oxld79hEa1r9VDNxHhcCGL8hupOnTsAEDBnTesWPH0jvuuKOto591+vTpiw8++ODbYOvJZhiV9Cb19GX/RkZ5NA7+r1FF5UaX+L2fq4SGG24rCraUAteKtNsKu1+/fp127dq1LDQ0NMzRDzp16tTVhx9+eBvY+kp6XQZ6g/4OhPIaBaDF0PkS2SausVEmQQl0JcBZyO5L4ZAtZEJjYmKiv/jii1fbtm3bztEPAm8kb+DAgV+DZrPtf6vpT9ssNF3/2xZsjQLgTLNJdf1SbpSpRKjfN1QjdzxVyRCbltNEEoi069OnT+fdu3evDAsL6yDlM00mU1V1dXUNfDbZk4FtFSD5hnx9ffU6spm9QiH7S0AknH/ixIkDCQkJWy5fvpwF/yYb+N0S6ncirJE1zKig09RSz4GYjS5RUVEjs7Ozf7N4meTk5Fy45557htNIOYwNerjCS/GhZoSE2v3279+/w+KlQgc9+tJ7bS02xc4agOCjSkPPnj1jBS+V7t27x/ID10o6ZqX58LpdS8geO94KHO4tWKifmKRolEkp8Do3qqysrMhbgZeUlBSKfHGXAGewiX9qAtfuO28FnpGRcZhzCZVt3auCl0L87a4gT+Tl5V31tg7zxo0bl+HeHqNpBpd6KWyAgUSXdxFPZdCgQYn5+fm5ngqXzHspLy8vIxORbt68ef3777/f1atXr8fpKFME9VD0SoArCXz4UZ26geHBgwd327lz57Lg4GCHNybNysoqgS/r819//fUqPCWj8jdplFehht2UYCL5nczZ9vElQv1GpyzEb2BWnDmZU5y4sk59GDJkSJft27cT6Hc4+rlXrlwpgOO2nD9/PhOekm1zi+hNVnI5DIsToNdwCSs+l1LJ2fAaGxlTpwAXQ2fzTcKGDh3aPT09fWmbNm1CJEC/NnDgwLcyMzN/46CXCAr3vpSQKZSVLXRWaG8rAGowrBYXFze3qKhI0hjm77//fjkqKipRqN0xvBu0SDq0FijUb3zqsflwtQYg7I3Sx8oZpb906dKFyMjI8ULtTNl7SBaSG/FRPFLvLUNsduehjB49+rlbt25Jgn7hwoUzISEhY+H4/hR6swyzecuo/W3QyUwrgC5pptWZM2dOAvS/cGObbNReJ+CofdPT28aPH/9CMYiCqRI4L8XBeeB10OPj4xeWlpaWSIF+9OjRH3Q6Hc68auxkQiOT7ydPnvyiVOhHjhw54OPjM4x6LuHU78e5hY5Cnzp16kuOrHjg5ZtvvtlFpl/QThTX+EiEPnjGjBmLTSZTmZRcRxIIzW3gGh+Jq9a6EOjA738bW0hlQ8s/EmrX+HQQcI2P3QtiiaEqOvWBhOskqZ/79ttvH05OTn7TbDabHPmsoKCgEEeiP3cVH2ediECneRiWFKpLDaxZs+YAeCGalStXLtTr9X6NfU5eXl4ON03B84q9uHidZmtqXsi6y8EpKSmvVlRUmO2Zk+rq6sqnnnpqIjVHYTQIQhsuEzqBOOjll19eSsyLrT5z165dm2jUGUWzknp0C5VBJ2mAAZMmTZp57ty5H8nIS2VlZUVWVtaZ1NTUV8hr0KK5oS6P88PdtZpEK+qBsGoSpKMtpyMvxfRvj6wm4a71Uvy5fLSGdpIVAtZLUU3EFYHYpBusCIQ1r7wLuEcKAkfgCFw2cBSFnRQCR+AIHAWBI3AUBI7AURA4AkfgKAgcgaMgcASOgsAROAJHQeAIHAWBI3AUBI7AETgKAkfgKAgcgaMgcASOwFEQOAJHQeAIHAWBI3AEjoLAETgKAkfgKMqB41r72wWLGyDw5vl1co9NFbSxIHDlsOv2gRPqSzaxSqOsTBMr2VTTnOA9qsiYZNINi5KRQmRsoww/Cp0vuUoa23DUZlGyFlfVTQZwtlMtAc22Bm5F/9ZT4KTqG9tel9S5rSu7J4bubOBawYOEanfdHp4C3fR68+bNY69du/Z+VVXV1yaT6Ytffvnlzfj4eFLBk1Tmv4N+If70i9JoXGkXXVWZU2YVTVbJk1RYJptwPLB///6PbNWpNZvNZSkpKaSa58NCbZnV9kLDvdQ0LaoUqgLgbIfDbnPmzHmWFHG3VxwYoJuTk5OXCfWbMN0GHYE3DZyUSCU7//X57rvvPm2qwDup1gzQX4P3P9Jc0KXcg4/gecLsuJYWb29UyNbry5Yte4HUJ1+xYsU3opeZByMITio86YnA2QZ0NXl5ebmOHADQfZcsWbJAC7J8+fJ9osDJudA90KQYmA2fMGFCIngmVY7uHwHmpWLx4sWvw7FDqHnpoIZ58XYbTuAEUy/lTzt27HiXlLSWAv2VV155g0LvrgZ0bwbOIkwj1XKyPc2Q7du3b5UKfenSpW/CsY+qAd1rgYu0vDX1VnpAG5aenv5PKdBJ5X2A/lc1oHs7cOal+NFIk+zN1hPa8G3btkmFXgkejGLoXg2cg66j4TqD3otA/yeIFOjQ5yqG7vXAG4Fu1fStIFKhg7u4moPOtlT3cwS6qzcwrQtMhPotBtRu/NYFehrukyQV2W6GbBs24sMPP/xnI1G/LehVEBilwrF/lgrdqZEml5/WiCBrnZiNZF9ENU3FFsfHx+8CZ0Q/efLkMY4kByES1S1cuHAOxEaalJSUPdxLxfSxUo3gSClwfuTFh6ZN9fTRtxmgW5oI95nGE6lKTEz8v/LycsOsWbPiHIW+YMGC2SQN8MILL+wWndeiRkSqBnAG2p/6xwZuMMBXxaycRQJ0A70e3ezZs38uLS0NBICxjkJ/7rnnksgvF+B/Qf9dI0oDuAQ4f4MB1N61CQsLC503b16/yMjICPh56sGOkr3SNArNlqWsrKzSZDJVsOd2bKn1F1ddXa0j5sRsNhugBeTm5lYfOXIkNyYmpp1D9gkE7mEWXL9l/vz5nwn1m+0pHx9V0GkyX5gNBvRbvXr1yyUlJYUWLxHSka5bty5NqN2l9l56r35iM+ksL0VHTQjJL3dftGjRPLIboMXLhEB//vnnn4N7vJ/eq1Go39DJqcB9qBkhrtiAS5cuHbd4qVy8ePGYULvxXhS9Zx+5wLUKbThzx/Tt27fvLHipREREdKamhM1/kd0nOcNl83hRc5RfqxAw67krsrOzz3krcLi3C0L9DoeKtgZWCpzNcCp9/
/33P1C6t6VbjufBPcG9bYU/ywRuMpErhti0NLhgbmHMpk2bVkJkV+wtnSXEENXvvffeBri3WGj30XyNvxK3UMlUN9Zp+lFXicxuCrrrrrvCkpKS+oaHh7dXK/CRIuR8JPCprKz0haDHnwY/xsmTJ3fr0KGDUYpmb968+d3p06dvh6dkp/F8oXba3G1a7sy5hVoutDfQiDOAC+19BOftP8/ndMj529BfX1h6evpjY8eO7SYF9jvvvPPus88+uxOeXqOwi7nQvkas4c7KpVi4b5tNoiwTwda6CDb55VV/8skng0aPHi0J9oYNGzbPnj37E3hKpmEU2tNsVySv2MlZjqGSXpiz0rMaG0krdk8+n3322UiQB6TAXr9+/aa5c+cy2AVC7cbXJnpvyueYe+gAhE40AEFscyjL6UAb+SWI1A5y7dq1G+gABBmYjuR+KaoNQHjLEFswN64Z9/nnn++SAhs62eo1a9bIgt3SB5HjPv30050SYVelpaW9zcGOkAK7JU+TiNuxY8d2qZnA1NTU9UpgtwTg/ESgDkL9RKBtUmGvXr16nWjQWDJsrwbODemxSflkqtsjAPtDidMiVIPdEoDzkzkHbN269R2psFetWrVGLdjeDpxNVw6D1hUCmonAr0IK7Ndffz1NTdgtAbiR2u7eX3311QfOmOijJnCPWjYoyuHo2oE48mbi+oFmr3nppZf20NwIiyDNqkWQTgrtXSUkjVBVWFiY7yDstYsXL/6KhuviRJTTYKse2jvZhndJSEiYTKLExqYjL1++fJWg4mqHluylkBH0mN27d2+x5aWQ5YIrV65cIdSv0bwNNi6MleaHEy2PJq7h+vXrl2ZlZZ0lqxrMZnPp2bNnf5o6dWoSvPYn6qvbXIXsCuCeXNzAj4JvRVuAcHtxgxKhvrhBnc12ZXEDbyjf4SfUl/BgM6IqKXQT54l4R/kOF0nLLVDjYug8fCzB1FIEgSNwBC4bOIrCjgeBI3AEjoLAETgKAkfgKAgcgSNwFASOwFEQOAJHQeAIHIGjIHAEjoLAETgKAkfgCBwFgSNwFASOwFEQOAJH4CgIHIGjIHAEjoLAETgCR0HgCBwFgSNwFASOwBE4CgJH4CgIHIGjIHAEjsBRPB24RqMZQf/sKvHQNxp7sanrhvMmSzxfBv3c3c3JQ4s651xxRoX8rlRzXnfwF5Gi5sllnBc1HDW8eSXDzT4HNRw1XJ73ktKYzW/Kq3G0Bq698zhq21HDUcOd40U44ZeFGu6NgsARONpwp9pOeN9jjngpjvrh0Ed86U62HDXcCzXcXtYvuTE/HDTTX+EvqmsT3tEbqOHYaaIgcASOgsAROAoCR+AIHMVDI01Z80OaihQd/ZxGItFkV0SgqOFeqOFMc14XaZi9scYv3el6UMOx00RB4GjDG/UWUlDDUbxPw+3NR+E0XpX54a6aYYUajsAROIqX2PAMid5IhoefFzXcnQSXDSJwBI6CwBE4CgJH4CgIHIEjcBQEjsBREDgCR0HgCByBoyBwBI6CwBE4CgJH4AgcBYEjcBQEjsBREDgCR+AoCByBoyBw95f/F2AAPX2XGJHD060AAAAASUVORK5CYII=);background-size:46px auto}}.fancybox-light-overlay{opacity:0.9;filter:alpha(opacity=90);background:#555555;background:-moz-radial-gradient(center, ellipse cover, #999 0%, #555 100%);background:-webkit-gradient(radial, center center, 0px, center center, 100%, color-stop(0%, #999), color-stop(100%, #555));background:-webkit-radial-gradient(center, ellipse cover, #999 0%, #555 100%);background:-o-radial-gradient(center, ellipse cover, #999 0%, #555 100%);background:-ms-radial-gradient(center, ellipse cover, #999 0%, #555 100%)}
</style>
<style data-href="/assets/gulp/print-240f8bfaa7f6402dfd6c49ee3c1ffea57a89ddd4c8c90e2f2a5c7d63c5753e32.css" media="print">
.print_only{display:block}body{overflow:hidden}.print_logo{position:absolute;top:0;left:0}.site_header_area{position:relative}.nav_is_fixed .site_header_area{position:absolute;top:0}.site_header_area .brand1,.site_header_area .brand2{display:none}.site_header_area .brand_area{width:23%}.site_header_area .grace_logo img.grace_logo_white{display:none}.custom_banner_container{height:68px}.custom_banner_container img{display:none}.custom_banner_container .banner_header_overlay{display:none}a[href]:after{content:""}.module{padding:1em 0}#sticky_nav_spacer{display:none}.nav_is_fixed #sticky_nav_spacer{display:block}.main_carousel.module .slick-slider .grid_layout{width:100%}.main_carousel.module .slick-slider .right-col{width:3.75in !important;float:left;margin-right:12px;margin-bottom:1em}.main_carousel.module .slick-slider .left-col{width:3.75in !important;float:left}.definition_teaser{display:none;color:white !important}.double_teaser .column{width:48%;float:left}.double_teaser .column+.column{margin-left:1%;margin-top:0}#home .site_header_area{position:relative}#site_footer{border-top:1px solid gray}#site_footer .upper_footer{padding:1em 0}#site_footer .footer_science_calendar footer{display:none}#site_footer .footer_science_calendar .col1,#site_footer .footer_science_calendar .col2,#site_footer .footer_science_calendar .col3{display:inline-block;width:30%;padding:1%}#site_footer .sitemap,#site_footer .share{display:none}#site_footer .lower_footer{height:auto}#site_footer .lower_footer .nav_container{display:none}#primary_column{width:60%;float:left;overflow:hidden;position:relative;display:block}#secondary_column{width:32%;float:right;position:relative;font-size:80%}.double_teaser .column{width:46%}.double_teaser .column+.column{float:right}.grid_view .module_title{display:block}body #page .grid_gallery.grid_view li.slide{width:19%;margin:1%;float:left;clear:none}body #page .grid_gallery.grid_view .bottom_gradient{margin-top:0}body #page .grid_gallery.grid_view .bottom_gradient div{margin-top:.3em}.gradient_line{display:none}.multi_teaser,.teasers_module,.multimedia_teaser,.filter_bar,.tertiary_nav_container,.secondary_nav_mobile,.carousel_teaser,.image_of_the_day,.view_selectors,.related,.primary_media_feature,.fancybox-overlay,#fancybox-lock,.suggested_features,.homepage_carousel,#site_footer .brand_area{display:none}
</style>
<script src="/assets/public_manifest-7136bdf7a3a8424b034f333778efdc8dd66c789f18847281907defafc52d017e.js">
</script>
<!--[if gt IE 8]><!-->
<script src="/assets/not_ie8_manifest.js">
</script>
<style>
</style>
<!--[if !IE]>-->
<script src="/assets/not_ie8_manifest.js">
</script>
<style>
</style>
<!--<![endif]-->
<script src="/assets/vendor/jquery.fancybox-9d361e5f98a5c0f233a25df9252dbadea6897af1c0ef221d7465e1205941ea0d.js">
</script>
<script src="/assets/mb_manifest-86cde101f747092d1465039f6da3fd2930c66319387c196503d3f70665195392.js">
</script>
<!-- /twitter cards -->
<meta content="summary_large_image" name="twitter:card"/>
<meta content="News " name="twitter:title"/>
<meta content="NASA’s real-time portal for Mars exploration, featuring the latest news, images, and discoveries from the Red Planet." name="twitter:description"/>
<meta content="https://mars.nasa.gov/system/site_config_values/meta_share_images/1_142497main_PIA03154-200.jpg" name="twitter:image"/>
<style type="text/css">
.fancybox-margin{margin-right:17px;}
</style>
<style type="text/css">
.at-icon{fill:#fff;border:0}.at-icon-wrapper{display:inline-block;overflow:hidden}a .at-icon-wrapper{cursor:pointer}.at-rounded,.at-rounded-element .at-icon-wrapper{border-radius:12%}.at-circular,.at-circular-element .at-icon-wrapper{border-radius:50%}.addthis_32x32_style .at-icon{width:2pc;height:2pc}.addthis_24x24_style .at-icon{width:24px;height:24px}.addthis_20x20_style .at-icon{width:20px;height:20px}.addthis_16x16_style .at-icon{width:1pc;height:1pc}#at16lb{display:none;position:absolute;top:0;left:0;width:100%;height:100%;z-index:1001;background-color:#000;opacity:.001}#at_complete,#at_error,#at_share,#at_success{position:static!important}.at15dn{display:none}#at15s,#at16p,#at16p form input,#at16p label,#at16p textarea,#at_share .at_item{font-family:arial,helvetica,tahoma,verdana,sans-serif!important;font-size:9pt!important;outline-style:none;outline-width:0;line-height:1em}* html #at15s.mmborder{position:absolute!important}#at15s.mmborder{position:fixed!important;width:250px!important}#at15s{background:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAABtJREFUeNpiZGBgaGAgAjAxEAlGFVJHIUCAAQDcngCUgqGMqwAAAABJRU5ErkJggg==);float:none;line-height:1em;margin:0;overflow:visible;padding:5px;text-align:left;position:absolute}#at15s a,#at15s span{outline:0;direction:ltr;text-transform:none}#at15s .at-label{margin-left:5px}#at15s .at-icon-wrapper{width:1pc;height:1pc;vertical-align:middle}#at15s .at-icon{width:1pc;height:1pc}.at4-icon{display:inline-block;background-repeat:no-repeat;background-position:top left;margin:0;overflow:hidden;cursor:pointer}.addthis_16x16_style .at4-icon,.addthis_default_style .at4-icon,.at4-icon,.at-16x16{width:1pc;height:1pc;line-height:1pc;background-size:1pc!important}.addthis_32x32_style .at4-icon,.at-32x32{width:2pc;height:2pc;line-height:2pc;background-size:2pc!important}.addthis_24x24_style .at4-icon,.at-24x24{width:24px;height:24px;line-height:24px;background-size:24px!important}.addthis_20x20_style .at4-icon,.at-20x20{width:20px;height:20px;line-height:20px;background-size:20px!important}.at4-icon.circular,.circular .at4-icon,.circular.aticon{border-radius:50%}.at4-icon.rounded,.rounded .at4-icon{border-radius:4px}.at4-icon-left{float:left}#at15s .at4-icon{text-indent:20px;padding:0;overflow:visible;white-space:nowrap;background-size:1pc;width:1pc;height:1pc;background-position:top left;display:inline-block;line-height:1pc}.addthis_vertical_style .at4-icon,.at4-follow-container .at4-icon{margin-right:5px}html>body #at15s{width:250px!important}#at15s.atm{background:none!important;padding:0!important;width:10pc!important}#at15s_inner{background:#fff;border:1px solid #fff;margin:0}#at15s_head{position:relative;background:#f2f2f2;padding:4px;cursor:default;border-bottom:1px solid #e5e5e5}.at15s_head_success{background:#cafd99!important;border-bottom:1px solid #a9d582!important}.at15s_head_success a,.at15s_head_success span{color:#000!important;text-decoration:none}#at15s_brand,#at15sptx,#at16_brand{position:absolute}#at15s_brand{top:4px;right:4px}.at15s_brandx{right:20px!important}a#at15sptx{top:4px;right:4px;text-decoration:none;color:#4c4c4c;font-weight:700}#at15sptx:hover{text-decoration:underline}#at16_brand{top:5px;right:30px;cursor:default}#at_hover{padding:4px}#at_hover .at_item,#at_share .at_item{background:#fff!important;float:left!important;color:#4c4c4c!important}#at_share .at_item .at-icon-wrapper{margin-right:5px}#at_hover 
.at_bold{font-weight:700;color:#000!important}#at_hover .at_item{width:7pc!important;padding:2px 3px!important;margin:1px;text-decoration:none!important}#at_hover .at_item.athov,#at_hover .at_item:focus,#at_hover .at_item:hover{margin:0!important}#at_hover .at_item.athov,#at_hover .at_item:focus,#at_hover .at_item:hover,#at_share .at_item.athov,#at_share .at_item:hover{background:#f2f2f2!important;border:1px solid #e5e5e5;color:#000!important;text-decoration:none}.ipad #at_hover .at_item:focus{background:#fff!important;border:1px solid #fff}.at15t{display:block!important;height:1pc!important;line-height:1pc!important;padding-left:20px!important;background-position:0 0;text-align:left}.addthis_button,.at15t{cursor:pointer}.addthis_toolbox a.at300b,.addthis_toolbox a.at300m{width:auto}.addthis_toolbox a{margin-bottom:5px;line-height:initial}.addthis_toolbox.addthis_vertical_style{width:200px}.addthis_button_facebook_like .fb_iframe_widget{line-height:100%}.addthis_button_facebook_like iframe.fb_iframe_widget_lift{max-width:none}.addthis_toolbox a.addthis_button_counter,.addthis_toolbox a.addthis_button_facebook_like,.addthis_toolbox a.addthis_button_facebook_send,.addthis_toolbox a.addthis_button_facebook_share,.addthis_toolbox a.addthis_button_foursquare,.addthis_toolbox a.addthis_button_google_plusone,.addthis_toolbox a.addthis_button_linkedin_counter,.addthis_toolbox a.addthis_button_pinterest_pinit,.addthis_toolbox a.addthis_button_stumbleupon_badge,.addthis_toolbox a.addthis_button_tweet{display:inline-block}.at-share-tbx-element .google_plusone_iframe_widget>span>div{vertical-align:top!important}.addthis_toolbox span.addthis_follow_label{display:none}.addthis_toolbox.addthis_vertical_style span.addthis_follow_label{display:block;white-space:nowrap}.addthis_toolbox.addthis_vertical_style a{display:block}.addthis_toolbox.addthis_vertical_style.addthis_32x32_style a{line-height:2pc;height:2pc}.addthis_toolbox.addthis_vertical_style .at300bs{margin-right:4px;float:left}.addthis_toolbox.addthis_20x20_style span{line-height:20px}.addthis_toolbox.addthis_32x32_style span{line-height:2pc}.addthis_toolbox.addthis_pill_combo_style .addthis_button_compact .at15t_compact,.addthis_toolbox.addthis_pill_combo_style a{float:left}.addthis_toolbox.addthis_pill_combo_style a.addthis_button_tweet{margin-top:-2px}.addthis_toolbox.addthis_pill_combo_style .addthis_button_compact .at15t_compact{margin-right:4px}.addthis_default_style .addthis_separator{margin:0 5px;display:inline}div.atclear{clear:both}.addthis_default_style .addthis_separator,.addthis_default_style .at4-icon,.addthis_default_style .at300b,.addthis_default_style .at300bo,.addthis_default_style .at300bs,.addthis_default_style .at300m{float:left}.at300b img,.at300bo img{border:0}a.at300b .at4-icon,a.at300m .at4-icon{display:block}.addthis_default_style .at300b,.addthis_default_style .at300bo,.addthis_default_style .at300m{padding:0 2px}.at300b,.at300bo,.at300bs,.at300m{cursor:pointer}.addthis_button_facebook_like.at300b:hover,.addthis_button_facebook_like.at300bs:hover,.addthis_button_facebook_send.at300b:hover,.addthis_button_facebook_send.at300bs:hover{opacity:1}.addthis_20x20_style .at15t,.addthis_20x20_style .at300bs{overflow:hidden;display:block;height:20px!important;width:20px!important;line-height:20px!important}.addthis_32x32_style .at15t,.addthis_32x32_style .at300bs{overflow:hidden;display:block;height:2pc!important;width:2pc!important;line-height:2pc!important}.at300bs{overflow:hidden;display:block;background-position:0 
0;height:1pc;width:1pc;line-height:1pc!important}.addthis_default_style .at15t_compact,.addthis_default_style .at15t_expanded{margin-right:4px}#at_share .at_item{width:123px!important;padding:4px;margin-right:2px;border:1px solid #fff}#at16p{background:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAABtJREFUeNpiZGBgaGAgAjAxEAlGFVJHIUCAAQDcngCUgqGMqwAAAABJRU5ErkJggg==);z-index:10000001;position:absolute;top:50%;left:50%;width:300px;padding:10px;margin:0 auto;margin-top:-185px;margin-left:-155px;font-family:arial,helvetica,tahoma,verdana,sans-serif;font-size:9pt;color:#5e5e5e}#at_share{margin:0;padding:0}#at16pt{position:relative;background:#f2f2f2;height:13px;padding:5px 10px}#at16pt a,#at16pt h4{font-weight:700}#at16pt h4{display:inline;margin:0;padding:0;font-size:9pt;color:#4c4c4c;cursor:default}#at16pt a{position:absolute;top:5px;right:10px;color:#4c4c4c;text-decoration:none;padding:2px}#at15sptx:focus,#at16pt a:focus{outline:thin dotted}#at15s #at16pf a{top:1px}#_atssh{width:1px!important;height:1px!important;border:0!important}.atm{width:10pc!important;padding:0;margin:0;line-height:9pt;letter-spacing:normal;font-family:arial,helvetica,tahoma,verdana,sans-serif;font-size:9pt;color:#444;background:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAABtJREFUeNpiZGBgaGAgAjAxEAlGFVJHIUCAAQDcngCUgqGMqwAAAABJRU5ErkJggg==);padding:4px}.atm-f{text-align:right;border-top:1px solid #ddd;padding:5px 8px}.atm-i{background:#fff;border:1px solid #d5d6d6;padding:0;margin:0;box-shadow:1px 1px 5px rgba(0,0,0,.15)}.atm-s{margin:0!important;padding:0!important}.atm-s a:focus{border:transparent;outline:0;transition:none}#at_hover.atm-s a,.atm-s a{display:block;text-decoration:none;padding:4px 10px;color:#235dab!important;font-weight:400;font-style:normal;transition:none}#at_hover.atm-s .at_bold{color:#235dab!important}#at_hover.atm-s a:hover,.atm-s a:hover{background:#2095f0;text-decoration:none;color:#fff!important}#at_hover.atm-s .at_bold{font-weight:700}#at_hover.atm-s a:hover .at_bold{color:#fff!important}.atm-s a .at-label{vertical-align:middle;margin-left:5px;direction:ltr}.at_PinItButton{display:block;width:40px;height:20px;padding:0;margin:0;background-image:url(//s7.addthis.com/static/t00/pinit00.png);background-repeat:no-repeat}.at_PinItButton:hover{background-position:0 -20px}.addthis_toolbox .addthis_button_pinterest_pinit{position:relative}.at-share-tbx-element .fb_iframe_widget span{vertical-align:baseline!important}#at16pf{height:auto;text-align:right;padding:4px 8px}.at-privacy-info{position:absolute;left:7px;bottom:7px;cursor:pointer;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:10px;line-height:9pt;letter-spacing:.2px;color:#666}.at-privacy-info:hover{color:#000}.body .wsb-social-share .wsb-social-share-button-vert{padding-top:0;padding-bottom:0}.body .wsb-social-share.addthis_counter_style .addthis_button_tweet.wsb-social-share-button{padding-top:40px}.body .wsb-social-share.addthis_counter_style .addthis_button_google_plusone.wsb-social-share-button{padding-top:0}.body .wsb-social-share.addthis_counter_style .addthis_button_facebook_like.wsb-social-share-button{padding-top:21px}@media print{#at4-follow,#at4-share,#at4-thankyou,#at4-whatsnext,#at4m-mobile,#at15s,.at4,.at4-recommended{display:none!important}}@media screen and (max-width:400px){.at4win{width:100%}}@media screen and (max-height:700px) and 
(max-width:400px){.at4-thankyou-inner .at4-recommended-container{height:122px;overflow:hidden}.at4-thankyou-inner .at4-recommended .at4-recommended-item:first-child{border-bottom:1px solid #c5c5c5}}
</style>
<style type="text/css">
.at-branding-logo{font-family:helvetica,arial,sans-serif;text-decoration:none;font-size:10px;display:inline-block;margin:2px 0;letter-spacing:.2px}.at-branding-logo .at-branding-icon{background-image:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAMAAAC67D+PAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAAZQTFRF////+GlNUkcc1QAAAB1JREFUeNpiYIQDBjQmAwMmkwEM0JnY1WIxFyDAABGeAFEudiZsAAAAAElFTkSuQmCC")}.at-branding-logo .at-branding-icon,.at-branding-logo .at-privacy-icon{display:inline-block;height:10px;width:10px;margin-left:4px;margin-right:3px;margin-bottom:-1px;background-repeat:no-repeat}.at-branding-logo .at-privacy-icon{background-image:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAkAAAAKCAMAAABR24SMAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAABhQTFRF8fr9ot/xXcfn2/P5AKva////////AKTWodjhjAAAAAd0Uk5T////////ABpLA0YAAAA6SURBVHjaJMzBDQAwCAJAQaj7b9xifV0kUKJ9ciWxlzWEWI5gMF65KUTv0VKkjVeTerqE/x7+9BVgAEXbAWI8QDcfAAAAAElFTkSuQmCC")}.at-branding-logo span{text-decoration:none}.at-branding-logo .at-branding-addthis,.at-branding-logo .at-branding-powered-by{color:#666}.at-branding-logo .at-branding-addthis:hover{color:#333}.at-cv-with-image .at-branding-addthis,.at-cv-with-image .at-branding-addthis:hover{color:#fff}a.at-branding-logo:visited{color:initial}.at-branding-info{display:inline-block;padding:0 5px;color:#666;border:1px solid #666;border-radius:50%;font-size:10px;line-height:9pt;opacity:.7;transition:all .3s ease;text-decoration:none}.at-branding-info span{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.at-branding-info:before{content:'i';font-family:Times New Roman}.at-branding-info:hover{color:#0780df;border-color:#0780df}
</style>
<script async="" charset="utf-8" src="//s7.addthis.com/static/layers.b01bacf303e2cf5c81a0.js" type="text/javascript">
</script>
<style type="text/css">
.at-share-dock.atss{top:auto;left:0;right:0;bottom:0;width:100%;max-width:100%;z-index:1000200;box-shadow:0 0 1px 1px #e2dfe2}.at-share-dock.at-share-dock-zindex-hide{z-index:-1!important}.at-share-dock.atss-top{bottom:auto;top:0}.at-share-dock a{width:auto;transition:none;color:#fff;text-decoration:none;box-sizing:content-box;-webkit-box-sizing:content-box;-moz-box-sizing:content-box}.at-share-dock a:hover{width:auto}.at-share-dock .at4-count{height:43px;padding:5px 0 0;line-height:20px;background:#fff;font-family:Helvetica neue,arial}.at-share-dock .at4-count span{width:100%}.at-share-dock .at4-count .at4-share-label{color:#848484;font-size:10px;letter-spacing:1px}.at-share-dock .at4-count .at4-counter{top:2px;position:relative;display:block;color:#222;font-size:22px}.at-share-dock.at-shfs-medium .at4-count{height:36px;line-height:1pc;padding-top:4px}.at-share-dock.at-shfs-medium .at4-count .at4-counter{font-size:18px}.at-share-dock.at-shfs-medium .at-share-btn .at-icon-wrapper,.at-share-dock.at-shfs-medium a .at-icon-wrapper{padding:6px 0}.at-share-dock.at-shfs-small .at4-count{height:26px;line-height:1;padding-top:3px}.at-share-dock.at-shfs-small .at4-count .at4-share-label{font-size:8px}.at-share-dock.at-shfs-small .at4-count .at4-counter{font-size:14px}.at-share-dock.at-shfs-small .at-share-btn .at-icon-wrapper,.at-share-dock.at-shfs-small a .at-icon-wrapper{padding:4px 0}
</style>
<style type="text/css">
div.at-share-close-control.ats-dark,div.at-share-open-control-left.ats-dark,div.at-share-open-control-right.ats-dark{background:#262b30}div.at-share-close-control.ats-light,div.at-share-open-control-left.ats-light,div.at-share-open-control-right.ats-light{background:#fff}div.at-share-close-control.ats-gray,div.at-share-open-control-left.ats-gray,div.at-share-open-control-right.ats-gray{background:#f2f2f2}.atss{position:fixed;top:20%;width:3pc;z-index:100020;background:none}.at-share-close-control{position:relative;width:3pc;overflow:auto}.at-share-open-control-left{position:fixed;top:20%;z-index:100020;left:0;width:22px}.at-share-close-control .at4-arrow.at-left{float:right}.atss-left{left:0;float:left;right:auto}.atss-right{left:auto;float:right;right:0}.atss-right.at-share-close-control .at4-arrow.at-right{position:relative;right:0;overflow:auto}.atss-right.at-share-close-control .at4-arrow{float:left}.at-share-open-control-right{position:fixed;top:20%;z-index:100020;right:0;width:22px;float:right}.atss-right .at-share-close-control .at4-arrow{float:left}.atss.atss-right a{float:right}.atss.atss-right .at4-share-title{float:right;overflow:hidden}.atss .at-share-btn,.atss a{position:relative;display:block;width:3pc;margin:0;outline-offset:-1px;text-align:center;float:left;transition:width .15s ease-in-out;overflow:hidden;background:#e8e8e8;z-index:100030;cursor:pointer}.at-share-btn::-moz-focus-inner{border:0;padding:0}.atss-right .at-share-btn{float:right}.atss .at-share-btn{border:0;padding:0}.atss .at-share-btn:focus,.atss .at-share-btn:hover,.atss a:focus,.atss a:hover{width:4pc}.atss .at-share-btn .at-icon-wrapper,.atss a .at-icon-wrapper{display:block;padding:8px 0}.atss .at-share-btn:last-child,.atss a:last-child{border:none}.atss .at-share-btn span .at-icon,.atss a span .at-icon{position:relative;top:0;left:0;display:block;background-repeat:no-repeat;background-position:50% 50%;width:2pc;height:2pc;line-height:2pc;border:none;padding:0;margin:0 auto;overflow:hidden;cursor:pointer;cursor:hand}.at4-share .at-custom-sidebar-counter{font-family:Helvetica neue,arial;vertical-align:top;margin-right:4px;display:inline-block;text-align:center}.at4-share .at-custom-sidebar-count{font-size:17px;line-height:1.25em;color:#222}.at4-share .at-custom-sidebar-text{font-size:9px;line-height:1.25em;color:#888;letter-spacing:1px}.at4-share .at4-share-count-container{position:absolute;left:0;right:auto;top:auto;bottom:0;width:100%;color:#fff;background:inherit}.at4-share .at4-share-count,.at4-share .at4-share-count-container{line-height:1pc;font-size:10px}.at4-share .at4-share-count{text-indent:0;font-family:Arial,Helvetica Neue,Helvetica,sans-serif;font-weight:200;width:100%;height:1pc}.at4-share .at4-share-count-anchor{padding-bottom:8px;text-decoration:none;transition:padding .15s ease-in-out .15s,width .15s ease-in-out}
</style>
<style type="text/css">
#at4-drawer-outer-container{top:0;width:20pc;position:fixed}#at4-drawer-outer-container.at4-drawer-inline{position:relative}#at4-drawer-outer-container.at4-drawer-inline.at4-drawer-right{float:right;right:0;left:auto}#at4-drawer-outer-container.at4-drawer-inline.at4-drawer-left{float:left;left:0;right:auto}#at4-drawer-outer-container.at4-drawer-shown,#at4-drawer-outer-container.at4-drawer-shown *{z-index:999999}#at4-drawer-outer-container,#at4-drawer-outer-container .at4-drawer-outer,#at-drawer{height:100%;overflow-y:auto;overflow-x:hidden}.at4-drawer-push-content-right-back{position:relative;right:0}.at4-drawer-push-content-right{position:relative;left:20pc!important}.at4-drawer-push-content-left-back{position:relative;left:0}.at4-drawer-push-content-left{position:relative;right:20pc!important}#at4-drawer-outer-container.at4-drawer-right{left:auto;right:-20pc}#at4-drawer-outer-container.at4-drawer-left{right:auto;left:-20pc}#at4-drawer-outer-container.at4-drawer-shown.at4-drawer-right{left:auto;right:0}#at4-drawer-outer-container.at4-drawer-shown.at4-drawer-left{right:auto;left:0}#at-drawer{top:0;z-index:9999999;height:100%;animation-duration:.4s}#at-drawer.drawer-push.at-right{right:-20pc}#at-drawer.drawer-push.at-left{left:-20pc}#at-drawer .at-recommended-label{padding:0 0 0 20px;color:#999;line-height:3pc;font-size:18px;font-weight:300;cursor:default}#at-drawer-arrow{width:30px;height:5pc}#at-drawer-arrow.ats-dark{background:#262b30}#at-drawer-arrow.ats-gray{background:#f2f2f2}#at-drawer-open-arrow{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA0AAABcCAYAAAC1OT8uAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyNpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNS1jMDE0IDc5LjE1MTQ4MSwgMjAxMy8wMy8xMy0xMjowOToxNSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENDIChNYWNpbnRvc2gpIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjk3ODNCQjdERUQ3QjExRTM5NjFGRUZBODc3MTIwMTNCIiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjk3ODNCQjdFRUQ3QjExRTM5NjFGRUZBODc3MTIwMTNCIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6OTc4M0JCN0JFRDdCMTFFMzk2MUZFRkE4NzcxMjAxM0IiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6OTc4M0JCN0NFRDdCMTFFMzk2MUZFRkE4NzcxMjAxM0IiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz7kstzCAAAB4ElEQVR42uyWv0oDQRDGb9dYimgVjliID2Ca9AGfwtZob2Grja1PIFj7EhGCYK99VPBPOkVMp8X5rc6FeN7dfjOksMjAxwXZ3667OzvfLKRr682l5ZV9aDh+fxsnRHhoDzqGLjFBi4XOoFtoAxowoB893o/w7WpAl/+QgQMBwwRdTPhUC2lAV/wDA7qy5WOgq9psHejqTqkKdLE7KYCv0JZjMgBgB58raBG6mP1K6j2pT099T+qMUOeeOss1wDcEIA1PnQXy576rAUI0oFMoC7VCnn40Gs8Pd4lAiXNUKmJ0lh1mPzGEWiyUCqAGW3Pwv4IvUJsFO9CHgP3Zr6Te0xwgAf3LxaAjS241pbikCRkOg+nSJdV4p8HOPl3vvRYI5dtrgVDvvcWovcWovcWovcWovcWovcWovQChWNywNpqvdAKtQp/QNmPUIQ6kwwqt2Xmsxf6GMPM1Pptsbz45CPmXqKb+15Gz4J/LZcDSNIqBlQlbB0afe1mmUDWiCNKFZRq0VKMeXY1CTDq2sJLWsCmoaBBRqNRR6qBKC6qCaj2rDIqaXBGiXHEaom00h1S+K3fVlr6HNuqgvgCh0+owt21bybQn8+mZ78mcEebcM2e5+T2ZX24ZqCph0qn1vgQYAJ/KDpLQr2tPAAAAAElFTkSuQmCC);background-repeat:no-repeat;width:13px;height:23px;margin:28px 0 0 8px}.at-left #at-drawer-open-arrow{background-position:0 
-46px}.ats-dark #at-drawer-open-arrow{background-position:0 -23px}.ats-dark.at-left #at-drawer-open-arrow{background-position:0 -69px}#at-drawer-arrow.at4-drawer-modern-browsers{position:fixed;top:40%;background-repeat:no-repeat;background-position:0 0!important;z-index:9999999}.at4-drawer-inline #at-drawer-arrow{position:absolute}#at-drawer-arrow.at4-drawer-modern-browsers.at-right{right:0}#at-drawer-arrow.at4-drawer-modern-browsers.at-left{left:0}.at4-drawer-push-animation-left{transition:left .4s ease-in-out .15s}.at4-drawer-push-animation-right{transition:right .4s ease-in-out .15s}#at-drawer.drawer-push.at4-drawer-push-animation-right{right:0}#at-drawer.drawer-push.at4-drawer-push-animation-right-back{right:-20pc!important}#at-drawer.drawer-push.at4-drawer-push-animation-left{left:0}#at-drawer.drawer-push.at4-drawer-push-animation-left-back{left:-20pc!important}#at-drawer .at4-closebutton.drawer-close{content:'X';color:#999;display:block;position:absolute;margin:0;top:0;right:0;width:3pc;height:45px;line-height:45px;overflow:hidden;opacity:.5}#at-drawer.ats-dark .at4-closebutton.drawer-close{color:#fff}#at-drawer .at4-closebutton.drawer-close:hover{opacity:1}#at-drawer.ats-dark.at4-recommended .at4-logo-container a{color:#666}#at-drawer.at4-recommended .at4-recommended-vertical{padding:0}#at-drawer.at4-recommended .at4-recommended-item .sponsored-label{margin:2px 0 0 21px;color:#ddd}#at-drawer.at4-recommended .at4-recommended-vertical .at4-recommended-item{position:relative;padding:0;width:20pc;height:180px;margin:0}#at-drawer.at4-recommended .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-img a:after{content:'';position:absolute;top:0;left:0;right:0;bottom:0;background:rgba(0,0,0,.65);z-index:1000000;transition:all .2s ease-in-out}#at-drawer.at4-recommended .at4-recommended-vertical .at4-recommended-item.at-hover .at4-recommended-item-img a:after{background:rgba(0,0,0,.8)}#at-drawer .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-img,#at-drawer .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-img a,#at-drawer .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-img img{width:20pc;height:180px;float:none}#at-drawer .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-caption{width:100%;position:absolute;bottom:0;left:0;height:70px}#at-drawer .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-caption .at-h4{color:#fff;position:absolute;height:52px;top:0;left:20px;right:20px;margin:0;padding:0;line-height:25px;font-size:20px;font-weight:600;z-index:1000001;text-decoration:none;text-transform:none}#at-drawer.at4-recommended .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-caption .at-h4 a:hover{text-decoration:none}#at-drawer.at4-recommended .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-caption .at-h4 a:link{color:#fff}#at-drawer.at4-recommended .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-caption small{position:absolute;top:auto;bottom:10px;left:20px;width:auto;color:#ccc}#at-drawer.at4-recommended .at4-logo-container{margin-left:20px}#at-drawer.ats-dark.at4-recommended .at4-logo-container a:hover{color:#fff}#at-drawer.at4-recommended .at-logo{margin:0}
</style>
<style type="text/css">
.at4-follow.at-mobile{display:none!important}.at4-follow{position:fixed;top:0;right:0;font-weight:400;color:#666;cursor:default;z-index:10001}.at4-follow .at4-follow-inner{position:relative;padding:10px 24px 10px 15px}.at4-follow-inner,.at-follow-open-control{border:0 solid #c5c5c5;border-width:1px 0 1px 1px;margin-top:-1px}.at4-follow .at4-follow-container{margin-left:9pt}.at4-follow.at4-follow-24 .at4-follow-container{height:24px;line-height:23px;font-size:13px}.at4-follow.at4-follow-32 .at4-follow-container{width:15pc;height:2pc;line-height:2pc;font-size:14px}.at4-follow .at4-follow-container .at-follow-label{display:inline-block;height:24px;line-height:24px;margin-right:10px;padding:0;cursor:default;float:left}.at4-follow .at4-follow-container .at-icon-wrapper{height:24px;width:24px}.at4-follow.ats-transparent .at4-follow-inner,.at-follow-open-control.ats-transparent{border-color:transparent}.at4-follow.ats-dark .at4-follow-inner,.at-follow-open-control.ats-dark{background:#262b30;border-color:#000;color:#fff}.at4-follow.ats-dark .at-follow-close-control{background-color:#262b30}.at4-follow.ats-light .at4-follow-inner{background:#fff;border-color:#c5c5c5}.at4-follow.ats-gray .at4-follow-inner,.at-follow-open-control.ats-gray{background:#f2f2f2;border-color:#c5c5c5}.at4-follow.ats-light .at4-follow-close-control,.at-follow-open-control.ats-light{background:#e5e5e5}.at4-follow .at4-follow-inner .at4-follow-close-control{position:absolute;top:0;bottom:0;left:0;width:20px;cursor:pointer;display:none}.at4-follow .at4-follow-inner .at4-follow-close-control div{display:block;line-height:20px;text-indent:-9999em;margin-top:calc(50% + 1px);overflow:hidden}.at-follow-open-control div.at4-arrow.at-left{background-position:0 -2px}.at-follow-open-control{position:fixed;height:35px;top:0;right:0;padding-top:10px;z-index:10002}.at-follow-btn{margin:0 5px 5px 0;padding:0;outline-offset:-1px;display:inline-block;box-sizing:content-box;transition:all .2s ease-in-out}.at-follow-btn:focus,.at-follow-btn:hover{transform:translateY(-4px)}.at4-follow-24 .at-follow-btn{height:25px;line-height:0;width:25px}
</style>
<style type="text/css">
.at-follow-tbx-element .at300b,.at-follow-tbx-element .at300m{display:inline-block;width:auto;padding:0;margin:0 2px 5px;outline-offset:-1px;transition:all .2s ease-in-out}.at-follow-tbx-element .at300b:focus,.at-follow-tbx-element .at300b:hover,.at-follow-tbx-element .at300m:focus,.at-follow-tbx-element .at300m:hover{transform:translateY(-4px)}.at-follow-tbx-element .addthis_vertical_style .at300b,.at-follow-tbx-element .addthis_vertical_style .at300m{display:block}.at-follow-tbx-element .addthis_vertical_style .at300b .addthis_follow_label,.at-follow-tbx-element .addthis_vertical_style .at300b .at-icon-wrapper,.at-follow-tbx-element .addthis_vertical_style .at300m .addthis_follow_label,.at-follow-tbx-element .addthis_vertical_style .at300m .at-icon-wrapper{display:inline-block;vertical-align:middle;margin-right:5px}.at-follow-tbx-element .addthis_vertical_style .at300b:focus,.at-follow-tbx-element .addthis_vertical_style .at300b:hover,.at-follow-tbx-element .addthis_vertical_style .at300m:focus,.at-follow-tbx-element .addthis_vertical_style .at300m:hover{transform:none}
</style>
<style type="text/css">
.at4-jumboshare .at-share-btn{display:inline-block;margin-right:13px;margin-top:13px}.at4-jumboshare .at-share-btn .at-icon{float:left}.at4-jumboshare .at-share-btn .at300bs{display:inline-block;float:left;cursor:pointer}.at4-jumboshare .at4-mobile .at-share-btn .at-icon,.at4-jumboshare .at4-mobile .at-share-btn .at-icon-wrapper{margin:0;padding:0}.at4-jumboshare .at4-mobile .at-share-btn{padding:0}.at4-jumboshare .at4-mobile .at-share-btn .at-label{display:none}.at4-jumboshare .at4-count{font-size:60px;line-height:60px;font-family:Helvetica neue,arial;font-weight:700}.at4-jumboshare .at4-count-container{display:table-cell;text-align:center;min-width:200px;vertical-align:middle;border-right:1px solid #ccc;padding-right:20px}.at4-jumboshare .at4-share-container{display:table-cell;vertical-align:middle;padding-left:20px}.at4-jumboshare .at4-share-container.at-share-tbx-element{padding-top:0}.at4-jumboshare .at4-title{position:relative;font-size:18px;line-height:18px;bottom:2px}.at4-jumboshare .at4-spacer{height:1px;display:block;visibility:hidden;opacity:0}.at4-jumboshare .at-share-btn{display:inline-block;margin:0 2px;line-height:0;padding:0;overflow:hidden;text-decoration:none;text-transform:none;color:#fff;cursor:pointer;transition:all .2s ease-in-out;border:0;background-color:transparent}.at4-jumboshare .at-share-btn:focus,.at4-jumboshare .at-share-btn:hover{transform:translateY(-4px);color:#fff;text-decoration:none}.at4-jumboshare .at-label{font-family:helvetica neue,helvetica,arial,sans-serif;font-size:9pt;padding:0 15px 0 0;margin:0;height:2pc;line-height:2pc;background:none}.at4-jumboshare .at-share-btn:hover,.at4-jumboshare .at-share-btn:link{text-decoration:none}.at4-jumboshare .at-share-btn::-moz-focus-inner{border:0;padding:0}.at4-jumboshare.at-mobile .at-label{display:none}
</style>
<style type="text/css">
.at4-recommendedbox-outer-container{display:inline}.at4-recommended-outer{position:static}.at4-recommended{top:20%;margin:0;text-align:center;font-weight:400;font-size:13px;line-height:17px;color:#666}.at4-recommended.at-inline .at4-recommended-horizontal{text-align:left}.at4-recommended-recommendedbox{padding:0;z-index:inherit}.at4-recommended-recommended{padding:40px 0}.at4-recommended-horizontal{max-height:340px}.at4-recommended.at-medium .at4-recommended-horizontal{max-height:15pc}.at4-recommended.at4-minimal.at-medium .at4-recommended-horizontal{padding-top:10px;max-height:230px}.at4-recommended-text-only .at4-recommended-horizontal{max-height:130px}.at4-recommended-horizontal{padding-top:5px;overflow-y:hidden}.at4-minimal{background:none;color:#000;border:none!important;box-shadow:none!important}@media screen and (max-width:900px){.at4-recommended-horizontal .at4-recommended-item,.at4-recommended-horizontal .at4-recommended-item .at4-recommended-item-img{width:15pc}}.at4-recommended.at4-minimal .at4-recommended-horizontal .at4-recommended-item .at4-recommended-item-caption{padding:0 0 10px}.at4-recommended.at4-minimal .at4-recommended-horizontal .at4-recommended-item-caption{padding:20px 0 0!important}.addthis-smartlayers .at4-recommended .at-h3.at-recommended-label{margin:0;padding:0;font-weight:300;font-size:18px;line-height:24px;color:#464646;width:100%;display:inline-block;zoom:1}.addthis-smartlayers .at4-recommended.at-inline .at-h3.at-recommended-label{text-align:left}#at4-thankyou .addthis-smartlayers .at4-recommended.at-inline .at-h3.at-recommended-label{text-align:center}.at4-recommended .at4-recommended-item{display:inline-block;zoom:1;position:relative;background:#fff;border:1px solid #c5c5c5;width:200px;margin:10px}.addthis_recommended_horizontal .at4-recommended-item{border:none}.at4-recommended .at4-recommended-item .sponsored-label{color:#666;font-size:9px;position:absolute;top:-20px}.at4-recommended .at4-recommended-item-img .at-tli,.at4-recommended .at4-recommended-item-img a{position:absolute;left:0}.at4-recommended.at-inline .at4-recommended-horizontal .at4-recommended-item{margin:10px 20px 0 0}.at4-recommended.at-medium .at4-recommended-horizontal .at4-recommended-item{margin:10px 10px 0 0}.at4-recommended.at-medium .at4-recommended-item{width:140px;overflow:hidden}.at4-recommended .at4-recommended-item .at4-recommended-item-img{position:relative;text-align:center;width:100%;height:200px;line-height:0;overflow:hidden}.at4-recommended .at4-recommended-item .at4-recommended-item-img a{display:block;width:100%;height:200px}.at4-recommended.at-medium .at4-recommended-item .at4-recommended-item-img,.at4-recommended.at-medium .at4-recommended-item .at4-recommended-item-img a{height:140px}.at4-recommended .at4-recommended-item .at4-recommended-item-img img{position:absolute;top:0;left:0;min-height:0;min-width:0;max-height:none;max-width:none;margin:0;padding:0}.at4-recommended .at4-recommended-item .at4-recommended-item-caption{height:74px;overflow:hidden;padding:20px;text-align:left;-ms-box-sizing:content-box;-o-box-sizing:content-box;box-sizing:content-box}.at4-recommended.at-medium .at4-recommended-item .at4-recommended-item-caption{height:50px;padding:15px}.at4-recommended .at4-recommended-item .at4-recommended-item-caption .at-h4{height:54px;margin:0 0 5px;padding:0;overflow:hidden;word-wrap:break-word;font-size:14px;font-weight:400;line-height:18px;text-align:left}.at4-recommended.at-medium .at4-recommended-item .at4-recommended-item-caption 
.at-h4{font-size:9pt;line-height:1pc;height:33px}.at4-recommended .at4-recommended-item:hover .at4-recommended-item-caption .at-h4{text-decoration:underline}.at4-recommended a:link,.at4-recommended a:visited{text-decoration:none;color:#464646}.at4-recommended .at4-recommended-item .at4-recommended-item-caption .at-h4 a:hover{text-decoration:underline;color:#000}.at4-recommended .at4-recommended-item .at4-recommended-item-caption small{display:block;white-space:nowrap;overflow:hidden;text-overflow:ellipsis;font-size:11px;color:#666}.at4-recommended.at-medium .at4-recommended-item .at4-recommended-item-caption small{font-size:9px}.at4-recommended .at4-recommended-vertical{padding:15px 0 0}.at4-recommended .at4-recommended-vertical .at4-recommended-item{display:block;width:auto;max-width:100%;height:60px;border:none;margin:0 0 15px;box-shadow:none;background:none}.at4-recommended-vertical .at4-recommended-item .at4-recommended-item-img,.at4-recommended-vertical .at4-recommended-item .at4-recommended-item-img img{width:60px;height:60px;float:left}.at4-recommended-vertical .at4-recommended-item .at4-recommended-item-caption{border-top:none;margin:0;height:60px;padding:3px 5px}.at4-recommended .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-caption .at-h4{height:38px;margin:0}.at4-recommended .at4-recommended-vertical .at4-recommended-item .at4-recommended-item-caption small{position:absolute;bottom:0}.at4-recommended .at-recommended-label.at-vertical{text-align:left}.at4-no-image-light-recommended,.at4-no-image-minimal-recommended{background-color:#f2f2f2!important}.at4-no-image-gray-recommended{background-color:#e6e6e5!important}.at4-no-image-dark-recommended{background-color:#4e555e!important}.at4-recommended .at4-recommended-item-placeholder-img{background-repeat:no-repeat!important;background-position:center!important;width:100%!important;height:100%!important}.at4-recommended-horizontal .at4-no-image-dark-recommended 
.at4-recommended-item-placeholder-img{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACIAAAAfCAYAAACCox+xAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyNpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNS1jMDE0IDc5LjE1MTQ4MSwgMjAxMy8wMy8xMy0xMjowOToxNSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENDIChNYWNpbnRvc2gpIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjlFNUUyQTg3MTI0RDExRTM4NzAwREJDRjlCQzAyMUVFIiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjlFNUUyQTg4MTI0RDExRTM4NzAwREJDRjlCQzAyMUVFIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6OUU1RTJBODUxMjREMTFFMzg3MDBEQkNGOUJDMDIxRUUiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6OUU1RTJBODYxMjREMTFFMzg3MDBEQkNGOUJDMDIxRUUiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz6oCfPiAAABfUlEQVR42uyWTU/DMAyGm3bdBxp062hHe+PC//9HCIkDYpNAO7CPAuWN5Eohyhpno2GHWqq8pO78xHHsiLquH4L/l6cwuBAZaOPKs//YBFIJIR59UiAt7huYi90aE/UQakTDLaL26RUEAAJqiefm93T9Bpj1X4O0bY0OIUXCpYBJvYDAUWyAUCWliHGTcnpqRMaM72ImRAJVknYG+eb4YEDIBeU0zGnsBLK1ODogYSsLhDwIJeVVk18lzfNA4ERGZNXi59UCIQhiYDilpSm/jp4awLxDvWhlf4/nGe8+LLuSt+SZul28ggaHG6gNVhDR+IuRFzOoxGKWwG7vVFm5AAQxgcqYpzrjFjR9zwPH5LSuT7XlNr2MQm5LzqjLpncNNaM+s8M27Y60g3FwhoSMzrtUQllgLtRs5pZ2cB4IhbvQbGRZv1NsrhyS8+SI5Mo9RJWpjAI1xqKL+0iEP180vy214JbeR12AyOgsHI7e0NfFyKv0ID1ID+IqPwIMAOeljGQOryBmAAAAAElFTkSuQmCC)!important}.at4-recommended-vertical .at4-no-image-dark-recommended .at4-recommended-item-placeholder-img{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA8AAAAOCAYAAADwikbvAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyNpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNS1jMDE0IDc5LjE1MTQ4MSwgMjAxMy8wMy8xMy0xMjowOToxNSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENDIChNYWNpbnRvc2gpIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjAzREMyNTM2MTI0RjExRTM4NzAwREJDRjlCQzAyMUVFIiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjAzREMyNTM3MTI0RjExRTM4NzAwREJDRjlCQzAyMUVFIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6MDNEQzI1MzQxMjRGMTFFMzg3MDBEQkNGOUJDMDIxRUUiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6MDNEQzI1MzUxMjRGMTFFMzg3MDBEQkNGOUJDMDIxRUUiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz5GfbtkAAAAxklEQVR42qRSTQvCMAxduk53mEOHKFPP/v8/5cGTiIibivVFUomlG7gFHvloXpKmJefcPhkmNyvGEWj+IOZA6ckPImoxxVwOLvCvXUzkpayNCpRQK64IbOBnAYGAXMeMslNlU+CzrIEdCkxi5DPAoz6BE8ZuVNdKJuL8rS9sv62IXlCHyP0KqKUKZXK9uwkSLVArfwpVR3b225kXwovibcP+jC4jUtfWPZmfqJJnYlkAM128j1z0nHWKSUbIKDL/msHktwADAPptQo+vkZNLAAAAAElFTkSuQmCC)!important}.at4-recommended-horizontal .at4-no-image-gray-recommended 
.at4-recommended-item-placeholder-img,.at4-recommended-horizontal .at4-no-image-light-recommended .at4-recommended-item-placeholder-img,.at4-recommended-horizontal .at4-no-image-minimal-recommended .at4-recommended-item-placeholder-img{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACIAAAAfCAYAAACCox+xAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyNpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNS1jMDE0IDc5LjE1MTQ4MSwgMjAxMy8wMy8xMy0xMjowOToxNSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENDIChNYWNpbnRvc2gpIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjAzREMyNTMyMTI0RjExRTM4NzAwREJDRjlCQzAyMUVFIiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjAzREMyNTMzMTI0RjExRTM4NzAwREJDRjlCQzAyMUVFIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6OUU1RTJBODkxMjREMTFFMzg3MDBEQkNGOUJDMDIxRUUiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6OUU1RTJBOEExMjREMTFFMzg3MDBEQkNGOUJDMDIxRUUiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz6dfDQvAAABg0lEQVR42uyWS0vDQBDH82jaKNW0qUltbl68e/Di98eLBz+CCB5EBaWIpUat/4UJLMuame1j7SEDYbqbKfPLvHbDi8ur8+D/5T4K9kR6xrr27D+xgdS3N9d3PilQFmcNzN6mxkbdhxrQcoGofXkFAUAINcVzrG2vsP8KmJdtg7SlxoRQouBywOReQOAosUDoklPEpEU5XDciqeB/iRAig6pIO4P8CHysBBDqg0palrR2Alkwjj5RsDUDoRqhorpq6quifRkInKiIPLf4eWIgQoLoWbq0stXXn10DmDeoR2PsL/E84N0Hk5Wypc70dMkGGhzOoeb4gpjW34K6GEFljFkGu6XTZJUCEMQBVCHs6kI60MycB47FyUmo20oPvYJCzhVnvIsR3zg5ghoRTNpyHKTBBhIJTt6pFsoZ9iLDZswcB5uBULhnho0a66eazaFDca59Hym1e4guQ4rCO4Fu/T4Sw8Gk+c3MghN6H+8CRKVg4tB6fV8XI6/SgXQgHYir/AowAMU5TskhKVUNAAAAAElFTkSuQmCC)!important}.at4-recommended-vertical .at4-no-image-gray-recommended .at4-recommended-item-placeholder-img,.at4-recommended-vertical .at4-no-image-light-recommended .at4-recommended-item-placeholder-img,.at4-recommended-vertical .at4-no-image-minimal-recommended 
.at4-recommended-item-placeholder-img{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA8AAAAOCAYAAADwikbvAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyNpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNS1jMDE0IDc5LjE1MTQ4MSwgMjAxMy8wMy8xMy0xMjowOToxNSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENDIChNYWNpbnRvc2gpIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjAzREMyNTNBMTI0RjExRTM4NzAwREJDRjlCQzAyMUVFIiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjAzREMyNTNCMTI0RjExRTM4NzAwREJDRjlCQzAyMUVFIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6MDNEQzI1MzgxMjRGMTFFMzg3MDBEQkNGOUJDMDIxRUUiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6MDNEQzI1MzkxMjRGMTFFMzg3MDBEQkNGOUJDMDIxRUUiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz65Fr9cAAAA0ElEQVR42qRRuQrCQBDd3SSaIgYNosSrtLew8f+xsfAnYmEVRMR4YHwjExjCbsBk4DHHzptjR2+2u7VqJ3efjTNQ/EEMgbgiv46H/QNTDPnhCv/mYiLPI21EIIaaUEVgBj+oETQQypgRtidsXfNJpsACBXo28gWgUd9AjrEL0TXhiSh/XhWudlZI/kCdLPtFUGMRCni9p6kl+kAq/D5UavmzX2fNd87obsCSfztnrOR0rjvTiRImkoyAQQNRyZ2jhjenGNVBOpF1WZatyV8BBgBJ+irgS/KHdAAAAABJRU5ErkJggg==)!important}#at-drawer.ats-dark,.at4-recommended.ats-dark .at4-recommended-horizontal .at4-recommended-item-caption,.at4-recommended.ats-dark .at4-recommended-vertical .at4-recommended-item-caption{background:#262b30}#at-drawer.ats-gray,.at4-recommended.ats-gray .at4-recommended-horizontal .at4-recommended-item-caption{background:#f2f2f2}#at-drawer.ats-light,.at4-recommended.ats-light .at4-recommended-horizontal .at4-recommended-item-caption{background:#fff}.at4-recommended.ats-dark .at4-recommended-vertical .at4-recommended-item{background:none}.at4-recommended.ats-dark .at4-recommended-item .at4-recommended-item-caption a:hover,.at4-recommended.ats-dark .at4-recommended-item .at4-recommended-item-caption a:link,.at4-recommended.ats-dark .at4-recommended-item .at4-recommended-item-caption a:visited,.at4-recommended.ats-dark .at4-recommended-item .at4-recommended-item-caption small,.at4-recommended.ats-dark .at4-recommended-item-caption,.at4-recommended.ats-dark .at-logo a:hover,.at4-recommended.ats-dark .at-recommended-label.at-vertical{color:#fff}.at4-recommended-vertical-logo{padding-top:0;text-align:left}.at4-recommended-vertical-logo .at4-logo-container{line-height:10px}.at4-recommended-horizontal-logo{text-align:center}.at4-recommended.at-inline .at4-recommended-horizontal-logo{text-align:left}#at4-thankyou .at4-recommended.at-inline .at4-recommended-horizontal{text-align:center}.at4-recommended .at-logo{margin:10px 0 0;padding:0;height:25px;overflow:auto;-ms-box-sizing:content-box;-o-box-sizing:content-box;box-sizing:content-box}.at4-recommended.at-inline .at4-recommended-horizontal .at-logo{text-align:left}.at4-recommended .at4-logo-container a.at-sponsored-link{color:#666}.at4-recommended-class .at4-logo-container a:hover,.at4-recommendedbox-outer-container .at4-recommended-recommendedbox .at4-logo-container a:hover{color:#000}
</style>
<style type="text/css">
.at-recommendedjumbo-outer-container{margin:0;padding:0;border:0;background:none;color:#000}.at-recommendedjumbo-footer{position:relative;width:100%;height:510px;overflow:hidden;transition:all .3s ease-in-out}.at-mobile .at-recommendedjumbo-footer{height:250px}.at-recommendedjumbo-footer #bg-link:after{content:'';position:absolute;top:0;left:0;right:0;bottom:0;background:rgba(0,0,0,.75)}.at-recommendedjumbo-footer:hover #bg-link:after{background:rgba(0,0,0,.85)}.at-recommendedjumbo-footer *,.at-recommendedjumbo-footer :after,.at-recommendedjumbo-footer :before{box-sizing:border-box}.at-recommendedjumbo-footer:hover #at-recommendedjumbo-footer-bg{animation:atRecommendedJumboAnimatedBackground 1s ease-in-out 1;animation-fill-mode:forwards}.at-recommendedjumbo-footer #at-recommendedjumbo-top-holder{position:absolute;top:0;padding:0 40px;width:100%}.at-mobile .at-recommendedjumbo-footer #at-recommendedjumbo-top-holder{padding:0 20px}.at-recommendedjumbo-footer .at-recommendedjumbo-footer-inner{position:relative;text-align:center;font-family:helvetica,arial,sans-serif;z-index:2;width:100%}.at-recommendedjumbo-footer #at-recommendedjumbo-label-holder{margin:40px 0 0;max-height:30px}.at-mobile .at-recommendedjumbo-footer #at-recommendedjumbo-label-holder{margin:20px 0 0;max-height:20px}.at-recommendedjumbo-footer #at-recommendedjumbo-label{font-weight:300;font-size:24px;line-height:24px;color:#fff;margin:0}.at-mobile .at-recommendedjumbo-footer #at-recommendedjumbo-label{font-weight:150;font-size:14px;line-height:14px}.at-recommendedjumbo-footer #at-recommendedjumbo-title-holder{margin:20px 0 0;min-height:3pc;max-height:78pt}.at-mobile .at-recommendedjumbo-footer #at-recommendedjumbo-title-holder{margin:10px 0 0;min-height:24px;max-height:54px}.at-recommendedjumbo-footer #at-recommendedjumbo-content-title{font-size:3pc;line-height:52px;font-weight:700;margin:0}.at-mobile .at-recommendedjumbo-footer #at-recommendedjumbo-content-title{font-size:24px;line-height:27px}.at-recommendedjumbo-footer a{text-decoration:none;color:#fff}.at-recommendedjumbo-footer a:visited{color:#fff}.at-recommendedjumbo-footer small{margin:20px 0 0;display:inline-block;height:2pc;line-height:2pc;font-size:14px;color:#ccc;cursor:default}.at-mobile .at-recommendedjumbo-footer small{margin:10px 0 0;height:14px;line-height:14px;font-size:9pt}.at-recommendedjumbo-footer .at-logo-container{position:absolute;bottom:20px;margin:auto;left:0;right:0}.at-mobile .at-recommendedjumbo-footer .at-logo-container{bottom:10px}.at-recommendedjumbo-footer a.at-sponsored-link{color:#ccc}.at-recommendedjumbo-footer div #at-recommendedjumbo-logo-link{padding:2px 0 0 11px;text-decoration:none;line-height:20px;font-family:helvetica,arial,sans-serif;font-size:9px;color:#ccc}@keyframes atRecommendedJumboAnimatedBackground{0%{transform:scale(1,1)}to{transform:scale(1.1,1.1)}}
</style>
<style type="text/css">
.at-resp-share-element{position:relative;padding:0;margin:0;font-size:0;line-height:0}.at-resp-share-element:after,.at-resp-share-element:before{content:" ";display:table}.at-resp-share-element.at-mobile .at4-share-count-container,.at-resp-share-element.at-mobile .at-label{display:none}.at-resp-share-element .at-share-btn{display:inline-block;*display:inline;*zoom:1;margin:0 2px 5px;padding:0;overflow:hidden;line-height:0;text-decoration:none;text-transform:none;color:#fff;cursor:pointer;transition:all .2s ease-in-out;border:0;font-family:helvetica neue,helvetica,arial,sans-serif;background-color:transparent}.at-resp-share-element .at-share-btn::-moz-focus-inner{border:0;padding:0}.at-resp-share-element .at-share-btn:focus,.at-resp-share-element .at-share-btn:hover{transform:translateY(-4px);color:#fff;text-decoration:none}.at-resp-share-element .at-share-btn .at-icon-wrapper{float:left}.at-resp-share-element .at-share-btn.at-share-btn.at-svc-compact:hover{transform:none}.at-resp-share-element .at-share-btn .at-label{font-family:helvetica neue,helvetica,arial,sans-serif;font-size:9pt;padding:0 15px 0 0;margin:0 0 0 5px;height:2pc;line-height:2pc;background:none}.at-resp-share-element .at-icon,.at-resp-share-element .at-label{cursor:pointer}.at-resp-share-element .at4-share-count-container{text-decoration:none;float:right;padding-right:15px;font-size:9pt}.at-mobile .at-resp-share-element .at-label{display:none}.at-resp-share-element.at-mobile .at-share-btn{margin-right:5px}.at-mobile .at-resp-share-element .at-share-btn{padding:5px;margin-right:5px}
</style>
<style type="text/css">
.at-share-tbx-element{position:relative;margin:0;color:#fff;font-size:0}.at-share-tbx-element,.at-share-tbx-element .at-share-btn{font-family:helvetica neue,helvetica,arial,sans-serif;padding:0;line-height:0}.at-share-tbx-element .at-share-btn{cursor:pointer;margin:0 5px 5px 0;display:inline-block;overflow:hidden;border:0;text-decoration:none;text-transform:none;background-color:transparent;color:inherit;transition:all .2s ease-in-out}.at-share-tbx-element .at-share-btn:focus,.at-share-tbx-element .at-share-btn:hover{transform:translateY(-4px);outline-offset:-1px;color:inherit}.at-share-tbx-element .at-share-btn::-moz-focus-inner{border:0;padding:0}.at-share-tbx-element .at-share-btn.at-share-btn.at-svc-compact:hover{transform:none}.at-share-tbx-element .at-icon-wrapper{vertical-align:middle}.at-share-tbx-element .at4-share-count,.at-share-tbx-element .at-label{margin:0 7.5px 0 2.5px;text-decoration:none;vertical-align:middle;display:inline-block;background:none;height:0;font-size:inherit;line-height:inherit;color:inherit}.at-share-tbx-element.at-mobile .at4-share-count,.at-share-tbx-element.at-mobile .at-label{display:none}.at-share-tbx-element .at_native_button{vertical-align:middle}.at-share-tbx-element .addthis_counter.addthis_bubble_style{margin:0 2px;vertical-align:middle;display:inline-block}.at-share-tbx-element .fb_iframe_widget{display:block}.at-share-tbx-element.at-share-tbx-native .at300b{vertical-align:middle}.at-style-responsive .at-share-btn{padding:5px}.at-style-jumbo{display:table}.at-style-jumbo .at4-spacer{height:1px;display:block;visibility:hidden;opacity:0}.at-style-jumbo .at4-count-container{display:table-cell;text-align:center;min-width:200px;vertical-align:middle;border-right:1px solid #ccc;padding-right:20px}.at-style-jumbo .at4-count{font-size:60px;line-height:60px;font-weight:700}.at-style-jumbo .at4-count-title{position:relative;font-size:18px;line-height:18px;bottom:2px}.at-style-jumbo .at-share-btn-elements{display:table-cell;vertical-align:middle;padding-left:20px}.at_flat_counter{cursor:pointer;font-family:helvetica,arial,sans-serif;font-weight:700;text-transform:uppercase;display:inline-block;position:relative;vertical-align:top;height:auto;margin:0 5px;padding:0 6px;left:-1px;background:#ebebeb;color:#32363b;transition:all .2s ease}.at_flat_counter:after{top:30%;left:-4px;content:"";position:absolute;border-width:5px 8px 5px 0;border-style:solid;border-color:transparent #ebebeb transparent transparent;display:block;width:0;height:0;transform:translateY(360deg)}.at_flat_counter:hover{background:#e1e2e2}
</style>
<style type="text/css">
.at4-thankyou-background{top:0;right:0;left:0;bottom:0;-webkit-overflow-scrolling:touch;z-index:9999999;background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAABtJREFUeNpizCuu/sRABGBiIBKMKqSOQoAAAwC8KgJipENhxwAAAABJRU5ErkJggg==);background:hsla(217,6%,46%,.95)}.at4-thankyou-background.at-thankyou-shown{position:fixed}.at4-thankyou-inner{position:absolute;width:100%;top:10%;left:50%;margin-left:-50%;text-align:center}.at4-thankyou-mobile .at4-thankyou-inner{top:5%}.thankyou-description{font-weight:400}.at4-thankyou-background .at4lb-inner{position:relative;width:100%;height:100%}.at4-thankyou-background .at4lb-inner .at4x{position:absolute;top:15px;right:15px;display:block;width:20px;height:20px;padding:20px;margin:0;cursor:pointer;transition:opacity .25s ease-in;opacity:.4;background:url("data:image/gif;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAABx0RVh0U29mdHdhcmUAQWRvYmUgRmlyZXdvcmtzIENTNui8sowAAAAWdEVYdENyZWF0aW9uIFRpbWUAMTEvMTMvMTKswDp5AAAAd0lEQVQ4jb2VQRLAIAgDE///Z3qqY1FAhalHMCsCIkVEAIAkkVgvp2lDBgYAnAyHkWotLccNrEd4A7X2TqIdqLfnWBAdaF5rJdyJfjtPH5GT37CaGhoVq3nOm/XflUuLUto2pY1d+vRKh0Pp+MrAVtDe2JkvYNQ+jVSEEFmOkggAAAAASUVORK5CYII=") no-repeat center center;overflow:hidden;text-indent:-99999em;border:1px solid transparent}.at4-thankyou-background .at4lb-inner .at4x:focus,.at4-thankyou-background .at4lb-inner .at4x:hover{border:1px solid #fff;border-radius:50%;outline:0}.at4-thankyou-background .at4lb-inner #at4-palogo{position:absolute;bottom:10px;display:inline-block;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:11px;cursor:pointer;-webkit-transition:opacity .25s ease-in;moz-transition:opacity .25s ease-in;transition:opacity .25s ease-in;opacity:.5;z-index:100020;color:#fff;padding:2px 0 0 13px}.at4-thankyou-background .at4lb-inner #at4-palogo .at-branding-addthis,.at4-thankyou-background .at4lb-inner #at4-palogo .at-branding-info{color:#fff}.at4-thankyou-background .at4lb-inner #at4-palogo:hover,.at4-thankyou-background.ats-dark .at4lb-inner a#at4-palogo:hover{text-decoration:none;color:#fff;opacity:1}.at4-thankyou-background.ats-dark{background-image:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAABtJREFUeNpiZGBgeMZABGBiIBKMKqSOQoAAAwB+cQD6hqlbCwAAAABJRU5ErkJggg==");background:rgba(0,0,0,.85)}.at4-thankyou-background .thankyou-title{color:#fff;font-size:38.5px;margin:10px 20px;line-height:38.5px;font-family:helvetica neue,helvetica,arial,sans-serif;font-weight:300}.at4-thankyou-background.ats-dark .thankyou-description,.at4-thankyou-background.ats-dark .thankyou-title{color:#fff}.at4-thankyou-background .thankyou-description{color:#fff;font-size:18px;margin:10px 0;line-height:24px;padding:0;font-family:helvetica neue,helvetica,arial,sans-serif;font-weight:300}.at4-thankyou-background .at4-thanks-icons{padding-top:10px}.at4-thankyou-mobile *{-webkit-overflow-scrolling:touch}#at4-thankyou .at4-recommended-recommendedbox .at-logo{display:none}.at4-thankyou .at-h3{height:49px;line-height:49px;margin:0 50px 0 20px;padding:1px 0 0;font-family:helvetica neue,helvetica,arial,sans-serif;font-size:1pc;font-weight:700;color:#fff;text-shadow:0 1px #000}.at4-thanks{padding-top:50px;text-align:center}.at4-thanks label{display:block;margin:0 0 15px;font-size:1pc;line-height:1pc}.at4-thanks .at4-h2{background:none;border:none;margin:0 0 
10px;padding:0;font-family:helvetica neue,helvetica,arial,sans-serif;font-size:28px;font-weight:300;color:#000}.at4-thanks .at4-thanks-icons{position:relative;height:2pc}.at4-thanks .at4-thanks-icons .at-thankyou-label{display:block;padding-bottom:10px;font-size:14px;color:#666}.at4-thankyou-layer .at-follow .at-icon-wrapper{width:2pc;height:2pc}
</style>
<style type="text/css">
.at4-recommended-toaster{position:fixed;top:auto;bottom:0;right:0;z-index:100021}.at4-recommended-toaster.ats-light{border:1px solid #c5c5c5;background:#fff}.at4-recommended-toaster.ats-gray{border:1px solid #c5c5c5;background:#f2f2f2}.at4-recommended-toaster.ats-dark{background:#262b30;color:#fff}.at4-recommended-toaster .at4-recommended-container{padding-top:0;margin:0}.at4-recommended.at4-recommended-toaster div.at-recommended-label{line-height:1pc;font-size:1pc;text-align:left;padding:20px 0 0 20px}.at4-toaster-outer .at4-recommended .at4-recommended-item .at4-recommended-item-caption .at-h4{font-size:11px;line-height:11px;margin:10px 0 6px;height:30px}.at4-recommended.at4-recommended-toaster div.at-recommended-label.ats-gray,.at4-recommended.at4-recommended-toaster div.at-recommended-label.ats-light{color:#464646}.at4-recommended.at4-recommended-toaster div.at-recommended-label.ats-dark{color:#fff}.at4-toaster-close-control{position:absolute;top:0;right:0;display:block;width:20px;height:20px;line-height:20px;margin:5px 5px 0 0;padding:0;text-indent:-9999em}.at4-toaster-open-control{position:fixed;right:0;bottom:0;z-index:100020}.at4-toaster-outer .at4-recommended-item{width:90pt;border:0;margin:9px 10px 0}.at4-toaster-outer .at4-recommended-item:first-child{margin-left:20px}.at4-toaster-outer .at4-recommended-item:last-child{margin-right:20px}.at4-toaster-outer .at4-recommended-item .at4-recommended-item-img{max-height:90pt;max-width:90pt}.at4-toaster-outer .at4-recommended-item .at4-recommended-item-img img{height:90pt;width:90pt}.at4-toaster-outer .at4-recommended-item .at4-recommended-item-caption{height:30px;padding:0;margin:0;height:initial}.at4-toaster-outer .ats-dark .at4-recommended-item .at4-recommended-item-caption{background:#262b30}.at4-toaster-outer .at4-recommended .at4-recommended-item .at4-recommended-item-caption small{width:auto;line-height:14px;margin:0}.at4-toaster-outer .at4-recommended.ats-dark .at4-recommended-item .at4-recommended-item-caption small{color:#fff}.at4-recommended-toaster .at-logo{margin:0 0 3px 20px;text-align:left}.at4-recommended-toaster .at-logo .at4-logo-container.at-sponsored-logo{position:relative}.at4-toaster-outer .at4-recommended-item .sponsored-label{text-align:right;font-size:10px;color:#666;float:right;position:fixed;bottom:6px;right:20px;top:initial;z-index:99999}
</style>
<style type="text/css">
.at4-whatsnext{position:fixed;bottom:0!important;right:0;background:#fff;border:1px solid #c5c5c5;margin:-1px;width:390px;height:90pt;overflow:hidden;font-size:9pt;font-weight:400;color:#000;z-index:1800000000}.at4-whatsnext a{color:#666}.at4-whatsnext .at-whatsnext-content{height:90pt;position:relative}.at4-whatsnext .at-whatsnext-content .at-branding{position:absolute;bottom:15px;right:10px;padding-left:9px;text-decoration:none;line-height:10px;font-family:helvetica,arial,sans-serif;font-size:10px;color:#666}.at4-whatsnext .at-whatsnext-content .at-whatsnext-content-inner{position:absolute;top:15px;right:20px;bottom:15px;left:140px;text-align:left;height:105px}.at4-whatsnext .at-whatsnext-content-inner a{display:inline-block}.at4-whatsnext .at-whatsnext-content-inner div.at-h6{text-align:left;margin:0;padding:0 0 3px;font-size:11px;color:#666;cursor:default}.at4-whatsnext .at-whatsnext-content .at-h3{text-align:left;margin:5px 0;padding:0;line-height:1.2em;font-weight:400;font-size:14px;height:3pc}.at4-whatsnext .at-whatsnext-content-inner a:link,.at4-whatsnext .at-whatsnext-content-inner a:visited{text-decoration:none;font-weight:400;color:#464646}.at4-whatsnext .at-whatsnext-content-inner a:hover{color:#000}.at4-whatsnext .at-whatsnext-content-inner small{position:absolute;bottom:15px;line-height:10px;font-size:11px;color:#666;cursor:default;text-align:left}.at4-whatsnext .at-whatsnext-content .at-whatsnext-content-img{position:absolute;top:0;left:0;width:90pt;height:90pt;overflow:hidden}.at4-whatsnext .at-whatsnext-content .at-whatsnext-content-img img{position:absolute;top:0;left:0;max-height:none;max-width:none}.at4-whatsnext .at-whatsnext-close-control{position:absolute;top:0;right:0;display:block;width:20px;height:20px;line-height:20px;margin:0 5px 0 0;padding:0;text-indent:-9999em}.at-whatsnext-open-control{position:fixed;right:0;bottom:0;z-index:100020}.at4-whatsnext.ats-dark{background:#262b30}.at4-whatsnext.ats-dark .at-whatsnext-content .at-h3,.at4-whatsnext.ats-dark .at-whatsnext-content a.at4-logo:hover,.at4-whatsnext.ats-dark .at-whatsnext-content-inner a:link,.at4-whatsnext.ats-dark .at-whatsnext-content-inner a:visited{color:#fff}.at4-whatsnext.ats-light{background:#fff}.at4-whatsnext.ats-gray{background:#f2f2f2}.at4-whatsnext.at-whatsnext-nophoto{width:270px}.at4-whatsnext.at-whatsnext-nophoto .at-whatsnext-content-img{display:none}.at4-whatsnext.at-whatsnext-nophoto .at-whatsnext-content .at-whatsnext-content-inner{top:15px;right:0;left:20px}.at4-whatsnext.at-whatsnext-nophoto .at-whatsnext-content .at-whatsnext-content-inner.addthis_32x32_style{top:0;right:0;left:0;padding:45px 20px 0;font-size:20px}.at4-whatsnext.at-whatsnext-nophoto .at-whatsnext-content .at-whatsnext-content-inner .at4-icon,.at4-whatsnext.at-whatsnext-nophoto .at-whatsnext-content .at-whatsnext-content-inner .at4-icon-fw,.at4-whatsnext.at-whatsnext-nophoto .at-whatsnext-content .at-whatsnext-content-inner .whatsnext-msg{vertical-align:middle}.at-whatsnext-img,.at-whatsnext-img-lnk{position:absolute;left:0}
</style>
<style type="text/css">
.at4-whatsnextmobile{position:fixed;bottom:0;right:0;left:0;background:#fff;z-index:9999998;height:170px;font-size:28px}.at4-whatsnextmobile .col-2{height:100%;font-size:1em}.at4-whatsnextmobile .col-2:first-child{max-width:200px;display:inline-block;float:left}.at4-whatsnextmobile .col-2:last-child{position:absolute;left:200px;right:50px;top:0;bottom:0;display:inline-block}.at4-whatsnextmobile .at-whatsnext-content-inner{font-size:1em}.at4-whatsnextmobile .at-whatsnext-content-img img{height:100%;width:100%}.at4-whatsnextmobile .at-close-control{font-size:1em;position:absolute;top:0;right:0;width:50px;height:50px}.at4-whatsnextmobile .at-close-control button{width:100%;height:100%;font-size:1em;font-weight:400;text-decoration:none;opacity:.5;padding:0;cursor:pointer;background:0 0;border:0;-webkit-appearance:none}.at4-whatsnextmobile .at-h3,.at4-whatsnextmobile .at-h6{font-size:1em;margin:0;color:#a1a1a1;margin-left:2.5%;margin-top:25px}.at4-whatsnextmobile .at-h3{font-size:1em;line-height:1em;font-weight:500;height:50%}.at4-whatsnextmobile .at-h3 a{font-size:1em;text-decoration:none}.at4-whatsnextmobile .at-h6{font-size:.8em;line-height:.8em;font-weight:500}.at4-whatsnextmobile .footer{position:absolute;bottom:2px;left:200px;right:0;padding-left:2.5%;font-size:1em;line-height:.6em}.at4-whatsnextmobile .footer small{font-size:.6em;color:#a1a1a1}.at4-whatsnextmobile .footer small:first-child{margin-right:5%;float:left}.at4-whatsnextmobile .footer small:last-child{margin-right:2.5%;float:right}.at4-whatsnextmobile .at-whatsnext-content{height:100%}.at4-whatsnextmobile.ats-dark{background:#262b30;color:#fff}.at4-whatsnextmobile .at-close-control button{color:#bfbfbf}.at4-whatsnextmobile.ats-dark a:link,.at4-whatsnextmobile.ats-dark a:visited{color:#fff}.at4-whatsnextmobile.ats-gray{background:#f2f2f2;color:#262b30}.at4-whatsnextmobile.ats-light{background:#fff;color:#262b30}.at4-whatsnextmobile.ats-dark .footer a:link,.at4-whatsnextmobile.ats-dark .footer a:visited,.at4-whatsnextmobile.ats-gray .footer a:link,.at4-whatsnextmobile.ats-gray .footer a:visited,.at4-whatsnextmobile.ats-light .footer a:link,.at4-whatsnextmobile.ats-light .footer a:visited{color:#a1a1a1}.at4-whatsnextmobile.ats-gray a:link,.at4-whatsnextmobile.ats-gray a:visited,.at4-whatsnextmobile.ats-light a:link,.at4-whatsnextmobile.ats-light a:visited{color:#262b30}@media only screen and (min-device-width:320px) and (max-device-width:480px){.at4-whatsnextmobile{height:85px;font-size:14px}.at4-whatsnextmobile .col-2:first-child{width:75pt}.at4-whatsnextmobile .col-2:last-child{right:25px;left:75pt}.at4-whatsnextmobile .footer{left:75pt}.at4-whatsnextmobile .at-close-control{width:25px;height:25px}.at4-whatsnextmobile .at-h3,.at4-whatsnextmobile .at-h6{margin-top:12.5px}}
</style>
<style type="text/css">
.at-custom-mobile-bar{left:0;right:0;width:100%;height:56px;position:fixed;text-align:center;z-index:100020;background:#fff;overflow:hidden;box-shadow:0 0 10px 0 rgba(0,0,0,.2);font:initial;line-height:normal;top:auto;bottom:0}.at-custom-mobile-bar.at-custom-mobile-bar-zindex-hide{z-index:-1!important}.at-custom-mobile-bar.atss-top{top:0;bottom:auto}.at-custom-mobile-bar.atss-bottom{top:auto;bottom:0}.at-custom-mobile-bar .at-custom-mobile-bar-btns{display:inline-block;text-align:center}.at-custom-mobile-bar .at-custom-mobile-bar-counter,.at-custom-mobile-bar .at-share-btn{margin-top:4px}.at-custom-mobile-bar .at-share-btn{display:inline-block;text-decoration:none;transition:none;box-sizing:content-box}.at-custom-mobile-bar .at-custom-mobile-bar-counter{font-family:Helvetica neue,arial;vertical-align:top;margin-left:4px;margin-right:4px;display:inline-block}.at-custom-mobile-bar .at-custom-mobile-bar-count{font-size:26px;line-height:1.25em;color:#222}.at-custom-mobile-bar .at-custom-mobile-bar-text{font-size:9pt;line-height:1.25em;color:#888;letter-spacing:1px}.at-custom-mobile-bar .at-icon-wrapper{text-align:center;height:3pc;width:3pc;margin:0 4px}.at-custom-mobile-bar .at-icon{vertical-align:top;margin:8px;width:2pc;height:2pc}.at-custom-mobile-bar.at-shfs-medium{height:3pc}.at-custom-mobile-bar.at-shfs-medium .at-custom-mobile-bar-counter{margin-top:6px}.at-custom-mobile-bar.at-shfs-medium .at-custom-mobile-bar-count{font-size:18px}.at-custom-mobile-bar.at-shfs-medium .at-custom-mobile-bar-text{font-size:10px}.at-custom-mobile-bar.at-shfs-medium .at-icon-wrapper{height:40px;width:40px}.at-custom-mobile-bar.at-shfs-medium .at-icon{margin:6px;width:28px;height:28px}.at-custom-mobile-bar.at-shfs-small{height:40px}.at-custom-mobile-bar.at-shfs-small .at-custom-mobile-bar-counter{margin-top:3px}.at-custom-mobile-bar.at-shfs-small .at-custom-mobile-bar-count{font-size:1pc}.at-custom-mobile-bar.at-shfs-small .at-custom-mobile-bar-text{font-size:10px}.at-custom-mobile-bar.at-shfs-small .at-icon-wrapper{height:2pc;width:2pc}.at-custom-mobile-bar.at-shfs-small .at-icon{margin:4px;width:24px;height:24px}
</style>
</head>
<body id="news" style="">
<div id="main_container">
<div id="site_body">
<div id="page">
<div class="react_grid_list" data-react-class="GridListPage" data-react-props='{"left_column":false,"class_name":"","default_view":"list_view","model":"news_items","view_toggle":false,"search":"true","list_item":"News","title":"News","categories":["19,165,184,204"],"order":"publish_date desc,created_at desc","no_items_text":"There are no items matching these criteria.","per_page":null,"filters":"[ [ \"date\", [ [ \"2018\", \"2018\" ], [ \"2017\", \"2017\" ], [ \"2016\", \"2016\" ], [ \"2015\", \"2015\" ], [ \"2014\", \"2014\" ], [ \"2013\", \"2013\" ], [ \"2012\", \"2012\" ], [ \"2011\", \"2011\" ], [ \"2010\", \"2010\" ], [ \"2009\", \"2009\" ], [ \"2008\", \"2008\" ], [ \"2007\", \"2007\" ], [ \"2006\", \"2006\" ], [ \"2005\", \"2005\" ], [ \"2004\", \"2004\" ], [ \"2003\", \"2003\" ], [ \"2002\", \"2002\" ], [ \"2001\", \"2001\" ], [ \"2000\", \"2000\" ] ], [ \"Latest\", \"\" ], false ], [ \"categories\", [ [ \"Feature Stories\", 165 ], [ \"Press Releases\", 19 ], [ \"Spotlights\", 184 ], [ \"Status Reports\", 204 ] ], [ \"All Categories\", \"\" ], false ] ]","conditions":null,"scope_in_title":true,"options":{"blank_scope":"Latest"},"results_in_title":false}'>
<section class="grid_gallery module list_view" data-reactroot="">
<div class="grid_layout">
<header class="gallery_header">
<h2 class="module_title">
News
</h2>
</header>
<ul class="item_list ">
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8354/storm-chasers-on-mars-searching-for-dusty-secrets/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
Scientists with NASA's Mars orbiters have been waiting years for an event like the current Mars global dust storm.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="" src="/system/news_items/list_view_images/8354_Mars_Dust_Storm_PIA22487_thumb.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
'Storm Chasers' on Mars Searching for Dusty Secrets
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
July 19, 2018
</div>
<div class="content_title">
<a href="/news/8354/storm-chasers-on-mars-searching-for-dusty-secrets/" target="_self">
'Storm Chasers' on Mars Searching for Dusty Secrets
</a>
</div>
<div class="article_teaser_body">
Scientists with NASA's Mars orbiters have been waiting years for an event like the current Mars global dust storm.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8353/nasa-mars-mission-adds-southern-california-dates/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
Looking for summer fun? Southern California families have their choice of the beach, movies, museums -- and even NASA's next mission to Mars.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="The Mars InSight Roadshow van at San Francisco's Exploratorium in April 2018. The Roadshow van will stop at different California venues to share public exhibits and lectures about NASA's InSight mission. " src="/system/news_items/list_view_images/8353_list_image.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA Mars Mission Adds Southern California Dates
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 26, 2018
</div>
<div class="content_title">
<a href="/news/8353/nasa-mars-mission-adds-southern-california-dates/" target="_self">
NASA Mars Mission Adds Southern California Dates
</a>
</div>
<div class="article_teaser_body">
Looking for summer fun? Southern California families have their choice of the beach, movies, museums -- and even NASA's next mission to Mars.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8351/curiosity-captures-photos-of-thickening-dust/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
A storm of tiny dust particles has engulfed much of Mars over the last two weeks.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="A self-portrait by NASA's Curiosity rover taken on Sol 2082 (June 15, 2018). A Martian dust storm has reduced sunlight and visibility at the rover's location in Gale Crater. " src="/system/news_items/list_view_images/8351_PIA22486-320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Curiosity Captures Photos of Thickening Dust
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 20, 2018
</div>
<div class="content_title">
<a href="/news/8351/curiosity-captures-photos-of-thickening-dust/" target="_self">
Curiosity Captures Photos of Thickening Dust
</a>
</div>
<div class="article_teaser_body">
A storm of tiny dust particles has engulfed much of Mars over the last two weeks.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8348/opportunity-hunkers-down-during-dust-storm/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
As of Tuesday morning, June 19, the Martian dust storm had grown in size and was officially a "planet-encircling" (or "global") dust event.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="This series of images shows simulated views of a darkening Martian sky blotting out the Sun from NASA’s Opportunity rover’s point of view, with the right side simulating Opportunity’s current view in the global dust storm (June 2018). " src="/system/news_items/list_view_images/8348_PIA22521-320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Opportunity Hunkers Down During Dust Storm
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 20, 2018
</div>
<div class="content_title">
<a href="/news/8348/opportunity-hunkers-down-during-dust-storm/" target="_self">
Opportunity Hunkers Down During Dust Storm
</a>
</div>
<div class="article_teaser_body">
As of Tuesday morning, June 19, the Martian dust storm had grown in size and was officially a "planet-encircling" (or "global") dust event.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8350/nasa-encounters-the-perfect-storm-for-science/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
One of the most intense Martian dust storms ever observed is being studied by a record number of NASA spacecraft.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="This set of images from NASA’s Mars Reconnaissance Orbiter shows a fierce dust storm is kicking up on Mars, with rovers on the surface indicated as icons." src="/system/news_items/list_view_images/8350_marci-dgm-v04-for-home-page-5-br.gif"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA Encounters the Perfect Storm for Science
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 13, 2018
</div>
<div class="content_title">
<a href="/news/8350/nasa-encounters-the-perfect-storm-for-science/" target="_self">
NASA Encounters the Perfect Storm for Science
</a>
</div>
<div class="article_teaser_body">
One of the most intense Martian dust storms ever observed is being studied by a record number of NASA spacecraft.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8349/media-telecon-about-mars-dust-storm-opportunity/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA will host a media telecon on Wednesday, June 13, about a massive Martian dust storm affecting the Opportunity rover, and how various missions can obtain unique science.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="Mars, as seen by Mars Global Surveyor in 2003." src="/system/news_items/list_view_images/8349_PIA04591-br.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Media Telecon About Mars Dust Storm, Opportunity
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 12, 2018
</div>
<div class="content_title">
<a href="/news/8349/media-telecon-about-mars-dust-storm-opportunity/" target="_self">
Media Telecon About Mars Dust Storm, Opportunity
</a>
</div>
<div class="article_teaser_body">
NASA will host a media telecon on Wednesday, June 13, about a massive Martian dust storm affecting the Opportunity rover, and how various missions can obtain unique science.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8347/nasa-finds-ancient-organic-material-mysterious-methane-on-mars/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA’s Curiosity rover has found evidence on Mars with implications for NASA’s search for life.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="NASA's Curiosity rover has discovered ancient organic molecules on Mars, embedded within sedimentary rocks that are billions of years old." src="/system/news_items/list_view_images/8347_curiosity_methane-320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA Finds Ancient Organic Material, Mysterious Methane on Mars
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 7, 2018
</div>
<div class="content_title">
<a href="/news/8347/nasa-finds-ancient-organic-material-mysterious-methane-on-mars/" target="_self">
NASA Finds Ancient Organic Material, Mysterious Methane on Mars
</a>
</div>
<div class="article_teaser_body">
NASA’s Curiosity rover has found evidence on Mars with implications for NASA’s search for life.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8346/nasa-to-host-live-discussion-on-new-mars-science-results/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
Questions are welcome during a live discussion at 11 a.m. PDT (2 p.m. EDT) Thursday, June 7, on new science results from NASA's Mars Curiosity rover.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="Selfie of the Curiosity rover" src="/system/news_items/list_view_images/8346_PIA22207-br.png"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA to Host Live Discussion on New Mars Science Results
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 6, 2018
</div>
<div class="content_title">
<a href="/news/8346/nasa-to-host-live-discussion-on-new-mars-science-results/" target="_self">
NASA to Host Live Discussion on New Mars Science Results
</a>
</div>
<div class="article_teaser_body">
Questions are welcome during a live discussion at 11 a.m. PDT (2 p.m. EDT) Thursday, June 7, on new science results from NASA's Mars Curiosity rover.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8345/mars-curiositys-labs-are-back-in-action/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA's Curiosity rover is analyzing drilled samples on Mars in one of its onboard labs for the first time in more than a year.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="The drill bit of NASA's Curiosity Mars rover over one of the sample inlets on the rover's deck. The inlets lead to Curiosity's onboard laboratories. This image was taken on Sol 2068 by the rover's Mast Camera (Mastcam). It has been white balanced and contrast-enhanced. " src="/system/news_items/list_view_images/8345_PIA22327-br.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Mars Curiosity's Labs Are Back in Action
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 4, 2018
</div>
<div class="content_title">
<a href="/news/8345/mars-curiositys-labs-are-back-in-action/" target="_self">
Mars Curiosity's Labs Are Back in Action
</a>
</div>
<div class="article_teaser_body">
NASA's Curiosity rover is analyzing drilled samples on Mars in one of its onboard labs for the first time in more than a year.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8344/nasa-cubesats-steer-toward-mars/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA has achieved a first for the class of tiny spacecraft known as CubeSats, which are opening new access to space.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="An artist's concept of one of NASA's MarCO CubeSats. " src="/system/news_items/list_view_images/8344_marco_tcm_20180601-320x240.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA CubeSats Steer Toward Mars
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 1, 2018
</div>
<div class="content_title">
<a href="/news/8344/nasa-cubesats-steer-toward-mars/" target="_self">
NASA CubeSats Steer Toward Mars
</a>
</div>
<div class="article_teaser_body">
NASA has achieved a first for the class of tiny spacecraft known as CubeSats, which are opening new access to space.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8343/scientists-shrink-chemistry-lab-to-seek-evidence-of-life-on-mars/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
An international team of scientists has created a tiny chemistry lab for a rover that will drill beneath the Martian surface looking for signs of past or present life.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="" src="/system/news_items/list_view_images/8343_wilkinson-moma_320x240.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Scientists Shrink Chemistry Lab to Seek Evidence of Life on Mars
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
June 1, 2018
</div>
<div class="content_title">
<a href="/news/8343/scientists-shrink-chemistry-lab-to-seek-evidence-of-life-on-mars/" target="_self">
Scientists Shrink Chemistry Lab to Seek Evidence of Life on Mars
</a>
</div>
<div class="article_teaser_body">
An international team of scientists has created a tiny chemistry lab for a rover that will drill beneath the Martian surface looking for signs of past or present life.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8342/insight-steers-toward-mars/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
The spacecraft has completed its first trajectory correction maneuver.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="NASA's InSight spacecraft is currently cruising to Mars. Yesterday, it performed its first course correction guiding it to the Red Planet." src="/system/news_items/list_view_images/8342_insight20180523-320x240o.gif"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
InSight Steers Toward Mars
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
May 23, 2018
</div>
<div class="content_title">
<a href="/news/8342/insight-steers-toward-mars/" target="_self">
InSight Steers Toward Mars
</a>
</div>
<div class="article_teaser_body">
The spacecraft has completed its first trajectory correction maneuver.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8341/drilling-success-curiosity-is-collecting-mars-rocks/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
Engineers will now test delivering samples to instruments inside NASA's Curiosity Mars rover.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="NASA's Curiosity rover successfully drilled a 2-inch-deep hole in a target called "Duluth" on May 20. It was the first rock sample captured by the drill since October 2016. This image was taken by Curiosity's Mast Camera (Mastcam) on Sol 2057. " src="/system/news_items/list_view_images/8341_PIA22325-320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Drilling Success: Curiosity is Collecting Mars Rocks
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
May 23, 2018
</div>
<div class="content_title">
<a href="/news/8341/drilling-success-curiosity-is-collecting-mars-rocks/" target="_self">
Drilling Success: Curiosity is Collecting Mars Rocks
</a>
</div>
<div class="article_teaser_body">
Engineers will now test delivering samples to instruments inside NASA's Curiosity Mars rover.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8339/nasas-curiosity-rover-aims-to-get-its-rhythm-back/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
Rover engineers at JPL will try to restore percussive drilling on Mars this week, part of a larger series of tests that will last through summer.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="A test of a new percussive drilling technique at NASA's JPL. Later this week, NASA's Curiosity rover will test percussive drilling on Mars for the first time since December 2016." src="/system/news_items/list_view_images/8339_Curiosity_drill_PIA22324-th.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA's Curiosity Rover Aims to Get Its Rhythm Back
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
May 17, 2018
</div>
<div class="content_title">
<a href="/news/8339/nasas-curiosity-rover-aims-to-get-its-rhythm-back/" target="_self">
NASA's Curiosity Rover Aims to Get Its Rhythm Back
</a>
</div>
<div class="article_teaser_body">
Rover engineers at JPL will try to restore percussive drilling on Mars this week, part of a larger series of tests that will last through summer.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8338/a-pale-blue-dot-as-seen-by-a-cubesat/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
One of NASA's MarCO CubeSats has taken its first image.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="This image taken by NASA's Mars Cube One (MarCO) CubeSats contains a photograph of Earth and Mars at a distance. " src="/system/news_items/list_view_images/8338_PIA22323_main-320x240.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
A Pale Blue Dot, As Seen by a CubeSat
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
May 15, 2018
</div>
<div class="content_title">
<a href="/news/8338/a-pale-blue-dot-as-seen-by-a-cubesat/" target="_self">
A Pale Blue Dot, As Seen by a CubeSat
</a>
</div>
<div class="article_teaser_body">
One of NASA's MarCO CubeSats has taken its first image.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8335/mars-helicopter-to-fly-on-nasas-next-red-planet-rover-mission/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA is adding a Mars helicopter to the agency’s next mission to the Red Planet, Mars 2020.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="" src="/system/news_items/list_view_images/8335_helicopter20180511-16-th.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Mars Helicopter to Fly on NASA’s Next Red Planet Rover Mission
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
May 11, 2018
</div>
<div class="content_title">
<a href="/news/8335/mars-helicopter-to-fly-on-nasas-next-red-planet-rover-mission/" target="_self">
Mars Helicopter to Fly on NASA’s Next Red Planet Rover Mission
</a>
</div>
<div class="article_teaser_body">
NASA is adding a Mars helicopter to the agency’s next mission to the Red Planet, Mars 2020.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8334/nasas-first-deep-space-cubesats-say-polo/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
MarCO is a pair of tiny spacecraft that launched with NASA's InSight lander today.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="An artist's rendering of the twin Mars Cube One (MarCO) spacecraft on their cruise to Mars. " src="/system/news_items/list_view_images/8334_PIA22314-th.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA's First Deep-Space CubeSats Say: 'Polo!'
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
May 5, 2018
</div>
<div class="content_title">
<a href="/news/8334/nasas-first-deep-space-cubesats-say-polo/" target="_self">
NASA's First Deep-Space CubeSats Say: 'Polo!'
</a>
</div>
<div class="article_teaser_body">
MarCO is a pair of tiny spacecraft that launched with NASA's InSight lander today.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8333/nasa-ula-launch-mission-to-study-how-mars-was-made/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA’s Mars InSight mission launched this morning on a 300-million-mile trip to Mars to study for the first time what lies deep beneath the surface of the Red Planet.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt=' The NASA InSight spacecraft launches onboard a United Launch Alliance Atlas-V rocket, Saturday, May 5, 2018, from Vandenberg Air Force Base in California. InSight, short for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, is a Mars lander designed to study the "inner space" of Mars: its crust, mantle, and core.' src="/system/news_items/list_view_images/8333_41864015862_4eb1b8de31_o-th.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA, ULA Launch Mission to Study How Mars Was Made
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
May 5, 2018
</div>
<div class="content_title">
<a href="/news/8333/nasa-ula-launch-mission-to-study-how-mars-was-made/" target="_self">
NASA, ULA Launch Mission to Study How Mars Was Made
</a>
</div>
<div class="article_teaser_body">
NASA’s Mars InSight mission launched this morning on a 300-million-mile trip to Mars to study for the first time what lies deep beneath the surface of the Red Planet.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8332/nasas-first-mission-to-study-the-interior-of-mars-awaits-may-5-launch/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
All systems are go for NASA’s next launch to the Red Planet.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="An artist's impression of the InSight lander on Mars. Credit: NASA/JPL-Caltech" src="/system/news_items/list_view_images/8332_21438_PIA22226_320x240.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA’s First Mission to Study the Interior of Mars Awaits May 5 Launch
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
May 3, 2018
</div>
<div class="content_title">
<a href="/news/8332/nasas-first-mission-to-study-the-interior-of-mars-awaits-may-5-launch/" target="_self">
NASA’s First Mission to Study the Interior of Mars Awaits May 5 Launch
</a>
</div>
<div class="article_teaser_body">
All systems are go for NASA’s next launch to the Red Planet.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8331/vice-president-pence-visits-jpl-previews-nasas-next-mars-mission-launch/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
A week before NASA's next Mars launch, Vice President Mike Pence toured the birthplace of the InSight Mars Lander and numerous other past, present and future space missions.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="U.S. Vice President Mike Pence, right, is presented a plaque by JPL Director Michael Watkins during a tour of NASA's Jet Propulsion Laboratory, Saturday, April 28, 2018 in Pasadena, California. The plaque presents a view of the Mars Science Laboratory rover Curiosity on the surface of Mars. Photo Credit: (NASA/Bill Ingalls)" src="/system/news_items/list_view_images/8331_vicepresidentpencejplvisit-th.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Vice President Pence Visits JPL, Previews NASA’s Next Mars Mission Launch
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
April 30, 2018
</div>
<div class="content_title">
<a href="/news/8331/vice-president-pence-visits-jpl-previews-nasas-next-mars-mission-launch/" target="_self">
Vice President Pence Visits JPL, Previews NASA’s Next Mars Mission Launch
</a>
</div>
<div class="article_teaser_body">
A week before NASA's next Mars launch, Vice President Mike Pence toured the birthplace of the InSight Mars Lander and numerous other past, present and future space missions.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8330/nasa-sets-sights-on-may-5-launch-of-insight-to-mars/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA’s next mission to Mars, InSight, is scheduled to launch Saturday, May 5, on a first-ever mission to study the heart of the Red Planet.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="An artist's rendering of a rocket launching with the InSight spacecraft in May." src="/system/news_items/list_view_images/8330_insight20180329-th.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA Sets Sights on May 5 Launch of InSight to Mars
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
April 27, 2018
</div>
<div class="content_title">
<a href="/news/8330/nasa-sets-sights-on-may-5-launch-of-insight-to-mars/" target="_self">
NASA Sets Sights on May 5 Launch of InSight to Mars
</a>
</div>
<div class="article_teaser_body">
NASA’s next mission to Mars, InSight, is scheduled to launch Saturday, May 5, on a first-ever mission to study the heart of the Red Planet.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8329/results-of-heat-shield-testing/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
A post-test inspection of the composite structure for a heat shield to be used on the Mars 2020 mission revealed that a fracture occurred during structural testing.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="Artist's concept of the Mars Science Laboratory entry into the Martian atmosphere." src="/system/news_items/list_view_images/8329_PIA14835_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Results of Heat Shield Testing
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
April 26, 2018
</div>
<div class="content_title">
<a href="/news/8329/results-of-heat-shield-testing/" target="_self">
Results of Heat Shield Testing
</a>
</div>
<div class="article_teaser_body">
A post-test inspection of the composite structure for a heat shield to be used on the Mars 2020 mission revealed that a fracture occurred during structural testing.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8328/nasa-engineers-dream-big-with-small-spacecraft/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
The first CubeSat mission to deep space will launch in May.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="" src="/system/news_items/list_view_images/8328_PIA22314_320.png"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA Engineers Dream Big with Small Spacecraft
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
April 19, 2018
</div>
<div class="content_title">
<a href="/news/8328/nasa-engineers-dream-big-with-small-spacecraft/" target="_self">
NASA Engineers Dream Big with Small Spacecraft
</a>
</div>
<div class="article_teaser_body">
The first CubeSat mission to deep space will launch in May.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8327/bound-for-mars-countdown-to-first-interplanetary-launch-from-california/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
On May 5, millions of Californians may witness the historic first interplanetary launch from America’s West Coast.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="" src="/system/news_items/list_view_images/8327_InterplanetaryLaunch-320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Bound for Mars: Countdown to First Interplanetary Launch from California
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
April 6, 2018
</div>
<div class="content_title">
<a href="/news/8327/bound-for-mars-countdown-to-first-interplanetary-launch-from-california/" target="_self">
Bound for Mars: Countdown to First Interplanetary Launch from California
</a>
</div>
<div class="article_teaser_body">
On May 5, millions of Californians may witness the historic first interplanetary launch from America’s West Coast.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8326/nasa-invests-in-visionary-technology/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA is investing in technology concepts, including several from JPL, that may one day be used for future space exploration missions.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="NASA is investing in technology concepts, including several from JPL, that may one day be used for future space exploration missions." src="/system/news_items/list_view_images/8326_niac320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA Invests in Visionary Technology
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 30, 2018
</div>
<div class="content_title">
<a href="/news/8326/nasa-invests-in-visionary-technology/" target="_self">
NASA Invests in Visionary Technology
</a>
</div>
<div class="article_teaser_body">
NASA is investing in technology concepts, including several from JPL, that may one day be used for future space exploration missions.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8325/nasa-is-ready-to-study-the-heart-of-mars/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA is about to go on a journey to study the center of Mars.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="An artist's rendering of the InSight spacecraft's cruise stage entering the Martian atmosphere." src="/system/news_items/list_view_images/8325_insight20180329b_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA is Ready to Study the Heart of Mars
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 29, 2018
</div>
<div class="content_title">
<a href="/news/8325/nasa-is-ready-to-study-the-heart-of-mars/" target="_self">
NASA is Ready to Study the Heart of Mars
</a>
</div>
<div class="article_teaser_body">
NASA is about to go on a journey to study the center of Mars.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8324/marsquakes-could-shake-up-planetary-science/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
InSight, the next mission to the Red Planet, will use seismology to see into the depths of Mars.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="Inner Structure of Mars" src="/system/news_items/list_view_images/8324_PIA16078-320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
‘Marsquakes’ Could Shake Up Planetary Science
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 28, 2018
</div>
<div class="content_title">
<a href="/news/8324/marsquakes-could-shake-up-planetary-science/" target="_self">
‘Marsquakes’ Could Shake Up Planetary Science
</a>
</div>
<div class="article_teaser_body">
InSight, the next mission to the Red Planet, will use seismology to see into the depths of Mars.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8323/mars-curiosity-celebrates-sol-2000/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA's Mars Curiosity rover just hit a new milestone: its two-thousandth Martian day on the Red Planet. An image mosaic taken recently offers a preview of what comes next.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="rugged landscape of hills " src="/system/news_items/list_view_images/8323_PIA22313_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Mars Curiosity Celebrates Sol 2,000
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 22, 2018
</div>
<div class="content_title">
<a href="/news/8323/mars-curiosity-celebrates-sol-2000/" target="_self">
Mars Curiosity Celebrates Sol 2,000
</a>
</div>
<div class="article_teaser_body">
NASA's Mars Curiosity rover just hit a new milestone: its two-thousandth Martian day on the Red Planet. An image mosaic taken recently offers a preview of what comes next.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8322/nasa-briefing-on-first-mission-to-study-mars-interior/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA’s next mission to Mars will be the topic of a media briefing Thursday, March 29, at JPL. The briefing will air live on NASA Television and the agency’s website.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="An artist's rendition of the InSight lander operating on the surface of Mars. " src="/system/news_items/list_view_images/8322_PIA22228_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA Briefing on First Mission to Study Mars Interior
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 22, 2018
</div>
<div class="content_title">
<a href="/news/8322/nasa-briefing-on-first-mission-to-study-mars-interior/" target="_self">
NASA Briefing on First Mission to Study Mars Interior
</a>
</div>
<div class="article_teaser_body">
NASA’s next mission to Mars will be the topic of a media briefing Thursday, March 29, at JPL. The briefing will air live on NASA Television and the agency’s website.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8321/new-ar-mobile-app-features-3-d-nasa-spacecraft/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA spacecraft travel to far-off destinations in space, but a new mobile app produced by NASA's Jet Propulsion Laboratory, Pasadena, California, brings spacecraft to users.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="Free Spacecraft AR app uses Google ARCore technology" src="/system/news_items/list_view_images/8321_list_image.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
New 'AR' Mobile App Features 3-D NASA Spacecraft
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 20, 2018
</div>
<div class="content_title">
<a href="/news/8321/new-ar-mobile-app-features-3-d-nasa-spacecraft/" target="_self">
New 'AR' Mobile App Features 3-D NASA Spacecraft
</a>
</div>
<div class="article_teaser_body">
NASA spacecraft travel to far-off destinations in space, but a new mobile app produced by NASA's Jet Propulsion Laboratory, Pasadena, California, brings spacecraft to users.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8319/nasa-mars-mission-tours-california/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
Scientists and engineers with NASA's next mission to Mars will be touring California cities starting this month.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="This artist's concept shows the InSight lander, its sensors, cameras and instruments" src="/system/news_items/list_view_images/8319_PIA22227_320x240.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA Mars Mission Tours California
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 14, 2018
</div>
<div class="content_title">
<a href="/news/8319/nasa-mars-mission-tours-california/" target="_self">
NASA Mars Mission Tours California
</a>
</div>
<div class="article_teaser_body">
Scientists and engineers with NASA's next mission to Mars will be touring California cities starting this month.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8318/next-nasa-mars-rover-reaches-key-manufacturing-milestone/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA's Mars 2020 mission has begun the assembly, test and launch operations (ATLO) phase of its development, on track for a July 2020 launch to Mars.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="A technician works on the descent stage for NASA’s Mars 2020 mission inside JPL’s Spacecraft Assembly Facility. " src="/system/news_items/list_view_images/8318_PIA22342_320x240.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Next NASA Mars Rover Reaches Key Manufacturing Milestone
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 13, 2018
</div>
<div class="content_title">
<a href="/news/8318/next-nasa-mars-rover-reaches-key-manufacturing-milestone/" target="_self">
Next NASA Mars Rover Reaches Key Manufacturing Milestone
</a>
</div>
<div class="article_teaser_body">
NASA's Mars 2020 mission has begun the assembly, test and launch operations (ATLO) phase of its development, on track for a July 2020 launch to Mars.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8317/witness-first-mars-launch-from-west-coast/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA invites digital creators to apply for social media credentials to cover the launch of the InSight mission to Mars, May 3-5, at California's Vandenberg Air Force Base.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="Launch of LDCM from Vandenberg AFB, California" src="/system/news_items/list_view_images/8317_list_image.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Witness First Mars Launch from West Coast
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 12, 2018
</div>
<div class="content_title">
<a href="/news/8317/witness-first-mars-launch-from-west-coast/" target="_self">
Witness First Mars Launch from West Coast
</a>
</div>
<div class="article_teaser_body">
NASA invites digital creators to apply for social media credentials to cover the launch of the InSight mission to Mars, May 3-5, at California's Vandenberg Air Force Base.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8316/360-video-tour-a-mars-robot-test-lab/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
Engineers are practicing operations for NASA's Mars InSight lander, which is launching this spring.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="" src="/system/news_items/list_view_images/8316_InSight_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
360 Video: Tour a Mars Robot Test Lab
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
March 8, 2018
</div>
<div class="content_title">
<a href="/news/8316/360-video-tour-a-mars-robot-test-lab/" target="_self">
360 Video: Tour a Mars Robot Test Lab
</a>
</div>
<div class="article_teaser_body">
Engineers are practicing operations for NASA's Mars InSight lander, which is launching this spring.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8315/nasa-insight-mission-to-mars-arrives-at-launch-site/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA's InSight spacecraft has arrived at Vandenberg Air Force Base in central California to begin final preparations for a launch this May.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="crate being loaded into military transport plane" src="/system/news_items/list_view_images/8315_list_image.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
NASA InSight Mission to Mars Arrives at Launch Site
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
February 28, 2018
</div>
<div class="content_title">
<a href="/news/8315/nasa-insight-mission-to-mars-arrives-at-launch-site/" target="_self">
NASA InSight Mission to Mars Arrives at Launch Site
</a>
</div>
<div class="article_teaser_body">
NASA's InSight spacecraft has arrived at Vandenberg Air Force Base in central California to begin final preparations for a launch this May.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8314/curiosity-tests-a-new-way-to-drill-on-mars/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA's Mars Curiosity rover has conducted the first test of a new drilling technique on the Red Planet since its drill stopped working reliably.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="" src="/system/news_items/list_view_images/8314_PIA22224_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Curiosity Tests a New Way to Drill on Mars
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
February 28, 2018
</div>
<div class="content_title">
<a href="/news/8314/curiosity-tests-a-new-way-to-drill-on-mars/" target="_self">
Curiosity Tests a New Way to Drill on Mars
</a>
</div>
<div class="article_teaser_body">
NASA's Mars Curiosity rover has conducted the first test of a new drilling technique on the Red Planet since its drill stopped working reliably.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8313/seven-ways-mars-insight-is-different/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA has a long and successful track record at Mars. Since 1965, it has flown by, orbited, landed and roved across the surface of the Red Planet. What can InSight -- planned for launch in May -- do that hasn’t been done before?
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="" src="/system/news_items/list_view_images/8313_PIA22228_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Seven Ways Mars InSight is Different
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
February 22, 2018
</div>
<div class="content_title">
<a href="/news/8313/seven-ways-mars-insight-is-different/" target="_self">
Seven Ways Mars InSight is Different
</a>
</div>
<div class="article_teaser_body">
NASA has a long and successful track record at Mars. Since 1965, it has flown by, orbited, landed and roved across the surface of the Red Planet. What can InSight -- planned for launch in May -- do that hasn’t been done before?
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8312/nearly-a-decade-after-mars-phoenix-landed-another-look/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
A recent view from Mars orbit of the site where NASA's Phoenix Mars mission landed on far-northern Mars nearly a decade ago captures changes.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="animation of two alternating views of barren martian landscape" src="/system/news_items/list_view_images/8312_PIA22223_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Nearly a Decade After Mars Phoenix Landed, Another Look
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
February 20, 2018
</div>
<div class="content_title">
<a href="/news/8312/nearly-a-decade-after-mars-phoenix-landed-another-look/" target="_self">
Nearly a Decade After Mars Phoenix Landed, Another Look
</a>
</div>
<div class="article_teaser_body">
A recent view from Mars orbit of the site where NASA's Phoenix Mars mission landed on far-northern Mars nearly a decade ago captures changes.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8311/spacecraft-exits-safe-mode/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
Diagnostic work is the focus for resuming service and exiting safe standby status.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="spacecraft high over martian surface" src="/system/news_items/list_view_images/8311_PIA05490_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
Spacecraft Exits Safe Mode
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
February 16, 2018
</div>
<div class="content_title">
<a href="/news/8311/spacecraft-exits-safe-mode/" target="_self">
Spacecraft Exits Safe Mode
</a>
</div>
<div class="article_teaser_body">
Diagnostic work is the focus for resuming service and exiting safe standby status.
</div>
</div>
</div>
</li>
<li class="slide">
<div class="image_and_description_container">
<a href="/news/8310/5000-days-on-mars-solar-powered-rover-approaching-5000th-martian-dawn/" target="_self">
<div class="rollover_description">
<div class="rollover_description_inner">
The Sun will rise on NASA's solar-powered Mars rover Opportunity for the 5,000th time on Saturday, sending rays of energy to a robot that continues to provide revelations.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<div class="list_image">
<img alt="map of rugged slope" src="/system/news_items/list_view_images/8310_pia22221_320.jpg"/>
</div>
<div class="bottom_gradient">
<div>
<h3>
5,000 Days on Mars; Solar-Powered Rover Approaching 5,000th Martian Dawn
</h3>
</div>
</div>
</a>
<div class="list_text">
<div class="list_date">
February 15, 2018
</div>
<div class="content_title">
<a href="/news/8310/5000-days-on-mars-solar-powered-rover-approaching-5000th-martian-dawn/" target="_self">
5,000 Days on Mars; Solar-Powered Rover Approaching 5,000th Martian Dawn
</a>
</div>
<div class="article_teaser_body">
The Sun will rise on NASA's solar-powered Mars rover Opportunity for the 5,000th time on Saturday, sending rays of energy to a robot that continues to provide revelations.
</div>
</div>
</div>
</li>
</ul>
<footer class="list_footer more_button">
<div class="loading">
</div>
<a class="button" href="#" type="button">
More
</a>
</footer>
</div>
</section>
</div>
<section class="module suggested_features">
<div class="grid_layout">
<header>
<h2 class="module_title">
You Might Also Like
</h2>
</header>
<section>
<script>
$(document).ready(function(){
$(".features").slick({
dots: false,
infinite: true,
speed: 300,
slide: '.features .slide',
slidesToShow: 3,
slidesToScroll: 3,
lazyLoad: 'ondemand',
centerMode: false,
arrows: true,
appendArrows: '.features .slick-nav',
appendDots: ".features .slick-nav",
responsive: [{"breakpoint":953,"settings":{"slidesToShow":2,"slidesToScroll":2,"centerMode":false}},{"breakpoint":480,"settings":{"slidesToShow":1,"slidesToScroll":1,"centerMode":true,"arrows":false,"centerPadding":"25px"}}]
});
});
</script>
<div class="features slick-initialized slick-slider">
<div class="slick-list draggable" tabindex="0">
<div class="slick-track" style="opacity: 1; width: 3552px; transform: translate3d(-888px, 0px, 0px);">
<div class="slide slick-slide slick-cloned" index="-3" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8325/nasa-is-ready-to-study-the-heart-of-mars/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA is about to go on a journey to study the center of Mars.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="NASA is Ready to Study the Heart of Mars" class="img-lazy" data-lazy="/system/news_items/list_view_images/8325_insight20180329b_320.jpg" src="/assets/loading_320x240.png"/>
</a>
</div>
<div class="content_title">
<a href="/news/8325/nasa-is-ready-to-study-the-heart-of-mars/">
NASA is Ready to Study the Heart of Mars
</a>
</div>
</div>
<div class="slide slick-slide slick-cloned" index="-2" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8322/nasa-briefing-on-first-mission-to-study-mars-interior/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA’s next mission to Mars will be the topic of a media briefing Thursday, March 29, at JPL. The briefing will air live on NASA Television and the agency’s website.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="NASA Briefing on First Mission to Study Mars Interior" class="img-lazy" data-lazy="/system/news_items/list_view_images/8322_PIA22228_320.jpg" src="/assets/loading_320x240.png"/>
</a>
</div>
<div class="content_title">
<a href="/news/8322/nasa-briefing-on-first-mission-to-study-mars-interior/">
NASA Briefing on First Mission to Study Mars Interior
</a>
</div>
</div>
<div class="slide slick-slide slick-cloned" index="-1" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8321/new-ar-mobile-app-features-3-d-nasa-spacecraft/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA spacecraft travel to far-off destinations in space, but a new mobile app produced by NASA's Jet Propulsion Laboratory, Pasadena, California, brings spacecraft to users.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="New 'AR' Mobile App Features 3-D NASA Spacecraft" class="img-lazy" data-lazy="/system/news_items/list_view_images/8321_list_image.jpg" src="/assets/loading_320x240.png"/>
</a>
</div>
<div class="content_title">
<a href="/news/8321/new-ar-mobile-app-features-3-d-nasa-spacecraft/">
New 'AR' Mobile App Features 3-D NASA Spacecraft
</a>
</div>
</div>
<div class="slide slick-slide slick-active" index="0" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8348/opportunity-hunkers-down-during-dust-storm/">
<div class="rollover_description">
<div class="rollover_description_inner">
As of Tuesday morning, June 19, the Martian dust storm had grown in size and was officially a "planet-encircling" (or "global") dust event.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="Opportunity Hunkers Down During Dust Storm" class="img-lazy" src="/system/news_items/list_view_images/8348_PIA22521-320.jpg?1532218141872" style="opacity: 1;"/>
</a>
</div>
<div class="content_title">
<a href="/news/8348/opportunity-hunkers-down-during-dust-storm/">
Opportunity Hunkers Down During Dust Storm
</a>
</div>
</div>
<div class="slide slick-slide slick-active" index="1" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8347/nasa-finds-ancient-organic-material-mysterious-methane-on-mars/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA’s Curiosity rover has found evidence on Mars with implications for NASA’s search for life.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="NASA Finds Ancient Organic Material, Mysterious Methane on Mars" class="img-lazy" src="/system/news_items/list_view_images/8347_curiosity_methane-320.jpg?1532218141873" style="opacity: 1;"/>
</a>
</div>
<div class="content_title">
<a href="/news/8347/nasa-finds-ancient-organic-material-mysterious-methane-on-mars/">
NASA Finds Ancient Organic Material, Mysterious Methane on Mars
</a>
</div>
</div>
<div class="slide slick-slide slick-active" index="2" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8326/nasa-invests-in-visionary-technology/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA is investing in technology concepts, including several from JPL, that may one day be used for future space exploration missions.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="NASA Invests in Visionary Technology " class="img-lazy" src="/system/news_items/list_view_images/8326_niac320.jpg?1532218141873" style="opacity: 1;"/>
</a>
</div>
<div class="content_title">
<a href="/news/8326/nasa-invests-in-visionary-technology/">
NASA Invests in Visionary Technology
</a>
</div>
</div>
<div class="slide slick-slide" index="3" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8325/nasa-is-ready-to-study-the-heart-of-mars/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA is about to go on a journey to study the center of Mars.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="NASA is Ready to Study the Heart of Mars" class="img-lazy" data-lazy="/system/news_items/list_view_images/8325_insight20180329b_320.jpg" src="/assets/loading_320x240.png"/>
</a>
</div>
<div class="content_title">
<a href="/news/8325/nasa-is-ready-to-study-the-heart-of-mars/">
NASA is Ready to Study the Heart of Mars
</a>
</div>
</div>
<div class="slide slick-slide" index="4" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8322/nasa-briefing-on-first-mission-to-study-mars-interior/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA’s next mission to Mars will be the topic of a media briefing Thursday, March 29, at JPL. The briefing will air live on NASA Television and the agency’s website.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="NASA Briefing on First Mission to Study Mars Interior" class="img-lazy" data-lazy="/system/news_items/list_view_images/8322_PIA22228_320.jpg" src="/assets/loading_320x240.png"/>
</a>
</div>
<div class="content_title">
<a href="/news/8322/nasa-briefing-on-first-mission-to-study-mars-interior/">
NASA Briefing on First Mission to Study Mars Interior
</a>
</div>
</div>
<div class="slide slick-slide" index="5" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8321/new-ar-mobile-app-features-3-d-nasa-spacecraft/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA spacecraft travel to far-off destinations in space, but a new mobile app produced by NASA's Jet Propulsion Laboratory, Pasadena, California, brings spacecraft to users.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="New 'AR' Mobile App Features 3-D NASA Spacecraft" class="img-lazy" data-lazy="/system/news_items/list_view_images/8321_list_image.jpg" src="/assets/loading_320x240.png"/>
</a>
</div>
<div class="content_title">
<a href="/news/8321/new-ar-mobile-app-features-3-d-nasa-spacecraft/">
New 'AR' Mobile App Features 3-D NASA Spacecraft
</a>
</div>
</div>
<div class="slide slick-slide slick-cloned" index="6" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8348/opportunity-hunkers-down-during-dust-storm/">
<div class="rollover_description">
<div class="rollover_description_inner">
As of Tuesday morning, June 19, the Martian dust storm had grown in size and was officially a "planet-encircling" (or "global") dust event.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="Opportunity Hunkers Down During Dust Storm" class="img-lazy" src="/system/news_items/list_view_images/8348_PIA22521-320.jpg?1532218141874" style="opacity: 1;"/>
</a>
</div>
<div class="content_title">
<a href="/news/8348/opportunity-hunkers-down-during-dust-storm/">
Opportunity Hunkers Down During Dust Storm
</a>
</div>
</div>
<div class="slide slick-slide slick-cloned" index="7" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8347/nasa-finds-ancient-organic-material-mysterious-methane-on-mars/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA’s Curiosity rover has found evidence on Mars with implications for NASA’s search for life.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="NASA Finds Ancient Organic Material, Mysterious Methane on Mars" class="img-lazy" src="/system/news_items/list_view_images/8347_curiosity_methane-320.jpg?1532218141874" style="opacity: 1;"/>
</a>
</div>
<div class="content_title">
<a href="/news/8347/nasa-finds-ancient-organic-material-mysterious-methane-on-mars/">
NASA Finds Ancient Organic Material, Mysterious Methane on Mars
</a>
</div>
</div>
<div class="slide slick-slide slick-cloned" index="8" style="width: 278px;">
<div class="image_and_description_container">
<a href="/news/8326/nasa-invests-in-visionary-technology/">
<div class="rollover_description">
<div class="rollover_description_inner">
NASA is investing in technology concepts, including several from JPL, that may one day be used for future space exploration missions.
</div>
<div class="overlay_arrow">
<img alt="More" src="/assets/overlay-arrow.png"/>
</div>
</div>
<img alt="NASA Invests in Visionary Technology " class="img-lazy" src="/system/news_items/list_view_images/8326_niac320.jpg?1532218141874" style="opacity: 1;"/>
</a>
</div>
<div class="content_title">
<a href="/news/8326/nasa-invests-in-visionary-technology/">
NASA Invests in Visionary Technology
</a>
</div>
</div>
</div>
</div>
<div class="grid_layout">
<div class="slick-nav_container">
<div class="slick-nav">
<button class="slick-prev" data-role="none" style="display: block;" type="button">
Previous
</button>
<button class="slick-next" data-role="none" style="display: block;" type="button">
Next
</button>
</div>
</div>
</div>
</div>
</section>
</div>
</section>
</div>
<footer id="site_footer">
<div class="grid_layout">
<section class="upper_footer">
<div class="share">
<h2>
Follow the Journey
</h2>
<div class="social_icons">
<!-- AddThis Button BEGIN -->
<div class="addthis_toolbox addthis_default_style addthis_32x32_style">
<a addthis:userid="NASABeAMartian" class="addthis_button_twitter_follow icon at300b" href="//twitter.com/NASABeAMartian" target="_blank" title="Follow on Twitter">
<img alt="twitter" src="/assets/[email protected]"/>
<span class="addthis_follow_label">
Twitter
</span>
</a>
<a addthis:userid="NASABEAM" class="addthis_button_facebook_follow icon at300b" href="http://www.facebook.com/NASABEAM" target="_blank" title="Follow on Facebook">
<img alt="facebook" src="/assets/[email protected]"/>
<span class="addthis_follow_label">
Facebook
</span>
</a>
<a addthis:userid="nasa" class="addthis_button_instagram_follow icon at300b" href="http://instagram.com/nasa" target="_blank" title="Follow on Instagram">
<img alt="instagram" src="/assets/[email protected]"/>
<span class="addthis_follow_label">
Instagram
</span>
</a>
<a addthis:url="https://mars.nasa.gov/rss/api/?feed=news&category=all&feedtype=rss" class="addthis_button_rss_follow icon at300b" href="https://mars.nasa.gov/rss/api/?feed=news&category=all&feedtype=rss" target="_blank" title="Follow on RSS">
<img alt="rss" src="/assets/[email protected]"/>
<span class="addthis_follow_label">
RSS
</span>
</a>
<div class="atclear">
</div>
</div>
<script>
addthis_loader.init("//s7.addthis.com/js/300/addthis_widget.js#pubid=ra-5a690e4c1320e328", {follow: true})
</script>
<!-- AddThis Button END -->
</div>
</div>
<div class="gradient_line">
</div>
</section>
<section class="sitemap">
<div class="sitemap_directory" id="sitemap_directory" style="position: relative; height: 327.266px;">
<div class="sitemap_block" style="position: absolute; left: 0px; top: 0px;">
<div class="footer_sitemap_item">
<h3 class="sitemap_title">
<a href="/#red_planet">
The Red Planet
</a>
</h3>
<ul>
<li>
<div class="global_subnav_container">
<ul class="subnav">
<li>
<a href="/#red_planet/0" target="_self">
Dashboard
</a>
</li>
<li>
<a href="/#red_planet/1" target="_self">
Science Goals
</a>
</li>
<li>
<a href="/#red_planet/2" target="_self">
The Planet
</a>
</li>
<li>
<a href="/#red_planet/3" target="_self">
Atmosphere
</a>
</li>
<li>
<a href="/#red_planet/4" target="_self">
Astrobiology
</a>
</li>
<li>
<a href="/#red_planet/5" target="_self">
Past, Present, Future, Timeline
</a>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
<div class="sitemap_block" style="position: absolute; left: 164px; top: 0px;">
<div class="footer_sitemap_item">
<h3 class="sitemap_title">
<a href="/#mars_exploration_program">
The Program
</a>
</h3>
<ul>
<li>
<div class="global_subnav_container">
<ul class="subnav">
<li>
<a href="/#mars_exploration_program/0" target="_self">
Mission Statement
</a>
</li>
<li>
<a href="/#mars_exploration_program/1" target="_self">
About the Program
</a>
</li>
<li>
<a href="/#mars_exploration_program/2" target="_self">
Organization
</a>
</li>
<li>
<a href="/#mars_exploration_program/3" target="_self">
Why Mars?
</a>
</li>
<li>
<a href="/#mars_exploration_program/4" target="_self">
Research Programs
</a>
</li>
<li>
<a href="/#mars_exploration_program/5" target="_self">
Planetary Resources
</a>
</li>
<li>
<a href="/#mars_exploration_program/6" target="_self">
Technologies
</a>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
<div class="sitemap_block" style="position: absolute; left: 328px; top: 0px;">
<div class="footer_sitemap_item">
<h3 class="sitemap_title">
<a href="/#news_and_events">
News & Events
</a>
</h3>
<ul>
<li>
<div class="global_subnav_container">
<ul class="subnav">
<li>
<a href="/news" target="_self">
News
</a>
</li>
<li>
<a href="/events" target="_self">
Events
</a>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
<div class="sitemap_block" style="position: absolute; left: 493px; top: 0px;">
<div class="footer_sitemap_item">
<h3 class="sitemap_title">
<a href="/#multimedia">
Multimedia
</a>
</h3>
<ul>
<li>
<div class="global_subnav_container">
<ul class="subnav">
<li>
<a href="/multimedia/images/" target="_self">
Images
</a>
</li>
<li>
<a href="/multimedia/videos/" target="_self">
Videos
</a>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
<div class="sitemap_block" style="position: absolute; left: 657px; top: 0px;">
<div class="footer_sitemap_item">
<h3 class="sitemap_title">
<a href="/#missions">
Missions
</a>
</h3>
<ul>
<li>
<div class="global_subnav_container">
<ul class="subnav">
<li>
<a href="/mars-exploration/missions/?category=167" target="_self">
Past
</a>
</li>
<li>
<a href="/mars-exploration/missions/?category=170" target="_self">
Present
</a>
</li>
<li>
<a href="/mars-exploration/missions/?category=171" target="_self">
Future
</a>
</li>
<li>
<a href="/mars-exploration/partners" target="_self">
International Partners
</a>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
<div class="sitemap_block" style="position: absolute; left: 822px; top: 0px;">
<div class="footer_sitemap_item">
<h3 class="sitemap_title">
<a href="/#more">
More
</a>
</h3>
<ul>
<li>
<div class="global_subnav_container">
<ul class="subnav">
</ul>
</div>
</li>
</ul>
</div>
</div>
<div class="sitemap_block" style="position: absolute; left: 822px; top: 53px;">
<div class="footer_sitemap_item">
<h3 class="sitemap_title">
<a href="/legacy">
Legacy Site
</a>
</h3>
<ul>
<li>
<div class="global_subnav_container">
<ul class="subnav">
<li>
<a class="" href="/legacy" target="_self">
Legacy Site
</a>
</li>
</ul>
</div>
</li>
</ul>
</div>
</div>
</div>
<div class="gradient_line">
</div>
</section>
<section class="lower_footer">
<div class="nav_container">
<nav>
<ul>
<li>
<a href="http://science.nasa.gov/" target="_blank">
NASA Science Mission Directorate
</a>
</li>
<li>
<a href="https://www.jpl.nasa.gov/copyrights.php" target="_blank">
Privacy
</a>
</li>
<li>
<a href="http://www.jpl.nasa.gov/imagepolicy/" target="_blank">
Image Policy
</a>
</li>
<li>
<a href="https://mars.nasa.gov/feedback/" target="_self">
Feedback
</a>
</li>
<li>
<a href="http://mars.nasa.gov/legacy" target="_blank">
Legacy Mars Site
</a>
</li>
</ul>
</nav>
</div>
<div class="credits">
<div class="footer_brands_top">
<p>
Managed by the Mars Exploration Program and the Jet Propulsion Laboratory for NASA’s Science Mission Directorate
</p>
</div>
<!-- .footer_brands -->
<!-- %a.jpl{href: "", target: "_blank"}Institution -->
<!-- -->
<!-- %a.caltech{href: "", target: "_blank"}Institution -->
<!-- .staff -->
<!-- %p -->
<!-- - get_staff_for_category(get_field_from_admin_config(:web_staff_category_id)) -->
<!-- - @staff.each_with_index do |staff, idx| -->
<!-- - unless staff.is_in_footer == 0 -->
<!-- = staff.title + ": " -->
<!-- - if staff.contact_link =~ /@/ -->
<!-- = mail_to staff.contact_link, staff.name, :subject => "[#{@site_title}]" -->
<!-- - elsif staff.contact_link.present? -->
<!-- = link_to staff.name, staff.contact_link -->
<!-- - else -->
<!-- = staff.name -->
<!-- - unless (idx + 1 == @staff.size) -->
<!-- %br -->
</div>
</section>
</div>
</footer>
</div>
</div>
<script id="_fed_an_ua_tag" src="https://dap.digitalgov.gov/Universal-Federated-Analytics-Min.js?agency=NASA&subagency=JPL-Mars-MEPJPL&pua=UA-9453474-9,UA-118212757-11&dclink=true&sp=searchbox&exts=tif,tiff,wav" type="text/javascript">
</script>
<div id="_atssh" style="visibility: hidden; height: 1px; width: 1px; position: absolute; top: -9999px; z-index: 100000;">
<iframe id="_atssh106" src="https://s7.addthis.com/static/sh.e4e8af4de595fdb10ec1459d.html#rand=0.2123387918842583&iit=1532218142590&tmr=load%3D1532218142440%26core%3D1532218142526%26main%3D1532218142575%26ifr%3D1532218142598&cb=0&cdn=0&md=0&kw=Mars%2Cmissions%2CNASA%2Crover%2CCuriosity%2COpportunity%2CInSight%2CMars%20Reconnaissance%20Orbiter%2Cfacts&ab=-&dh=mars.nasa.gov&dr=&du=https%3A%2F%2Fmars.nasa.gov%2Fnews%2F%3Fpage%3D0%26per_page%3D40%26order%3Dpublish_date%2Bdesc%252Ccreated_at%2Bdesc%26search%3D%26category%3D19%252C165%252C184%252C204%26blank_scope%3DLatest&href=https%3A%2F%2Fmars.nasa.gov%2Fnews%2F&dt=News%20%20%E2%80%93%20NASA%E2%80%99s%20Mars%20Exploration%20Program&dbg=0&cap=tc%3D0%26ab%3D0&inst=1&jsl=1&prod=undefined&lng=en&ogt=image%2Cupdated_time%2Ctype%3Darticle%2Curl%2Ctitle%2Cdescription%2Csite_name&pc=men&pub=ra-5a690e4c1320e328&ssl=1&sid=5b53cb1e6480f157&srf=0.01&ver=300&xck=1&xtr=0&og=site_name%3DNASA%25E2%2580%2599s%2520Mars%2520Exploration%2520Program%26description%3DNASA%25E2%2580%2599s%2520real-time%2520portal%2520for%2520Mars%2520exploration%252C%2520featuring%2520the%2520latest%2520news%252C%2520images%252C%2520and%2520discoveries%2520from%2520the%2520Red%2520Planet.%26title%3DNews%2520%2520%25E2%2580%2593%2520NASA%25E2%2580%2599s%2520Mars%2520Exploration%2520Program%26url%3Dhttps%253A%252F%252Fmars.nasa.gov%252Fnews%253Fpage%253D0%2526per_page%253D40%2526order%253Dpublish_date%252Bdesc%25252Ccreated_at%252Bdesc%2526search%253D%2526category%253D19%25252C165%25252C184%25252C204%2526blank_scope%253DLatest%26type%3Darticle%26updated_time%3D2017-09-22%252019%253A53%253A22%2520UTC%26image%3Dhttps%253A%252F%252Fmars.nasa.gov%252Fsystem%252Fsite_config_values%252Fmeta_share_images%252F1_142497main_PIA03154-200.jpg&csi=undefined&rev=v8.3.25-wp&ct=1&xld=1&xd=1" style="height: 1px; width: 1px; position: absolute; top: 0px; z-index: 100000; border: 0px; left: 0px;" title="AddThis utility frame">
</iframe>
</div>
<style id="service-icons-0">
</style>
<div aria-labelledby="at4-share-label" class="addthis-smartlayers addthis-smartlayers-desktop" role="region">
<div id="at4-share-label">
AddThis Sharing Sidebar
</div>
<div class="at4-share addthis_32x32_style atss atss-left addthis-animated slideInLeft" id="at4-share">
<a class="at-share-btn at-svc-facebook" role="button" tabindex="1">
<span class="at4-visually-hidden">
Share to Facebook
</span>
<span class="at-icon-wrapper" style="background-color: rgb(59, 89, 152);">
<svg aria-labelledby="at-svg-facebook-1" class="at-icon at-icon-facebook" role="img" style="fill: rgb(255, 255, 255);" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-facebook-1" xmlns="http://www.w3.org/1999/xhtml">
Facebook
</title>
<g>
<path d="M22 5.16c-.406-.054-1.806-.16-3.43-.16-3.4 0-5.733 1.825-5.733 5.17v2.882H9v3.913h3.837V27h4.604V16.965h3.823l.587-3.913h-4.41v-2.5c0-1.123.347-1.903 2.198-1.903H22V5.16z" fill-rule="evenodd">
</path>
</g>
</svg>
</span>
</a>
<a class="at-share-btn at-svc-twitter" role="button" tabindex="1">
<span class="at4-visually-hidden">
Share to Twitter
</span>
<span class="at-icon-wrapper" style="background-color: rgb(29, 161, 242);">
<svg aria-labelledby="at-svg-twitter-2" class="at-icon at-icon-twitter" role="img" style="fill: rgb(255, 255, 255);" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-twitter-2" xmlns="http://www.w3.org/1999/xhtml">
Twitter
</title>
<g>
<path d="M27.996 10.116c-.81.36-1.68.602-2.592.71a4.526 4.526 0 0 0 1.984-2.496 9.037 9.037 0 0 1-2.866 1.095 4.513 4.513 0 0 0-7.69 4.116 12.81 12.81 0 0 1-9.3-4.715 4.49 4.49 0 0 0-.612 2.27 4.51 4.51 0 0 0 2.008 3.755 4.495 4.495 0 0 1-2.044-.564v.057a4.515 4.515 0 0 0 3.62 4.425 4.52 4.52 0 0 1-2.04.077 4.517 4.517 0 0 0 4.217 3.134 9.055 9.055 0 0 1-5.604 1.93A9.18 9.18 0 0 1 6 23.85a12.773 12.773 0 0 0 6.918 2.027c8.3 0 12.84-6.876 12.84-12.84 0-.195-.005-.39-.014-.583a9.172 9.172 0 0 0 2.252-2.336" fill-rule="evenodd">
</path>
</g>
</svg>
</span>
</a>
<a class="at-share-btn at-svc-reddit" role="button" tabindex="1">
<span class="at4-visually-hidden">
Share to Reddit
</span>
<span class="at-icon-wrapper" style="background-color: rgb(255, 87, 0);">
<svg aria-labelledby="at-svg-reddit-3" class="at-icon at-icon-reddit" role="img" style="fill: rgb(255, 255, 255);" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-reddit-3" xmlns="http://www.w3.org/1999/xhtml">
Reddit
</title>
<g>
<path d="M27 15.5a2.452 2.452 0 0 1-1.338 2.21c.098.38.147.777.147 1.19 0 1.283-.437 2.47-1.308 3.563-.872 1.092-2.06 1.955-3.567 2.588-1.506.634-3.143.95-4.91.95-1.768 0-3.403-.316-4.905-.95-1.502-.632-2.69-1.495-3.56-2.587-.872-1.092-1.308-2.28-1.308-3.562 0-.388.045-.777.135-1.166a2.47 2.47 0 0 1-1.006-.912c-.253-.4-.38-.842-.38-1.322 0-.678.237-1.26.712-1.744a2.334 2.334 0 0 1 1.73-.726c.697 0 1.29.26 1.78.782 1.785-1.258 3.893-1.928 6.324-2.01l1.424-6.467a.42.42 0 0 1 .184-.26.4.4 0 0 1 .32-.063l4.53 1.006c.147-.306.368-.553.662-.74a1.78 1.78 0 0 1 .97-.278c.508 0 .94.18 1.302.54.36.36.54.796.54 1.31 0 .512-.18.95-.54 1.315-.36.364-.794.546-1.302.546-.507 0-.94-.18-1.295-.54a1.793 1.793 0 0 1-.533-1.308l-4.1-.92-1.277 5.86c2.455.074 4.58.736 6.37 1.985a2.315 2.315 0 0 1 1.757-.757c.68 0 1.256.242 1.73.726.476.484.713 1.066.713 1.744zm-16.868 2.47c0 .513.178.95.534 1.315.356.365.787.547 1.295.547.508 0 .942-.182 1.302-.547.36-.364.54-.802.54-1.315 0-.513-.18-.95-.54-1.31-.36-.36-.794-.54-1.3-.54-.5 0-.93.183-1.29.547a1.79 1.79 0 0 0-.54 1.303zm9.944 4.406c.09-.09.135-.2.135-.323a.444.444 0 0 0-.44-.447c-.124 0-.23.042-.32.124-.336.348-.83.605-1.486.77a7.99 7.99 0 0 1-1.964.248 7.99 7.99 0 0 1-1.964-.248c-.655-.165-1.15-.422-1.486-.77a.456.456 0 0 0-.32-.124.414.414 0 0 0-.306.124.41.41 0 0 0-.135.317.45.45 0 0 0 .134.33c.352.355.837.636 1.455.843.617.207 1.118.33 1.503.366a11.6 11.6 0 0 0 1.117.056c.36 0 .733-.02 1.117-.056.385-.037.886-.16 1.504-.366.62-.207 1.104-.488 1.456-.844zm-.037-2.544c.507 0 .938-.182 1.294-.547.356-.364.534-.802.534-1.315 0-.505-.18-.94-.54-1.303a1.75 1.75 0 0 0-1.29-.546c-.506 0-.94.18-1.3.54-.36.36-.54.797-.54 1.31s.18.95.54 1.315c.36.365.794.547 1.3.547z" fill-rule="evenodd">
</path>
</g>
</svg>
</span>
</a>
<a class="at-share-btn at-svc-email" role="button" tabindex="1">
<span class="at4-visually-hidden">
Share to Email
</span>
<span class="at-icon-wrapper" style="background-color: rgb(132, 132, 132);">
<svg aria-labelledby="at-svg-email-4" class="at-icon at-icon-email" role="img" style="fill: rgb(255, 255, 255);" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-email-4" xmlns="http://www.w3.org/1999/xhtml">
Email
</title>
<g>
<g fill-rule="evenodd">
</g>
<path d="M27 22.757c0 1.24-.988 2.243-2.19 2.243H7.19C5.98 25 5 23.994 5 22.757V13.67c0-.556.39-.773.855-.496l8.78 5.238c.782.467 1.95.467 2.73 0l8.78-5.238c.472-.28.855-.063.855.495v9.087z">
</path>
<path d="M27 9.243C27 8.006 26.02 7 24.81 7H7.19C5.988 7 5 8.004 5 9.243v.465c0 .554.385 1.232.857 1.514l9.61 5.733c.267.16.8.16 1.067 0l9.61-5.733c.473-.283.856-.96.856-1.514v-.465z">
</path>
</g>
</svg>
</span>
</a>
<a class="at-share-btn at-svc-compact" role="button" tabindex="1">
<span class="at4-visually-hidden">
More AddThis Share options
</span>
<span class="at-icon-wrapper" style="background-color: rgb(255, 101, 80);">
<svg aria-labelledby="at-svg-addthis-5" class="at-icon at-icon-addthis" role="img" style="fill: rgb(255, 255, 255);" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-addthis-5" xmlns="http://www.w3.org/1999/xhtml">
Addthis
</title>
<g>
<path d="M18 14V8h-4v6H8v4h6v6h4v-6h6v-4h-6z" fill-rule="evenodd">
</path>
</g>
</svg>
</span>
</a>
<div class="at-custom-sidebar-counter" style="width: 48px; word-wrap: break-word;">
<div class="at-custom-sidebar-count" style="color: rgb(34, 34, 34);">
36
</div>
<div class="at-custom-sidebar-text" style="color: rgb(34, 34, 34);">
SHARES
</div>
</div>
<div class="at-share-close-control ats-transparent at4-hide-content at4-show" id="at4-scc" title="Hide">
<div class="at4-arrow at-left">
Hide
</div>
</div>
</div>
<div class="at-share-open-control at-share-open-control-left ats-transparent at4-hide" id="at4-soc" title="Show">
<div class="at4-arrow at-right">
Show
</div>
</div>
</div>
<div aria-labelledby="at-thankyou-label" class="at4-thankyou at4-thankyou-background at4-hide ats-transparent at4-thankyou-desktop addthis-smartlayers addthis-animated fadeIn at4-show" id="at4-thankyou" role="dialog">
<div class="at4lb-inner">
<button class="at4x" title="Close">
Close
</button>
<a id="at4-palogo">
<div>
<a class="at-branding-logo" href="//www.addthis.com/website-tools/overview?utm_source=AddThis%20Tools&utm_medium=image" target="_blank" title="Powered by AddThis">
<div class="at-branding-icon">
</div>
<span class="at-branding-addthis">
AddThis
</span>
</a>
</div>
</a>
<div class="at4-thankyou-inner">
<div class="thankyou-title" id="at-thankyou-label">
</div>
<div class="thankyou-description">
</div>
<div class="at4-thankyou-layer">
</div>
</div>
</div>
</div>
<div aria-labelledby="at-share-dock-label" class="at-share-dock-outer at4-hide addthis-smartlayers at4-visually-hidden addthis-smartlayers-mobile" role="region">
<div class="at4-hide" id="at-share-dock-label">
AddThis Sharing
</div>
<div class="at-share-dock atss atss-bottom at-shfs-small addthis-animated slideInUp at4-show at4-hide" id="at-share-dock">
<a class="at4-count" href="#" style="width: 16.6667%;">
<span class="at4-counter">
</span>
<span class="at4-share-label">
SHARES
</span>
</a>
<a class="at-share-btn at-svc-facebook" role="button" style="width: 16.6667%;" tabindex="1" title="Facebook">
<span class="at-icon-wrapper" style="background-color: rgb(59, 89, 152);">
<svg alt="Facebook" aria-labelledby="at-svg-facebook-6" class="at-icon at-icon-facebook" role="img" style="fill: rgb(255, 255, 255); width: 24px; height: 24px;" title="Facebook" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-facebook-6" xmlns="http://www.w3.org/1999/xhtml">
Facebook
</title>
<g>
<path d="M22 5.16c-.406-.054-1.806-.16-3.43-.16-3.4 0-5.733 1.825-5.733 5.17v2.882H9v3.913h3.837V27h4.604V16.965h3.823l.587-3.913h-4.41v-2.5c0-1.123.347-1.903 2.198-1.903H22V5.16z" fill-rule="evenodd">
</path>
</g>
</svg>
</span>
</a>
<a class="at-share-btn at-svc-twitter" role="button" style="width: 16.6667%;" tabindex="1" title="Twitter">
<span class="at-icon-wrapper" style="background-color: rgb(29, 161, 242);">
<svg alt="Twitter" aria-labelledby="at-svg-twitter-7" class="at-icon at-icon-twitter" role="img" style="fill: rgb(255, 255, 255); width: 24px; height: 24px;" title="Twitter" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-twitter-7" xmlns="http://www.w3.org/1999/xhtml">
Twitter
</title>
<g>
<path d="M27.996 10.116c-.81.36-1.68.602-2.592.71a4.526 4.526 0 0 0 1.984-2.496 9.037 9.037 0 0 1-2.866 1.095 4.513 4.513 0 0 0-7.69 4.116 12.81 12.81 0 0 1-9.3-4.715 4.49 4.49 0 0 0-.612 2.27 4.51 4.51 0 0 0 2.008 3.755 4.495 4.495 0 0 1-2.044-.564v.057a4.515 4.515 0 0 0 3.62 4.425 4.52 4.52 0 0 1-2.04.077 4.517 4.517 0 0 0 4.217 3.134 9.055 9.055 0 0 1-5.604 1.93A9.18 9.18 0 0 1 6 23.85a12.773 12.773 0 0 0 6.918 2.027c8.3 0 12.84-6.876 12.84-12.84 0-.195-.005-.39-.014-.583a9.172 9.172 0 0 0 2.252-2.336" fill-rule="evenodd">
</path>
</g>
</svg>
</span>
</a>
<a class="at-share-btn at-svc-reddit" role="button" style="width: 16.6667%;" tabindex="1" title="Reddit">
<span class="at-icon-wrapper" style="background-color: rgb(255, 87, 0);">
<svg alt="Reddit" aria-labelledby="at-svg-reddit-8" class="at-icon at-icon-reddit" role="img" style="fill: rgb(255, 255, 255); width: 24px; height: 24px;" title="Reddit" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-reddit-8" xmlns="http://www.w3.org/1999/xhtml">
Reddit
</title>
<g>
<path d="M27 15.5a2.452 2.452 0 0 1-1.338 2.21c.098.38.147.777.147 1.19 0 1.283-.437 2.47-1.308 3.563-.872 1.092-2.06 1.955-3.567 2.588-1.506.634-3.143.95-4.91.95-1.768 0-3.403-.316-4.905-.95-1.502-.632-2.69-1.495-3.56-2.587-.872-1.092-1.308-2.28-1.308-3.562 0-.388.045-.777.135-1.166a2.47 2.47 0 0 1-1.006-.912c-.253-.4-.38-.842-.38-1.322 0-.678.237-1.26.712-1.744a2.334 2.334 0 0 1 1.73-.726c.697 0 1.29.26 1.78.782 1.785-1.258 3.893-1.928 6.324-2.01l1.424-6.467a.42.42 0 0 1 .184-.26.4.4 0 0 1 .32-.063l4.53 1.006c.147-.306.368-.553.662-.74a1.78 1.78 0 0 1 .97-.278c.508 0 .94.18 1.302.54.36.36.54.796.54 1.31 0 .512-.18.95-.54 1.315-.36.364-.794.546-1.302.546-.507 0-.94-.18-1.295-.54a1.793 1.793 0 0 1-.533-1.308l-4.1-.92-1.277 5.86c2.455.074 4.58.736 6.37 1.985a2.315 2.315 0 0 1 1.757-.757c.68 0 1.256.242 1.73.726.476.484.713 1.066.713 1.744zm-16.868 2.47c0 .513.178.95.534 1.315.356.365.787.547 1.295.547.508 0 .942-.182 1.302-.547.36-.364.54-.802.54-1.315 0-.513-.18-.95-.54-1.31-.36-.36-.794-.54-1.3-.54-.5 0-.93.183-1.29.547a1.79 1.79 0 0 0-.54 1.303zm9.944 4.406c.09-.09.135-.2.135-.323a.444.444 0 0 0-.44-.447c-.124 0-.23.042-.32.124-.336.348-.83.605-1.486.77a7.99 7.99 0 0 1-1.964.248 7.99 7.99 0 0 1-1.964-.248c-.655-.165-1.15-.422-1.486-.77a.456.456 0 0 0-.32-.124.414.414 0 0 0-.306.124.41.41 0 0 0-.135.317.45.45 0 0 0 .134.33c.352.355.837.636 1.455.843.617.207 1.118.33 1.503.366a11.6 11.6 0 0 0 1.117.056c.36 0 .733-.02 1.117-.056.385-.037.886-.16 1.504-.366.62-.207 1.104-.488 1.456-.844zm-.037-2.544c.507 0 .938-.182 1.294-.547.356-.364.534-.802.534-1.315 0-.505-.18-.94-.54-1.303a1.75 1.75 0 0 0-1.29-.546c-.506 0-.94.18-1.3.54-.36.36-.54.797-.54 1.31s.18.95.54 1.315c.36.365.794.547 1.3.547z" fill-rule="evenodd">
</path>
</g>
</svg>
</span>
</a>
<a class="at-share-btn at-svc-email" role="button" style="width: 16.6667%;" tabindex="1" title="Email">
<span class="at-icon-wrapper" style="background-color: rgb(132, 132, 132);">
<svg alt="Email" aria-labelledby="at-svg-email-9" class="at-icon at-icon-email" role="img" style="fill: rgb(255, 255, 255); width: 24px; height: 24px;" title="Email" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-email-9" xmlns="http://www.w3.org/1999/xhtml">
Email
</title>
<g>
<g fill-rule="evenodd">
</g>
<path d="M27 22.757c0 1.24-.988 2.243-2.19 2.243H7.19C5.98 25 5 23.994 5 22.757V13.67c0-.556.39-.773.855-.496l8.78 5.238c.782.467 1.95.467 2.73 0l8.78-5.238c.472-.28.855-.063.855.495v9.087z">
</path>
<path d="M27 9.243C27 8.006 26.02 7 24.81 7H7.19C5.988 7 5 8.004 5 9.243v.465c0 .554.385 1.232.857 1.514l9.61 5.733c.267.16.8.16 1.067 0l9.61-5.733c.473-.283.856-.96.856-1.514v-.465z">
</path>
</g>
</svg>
</span>
</a>
<a class="at-share-btn at-svc-compact" role="button" style="width: 16.6667%;" tabindex="1" title="More">
<span class="at-icon-wrapper" style="background-color: rgb(255, 101, 80);">
<svg alt="More" aria-labelledby="at-svg-addthis-10" class="at-icon at-icon-addthis" role="img" style="fill: rgb(255, 255, 255); width: 24px; height: 24px;" title="More" version="1.1" viewbox="0 0 32 32" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title id="at-svg-addthis-10" xmlns="http://www.w3.org/1999/xhtml">
Addthis
</title>
<g>
<path d="M18 14V8h-4v6H8v4h6v6h4v-6h6v-4h-6z" fill-rule="evenodd">
</path>
</g>
</svg>
</span>
</a>
</div>
</div>
</body>
</html>
# the first "content_title" and "article_teaser_body" divs on the page hold the most recent article's headline and teaser
news_title = soup.find("div", class_="content_title").get_text()
news_p = soup.find("div", class_="article_teaser_body").get_text()_____no_output_____print(f"{news_title}:{news_p}")'Storm Chasers' on Mars Searching for Dusty Secrets:Scientists with NASA's Mars orbiters have been waiting years for an event like the current Mars global dust storm.
</code>
# JPL Mars Space Images_____no_output_____
<code>
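# use splinter to drive a Chrome browser (a local chromedriver executable is assumed), load the JPL space images search page, and hand the rendered HTML to BeautifulSoup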
executable_path = {"executable_path": "chromedriver"}
browser = Browser("chrome", **executable_path, headless=False)
url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, "html.parser")_____no_output_____# the featured image's relative path lives in the data-fancybox-href attribute of the footer's "fancybox" button
image_url = soup.footer.find("a", class_="button fancybox")["data-fancybox-href"]
featured_image_url = "https://www.jpl.nasa.gov" + image_url
print(featured_image_url)https://www.jpl.nasa.gov/spaceimages/images/mediumsize/PIA18284_ip.jpg
</code>
# Mars Weather_____no_output_____
<code>
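# same splinter setup as above: load the Mars Weather twitter page, then grab every tweet-text <p> element from the rendered HTML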
executable_path = {"executable_path": "chromedriver"}
browser = Browser("chrome", **executable_path, headless=False)
url = "https://twitter.com/marswxreport?lang=en"
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, "html.parser")_____no_output_____tweets = soup.find_all("p", class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text")_____no_output_____for tweet in tweets:
tweet_parent = tweet.find_parent("div", class_="content")
tweet_id = tweet_parent.find("a", class_="account-group js-account-group js-action-profile js-user-profile-link js-nav")["href"]
if tweet_id == '/MarsWxReport':
mars_weather = tweet_parent.find("p", class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text").get_text()
break_____no_output_____mars_weather_____no_output_____
</code>
# Mars Facts_____no_output_____
<code>
url = 'https://space-facts.com/mars/'_____no_output_____tables = pd.read_html(url)
tables_____no_output_____df = tables[0]
df.columns = ["Description", "Value"]
df.set_index(df["Description"], inplace=True)_____no_output_____df = df[["Value"]]_____no_output_____html_table = df.to_html()
html_table = html_table.replace('\n', '')
html_table_____no_output_____
</code>
# Mars Hemispheres_____no_output_____
<code>
executable_path = {"executable_path": "chromedriver"}
browser = Browser("chrome", **executable_path, headless=False)
url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, "html.parser")
h3s = soup.find_all("h3")_____no_output_____titles = []
for h3 in h3s:
h3 = str(h3)
h3 = h3[4:-14]
titles.append(h3)
titles_____no_output_____img_urls = []
for title in titles:
browser.click_link_by_partial_text(title)
html = browser.html
soup = BeautifulSoup(html, "html.parser")
img_urls.append(soup.find("div", class_="downloads").find("a")["href"])
img_urls_____no_output_____hemisphere_image_urls = []
for title, img_url in zip(titles, img_urls):
hemisphere_image_urls.append({"title": title, "img_url":img_url})
hemisphere_image_urls_____no_output_____
</code>
|
{
"repository": "EmmaK0822/web_scraping",
"path": "mission_to_mars.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 732716,
"hexsha": "cb91dea9be111158f5ed5bbe7bd3e2e8e900a44c",
"max_line_length": 353239,
"avg_line_length": 204.8409281521,
"alphanum_fraction": 0.7378766125
}
|
# Notebook from coopwilliams/Neural_network_foundations_code_challenges
Path: Tuesday's_Challenge.ipynb
# For today's code challenge you will be reviewing yesterday's lecture material. Have fun!
### If you get done early, check out [these videos](https://www.3blue1brown.com/neural-networks)._____no_output_____# The Perceptron
The first and simplest kind of neural network that we could talk about is the perceptron. A perceptron is just a single node or neuron of a neural network with nothing else. It can take any number of inputs and spit out an output. What a neuron does is take each of the input values, multiply each of them by a weight, sum all of these products up, and then pass the sum through what is called an "activation function", the result of which is the final value.
I really like figure 2.1 found in this [pdf](http://www.uta.fi/sis/tie/neuro/index/Neurocomputing2.pdf) even though it doesn't have the bias term represented there.

If we were to write what is happening in some verbose mathematical notation, it might look something like this:
\begin{align}
y = sigmoid(weight_{1} \cdot input_{1} + weight_{2} \cdot input_{2} + weight_{3} \cdot input_{3} + bias)
\end{align}
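To make the formula concrete, here is a tiny worked example (the weights, inputs, and bias below are made-up numbers for illustration, and the names are prefixed with `example_` so they do not collide with the `weights` and `inputs` defined later in the notebook):
<code>
# Worked example of the formula above (made-up numbers, for illustration only)
import numpy as np

example_weights = np.array([0.2, -0.5, 0.8])
example_inputs  = np.array([1.0, 0.0, 1.0])
example_bias    = 0.1

weighted_sum = np.dot(example_weights, example_inputs) + example_bias  # 0.2 + 0.0 + 0.8 + 0.1 = 1.1
output = 1 / (1 + np.exp(-weighted_sum))                               # sigmoid(1.1) is roughly 0.75
print(output)
</code>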
Understanding what happens with a single neuron is important because this is the same pattern that will take place for all of our networks.
When imagining a neural network I like to think about the arrows as representing the weights, like a wire that has a certain amount of resistance and only lets a certain amount of current through. And I like to think about the node itself as containing the prescribed activation function that neuron will use to decide how much signal to pass onto the next layer._____no_output_____# Activation Functions (transfer functions)
In Neural Networks, each node has an activation function. Each node in a given layer typically has the same activation function. These activation functions are the biggest piece of neural networks that have been inspired by actual biology. The activation function decides whether a cell "fires" or not. Sometimes it is said that the cell is "activated" or not. In Artificial Neural Networks activation functions decide how much signal to pass onto the next layer. This is why they are sometimes referred to as transfer functions because they determine how much signal is transferred to the next layer.
## Common Activation Functions:
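The original cell appears to illustrate these with a figure; as a stand-in, here is a small sketch of a few of the usual candidates (these four functions are my own illustrative picks, not part of the challenge):
<code>
# A few common activation functions, sketched with NumPy (illustrative only)
import numpy as np

def step(x):              # binary threshold: the neuron either fires (1) or stays silent (0)
    return np.where(x >= 0, 1, 0)

def sigmoid_example(x):   # squashes any real number into (0, 1)
    return 1 / (1 + np.exp(-x))

def tanh_example(x):      # squashes into (-1, 1), centered at zero
    return np.tanh(x)

def relu(x):              # passes positive signal through unchanged, blocks negative signal
    return np.maximum(0, x)
</code>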
_____no_output_____# Implementing a Perceptron from scratch in Python_____no_output_____### Establish training data_____no_output_____
<code>
import numpy as np
np.random.seed(812)
inputs = np.array([
[0, 0, 1],
[1, 1, 1],
[1, 0, 1],
[0, 1, 1]
])
correct_outputs = [[0], [1], [1], [0]]_____no_output_____
</code>
### Sigmoid activation function and its derivative for updating weights_____no_output_____
<code>
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(x):
sx = sigmoid(x)
return sx * (1 - sx)_____no_output_____
</code>
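A quick sanity check on the two functions above, using the standard values at zero (sigmoid(0) = 0.5, so its derivative there is 0.5 × 0.5 = 0.25):
<code>
print(sigmoid(0))             # 0.5
print(sigmoid_derivative(0))  # 0.25
</code>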
## Updating weights with derivative of sigmoid function:
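For reference, the identity the `sigmoid_derivative` function above relies on (stated here because the original cell shows no formula under this heading):
\begin{align}
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,\big(1 - \sigma(x)\big)
\end{align}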
_____no_output_____### Initialize random weights for our three inputs_____no_output_____
<code>
weights = 2 * np.random.random((3, 1)) - 1_____no_output_____weights.shape_____no_output_____inputs.shape_____no_output_____
</code>
### Calculate weighted sum of inputs and weights_____no_output_____
<code>
weighted_sum = np.dot(inputs, weights)_____no_output_____weighted_sum_____no_output_____
</code>
### Output the activated value for the end of 1 training epoch_____no_output_____
<code>
activated_value = sigmoid(weighted_sum)
activated_value_____no_output_____
</code>
### Take the difference of output and true values to calculate error_____no_output_____
<code>
error = correct_outputs - activated_value
error
_____no_output_____
</code>
### Put it all together_____no_output_____
<code>
adjustments = error * sigmoid_derivative(activated_value)
adjustments, inputs.T_____no_output_____for i in range(10000):
weighted_sum = np.dot(inputs, weights)
activated_value = sigmoid(weighted_sum)
error = correct_outputs - activated_value
adjustments = error * sigmoid_derivative(activated_value)
weights += np.dot(inputs.T, adjustments)
print(weights)
print("\n-----\n", activated_value)[[15.03804491]
[-0.40666422]
[-7.23278107]]
-----
[[7.22059439e-04]
[9.99388204e-01]
[9.99592541e-01]
[4.80912067e-04]]
</code>
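The loop above trains the three input weights but leaves out the bias term that the earlier equation (and the note about figure 2.1) mentions. Below is a minimal sketch of one way a bias could be added, reusing the `inputs`, `correct_outputs`, and `sigmoid` defined above; it is an illustrative variant, not part of the original challenge, and it writes the gradient factor directly as `a * (1 - a)`, i.e. the sigmoid's derivative expressed in terms of its output.
<code>
# Sketch: the same perceptron with an explicit bias term (illustrative variant, not the original solution)
inputs_with_bias = np.hstack([inputs, np.ones((inputs.shape[0], 1))])  # append a constant-1 column for the bias
weights_with_bias = 2 * np.random.random((4, 1)) - 1                   # 3 input weights + 1 bias weight
targets = np.array(correct_outputs)

for _ in range(10000):
    z = np.dot(inputs_with_bias, weights_with_bias)   # weighted sum, bias included via the constant column
    a = sigmoid(z)                                     # activated value
    error = targets - a
    adjustments = error * a * (1 - a)                  # error scaled by the sigmoid's derivative
    weights_with_bias += np.dot(inputs_with_bias.T, adjustments)

print(weights_with_bias)
print(a)
</code>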
|
{
"repository": "coopwilliams/Neural_network_foundations_code_challenges",
"path": "Tuesday's_Challenge.ipynb",
"matched_keywords": [
"biology"
],
"stars": null,
"size": 10034,
"hexsha": "cb920c0beab47a5f7a1fdf04c67a242405c7df70",
"max_line_length": 614,
"avg_line_length": 24.4136253041,
"alphanum_fraction": 0.5359776759
}
|
# Notebook from MOAZ47/Predict-the-score-of-Beer
Path: Predict Beer .ipynb
<code>
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import nltk_____no_output_____train = pd.read_csv("C:\\Users\\Moaz\\Desktop\\moaz\\Jupyter Python NB\\Machine Hack Practice\\Beer Train Data Set.csv")
test = pd.read_csv("C:\\Users\\Moaz\\Desktop\\moaz\\Jupyter Python NB\\Machine Hack Practice\\Beer Test Data Set.csv")_____no_output_____train.head()_____no_output_____train.info()<class 'pandas.core.frame.DataFrame'>
RangeIndex: 185643 entries, 0 to 185642
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ABV 170513 non-null float64
1 Brewing Company 185643 non-null int64
2 Food Paring 185643 non-null object
3 Glassware Used 185643 non-null object
4 Beer Name 185643 non-null int64
5 Ratings 185643 non-null object
6 Style Name 185643 non-null object
7 Cellar Temperature 178862 non-null object
8 Serving Temperature 185450 non-null object
9 Score 185643 non-null float64
dtypes: float64(2), int64(2), object(6)
memory usage: 14.2+ MB
train.isnull().sum()_____no_output_____train[["Minimum Temperature", "Maximum Temperature"]]=train["Cellar Temperature"].str.split("-", expand=True, n=1).astype(float)_____no_output_____train[["Minimum Serving Temperature", "Maximum Serving Temperature"]]=train["Serving Temperature"].str.split("-", expand=True, n=1).astype(float)_____no_output_____# Filling empty vaues with MEAN value
avg_abv = train["ABV"].astype("float").mean(axis=0)
train["ABV"].replace(np.nan, avg_abv, inplace=True)
avg_min_temp = train["Minimum Temperature"].astype("float").mean(axis=0)
train["Minimum Temperature"].replace(np.nan, avg_min_temp, inplace=True)
avg_min_temp = train["Maximum Temperature"].astype("float").mean(axis=0)
train["Maximum Temperature"].replace(np.nan, avg_min_temp, inplace=True)
avg_minserv_temp = train["Minimum Serving Temperature"].astype("float").mean(axis=0)
train["Minimum Serving Temperature"].replace(np.nan, avg_minserv_temp, inplace=True)
avg_minserv_temp = train["Maximum Serving Temperature"].astype("float").mean(axis=0)
train["Maximum Serving Temperature"].replace(np.nan, avg_minserv_temp, inplace=True)_____no_output_____train.isnull().sum()_____no_output_____freq = nltk.FreqDist(train['Food Paring'])
for key,value in freq.items():
print(str(key)+' : '+str(value))(Curried,Thai)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Poultry,Fish,Shellfish,Salmon) : 25577
(PanAsian)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan,tangyBrick,Edam,Feta)General(Salad)Meat(Poultry) : 12648
Meat(Pork,Poultry) : 1280
(Indian,LatinAmerican,PanAsian)General(Aperitif) : 247
Meat(Poultry,Fish,Shellfish) : 2444
(Italian,German)Cheese(nuttyAsiago,Colby,Parmesan)Meat(Fish,Shellfish,Salmon) : 816
Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan)Meat(Pork,GrilledMeat) : 1301
Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Chocolate,Digestive)Meat(Beef,SmokedMeat,Game,GrilledMeat) : 5125
(Barbecue,Indian,LatinAmerican,Thai,PanAsian)Cheese(pepperyMontereyPepperJack)Meat(Shellfish) : 1064
(Barbecue)Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Beef,GrilledMeat) : 1229
(Curried,Thai)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan,pungentGorgonzola,Limburger)General(Salad,Aperitif)Meat(Poultry,Fish,Shellfish) : 8982
(Salad) : 4088
(Barbecue,Curried,Indian,LatinAmerican,Italian,Thai,Chinese,Japanese,PanAsian,Mediterranean,MiddleEastern) : 731
Cheese(tangyBrick,Edam,Feta) : 2164
Cheese(sharpBlue,Cheddar)Meat(Beef,Poultry,Fish) : 5924
(Barbecue,German)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Beef,SmokedMeat,Game,GrilledMeat,Salmon) : 394
Cheese(butteryBrie,Gouda,Havarti,Swiss)Meat(SmokedMeat,Salmon) : 1379
Cheese(pepperyMontereyPepperJack,pungentGorgonzola,Limburger)General(Salad) : 5523
(German)General(Chocolate,Dessert)Meat(GrilledMeat) : 517
(Curried,Indian)Cheese(nuttyAsiago,Colby,Parmesan,sharpBlue,Cheddar)Meat(Shellfish) : 1451
(Barbecue,LatinAmerican)Cheese(earthyCamembert,Fontina)General(Chocolate,Dessert)Meat(Beef,Shellfish,SmokedMeat,GrilledMeat) : 1364
(German) : 3979
(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Beef,SmokedMeat,Game,GrilledMeat,Salmon) : 725
Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Digestive) : 682
(Barbecue,Italian)Cheese(earthyCamembert,Fontina)Meat(Pork,Poultry,Fish,Shellfish) : 2160
(Curried)Cheese(nuttyAsiago,Colby,Parmesan,pepperyMontereyPepperJack)Meat(Poultry,Fish) : 331
(German)Cheese(tangyBrick,Edam,Feta)General(Salad)Meat(Poultry,Fish,Shellfish) : 3670
Cheese(butteryBrie,Gouda,Havarti,Swiss,pungentGorgonzola,Limburger)General(Chocolate)Meat(Beef) : 1283
(Barbecue)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Chocolate,Dessert)Meat(Beef,SmokedMeat,GrilledMeat) : 11992
(Thai)Cheese(tangyBrick,Edam,Feta)General(Salad,Aperitif)Meat(Fish) : 2909
(LatinAmerican,German)Meat(Pork,Poultry) : 918
(Barbecue)Cheese(butteryBrie,Gouda,Havarti,Swiss,earthyCamembert,Fontina)General(Chocolate,Dessert)Meat(Beef,Shellfish,SmokedMeat,Game,GrilledMeat) : 4911
Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Digestive)Meat(Beef,SmokedMeat,Game) : 774
(Dessert,Aperitif,Digestive) : 1578
(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Game,GrilledMeat,Salmon) : 10147
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Salad,Aperitif) : 2376
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,earthyCamembert,Fontina)General(Chocolate)Meat(Game) : 977
(Aperitif,Digestive)Meat(Game,Salmon) : 1419
(Barbecue)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan)General(Chocolate)Meat(Beef) : 3958
(Indian,MiddleEastern)Cheese(nuttyAsiago,Colby,Parmesan,tangyBrick,Edam,Feta)General(Salad,Aperitif)Meat(Fish,Shellfish) : 628
(German)Cheese(earthyCamembert,Fontina)General(Chocolate)Meat(Game) : 944
Cheese(pepperyMontereyPepperJack,tangyBrick,Edam,Feta)General(Salad)Meat(Poultry,Fish,Shellfish) : 3265
(Barbecue,LatinAmerican)General(Chocolate)Meat(SmokedMeat,GrilledMeat) : 1161
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Salad)Meat(Pork,Fish,Shellfish) : 1110
(Dessert)Meat(Poultry) : 1381
(German)General(Salad)Meat(Fish) : 1681
None,yet : 5417
(Mediterranean)Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Pork,Poultry) : 1987
(Barbecue,Curried,Indian,LatinAmerican,Chinese)Cheese(sharpBlue,Cheddar)General(Aperitif,Digestive)Meat(Shellfish,Game) : 833
(German)Cheese(pepperyMontereyPepperJack)General(Salad)Meat(Pork) : 1438
(Italian,MiddleEastern)Cheese(pepperyMontereyPepperJack)General(Salad)Meat(Fish) : 4234
(Japanese,German)Cheese(pepperyMontereyPepperJack)General(Aperitif)Meat(Poultry,Fish) : 2955
(LatinAmerican,German)Meat(Beef,SmokedMeat,GrilledMeat) : 590
(Chocolate,Salad,Dessert,Aperitif) : 396
(Barbecue)Cheese(earthyCamembert,Fontina)Meat(Beef,SmokedMeat,Game,GrilledMeat) : 788
(German)General(Salad)Meat(Pork,Fish,Shellfish) : 1888
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Aperitif)Meat(Pork,Poultry,Fish,Shellfish) : 431
Cheese(earthyCamembert,Fontina,sharpBlue,Cheddar)Meat(GrilledMeat) : 328
(Curried,Indian,Thai,Chinese,Japanese,PanAsian)Cheese(sharpBlue,Cheddar) : 1778
Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Dessert,Digestive) : 1322
(LatinAmerican)Meat(Beef,Poultry) : 608
(German)Meat(SmokedMeat,Game,GrilledMeat) : 838
Cheese(nuttyAsiago,Colby,Parmesan)General(Digestive) : 910
(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,tangyBrick,Edam,Feta)General(Salad)Meat(Pork,Poultry,Fish,Shellfish) : 507
(Indian,Mediterranean,MiddleEastern)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Fish,Shellfish) : 2176
Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger,tangyBrick,Edam,Feta)General(Salad) : 1175
(German)Cheese(earthyCamembert,Fontina)Meat(SmokedMeat,Game,GrilledMeat) : 864
Cheese(earthyCamembert,Fontina)General(Chocolate)Meat(GrilledMeat) : 266
Cheese(sharpBlue,Cheddar,tangyBrick,Edam,Feta)General(Aperitif,Digestive) : 68
Cheese(pepperyMontereyPepperJack)General(Chocolate)Meat(GrilledMeat) : 713
(Curried,German)Cheese(nuttyAsiago,Colby,Parmesan)General(Digestive)Meat(Salmon) : 917
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Dessert,Digestive) : 75
Cheese(earthyCamembert,Fontina)General(Aperitif) : 660
(German)General(Salad)Meat(Poultry,Fish) : 161
(Japanese) : 58
(German)Cheese(sharpBlue,Cheddar)General(Salad)Meat(Pork) : 195
Cheese(pungentGorgonzola,Limburger,tangyBrick,Edam,Feta)General(Digestive)Meat(Shellfish,Game,GrilledMeat) : 219
(Barbecue,LatinAmerican)Cheese(pepperyMontereyPepperJack)Meat(Fish,SmokedMeat) : 365
(Salad)Meat(Poultry,Game) : 240
(Curried,Thai,PanAsian)Cheese(sharpBlue,Cheddar)Meat(Game,GrilledMeat) : 334
Cheese(butteryBrie,Gouda,Havarti,Swiss,pungentGorgonzola,Limburger)General(Dessert,Digestive)Meat(Game) : 167
Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar,pungentGorgonzola,Limburger) : 104
(Aperitif)Meat(Fish,Shellfish,Salmon) : 51
(Thai,Chinese,Japanese,PanAsian)Meat(Pork,Poultry,Fish,Shellfish) : 89
(Barbecue,LatinAmerican)Cheese(nuttyAsiago,Colby,Parmesan)General(Chocolate)Meat(Salmon) : 179
Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Aperitif,Digestive) : 93
(Dessert,Aperitif) : 18
(Chocolate,Salad,Dessert,Apritif) : 1
train['Food Paring'] = train['Food Paring'].replace("(Curried,Thai)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Poultry,Fish,Shellfish,Salmon)" , "Thai, Cheese, Meat" )
train['Food Paring'] = train['Food Paring'].replace("(PanAsian)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan,tangyBrick,Edam,Feta)General(Salad)Meat(Poultry)" , "Pan-Asian, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("Meat(Pork,Poultry)" , "Meat")
train['Food Paring'] = train['Food Paring'].replace("(Indian,LatinAmerican,PanAsian)General(Aperitif)" , "Indian, Latin-American, Pan-Asian, General Food")
train['Food Paring'] = train['Food Paring'].replace("Meat(Poultry,Fish,Shellfish)" , "Meat")
train['Food Paring'] = train['Food Paring'].replace("(Italian,German)Cheese(nuttyAsiago,Colby,Parmesan)Meat(Fish,Shellfish,Salmon)" , "Italian, German, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan)Meat(Pork,GrilledMeat)" , "Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Chocolate,Digestive)Meat(Beef,SmokedMeat,Game,GrilledMeat)" , "Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,Indian,LatinAmerican,Thai,PanAsian)Cheese(pepperyMontereyPepperJack)Meat(Shellfish)" , "Barbecue, Indian, Latin-American, Thai, Pan-Asian, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue)Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Beef,GrilledMeat)" , "Barbecue, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Curried,Thai)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan,pungentGorgonzola,Limburger)General(Salad,Aperitif)Meat(Poultry,Fish,Shellfish)" , "Thai, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Salad)" , "General Food")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,Curried,Indian,LatinAmerican,Italian,Thai,Chinese,Japanese,PanAsian,Mediterranean,MiddleEastern)" , "Barbecue, Indian, Latin-American, Italian, Thai, Japanese, Pan-Asian, Mediterranean, Middle-East")
train['Food Paring'] = train['Food Paring'].replace("Cheese(tangyBrick,Edam,Feta)" , "Cheese")
train['Food Paring'] = train['Food Paring'].replace("Cheese(sharpBlue,Cheddar)Meat(Beef,Poultry,Fish)" , "Cheese")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,German)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Beef,SmokedMeat,Game,GrilledMeat,Salmon)" , "Barbecue, German, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss)Meat(SmokedMeat,Salmon)" , "Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(pepperyMontereyPepperJack,pungentGorgonzola,Limburger)General(Salad)" , "Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(German)General(Chocolate,Dessert)Meat(GrilledMeat)" , "German, General Food")
train['Food Paring'] = train['Food Paring'].replace("(Curried,Indian)Cheese(nuttyAsiago,Colby,Parmesan,sharpBlue,Cheddar)Meat(Shellfish)" , "Indian, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,LatinAmerican)Cheese(earthyCamembert,Fontina)General(Chocolate,Dessert)Meat(Beef,Shellfish,SmokedMeat,GrilledMeat)" , "Barbecue, Latin-American, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)" , "German")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Beef,SmokedMeat,Game,GrilledMeat,Salmon)" , "Barbecue, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Digestive)" , "Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,Italian)Cheese(earthyCamembert,Fontina)Meat(Pork,Poultry,Fish,Shellfish)" , "Barbecue, Italian, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Curried)Cheese(nuttyAsiago,Colby,Parmesan,pepperyMontereyPepperJack)Meat(Poultry,Fish)" , "Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(tangyBrick,Edam,Feta)General(Salad)Meat(Poultry,Fish,Shellfish)" , "German, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss,pungentGorgonzola,Limburger)General(Chocolate)Meat(Beef)" , "Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Chocolate,Dessert)Meat(Beef,SmokedMeat,GrilledMeat)" , "Barbecue, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Thai)Cheese(tangyBrick,Edam,Feta)General(Salad,Aperitif)Meat(Fish)" , "Thai, Cheese, General Food, Meat")
_____no_output_____train['Food Paring'] = train['Food Paring'].replace("(LatinAmerican,German)Meat(Pork,Poultry)" , "Latin-American, German, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue)Cheese(butteryBrie,Gouda,Havarti,Swiss,earthyCamembert,Fontina)General(Chocolate,Dessert)Meat(Beef,Shellfish,SmokedMeat,Game,GrilledMeat)" , "Barbecue, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Digestive)Meat(Beef,SmokedMeat,Game)" , "Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(Dessert,Aperitif,Digestive)" , "Dessert")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Game,GrilledMeat,Salmon)" , "Barbecue, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Salad,Aperitif)" , "German, Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,earthyCamembert,Fontina)General(Chocolate)Meat(Game)" , "German, Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(Aperitif,Digestive)Meat(Game,Salmon)" , "Dessert, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan)General(Chocolate)Meat(Beef)" , "Barbecue, Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(Indian,MiddleEastern)Cheese(nuttyAsiago,Colby,Parmesan,tangyBrick,Edam,Feta)General(Salad,Aperitif)Meat(Fish,Shellfish)" , "Indian, Middle-East, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(earthyCamembert,Fontina)General(Chocolate)Meat(Game)" , "German, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(pepperyMontereyPepperJack,tangyBrick,Edam,Feta)General(Salad)Meat(Poultry,Fish,Shellfish)" , "Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,LatinAmerican)General(Chocolate)Meat(SmokedMeat,GrilledMeat)" , "Barbecue, Latin-American, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Salad)Meat(Pork,Fish,Shellfish)" , "German, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Dessert)Meat(Poultry)" , "Dessert, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)General(Salad)Meat(Fish)" , "German, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("None,yet" , "None yet")
train['Food Paring'] = train['Food Paring'].replace("(Mediterranean)Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Pork,Poultry)" , "Mediterranean, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,Curried,Indian,LatinAmerican,Chinese)Cheese(sharpBlue,Cheddar)General(Aperitif,Digestive)Meat(Shellfish,Game)" , "Barbecue, Indian, Latin-American, Chinese, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(pepperyMontereyPepperJack)General(Salad)Meat(Pork)" , "German, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Italian,MiddleEastern)Cheese(pepperyMontereyPepperJack)General(Salad)Meat(Fish)" , "Italian, Middle-East, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Japanese,German)Cheese(pepperyMontereyPepperJack)General(Aperitif)Meat(Poultry,Fish)" , "Japanese, German, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(LatinAmerican,German)Meat(Beef,SmokedMeat,GrilledMeat)" , "Latin-American, German, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Chocolate,Salad,Dessert,Aperitif)" , "Dessert")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue)Cheese(earthyCamembert,Fontina)Meat(Beef,SmokedMeat,Game,GrilledMeat)" , "Barbecue, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)General(Salad)Meat(Pork,Fish,Shellfish)" , "German, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Aperitif)Meat(Pork,Poultry,Fish,Shellfish)" , "German, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(earthyCamembert,Fontina,sharpBlue,Cheddar)Meat(GrilledMeat)" , "Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Curried,Indian,Thai,Chinese,Japanese,PanAsian)Cheese(sharpBlue,Cheddar)" , "Indian, Thai, Chinese, Japanese, Pan-Aisan, Cheese")
train['Food Paring'] = train['Food Paring'].replace("Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Dessert,Digestive)" , "Cheese, General Food")
_____no_output_____train['Food Paring'] = train['Food Paring'].replace("(LatinAmerican)Meat(Beef,Poultry)" , "Latin-American, Meat")
train['Food Paring'] = train['Food Paring'].replace("(German)Meat(SmokedMeat,Game,GrilledMeat)" , "German, Meat ")
train['Food Paring'] = train['Food Paring'].replace("Cheese(nuttyAsiago,Colby,Parmesan)General(Digestive)" , "Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,tangyBrick,Edam,Feta)General(Salad)Meat(Pork,Poultry,Fish,Shellfish)" , "Barbecue, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Indian,Mediterranean,MiddleEastern)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Fish,Shellfish)" , "Indian, Mediterranean, Middle-East, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger,tangyBrick,Edam,Feta)General(Salad)" , "Cheese, General")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(earthyCamembert,Fontina)Meat(SmokedMeat,Game,GrilledMeat)" , "German, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(earthyCamembert,Fontina)General(Chocolate)Meat(GrilledMeat)" , "Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("Cheese(sharpBlue,Cheddar,tangyBrick,Edam,Feta)General(Aperitif,Digestive)" , "Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("Cheese(pepperyMontereyPepperJack)General(Chocolate)Meat(GrilledMeat)" , "Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Curried,German)Cheese(nuttyAsiago,Colby,Parmesan)General(Digestive)Meat(Salmon)" , "German, Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Dessert,Digestive)" , "German, Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("Cheese(earthyCamembert,Fontina)General(Aperitif)" , "Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(German)General(Salad)Meat(Poultry,Fish)" , "German, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Japanese)" , "Japanese")
train['Food Paring'] = train['Food Paring'].replace("(German)Cheese(sharpBlue,Cheddar)General(Salad)Meat(Pork)" , "German, Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(pungentGorgonzola,Limburger,tangyBrick,Edam,Feta)General(Digestive)Meat(Shellfish,Game,GrilledMeat)" , "Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,LatinAmerican)Cheese(pepperyMontereyPepperJack)Meat(Fish,SmokedMeat)" , "Barbecue, Latin-American, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Salad)Meat(Poultry,Game)" , "General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Curried,Thai,PanAsian)Cheese(sharpBlue,Cheddar)Meat(Game,GrilledMeat)" , "Thai, Cheese, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss,pungentGorgonzola,Limburger)General(Dessert,Digestive)Meat(Game)" , "Cheese, General Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar,pungentGorgonzola,Limburger)" , "Cheese")
train['Food Paring'] = train['Food Paring'].replace("(Aperitif)Meat(Fish,Shellfish,Salmon)" , "Dessert, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Thai,Chinese,Japanese,PanAsian)Meat(Pork,Poultry,Fish,Shellfish)" , "Thai, Chinese, Japanese, Pan-Asian, Meat")
train['Food Paring'] = train['Food Paring'].replace("(Barbecue,LatinAmerican)Cheese(nuttyAsiago,Colby,Parmesan)General(Chocolate)Meat(Salmon)" , "Barbecue, Latin-American, Geberal Food, Meat")
train['Food Paring'] = train['Food Paring'].replace("Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Aperitif,Digestive)" , "Cheese, General Food")
train['Food Paring'] = train['Food Paring'].replace("(Dessert,Aperitif)" , "Dessert")
train['Food Paring'] = train['Food Paring'].replace("(Chocolate,Salad,Dessert,Apritif)" , "Dessert")
_____no_output_____train['Food Paring'].nunique()_____no_output_____freq = nltk.FreqDist(train['Glassware Used'])
for key,value in freq.items():
print(str(key)+' : '+str(value))PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein) : 91275
PintGlass(orBecker,Nonic,Tumbler),PilsenerGlass(orPokal),Mug(orSeidel,Stein) : 5217
PilsenerGlass(orPokal) : 5758
Flute,PilsenerGlass(orPokal),Mug(orSeidel,Stein) : 4689
PintGlass(orBecker,Nonic,Tumbler),Snifter,OversizedWineGlass : 5807
PintGlass(orBecker,Nonic,Tumbler),PilsenerGlass(orPokal) : 1560
Snifter,Tulip,Goblet(orChalice),OversizedWineGlass : 1229
PintGlass(orBecker,Nonic,Tumbler),Tulip,OversizedWineGlass : 8982
Mug(orSeidel,Stein),Stange(SlenderCylinder) : 394
PintGlass(orBecker,Nonic,Tumbler),Snifter,Tulip : 1379
Flute,Tulip,OversizedWineGlass : 5523
Flute,WeizenGlass : 517
Flute,PilsenerGlass(orPokal) : 4254
PintGlass(orBecker,Nonic,Tumbler) : 3888
WeizenGlass : 4748
Goblet(orChalice) : 1283
Snifter,Tulip,OversizedWineGlass : 15135
Snifter,Tulip,Goblet(orChalice) : 774
Mug(orSeidel,Stein) : 918
PintGlass(orBecker,Nonic,Tumbler),Goblet(orChalice) : 2376
PilsenerGlass(orPokal),Mug(orSeidel,Stein) : 977
Tulip,OversizedWineGlass : 628
Flute,PilsenerGlass(orPokal),Mug(orSeidel,Stein),Stange(SlenderCylinder) : 2722
Stange(SlenderCylinder),WeizenGlass : 1681
Snifter,Goblet(orChalice) : 1987
PintGlass(orBecker,Nonic,Tumbler),Stange(SlenderCylinder) : 1438
Flute,Snifter,Tulip,Stange(SlenderCylinder) : 501
Stange(SlenderCylinder) : 2752
PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein),OversizedWineGlass : 2255
Flute,Snifter,Tulip : 594
PintGlass(orBecker,Nonic,Tumbler),Snifter : 1322
PintGlass(orBecker,Nonic,Tumbler),Snifter,Mug(orSeidel,Stein) : 910
Tulip,Goblet(orChalice),OversizedWineGlass : 1175
None,yet : 193
Flute,Snifter,OversizedWineGlass : 75
Snifter,OversizedWineGlass : 386
Flute : 51
PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein),WeizenGlass : 179
Flute,Stange(SlenderCylinder) : 111
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein)','Pint Glass, Mug')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),PilsenerGlass(orPokal),Mug(orSeidel,Stein)','Pint Glass, Pilsener Glass, Mug')
train['Glassware Used'] = train['Glassware Used'].replace('PilsenerGlass(orPokal)','Pilsener Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,PilsenerGlass(orPokal),Mug(orSeidel,Stein)','Flute, Pilsener Glass, Mug')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Snifter,OversizedWineGlass','Pint Glass, Snifter, Over-sized Wine Glass')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),PilsenerGlass(orPokal)','Pint Glass, Pilsener Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Snifter,Tulip,Goblet(orChalice),OversizedWineGlass','Snifter, Tulip, Goblet, Over-sized Wine Glass')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Tulip,OversizedWineGlass','Pint Glass, Tulip, Over-sized Wine Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Mug(orSeidel,Stein),Stange(SlenderCylinder)','Mug, Stange')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Snifter,Tulip','Pint Glass, Snifter, Tulip')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,Tulip,OversizedWineGlass','Flute, Tulip, Over-sized Wine Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,WeizenGlass','Flute, Weizen Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,PilsenerGlass(orPokal)','Flute, Pilsener Glass')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler)','Pint Glass')
train['Glassware Used'] = train['Glassware Used'].replace('WeizenGlass','Weizen Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Goblet(orChalice)','Goblet')
train['Glassware Used'] = train['Glassware Used'].replace('Snifter,Tulip,OversizedWineGlass','Snifter, Tulip, Over-sized Wine Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Snifter,Tulip,Goblet(orChalice)','Snifter, Tulip, Goblet')
train['Glassware Used'] = train['Glassware Used'].replace('Mug(orSeidel,Stein)','Mug')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Goblet(orChalice)','Pint Glass, Goblet')
train['Glassware Used'] = train['Glassware Used'].replace('PilsenerGlass(orPokal),Mug(orSeidel,Stein)','Pilsener Glass, Mug')
train['Glassware Used'] = train['Glassware Used'].replace('Tulip,OversizedWineGlass','Tulip, Over-sized Wine Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,PilsenerGlass(orPokal),Mug(orSeidel,Stein),Stange(SlenderCylinder)','Flute, Mug, Stange')
train['Glassware Used'] = train['Glassware Used'].replace('Stange(SlenderCylinder),WeizenGlass','Stange, Weizen Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Snifter,Goblet(orChalice)','Snifter, Goblet')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Stange(SlenderCylinder)','Pint Glass, Stange')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,Snifter,Tulip,Stange(SlenderCylinder)','Flute, Snifter, Tulip, Stange ')
train['Glassware Used'] = train['Glassware Used'].replace('Stange(SlenderCylinder)','Stange ')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein),OversizedWineGlass','Pint Glass, Mug, Over-sized Wine Glass')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,Snifter,Tulip','Flute, Snifter, Tulip ')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Snifter','Pint Glass, Snifter ')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Snifter,Mug(orSeidel,Stein)','Pint Glass, Snifter, Mug ')
train['Glassware Used'] = train['Glassware Used'].replace('Tulip,Goblet(orChalice),OversizedWineGlass','Tulip, Over-sized Wine Glass')
train['Glassware Used'] = train['Glassware Used'].replace('None,yet','None yet ')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,Snifter,OversizedWineGlass','Flute, Snifter, Over-sized Wine Glass ')
train['Glassware Used'] = train['Glassware Used'].replace('Snifter,OversizedWineGlass','Snifter, Over-sized Wine Glass ')
train['Glassware Used'] = train['Glassware Used'].replace('Flute','Flute ')
train['Glassware Used'] = train['Glassware Used'].replace('PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein),WeizenGlass','Pint Glass, Mug, Weizen Glass ')
train['Glassware Used'] = train['Glassware Used'].replace('Flute,Stange(SlenderCylinder)','Flute, Stange ')
_____no_output_____train['Glassware Used'].nunique()_____no_output_____train['Style Name'].nunique()_____no_output_____from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()_____no_output_____train['Food Paring label']= label_encoder.fit_transform(train['Food Paring'])
train['Glassware Used label']= label_encoder.fit_transform(train['Glassware Used'])
train['Style Name label']= label_encoder.fit_transform(train['Style Name']) _____no_output_____train['Ratings'] = pd.to_numeric(train['Ratings'],errors='coerce')
train['Beer Name'] = train['Beer Name'].astype(float)
train['Brewing Company'] = train['Brewing Company'].astype(float)_____no_output_____train.head()_____no_output_____train.dtypes_____no_output_____train1 = train[['ABV', 'Ratings', 'Minimum Temperature','Maximum Temperature','Minimum Serving Temperature','Maximum Serving Temperature', 'Food Paring label', 'Glassware Used label', 'Style Name label', 'Score']]
_____no_output_____train1.isnull().sum()_____no_output_____# Replace empty values by mean rating values
avg_rating = train1["Ratings"].astype("float").mean(axis=0)
train1["Ratings"].replace(np.nan, avg_rating, inplace=True)_____no_output_____train1.isnull().sum()_____no_output_____sns.set(style="ticks", color_codes=True)
sns.pairplot(train1)_____no_output_____# A simple correlation plot using seaborn. The plot below shows how the different variables correlate with each other
corr = train1.corr()
fig, ax = plt.subplots(figsize=(10,10))
ax = sns.heatmap(
corr,
    vmin=-1, vmax=1, center=0,
square=True,
annot=True,
linewidths=.5,
cmap="YlGnBu" )
#Rotating labels on x axis
ax.set_xticklabels(
ax.get_xticklabels(),
rotation=55,
horizontalalignment='right'
)_____no_output_____
</code>
## TEST SET_____no_output_____
<code>
test.head()_____no_output_____test.isnull().sum()_____no_output_____test[["Minimum Temperature", "Maximum Temperature"]] = test["Cellar Temperature"].str.split("-", expand=True, n=1).astype(float)
test[["Minimum Serving Temperature", "Maximum Serving Temperature"]] = test["Serving Temperature"].str.split("-", expand=True, n=1).astype(float)_____no_output_____avg_abv1 = test["ABV"].astype("float").mean(axis=0)
test["ABV"].replace(np.nan, avg_abv1, inplace=True)
avg_min_temp1 = test["Minimum Temperature"].astype("float").mean(axis=0)
test["Minimum Temperature"].replace(np.nan, avg_min_temp1, inplace=True)
avg_max_temp1 = test["Maximum Temperature"].astype("float").mean(axis=0)
test["Maximum Temperature"].replace(np.nan, avg_max_temp1, inplace=True)
avg_minserv_temp1 = test["Minimum Serving Temperature"].astype("float").mean(axis=0)
test["Minimum Serving Temperature"].replace(np.nan, avg_minserv_temp1, inplace=True)
avg_maxserv_temp1 = test["Maximum Serving Temperature"].astype("float").mean(axis=0)
test["Maximum Serving Temperature"].replace(np.nan, avg_maxserv_temp1, inplace=True)_____no_output_____freq = nltk.FreqDist(test['Food Paring'])
for key,value in freq.items():
print(str(key)+' : '+str(value))(Curried,Thai)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Poultry,Fish,Shellfish,Salmon) : 2842
(Barbecue)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Chocolate,Dessert)Meat(Beef,SmokedMeat,GrilledMeat) : 1332
Cheese(earthyCamembert,Fontina)General(Aperitif) : 73
(LatinAmerican,German)Meat(Pork,Poultry) : 102
(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Game,GrilledMeat,Salmon) : 1127
Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Chocolate,Digestive)Meat(Beef,SmokedMeat,Game,GrilledMeat) : 570
Meat(Poultry,Fish,Shellfish) : 272
Cheese(pepperyMontereyPepperJack,pungentGorgonzola,Limburger)General(Salad) : 614
(Dessert)Meat(Poultry) : 154
(Curried,Thai)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan,pungentGorgonzola,Limburger)General(Salad,Aperitif)Meat(Poultry,Fish,Shellfish) : 998
(Curried,Indian,Thai,Chinese,Japanese,PanAsian)Cheese(sharpBlue,Cheddar) : 198
(PanAsian)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan,tangyBrick,Edam,Feta)General(Salad)Meat(Poultry) : 1405
(Indian,Mediterranean,MiddleEastern)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Fish,Shellfish) : 242
Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Digestive) : 76
Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Dessert,Digestive) : 147
Cheese(sharpBlue,Cheddar)Meat(Beef,Poultry,Fish) : 658
(Salad) : 454
(Italian,German)Cheese(nuttyAsiago,Colby,Parmesan)Meat(Fish,Shellfish,Salmon) : 91
Cheese(pepperyMontereyPepperJack,tangyBrick,Edam,Feta)General(Salad)Meat(Poultry,Fish,Shellfish) : 363
(German) : 442
(Thai)Cheese(tangyBrick,Edam,Feta)General(Salad,Aperitif)Meat(Fish) : 323
(Curried,Indian)Cheese(nuttyAsiago,Colby,Parmesan,sharpBlue,Cheddar)Meat(Shellfish) : 161
(Barbecue)Cheese(butteryBrie,Gouda,Havarti,Swiss,earthyCamembert,Fontina)General(Chocolate,Dessert)Meat(Beef,Shellfish,SmokedMeat,Game,GrilledMeat) : 546
Cheese(pepperyMontereyPepperJack)General(Chocolate)Meat(GrilledMeat) : 79
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,earthyCamembert,Fontina)General(Chocolate)Meat(Game) : 109
(German)Cheese(earthyCamembert,Fontina)Meat(SmokedMeat,Game,GrilledMeat) : 96
(Dessert,Aperitif,Digestive) : 175
(Indian,LatinAmerican,PanAsian)General(Aperitif) : 27
Cheese(butteryBrie,Gouda,Havarti,Swiss,pungentGorgonzola,Limburger)General(Chocolate)Meat(Beef) : 142
Cheese(nuttyAsiago,Colby,Parmesan)General(Digestive) : 101
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Salad,Aperitif) : 264
(German)Cheese(tangyBrick,Edam,Feta)General(Salad)Meat(Poultry,Fish,Shellfish) : 408
(Indian,MiddleEastern)Cheese(nuttyAsiago,Colby,Parmesan,tangyBrick,Edam,Feta)General(Salad,Aperitif)Meat(Fish,Shellfish) : 70
(German)Cheese(earthyCamembert,Fontina)General(Chocolate)Meat(Game) : 105
(Italian,MiddleEastern)Cheese(pepperyMontereyPepperJack)General(Salad)Meat(Fish) : 471
(German)General(Salad)Meat(Pork,Fish,Shellfish) : 210
(Barbecue)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan)General(Chocolate)Meat(Beef) : 440
(Aperitif,Digestive)Meat(Game,Salmon) : 158
(Barbecue,Italian)Cheese(earthyCamembert,Fontina)Meat(Pork,Poultry,Fish,Shellfish) : 240
(Barbecue,Curried,Indian,LatinAmerican,Italian,Thai,Chinese,Japanese,PanAsian,Mediterranean,MiddleEastern) : 81
(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,tangyBrick,Edam,Feta)General(Salad)Meat(Pork,Poultry,Fish,Shellfish) : 56
(German)Meat(SmokedMeat,Game,GrilledMeat) : 93
(Japanese,German)Cheese(pepperyMontereyPepperJack)General(Aperitif)Meat(Poultry,Fish) : 328
(Barbecue)Cheese(earthyCamembert,Fontina)Meat(Beef,SmokedMeat,Game,GrilledMeat) : 88
Cheese(butteryBrie,Gouda,Havarti,Swiss)Meat(SmokedMeat,Salmon) : 153
(Mediterranean)Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Pork,Poultry) : 221
(German)Cheese(pepperyMontereyPepperJack)General(Salad)Meat(Pork) : 160
(Barbecue,Curried,Indian,LatinAmerican,Chinese)Cheese(sharpBlue,Cheddar)General(Aperitif,Digestive)Meat(Shellfish,Game) : 93
(Curried,Thai,PanAsian)Cheese(sharpBlue,Cheddar)Meat(Game,GrilledMeat) : 37
None,yet : 602
Cheese(tangyBrick,Edam,Feta) : 241
(Barbecue,LatinAmerican)Cheese(earthyCamembert,Fontina)General(Chocolate,Dessert)Meat(Beef,Shellfish,SmokedMeat,GrilledMeat) : 152
Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger,tangyBrick,Edam,Feta)General(Salad) : 130
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Salad)Meat(Pork,Fish,Shellfish) : 123
(Barbecue,Indian,LatinAmerican,Thai,PanAsian)Cheese(pepperyMontereyPepperJack)Meat(Shellfish) : 118
(German)General(Salad)Meat(Fish) : 187
Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan)Meat(Pork,GrilledMeat) : 144
(LatinAmerican)Meat(Beef,Poultry) : 67
(Curried,German)Cheese(nuttyAsiago,Colby,Parmesan)General(Digestive)Meat(Salmon) : 102
Meat(Pork,Poultry) : 142
(Barbecue,LatinAmerican)Cheese(nuttyAsiago,Colby,Parmesan)General(Chocolate)Meat(Salmon) : 20
(German)General(Chocolate,Dessert)Meat(GrilledMeat) : 57
Cheese(earthyCamembert,Fontina)General(Chocolate)Meat(GrilledMeat) : 29
(Chocolate,Salad,Dessert,Aperitif) : 44
(LatinAmerican,German)Meat(Beef,SmokedMeat,GrilledMeat) : 66
(Barbecue)Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Beef,GrilledMeat) : 137
Cheese(butteryBrie,Gouda,Havarti,Swiss,pungentGorgonzola,Limburger)General(Dessert,Digestive)Meat(Game) : 19
Cheese(sharpBlue,Cheddar,tangyBrick,Edam,Feta)General(Aperitif,Digestive) : 7
Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Digestive)Meat(Beef,SmokedMeat,Game) : 86
(Barbecue,LatinAmerican)General(Chocolate)Meat(SmokedMeat,GrilledMeat) : 129
(Salad)Meat(Poultry,Game) : 27
(German)General(Salad)Meat(Poultry,Fish) : 18
Cheese(pungentGorgonzola,Limburger,tangyBrick,Edam,Feta)General(Digestive)Meat(Shellfish,Game,GrilledMeat) : 24
(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Beef,SmokedMeat,Game,GrilledMeat,Salmon) : 81
(German)Cheese(sharpBlue,Cheddar)General(Salad)Meat(Pork) : 22
(Thai,Chinese,Japanese,PanAsian)Meat(Pork,Poultry,Fish,Shellfish) : 10
(Barbecue,LatinAmerican)Cheese(pepperyMontereyPepperJack)Meat(Fish,SmokedMeat) : 40
Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Aperitif,Digestive) : 10
(Curried)Cheese(nuttyAsiago,Colby,Parmesan,pepperyMontereyPepperJack)Meat(Poultry,Fish) : 37
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Dessert,Digestive) : 8
Cheese(earthyCamembert,Fontina,sharpBlue,Cheddar)Meat(GrilledMeat) : 36
Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar,pungentGorgonzola,Limburger) : 12
(Barbecue,German)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Beef,SmokedMeat,Game,GrilledMeat,Salmon) : 44
(Dessert,Aperitif) : 2
(German)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Aperitif)Meat(Pork,Poultry,Fish,Shellfish) : 48
(Japanese) : 6
(Aperitif)Meat(Fish,Shellfish,Salmon) : 6
test['Food Paring'] = test['Food Paring'].replace("(Curried,Thai)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Poultry,Fish,Shellfish,Salmon)" , "Thai, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Chocolate,Dessert)Meat(Beef,SmokedMeat,GrilledMeat)" , "Barbecue, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(earthyCamembert,Fontina)General(Aperitif)" , "Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("(LatinAmerican,German)Meat(Pork,Poultry)" , "Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Game,GrilledMeat,Salmon)" , "Barbecue, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Chocolate,Digestive)Meat(Beef,SmokedMeat,Game,GrilledMeat)" , "Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Meat(Poultry,Fish,Shellfish)" , "Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(pepperyMontereyPepperJack,pungentGorgonzola,Limburger)General(Salad)" , "Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("(Dessert)Meat(Poultry)" , "Dessert, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Curried,Thai)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan,pungentGorgonzola,Limburger)General(Salad,Aperitif)Meat(Poultry,Fish,Shellfish)" , "Thai, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Curried,Indian,Thai,Chinese,Japanese,PanAsian)Cheese(sharpBlue,Cheddar)" , "Indian, Thai, Chinese, Japanese, PanAsian, Cheese")
test['Food Paring'] = test['Food Paring'].replace("(PanAsian)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan,tangyBrick,Edam,Feta)General(Salad)Meat(Poultry)" , "PanAsian, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Indian,Mediterranean,MiddleEastern)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Fish,Shellfish)" , "Indian, Mediterranean, MiddleEastern, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Digestive)" , "Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Dessert,Digestive)" , "Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("Cheese(sharpBlue,Cheddar)Meat(Beef,Poultry,Fish)" , "Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Salad)" , "Salad")
test['Food Paring'] = test['Food Paring'].replace("(Italian,German)Cheese(nuttyAsiago,Colby,Parmesan)Meat(Fish,Shellfish,Salmon)" , "Italian, German, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(pepperyMontereyPepperJack,tangyBrick,Edam,Feta)General(Salad)Meat(Poultry,Fish,Shellfish)" , "Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)" , "German")
test['Food Paring'] = test['Food Paring'].replace("(Thai)Cheese(tangyBrick,Edam,Feta)General(Salad,Aperitif)Meat(Fish)" , "Thai, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Curried,Indian)Cheese(nuttyAsiago,Colby,Parmesan,sharpBlue,Cheddar)Meat(Shellfish)" , "Indian, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue)Cheese(butteryBrie,Gouda,Havarti,Swiss,earthyCamembert,Fontina)General(Chocolate,Dessert)Meat(Beef,Shellfish,SmokedMeat,Game,GrilledMeat)" , "Barbecue, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(pepperyMontereyPepperJack)General(Chocolate)Meat(GrilledMeat)" , "Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,earthyCamembert,Fontina)General(Chocolate)Meat(Game)" , "German, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(earthyCamembert,Fontina)Meat(SmokedMeat,Game,GrilledMeat)" , "German, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Dessert,Aperitif,Digestive)" , "Dessert")
test['Food Paring'] = test['Food Paring'].replace("(Indian,LatinAmerican,PanAsian)General(Aperitif)" , "Indian, LatinAmerican, PanAsian, General Food")
test['Food Paring'] = test['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss,pungentGorgonzola,Limburger)General(Chocolate)Meat(Beef)" , "Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(nuttyAsiago,Colby,Parmesan)General(Digestive)" , "Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Salad,Aperitif)" , "German, Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(tangyBrick,Edam,Feta)General(Salad)Meat(Poultry,Fish,Shellfish)" , "German, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Indian,MiddleEastern)Cheese(nuttyAsiago,Colby,Parmesan,tangyBrick,Edam,Feta)General(Salad,Aperitif)Meat(Fish,Shellfish)" , "Indian, MiddleEastern, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(earthyCamembert,Fontina)General(Chocolate)Meat(Game)" , "German, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Italian,MiddleEastern)Cheese(pepperyMontereyPepperJack)General(Salad)Meat(Fish)" , "Italian, MiddleEastern, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)General(Salad)Meat(Pork,Fish,Shellfish)" , "German, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue)Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan)General(Chocolate)Meat(Beef)" , "Barbecue, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Aperitif,Digestive)Meat(Game,Salmon)" , "Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,Italian)Cheese(earthyCamembert,Fontina)Meat(Pork,Poultry,Fish,Shellfish)" , "Barbecue, Italian, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,Curried,Indian,LatinAmerican,Italian,Thai,Chinese,Japanese,PanAsian,Mediterranean,MiddleEastern)" , "Barbecue, Curried, Indian, LatinAmerican, Italian, Thai, Chinese, Japanese, PanAsian, Mediterranean, MiddleEastern")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar,tangyBrick,Edam,Feta)General(Salad)Meat(Pork,Poultry,Fish,Shellfish)" , "Barbecue, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)Meat(SmokedMeat,Game,GrilledMeat)" , "German, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Japanese,German)Cheese(pepperyMontereyPepperJack)General(Aperitif)Meat(Poultry,Fish)" , "Japanese, German, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue)Cheese(earthyCamembert,Fontina)Meat(Beef,SmokedMeat,Game,GrilledMeat)" , "Barbecue, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss)Meat(SmokedMeat,Salmon)" , "Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Mediterranean)Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Pork,Poultry)" , "Mediterranean, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(pepperyMontereyPepperJack)General(Salad)Meat(Pork)" , "German, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,Curried,Indian,LatinAmerican,Chinese)Cheese(sharpBlue,Cheddar)General(Aperitif,Digestive)Meat(Shellfish,Game)" , "Barbecue, Curried, Indian, LatinAmerican, Chinese, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Curried,Thai,PanAsian)Cheese(sharpBlue,Cheddar)Meat(Game,GrilledMeat)" , "Curried, Thai, PanAsian, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("None,yet" , "None")
test['Food Paring'] = test['Food Paring'].replace("Cheese(tangyBrick,Edam,Feta)" , "Cheese")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,LatinAmerican)Cheese(earthyCamembert,Fontina)General(Chocolate,Dessert)Meat(Beef,Shellfish,SmokedMeat,GrilledMeat)" , "Barbecue, LatinAmerican, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger,tangyBrick,Edam,Feta)General(Salad)" , "Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Salad)Meat(Pork,Fish,Shellfish)" , "German, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,Indian,LatinAmerican,Thai,PanAsian)Cheese(pepperyMontereyPepperJack)Meat(Shellfish)" , "Barbecue, Indian, LatinAmerican, Thai, PanAsian, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)General(Salad)Meat(Fish)" , "German, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(earthyCamembert,Fontina,nuttyAsiago,Colby,Parmesan)Meat(Pork,GrilledMeat)" , "Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(LatinAmerican)Meat(Beef,Poultry)" , "LatinAmerican, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Curried,German)Cheese(nuttyAsiago,Colby,Parmesan)General(Digestive)Meat(Salmon)" , "German, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Meat(Pork,Poultry)" , "Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,LatinAmerican)Cheese(nuttyAsiago,Colby,Parmesan)General(Chocolate)Meat(Salmon)" , "Barbecue, LatinAmerican, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)General(Chocolate,Dessert)Meat(GrilledMeat)" , "German, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(earthyCamembert,Fontina)General(Chocolate)Meat(GrilledMeat)" , "Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Chocolate,Salad,Dessert,Aperitif)" , "Dessert")
test['Food Paring'] = test['Food Paring'].replace("(LatinAmerican,German)Meat(Beef,SmokedMeat,GrilledMeat)" , "LatinAmerican, German, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue)Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)Meat(Beef,GrilledMeat)" , "Barbecue, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss,pungentGorgonzola,Limburger)General(Dessert,Digestive)Meat(Game)" , "Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(sharpBlue,Cheddar,tangyBrick,Edam,Feta)General(Aperitif,Digestive)" , "Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Digestive)Meat(Beef,SmokedMeat,Game)" , "Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,LatinAmerican)General(Chocolate)Meat(SmokedMeat,GrilledMeat)" , "Barbecue, LatinAmerican, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Salad)Meat(Poultry,Game)" , "Salad, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)General(Salad)Meat(Poultry,Fish)" , "German, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(pungentGorgonzola,Limburger,tangyBrick,Edam,Feta)General(Digestive)Meat(Shellfish,Game,GrilledMeat)" , "Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Beef,SmokedMeat,Game,GrilledMeat,Salmon)" , "Barbecue, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(sharpBlue,Cheddar)General(Salad)Meat(Pork)" , "German, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Thai,Chinese,Japanese,PanAsian)Meat(Pork,Poultry,Fish,Shellfish)" , "Thai, Chinese, Japanese, PanAsian, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,LatinAmerican)Cheese(pepperyMontereyPepperJack)Meat(Fish,SmokedMeat)" , "Barbecue, LatinAmerican, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(sharpBlue,Cheddar,pungentGorgonzola,Limburger)General(Aperitif,Digestive)" , "Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("(Curried)Cheese(nuttyAsiago,Colby,Parmesan,pepperyMontereyPepperJack)Meat(Poultry,Fish)" , "Curried, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar)General(Dessert,Digestive)" , "German, Cheese, General Food")
test['Food Paring'] = test['Food Paring'].replace("Cheese(earthyCamembert,Fontina,sharpBlue,Cheddar)Meat(GrilledMeat)" , "Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("Cheese(butteryBrie,Gouda,Havarti,Swiss,sharpBlue,Cheddar,pungentGorgonzola,Limburger)" , "Cheese")
test['Food Paring'] = test['Food Paring'].replace("(Barbecue,German)Cheese(pepperyMontereyPepperJack,sharpBlue,Cheddar)Meat(Beef,SmokedMeat,Game,GrilledMeat,Salmon)" , "Barbecue, German, Cheese, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Dessert,Aperitif)" , "Dessert")
test['Food Paring'] = test['Food Paring'].replace("(German)Cheese(butteryBrie,Gouda,Havarti,Swiss)General(Aperitif)Meat(Pork,Poultry,Fish,Shellfish)" , "German, Cheese, General Food, Meat")
test['Food Paring'] = test['Food Paring'].replace("(Japanese)" , "Japanese")
test['Food Paring'] = test['Food Paring'].replace("(Aperitif)Meat(Fish,Shellfish,Salmon)" , "Meat")_____no_output_____test['Food Paring'].nunique()_____no_output_____freq = nltk.FreqDist(test['Glassware Used'])
for key,value in freq.items():
print(str(key)+' : '+str(value))PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein) : 10141
Snifter,Tulip,OversizedWineGlass : 1681
Flute,PilsenerGlass(orPokal),Mug(orSeidel,Stein) : 521
PintGlass(orBecker,Nonic,Tumbler),Snifter,OversizedWineGlass : 646
PilsenerGlass(orPokal) : 640
Flute,Tulip,OversizedWineGlass : 614
PintGlass(orBecker,Nonic,Tumbler) : 433
PintGlass(orBecker,Nonic,Tumbler),Tulip,OversizedWineGlass : 998
Flute,PilsenerGlass(orPokal),Mug(orSeidel,Stein),Stange(SlenderCylinder) : 303
PintGlass(orBecker,Nonic,Tumbler),Snifter : 147
PintGlass(orBecker,Nonic,Tumbler),PilsenerGlass(orPokal),Mug(orSeidel,Stein) : 579
Mug(orSeidel,Stein) : 102
PilsenerGlass(orPokal),Mug(orSeidel,Stein) : 109
Stange(SlenderCylinder) : 306
Goblet(orChalice) : 142
PintGlass(orBecker,Nonic,Tumbler),Snifter,Mug(orSeidel,Stein) : 101
PintGlass(orBecker,Nonic,Tumbler),Goblet(orChalice) : 264
WeizenGlass : 528
Tulip,OversizedWineGlass : 70
Flute,PilsenerGlass(orPokal) : 473
PintGlass(orBecker,Nonic,Tumbler),Snifter,Tulip : 153
Snifter,Goblet(orChalice) : 221
PintGlass(orBecker,Nonic,Tumbler),Stange(SlenderCylinder) : 160
Tulip,Goblet(orChalice),OversizedWineGlass : 130
PintGlass(orBecker,Nonic,Tumbler),PilsenerGlass(orPokal) : 173
Stange(SlenderCylinder),WeizenGlass : 187
PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein),WeizenGlass : 20
PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein),OversizedWineGlass : 251
Flute,WeizenGlass : 57
Flute,Snifter,Tulip : 65
Flute,Snifter,Tulip,Stange(SlenderCylinder) : 56
Snifter,Tulip,Goblet(orChalice),OversizedWineGlass : 137
Snifter,OversizedWineGlass : 43
Snifter,Tulip,Goblet(orChalice) : 86
None,yet : 21
Flute,Stange(SlenderCylinder) : 12
Flute,Snifter,OversizedWineGlass : 8
Mug(orSeidel,Stein),Stange(SlenderCylinder) : 44
Flute : 6
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein)" , "PintGlass, Mug")
test['Glassware Used'] = test['Glassware Used'].replace("Snifter,Tulip,OversizedWineGlass" , "Snifter, Tulip, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,PilsenerGlass(orPokal),Mug(orSeidel,Stein)" , "Flute, PilsenerGlass, Mug")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Snifter,OversizedWineGlass" , "PintGlass, Snifter, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("PilsenerGlass(orPokal" , "PilsenerGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,Tulip,OversizedWineGlass" , "Flute, Tulip, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler)" , "PintGlass")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Tulip,OversizedWineGlass" , "PintGlass, Tulip, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,PilsenerGlass(orPokal),Mug(orSeidel,Stein),Stange(SlenderCylinder)" , "Flute, PilsenerGlass, Mug, Stange")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Snifter" , "PintGlass, Snifter")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),PilsenerGlass(orPokal),Mug(orSeidel,Stein)" , "PintGlass, PilsenerGlass, Mug")
test['Glassware Used'] = test['Glassware Used'].replace("Mug(orSeidel,Stein)" , "Mug")
test['Glassware Used'] = test['Glassware Used'].replace("PilsenerGlass(orPokal),Mug(orSeidel,Stein)" , "PilsenerGlass, Mug")
test['Glassware Used'] = test['Glassware Used'].replace("Stange(SlenderCylinder)" , "Stange")
test['Glassware Used'] = test['Glassware Used'].replace("Goblet(orChalice)" , "Goblet")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Snifter,Mug(orSeidel,Stein)","PintGlass, Snifter, Mug")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Goblet(orChalice)" , "PintGlass, Goblet")
test['Glassware Used'] = test['Glassware Used'].replace("WeizenGlass" , "WeizenGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Tulip,OversizedWineGlass" , "Tulip, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,PilsenerGlass(orPokal)" , "Flute, PilsenerGlass")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Snifter,Tulip" , "PintGlass, Snifter, Tulip")
test['Glassware Used'] = test['Glassware Used'].replace("Snifter,Goblet(orChalice)" , "Snifter, Goblet")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Stange(SlenderCylinder)" , "PintGlass, Stange")
test['Glassware Used'] = test['Glassware Used'].replace("Tulip,Goblet(orChalice),OversizedWineGlass" , "Tulip, Goblet, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),PilsenerGlass(orPokal)" , "PintGlass, PilsenerGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Stange(SlenderCylinder),WeizenGlass" , "Stange, WeizenGlass")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein),WeizenGlass" , "PintGlass, Mug, WeizenGlass")
test['Glassware Used'] = test['Glassware Used'].replace("PintGlass(orBecker,Nonic,Tumbler),Mug(orSeidel,Stein),OversizedWineGlass" , "PintGlass, Mug, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,WeizenGlass" , "Flute, WeizenGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,Snifter,Tulip" , "Flute, Snifter, Tulip")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,Snifter,Tulip,Stange(SlenderCylinder)" , "Flute, Snifter, Tulip, Stange")
test['Glassware Used'] = test['Glassware Used'].replace("Snifter,Tulip,Goblet(orChalice),OversizedWineGlass" , "Snifter, Tulip, Goblet, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Snifter,OversizedWineGlass" , "Snifter, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Snifter,Tulip,Goblet(orChalice)" , "Snifter, Tulip, Goblet")
test['Glassware Used'] = test['Glassware Used'].replace("None,yet" , "None")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,Stange(SlenderCylinder)" , "Flute,Stange")
test['Glassware Used'] = test['Glassware Used'].replace("Flute,Snifter,OversizedWineGlass" , "Flute, Snifter, OversizedWineGlass")
test['Glassware Used'] = test['Glassware Used'].replace("Mug(orSeidel,Stein),Stange(SlenderCylinder)" , "Mug, Stange")
test['Glassware Used'] = test['Glassware Used'].replace("Flute" , "Flute")
_____no_output_____test['Glassware Used'].nunique()_____no_output_____test.dtypes_____no_output_____test['Food Paring label']= label_encoder.fit_transform(test['Food Paring'])
test['Glassware Used label']= label_encoder.fit_transform(test['Glassware Used'])
test['Style Name label']= label_encoder.fit_transform(test['Style Name']) _____no_output_____test['Ratings'] = pd.to_numeric(test['Ratings'],errors='coerce')
test['Beer Name'] = test['Beer Name'].astype(float)
test['Brewing Company'] = test['Brewing Company'].astype(float)_____no_output_____test.isnull().sum()_____no_output_____test_avg_ratings = test['Ratings'].astype(float).mean()
test['Ratings'].replace(np.nan, test_avg_ratings, inplace=True)_____no_output_____test.isnull().sum()_____no_output_____test.columns_____no_output_____test1 = test[['ABV', 'Ratings', 'Minimum Temperature', 'Maximum Temperature','Minimum Serving Temperature', 'Maximum Serving Temperature', 'Food Paring label', 'Glassware Used label', 'Style Name label']]_____no_output_____test1.isnull().sum()_____no_output_____
</code>
## Data Pre-Processing_____no_output_____
<code>
x = train1.iloc[:,:-1]
y = train1.iloc[:,-1]
print(x.columns)
Index(['ABV', 'Ratings', 'Minimum Temperature', 'Maximum Temperature',
'Minimum Serving Temperature', 'Maximum Serving Temperature',
'Food Paring label', 'Glassware Used label', 'Style Name label'],
dtype='object')
from sklearn.preprocessing import StandardScaler
x = StandardScaler().fit(x).transform(x)
x[:3]_____no_output_____
</code>
### Regression_____no_output_____
<code>
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state = 0)_____no_output_____from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(x_train, y_train)_____no_output_____y_pred = reg.predict(x_test)
print(y_pred)[3.19830794 3.30957503 3.07962486 ... 3.16934654 3.23573775 3.25829787]
# Score based on the Root Mean Squared Log Error (RMSLE, computed here with base-10 logs)
def rmlse(y_test, y_pred):
    # root of the mean squared difference between log10(pred + 1) and log10(actual + 1)
    error = np.square(np.log10(y_pred + 1) - np.log10(y_test + 1)).mean() ** 0.5
    # report as a score where higher is better
    score = 1 - error
    return score
_____no_output_____print("\n----------------------------\nRMLSE Score = ", rmlse(y_test, y_pred))
----------------------------
RMLSE Score = 0.7615472587302806
</code>
### SVR_____no_output_____
<code>
from sklearn.svm import SVR
svr = SVR(kernel='rbf')_____no_output_____# Training the regressor with training data
svr.fit(x_train, y_train)_____no_output_____y_pred2 = svr.predict(x_test)
print(y_pred2)[3.63264068 3.79316237 3.64473313 ... 3.70577344 3.65590063 3.68563509]
print("----------------------------\nRMLSE Score = ", rmlse(y_test, y_pred2))----------------------------
RMLSE Score = 0.7502329428531098
pd.DataFrame({'Score' : y_pred}).to_excel("C:\\Users\\Moaz\\Desktop\\moaz\\Jupyter Python NB\\Machine Hack\\beer_score.xlsx")_____no_output_____
</code>
|
{
"repository": "MOAZ47/Predict-the-score-of-Beer",
"path": "Predict Beer .ipynb",
"matched_keywords": [
"Salmon"
],
"stars": null,
"size": 1022808,
"hexsha": "cb951b787d839372bc7c4225585ce8ec4c4df0d4",
"max_line_length": 799748,
"avg_line_length": 469.8245291686,
"alphanum_fraction": 0.9242937091
}
|
# Notebook from Cinofix/graph-kernel-manifold-learning
Path: src/PPI_WLKernel.ipynb
<code>
import numpy as np
import scipy.io as sio
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.metrics.pairwise import pairwise_distances
from sklearn import manifold
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.svm import SVC_____no_output_____from graph_kernels_lib import WeisfeilerLehmanKernel, fit_n_components_____no_output_____ppi = sio.loadmat("PPI.mat")
ppi_graphs = ppi['G'][0]
ppi_labels = ppi['labels'].ravel()_____no_output_____n = ppi_labels.shape[0]_____no_output_____wl_kernel = WeisfeilerLehmanKernel()_____no_output_____K = wl_kernel.eval_similarities(ppi_graphs[:]['am'], 2)_____no_output_____D = pairwise_distances(K, metric='euclidean')_____no_output_____plt.imshow(D, zorder=2, cmap='Blues', interpolation='nearest')
plt.colorbar();
plt.style.use("ggplot")
plt.show()_____no_output_____
</code>
# SVM Linear Classifier_____no_output_____
<code>
from sklearn.model_selection import StratifiedKFold
strat_k_fold = StratifiedKFold(n_splits = 10, shuffle = True) #10_____no_output_____clf = svm.SVC(kernel="linear", C = 1.0)
scores_ln = cross_val_score(clf, D, ppi_labels, cv = strat_k_fold)
print(str(np.min(scores_ln)) +" - "+str(np.mean(scores_ln))+ " - " + str(np.max(scores_ln)) + " - "+ str(np.std(scores_ln)))0.5555555555555556 - 0.763888888888889 - 1.0 - 0.15023130314433286
PCA_D = PCA(n_components = 2).fit_transform(D)
plt.plot(np.cumsum(PCA().fit(D).explained_variance_ratio_))
plt.show()
np.cumsum(PCA().fit(D).explained_variance_ratio_)[:3]_____no_output_____acidovorax = PCA_D[ppi_labels == 1]
acidobacteria = PCA_D[ppi_labels == 2]
clf = clf.fit(PCA_D, ppi_labels)
w = clf.coef_[0]
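# For a 2D linear SVM, the decision boundary w[0]*x + w[1]*y + b = 0 can be
# rewritten as y = -(w[0]/w[1])*x - b/w[1], which is the line plotted below.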
a = -w[0] / w[1]
xx = np.linspace(np.min(PCA_D), np.max(PCA_D))
yy = a * xx - (clf.intercept_[0]) / w[1]
plt.figure(figsize=(10,5))
ax_av = plt.scatter(acidovorax[:, 0], acidovorax[:, 1], color = "xkcd:red", marker = "^",label = "Acidovorax", s = 455, alpha = 0.65)
ax_ab = plt.scatter(acidobacteria[:, 0], acidobacteria[:, 1], color = "green", label = "Acidobacteria", s = 250, alpha = 0.75)
svm_line = plt.plot(xx, yy, color = "xkcd:sky blue", linestyle = "--", linewidth = 3.0)
plt.axis('tight');
#plt.grid(True)
plt.legend(prop={'size': 15})
ax_av.set_facecolor('xkcd:salmon')
ax_ab.set_facecolor('xkcd:pale green')
plt.show()
_____no_output_____from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111, projection='3d')
PCA_D = PCA(n_components = 3).fit_transform(D)
acidovorax = PCA_D[ppi_labels == 1]
acidobacteria = PCA_D[ppi_labels == 2]
clf = clf.fit(PCA_D, ppi_labels)
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(np.min(PCA_D), np.max(PCA_D))
yy = a * xx - (clf.intercept_[0]) / w[1]
#plt.figure(figsize=(10,5))
ax_av = ax.scatter(acidovorax[:, 0], acidovorax[:, 1], acidovorax[:, 2],c = "xkcd:red", marker = "^",label = "Acidovorax", s = 455, alpha = 0.65)
ax_ab = ax.scatter(acidobacteria[:, 0], acidobacteria[:, 1], acidobacteria[:, 2], c = "green", label = "Acidobacteria", s = 250, alpha = 0.75)
#svm_line = plt.plot(xx, yy, color = "xkcd:sky blue", linestyle = "--", linewidth = 3.0)
plt.axis('tight');
#plt.grid(True)
plt.legend(prop={'size': 15})
ax_av.set_facecolor('xkcd:salmon')
ax_ab.set_facecolor('xkcd:pale green')
ax.view_init(azim = 30, elev = 25)
plt.show()
_____no_output_____
</code>
# Manifold Learning Isomap_____no_output_____
<code>
n_neighbors = 14#15
n_components = 2
iso_prj_D = manifold.Isomap(n_neighbors, n_components).fit_transform(D)_____no_output_____scores_ln = cross_val_score(clf, iso_prj_D, ppi_labels, cv = strat_k_fold, n_jobs= 8)
np.mean(scores_ln)_____no_output_____
</code>
It seems that manifold learning with Isomap does not improve the performance of our linear SVM classifier._____no_output_____### Plots for Isomap_____no_output_____
<code>
acidovorax = iso_prj_D[ppi_labels == 1]
acidobacteria = iso_prj_D[ppi_labels == 2]
clf = clf.fit(iso_prj_D, ppi_labels)
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(np.min(iso_prj_D), np.max(iso_prj_D))
yy = a * xx - (clf.intercept_[0]) / w[1]
plt.figure(figsize=(10,5))
ax_av = plt.scatter(acidovorax[:, 0], acidovorax[:, 1], color = "xkcd:red", marker = "^",label = "Acidovorax", s = 455, alpha = 0.65)
ax_ab = plt.scatter(acidobacteria[:, 0], acidobacteria[:, 1], color = "green", label = "Acidobacteria", s = 250, alpha = 0.75)
svm_line = plt.plot(xx, yy, color = "xkcd:sky blue", linestyle = "--", linewidth = 3.0)
plt.axis('tight');
#plt.grid(True)
plt.legend(prop={'size': 15})
ax_av.set_facecolor('xkcd:salmon')
ax_ab.set_facecolor('xkcd:pale green')
plt.show()
_____no_output_____
</code>
#### Fit with the best number of components_____no_output_____
<code>
opt_n_components = fit_n_components(D, ppi_labels, manifold.Isomap, n_iteration= 10)_____no_output_____opt_iso_prj_D = manifold.Isomap(n_neighbors, opt_n_components).fit_transform(D)_____no_output_____scores_ln = cross_val_score(clf, opt_iso_prj_D, ppi_labels, cv = strat_k_fold, n_jobs= 8)
np.mean(scores_ln)_____no_output_____
</code>
# Manifold Learning LocallyLinearEmbedding_____no_output_____
<code>
n_neighbors = 13#15
n_components = 15
lle_prj_D = manifold.LocallyLinearEmbedding(n_neighbors, n_components).fit_transform(D)_____no_output_____scores_ln = cross_val_score(clf, lle_prj_D, ppi_labels, cv = strat_k_fold, n_jobs= 8)
np.mean(scores_ln)_____no_output_____
</code>
It seems that manifold learning with LocallyLinearEmbedding also does not improve the performance of our linear SVM classifier._____no_output_____### Plots for LLE_____no_output_____
<code>
acidovorax = lle_prj_D[ppi_labels == 1]
acidobacteria = lle_prj_D[ppi_labels == 2]
clf = clf.fit(lle_prj_D, ppi_labels)
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-0.2,0.25)
yy = a * xx - (clf.intercept_[0]) / w[1]
plt.figure(figsize=(10,5))
ax_av = plt.scatter(acidovorax[:, 0], acidovorax[:, 1], color = "xkcd:red", marker = "^",label = "Acidovorax", s = 455, alpha = 0.65)
ax_ab = plt.scatter(acidobacteria[:, 0], acidobacteria[:, 1], color = "green", label = "Acidobacteria", s = 250, alpha = 0.75)
svm_line = plt.plot(xx, yy, color = "xkcd:sky blue", linestyle = "--", linewidth = 3.0)
plt.axis('tight');
#plt.grid(True)
plt.legend(prop={'size': 15})
ax_av.set_facecolor('xkcd:salmon')
ax_ab.set_facecolor('xkcd:pale green')
plt.show()
_____no_output_____
</code>
#### Fit with the best number of components_____no_output_____
<code>
opt_n_components = fit_n_components(D, ppi_labels, manifold.LocallyLinearEmbedding, n_neighbors=13, n_iteration= 10)
opt_n_components_____no_output_____opt_lle_prj_D = manifold.LocallyLinearEmbedding(13, opt_n_components).fit_transform(D)_____no_output_____scores_ln = cross_val_score(clf, opt_lle_prj_D, ppi_labels, cv = strat_k_fold, n_jobs= 8)
np.mean(scores_ln)_____no_output_____
</code>
# Graph plots_____no_output_____
<code>
import networkx as nx
G = nx.from_numpy_matrix(ppi_graphs[10]['am'])
#pos=nx.spring_layout(G) # positions for all nodes
pos = nx.spring_layout(G, k = 0.9, iterations = 1000)
nx.draw_networkx_nodes(G, pos, with_labels= False, node_color = "green", node_size = 300, alpha = 0.8)
nx.draw_networkx_edges(G, pos, width = 2, alpha=0.5,edge_color='r')
plt.axis('off')
#plt.savefig("acidovorax_graph_10.png") # save as png
plt.show() # display_____no_output_____G = nx.from_numpy_matrix(ppi_graphs[59]['am'])
#pos=nx.spring_layout(G) # positions for all nodes
pos = nx.spring_layout(G, k = 0.9, iterations = 1000)
nx.draw_networkx_nodes(G, pos, with_labels= False, node_color = "green", node_size = 300, alpha = 0.8)
nx.draw_networkx_edges(G, pos, width = 2, alpha=0.5,edge_color='r')
plt.axis('off')
#plt.savefig("Acidobacteria_graph_59.png") # save as png
plt.show() # display_____no_output_____G = nx.from_numpy_matrix(ppi_graphs[6]['am'])
#pos=nx.spring_layout(G) # positions for all nodes
pos = nx.spring_layout(G, k = 0.9, iterations = 1000)
nx.draw_networkx_nodes(G, pos, with_labels= False, node_color = "green", node_size = 300, alpha = 0.8)
nx.draw_networkx_edges(G, pos, width = 2, alpha=0.5,edge_color='r')
plt.axis('off')
#plt.savefig("acidovorax_graph_2.png") # save as png
plt.show() # display_____no_output_____G = nx.from_numpy_matrix(ppi_graphs[48]['am'])
#pos=nx.spring_layout(G) # positions for all nodes
pos = nx.spring_layout(G, k = 0.9, iterations = 1000)
nx.draw_networkx_nodes(G, pos, with_labels= False, node_color = "green", node_size = 300, alpha = 0.8)
nx.draw_networkx_edges(G, pos, width = 2, alpha=0.5,edge_color='r')
plt.axis('off')
#plt.savefig("Acidobacteria_graph_48.png") # save as png
plt.show() # display_____no_output_____node_labels = wl_kernel.extract_graphs_labels(ppi_graphs[:]['am'])
size = int(np.max(np.concatenate(node_labels)))
degree_component = np.zeros((n, size))
for i in range(len(node_labels)):
for j in node_labels[i]:
degree_component[i,int(j)-1] += 1
degree_component[0]
_____no_output_____
</code>
|
{
"repository": "Cinofix/graph-kernel-manifold-learning",
"path": "src/PPI_WLKernel.ipynb",
"matched_keywords": [
"Salmon"
],
"stars": 3,
"size": 251255,
"hexsha": "cb95904800773f40029c465d291e41fbd34327f6",
"max_line_length": 63980,
"avg_line_length": 294.5545134818,
"alphanum_fraction": 0.9284193349
}
|
# Notebook from Cellular-Longevity/cmapPy
Path: tutorials/test_handling_GCTX_original.ipynb
<code>
%load_ext autoreload
%autoreload 2
import cmapPyThe autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
import os
DATAFOLDER = '/home/ubuntu'
os.chdir(DATAFOLDER)
!aws s3 cp s3://bioinformatics-loyal/nf-core_processing/HEALTHSPAN/ADMERA_100_ALL/kraken2_classification/standard_custom_bisulfite/taxonomy_gctx_classified_only/kraken_species_percentage.gctx tmp/
file = DATAFOLDER + '/tmp/kraken_species_percentage.gctx'download: s3://bioinformatics-loyal/nf-core_processing/HEALTHSPAN/ADMERA_100_ALL/kraken2_classification/standard_custom_bisulfite/taxonomy_gctx_classified_only/kraken_species_percentage.gctx to tmp/kraken_species_percentage.gctx
_____no_output_____
from cmapPy.pandasGEXpress.view import view
nodeNames = view(file)
# !h5ls -r tmp/kraken_species_percentage.gctx
0/DATA/0/matrix (99, 7086)
0/META/COL/group (99,)
0/META/COL/id (99,)
0/META/ROW/class (7086,)
0/META/ROW/family (7086,)
0/META/ROW/genus (7086,)
0/META/ROW/id (7086,)
0/META/ROW/kingdom (7086,)
0/META/ROW/order (7086,)
0/META/ROW/phylum (7086,)
0/META/ROW/root (7086,)
0/META/ROW/species (7086,)
0/META/ROW/subspecies (7086,)
0/META/ROW/taxid (7086,)
from cmapPy.pandasGEXpress.parse import parse
gctx_df = parse(file)_____no_output_____my_col_metadata = parse(file, col_meta_only=True)
my_row_metadata = parse(file, row_meta_only=True)_____no_output_____gctx_df.meth_df.head()_____no_output_____gctx_df.cov_df.head()_____no_output_____gctx_df.row_metadata_df.head()_____no_output_____gctx_df.col_metadata_df.head()_____no_output_____
</code>
|
{
"repository": "Cellular-Longevity/cmapPy",
"path": "tutorials/test_handling_GCTX_original.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": null,
"size": 51226,
"hexsha": "cb959bdb292e23b62f7bc5afe180a50bd2fb3f09",
"max_line_length": 238,
"avg_line_length": 44.8957055215,
"alphanum_fraction": 0.3616132433
}
|
# Notebook from eschares/dimensions-api-lab
Path: archive/2020-04-Publishers-Usecases/1-Gathering-data-for-a-journal.ipynb
# Part 1: Extracting a Journal's Publications+Researchers Datasets
In this notebook we are going to
* extract all publications data for a given journal
* have a quick look at the publications' authors and affiliations
* review how many authors have been disambiguated with a Dimensions Researcher ID
* produce a dataset of non-disambiguated authors that can be used for manual disambiguation _____no_output_____## Prerequisites: Installing the Dimensions Library and Logging in_____no_output_____
<code>
# @markdown # Get the API library and login
# @markdown Click the 'play' button on the left (or shift+enter) after entering your API credentials
username = "" #@param {type: "string"}
password = "" #@param {type: "string"}
endpoint = "https://app.dimensions.ai" #@param {type: "string"}
!pip install dimcli plotly tqdm -U --quiet
import dimcli
from dimcli.shortcuts import *
dimcli.login(username, password, endpoint)
dsl = dimcli.Dsl()
#
# load common libraries
import time
import sys
import json
import os
import pandas as pd
from pandas.io.json import json_normalize
from tqdm.notebook import tqdm as progress
#
# charts libs
# import plotly_express as px
import plotly.express as px
if not 'google.colab' in sys.modules:
# make js dependecies local / needed by html exports
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
#
# create output data folder
if not(os.path.exists("data")):
os.mkdir("data")DimCli v0.6.7 - Succesfully connected to <https://app.dimensions.ai> (method: dsl.ini file)
</code>
## Selecting a Journal and Extracting All Publications Metadata_____no_output_____
<code>
#@title Select a journal from the dropdown
#@markdown If the journal isn't there, you can try typing in the exact name instead.
journal_title = "Nature Genetics" #@param ['Nature', 'The Science of Nature', 'Nature Communications', 'Nature Biotechnology', 'Nature Medicine', 'Nature Genetics', 'Nature Neuroscience', 'Nature Structural & Molecular Biology', 'Nature Methods', 'Nature Cell Biology', 'Nature Immunology', 'Nature Reviews Drug Discovery', 'Nature Materials', 'Nature Physics', 'Nature Reviews Neuroscience', 'Nature Nanotechnology', 'Nature Reviews Genetics', 'Nature Reviews Urology', 'Nature Reviews Molecular Cell Biology', 'Nature Precedings', 'Nature Reviews Cancer', 'Nature Photonics', 'Nature Reviews Immunology', 'Nature Reviews Cardiology', 'Nature Reviews Gastroenterology & Hepatology', 'Nature Reviews Clinical Oncology', 'Nature Reviews Endocrinology', 'Nature Reviews Neurology', 'Nature Chemical Biology', 'Nature Reviews Microbiology', 'Nature Geoscience', 'Nature Reviews Rheumatology', 'Nature Climate Change', 'Nature Reviews Nephrology', 'Nature Chemistry', 'Nature Digest', 'Nature Protocols', 'Nature Middle East', 'Nature India', 'Nature China', 'Nature Plants', 'Nature Microbiology', 'Nature Ecology & Evolution', 'Nature Astronomy', 'Nature Energy', 'Nature Human Behaviour', 'AfCS-Nature Molecule Pages', 'Human Nature', 'Nature Reviews Disease Primers', 'Nature Biomedical Engineering', 'Nature Reports Stem Cells', 'Nature Reviews Materials', 'Nature Sustainability', 'Nature Catalysis', 'Nature Electronics', 'Nature Reviews Chemistry', 'Nature Metabolism', 'Nature Reviews Physics', 'Nature Machine Intelligence', 'NCI Nature Pathway Interaction Database', 'Nature Reports: Climate Change'] {allow-input: true}
start_year = 2015 #@param {type: "number"}
#@markdown ---
# PS
# To get titles from the API one can do this:
# > %dsldf search publications where journal.title~"Nature" and publisher="Springer Nature" return journal limit 100
# > ", ".join([f"'{x}'" for x in list(dsl_last_results.title)])
#
q_template = """search publications where
journal.title="{}" and
year>={}
return publications[basics+altmetric+times_cited]"""
q = q_template.format(journal_title, start_year)
print("DSL Query:\n----\n", q, "\n----")
pubs = dsl.query_iterative(q.format(journal_title, start_year), limit=500)
DSL Query:
----
search publications where
journal.title="Nature Genetics" and
year>=2015
return publications[basics+altmetric+times_cited]
----
500 / 1472
1000 / 1472
1472 / 1472
</code>
Save the data as a CSV file in case we want to reuse it later_____no_output_____
<code>
dfpubs = pubs.as_dataframe()
dfpubs.to_csv("data/1.pubs_metadata_with_metrics.csv")
# preview the publications
dfpubs.head(10)_____no_output_____
</code>
Extract the authors data _____no_output_____
<code>
# preview the authors data
authors = pubs.as_dataframe_authors()
authors.to_csv("data/1.publications_authors.csv", index=False)
authors.head(10)_____no_output_____
</code>
Extract the affiliations data _____no_output_____
<code>
affiliations = pubs.as_dataframe_authors_affiliations()
affiliations.to_csv("data/1.publications_authors_affiliations.csv", index=False)
affiliations.head(10)_____no_output_____
</code>
## Some stats about authors
* count how many authors in total
* count how many authors have a researcher ID
* count how many unique researcher IDs we have in total_____no_output_____
<code>
researchers = authors.query("researcher_id!=''")
#
df = pd.DataFrame({
'measure' : ['Authors in total (non unique)', 'Authors with a researcher ID', 'Authors with a researcher ID (unique)'],
'count' : [len(authors), len(researchers), researchers['researcher_id'].nunique()],
})
px.bar(df, x="measure", y="count", title=f"Author stats for {journal_title} (from {start_year})")_____no_output_____# save the researchers data to a file
researchers.to_csv("data/1.authors_with_researchers_id.csv")_____no_output_____
</code>
## Appendix: A quick look at authors *without a Researcher ID*
We're not going to try to disambiguate them here, but still it's good to have a quick look at them...
Looks like the most common surname is `Wang`, while the most common first name is an empty value_____no_output_____
<code>
authors_without_id = authors.query("researcher_id==''")
authors_without_id[['first_name', 'last_name']].describe()
_____no_output_____
</code>
The top ten surnames all appear to be Chinese. _____no_output_____
<code>
authors_without_id['last_name'].value_counts()[:10]_____no_output_____
</code>
### Any common patterns?
If we group the data by name+surname, we can see some interesting patterns:
* some entries are things which are not persons (presumably the result of bad source data in Dimensions, e.g. from the publisher)
* there are some apparently meaningful name+surname combinations with a lot of hits
* not many Chinese names in the top ones
_____no_output_____
<code>
test = authors_without_id.groupby(["first_name", "last_name"]).size()
test.sort_values(ascending=False, inplace=True)
test.head(50)_____no_output_____
</code>
## Conclusion and next steps
For the next tasks, we will focus on the disambiguated authors as the Researcher ID links will let us carry out useful analyses.
Still, we can **save the authors with missing IDs** results and try to do some manual disambiguation later. To this end, adding a simple google-search URL can help in making sense of these data quickly._____no_output_____
<code>
from urllib.parse import quote
out = []
for index, value in test.items():
# compose a simple URL of the form 'https://www.google.com/search?q=tonu+esko'
if index[0] or index[1]:
n, s = quote(index[0]), quote(index[1])
url = f"https://www.google.com/search?q={n}+{s}"
else:
url = ""
d = {'name': index[0] , 'surname' : index[1] , 'frequency' : value , 'search_url' : url }
out.append(d)
dftest = pd.DataFrame.from_dict(out)
# set order of columns
dftest = dftest[['name', 'surname', 'frequency', 'search_url']]
dftest.head(20)_____no_output_____# save the data
#
dftest.to_csv("data/1.authors_not_disambiguated_frequency.csv", header=True)_____no_output_____if COLAB_ENV:
files.download("data/1.authors_not_disambiguated_frequency.csv")
files.download("data/1.authors_with_researchers_id.csv")
files.download("data/1.publications_authors.csv")
files.download("data/1.publications_authors_affiliations.csv")
files.download("data/1.pubs_metadata_with_metrics.csv")_____no_output_____
</code>
That's it!
Now let's go and open this in [Google Sheets](https://docs.google.com/spreadsheets/)..._____no_output_____
|
{
"repository": "eschares/dimensions-api-lab",
"path": "archive/2020-04-Publishers-Usecases/1-Gathering-data-for-a-journal.ipynb",
"matched_keywords": [
"immunology",
"biology",
"neuroscience",
"ecology",
"drug discovery",
"evolution"
],
"stars": 57,
"size": 132489,
"hexsha": "cb96099973073ae8f05011b91d2b40b2ef825ab9",
"max_line_length": 46362,
"avg_line_length": 52.9744102359,
"alphanum_fraction": 0.5927209051
}
|
# Notebook from jenchen/image-captioning
Path: 2_Training.ipynb
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will train your CNN-RNN model.
You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.
This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:
- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
- the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.
This notebook **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Training Setup
- [Step 2](#step2): Train your Model
- [Step 3](#step3): (Optional) Validate your Model_____no_output_____<a id='step1'></a>
## Step 1: Training Setup
In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.
You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**.
### Task #1
Begin by setting the following variables:
- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
- `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
- `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file.
- `embed_size` - the dimensionality of the image and word embeddings.
- `hidden_size` - the number of features in the hidden state of the RNN decoder.
- `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)
- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.
- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
### Question 1
**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
**Answer:** I referenced the two papers suggested above to come up with an initial design of my CNN-RNN architecture. The CNN architecture was provided in the initial project code and is a pre-trained ResNet-50 model. My RNN architecture is based on the second paper, "Show and Tell: A Neural Image Caption Generator". Thus, I chose `vocab_threshold` of 5, `embed_size` of 512, and `hidden_size` of 512. I think 512 is a good choice because a large word embedding increases the chance of learning useful information. Additionally, I selected a `batch_size` of 128, since it is a power of 2 (taking advantage of vector optimizations) and batch sizes of 128 and 256 are commonly used.
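For reference, here is a minimal sketch of the kind of decoder described above (the project's actual `DecoderRNN` lives in **model.py**, which is not shown here, so the layer details and names below are illustrative assumptions rather than the exact implementation):

```python
import torch
import torch.nn as nn

class DecoderRNNSketch(nn.Module):
    """Illustrative sketch: embed caption tokens, prepend the CNN image feature
    as the first time step, run an LSTM, and map hidden states to vocab scores."""
    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # features: (batch, embed_size) from the encoder; captions: (batch, seq_len)
        embeddings = self.embed(captions[:, :-1])                   # drop the final token
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        hiddens, _ = self.lstm(inputs)
        return self.fc(hiddens)                                     # (batch, seq_len, vocab_size)
```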
### (Optional) Task #2
Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
- the images in the dataset have varying heights and widths, and
- if using a pre-trained model, you must perform the corresponding appropriate normalization.
### Question 2
**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
**Answer:** I left `transform_train` at its provided value. Since I used the CNN architecture as provided, I kept the transform function unchanged. By applying random cropping, the image transform extends the amount of data for training and makes the neural net more robust. Additionally, horizontal flipping makes sense because images are more likely to be mirrored across the vertical axis. A dog facing left and a dog facing right should be interpreted as dogs in a similar position. Normalization is also an important step. The data augmentation introduced by the image transformation function makes it a good choice for the CNN architecture.
### Task #3
Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
```
params = list(decoder.parameters()) + list(encoder.embed.parameters())
```
### Question 3
**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
**Answer:** I selected the trainable parameters of my architecture based on the recommended values. All the weights in the decoder and only the weights in the embedding layer of the encoder are trained, while the other parameters of the encoder won't be trained since we're using a pre-trained model.
### Task #4
Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
### Question 4
**Question:** How did you select the optimizer used to train your model?
**Answer:** I initially used SGD since the paper recommends it. After experimentation, I decided to go with the Adam optimizer to train my final model. SGD was very slow, while Adam was faster and produced significantly better perplexity scores (perplexity < 30). Models that are better at predicting a sample have lower perplexity._____no_output_____
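As a quick aside on the perplexity numbers quoted above: in the training loop of Step 2 below, perplexity is reported as the exponential of the per-token cross-entropy loss, so the two track each other directly. A tiny illustrative check:

```python
import numpy as np

def perplexity(cross_entropy_loss):
    # perplexity is the exponential of the average cross-entropy (in nats)
    return float(np.exp(cross_entropy_loss))

print(perplexity(1.9834))  # ~7.27, matching the final training step reported in Step 2
```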
<code>
import nltk
nltk.download('punkt')[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
%load_ext autoreload
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
## TODO #1: Select appropriate values for the Python variables below.
batch_size = 128 # batch size
vocab_threshold = 5 # minimum word count threshold
vocab_from_file = True # if True, load existing vocab file
embed_size = 512 # dimensionality of image and word embeddings
hidden_size = 512 # number of features in hidden state of the RNN decoder
num_epochs = 3 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())
# TODO #4: Define the optimizer.
optimizer = torch.optim.Adam(params)
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)Vocabulary successfully loaded from vocab.pkl file!
loading annotations into memory...
Done (t=1.11s)
creating index...
</code>
<a id='step2'></a>
## Step 2: Train your Model
Once you have executed the code cell in **Step 1**, the training procedure below should run without issue.
It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```
While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
### A Note on Tuning Hyperparameters
To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.
However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset._____no_output_____
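If you would like to visualise how the loss and perplexity evolved, a minimal sketch for parsing the `training_log.txt` file written by the loop below is shown here (run it after training has finished; the regular expression simply matches the stats string used in the loop):

```python
import re
import matplotlib.pyplot as plt

pattern = re.compile(r"Loss: ([\d.]+), Perplexity: ([\d.]+)")
losses, perplexities = [], []
with open('training_log.txt') as log:
    for line in log:
        match = pattern.search(line)
        if match:
            losses.append(float(match.group(1)))
            perplexities.append(float(match.group(2)))

plt.plot(perplexities)
plt.xlabel('training step')
plt.ylabel('perplexity')
plt.show()
```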
<code>
import torch.utils.data as data
import numpy as np
import os
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()Epoch [1/3], Step [100/3236], Loss: 3.6239, Perplexity: 37.48471
Epoch [1/3], Step [200/3236], Loss: 3.1404, Perplexity: 23.11358
Epoch [1/3], Step [300/3236], Loss: 3.0967, Perplexity: 22.1238
Epoch [1/3], Step [400/3236], Loss: 3.0676, Perplexity: 21.49042
Epoch [1/3], Step [500/3236], Loss: 2.9404, Perplexity: 18.9239
Epoch [1/3], Step [600/3236], Loss: 2.8282, Perplexity: 16.9155
Epoch [1/3], Step [700/3236], Loss: 2.7990, Perplexity: 16.4287
Epoch [1/3], Step [800/3236], Loss: 2.6820, Perplexity: 14.6139
Epoch [1/3], Step [900/3236], Loss: 2.4851, Perplexity: 12.0017
Epoch [1/3], Step [1000/3236], Loss: 2.6709, Perplexity: 14.4536
Epoch [1/3], Step [1100/3236], Loss: 2.3593, Perplexity: 10.5840
Epoch [1/3], Step [1200/3236], Loss: 2.3194, Perplexity: 10.1696
Epoch [1/3], Step [1300/3236], Loss: 2.3720, Perplexity: 10.7186
Epoch [1/3], Step [1400/3236], Loss: 2.4119, Perplexity: 11.15474
Epoch [1/3], Step [1500/3236], Loss: 2.3795, Perplexity: 10.7997
Epoch [1/3], Step [1600/3236], Loss: 2.3247, Perplexity: 10.2233
Epoch [1/3], Step [1700/3236], Loss: 2.5268, Perplexity: 12.5134
Epoch [1/3], Step [1800/3236], Loss: 2.1708, Perplexity: 8.76544
Epoch [1/3], Step [1900/3236], Loss: 2.2815, Perplexity: 9.79152
Epoch [1/3], Step [2000/3236], Loss: 2.1799, Perplexity: 8.84551
Epoch [1/3], Step [2100/3236], Loss: 2.2065, Perplexity: 9.08429
Epoch [1/3], Step [2200/3236], Loss: 2.2165, Perplexity: 9.17510
Epoch [1/3], Step [2300/3236], Loss: 2.1406, Perplexity: 8.50452
Epoch [1/3], Step [2400/3236], Loss: 2.4853, Perplexity: 12.0042
Epoch [1/3], Step [2500/3236], Loss: 2.1120, Perplexity: 8.26453
Epoch [1/3], Step [2600/3236], Loss: 2.1271, Perplexity: 8.39043
Epoch [1/3], Step [2700/3236], Loss: 2.1332, Perplexity: 8.44184
Epoch [1/3], Step [2800/3236], Loss: 2.0957, Perplexity: 8.13101
Epoch [1/3], Step [2900/3236], Loss: 2.2598, Perplexity: 9.58117
Epoch [1/3], Step [3000/3236], Loss: 2.1951, Perplexity: 8.98091
Epoch [1/3], Step [3100/3236], Loss: 2.0428, Perplexity: 7.71224
Epoch [1/3], Step [3200/3236], Loss: 2.2843, Perplexity: 9.81868
Epoch [2/3], Step [100/3236], Loss: 2.1602, Perplexity: 8.672755
Epoch [2/3], Step [200/3236], Loss: 2.1486, Perplexity: 8.57271
Epoch [2/3], Step [300/3236], Loss: 2.4173, Perplexity: 11.2155
Epoch [2/3], Step [400/3236], Loss: 2.3438, Perplexity: 10.4204
Epoch [2/3], Step [500/3236], Loss: 2.0606, Perplexity: 7.85104
Epoch [2/3], Step [600/3236], Loss: 2.1495, Perplexity: 8.58036
Epoch [2/3], Step [700/3236], Loss: 2.1013, Perplexity: 8.17695
Epoch [2/3], Step [800/3236], Loss: 2.1093, Perplexity: 8.24217
Epoch [2/3], Step [900/3236], Loss: 2.0459, Perplexity: 7.73593
Epoch [2/3], Step [1000/3236], Loss: 2.0698, Perplexity: 7.9231
Epoch [2/3], Step [1100/3236], Loss: 2.1618, Perplexity: 8.68655
Epoch [2/3], Step [1200/3236], Loss: 2.3400, Perplexity: 10.3816
Epoch [2/3], Step [1300/3236], Loss: 2.0491, Perplexity: 7.76075
Epoch [2/3], Step [1400/3236], Loss: 2.0541, Perplexity: 7.79959
Epoch [2/3], Step [1500/3236], Loss: 2.0187, Perplexity: 7.52873
Epoch [2/3], Step [1600/3236], Loss: 2.1680, Perplexity: 8.74058
Epoch [2/3], Step [1700/3236], Loss: 1.9661, Perplexity: 7.14275
Epoch [2/3], Step [1800/3236], Loss: 1.9652, Perplexity: 7.13656
Epoch [2/3], Step [1900/3236], Loss: 2.1052, Perplexity: 8.20876
Epoch [2/3], Step [2000/3236], Loss: 1.9908, Perplexity: 7.32115
Epoch [2/3], Step [2100/3236], Loss: 2.1415, Perplexity: 8.51187
Epoch [2/3], Step [2200/3236], Loss: 2.7824, Perplexity: 16.1574
Epoch [2/3], Step [2300/3236], Loss: 2.1612, Perplexity: 8.68132
Epoch [2/3], Step [2400/3236], Loss: 2.0250, Perplexity: 7.57602
Epoch [2/3], Step [2500/3236], Loss: 2.8415, Perplexity: 17.1420
Epoch [2/3], Step [2600/3236], Loss: 2.0138, Perplexity: 7.49196
Epoch [2/3], Step [2700/3236], Loss: 2.1041, Perplexity: 8.19960
Epoch [2/3], Step [2800/3236], Loss: 2.0494, Perplexity: 7.76293
Epoch [2/3], Step [2900/3236], Loss: 1.9698, Perplexity: 7.16928
Epoch [2/3], Step [3000/3236], Loss: 2.1085, Perplexity: 8.23572
Epoch [2/3], Step [3100/3236], Loss: 2.0151, Perplexity: 7.50161
Epoch [2/3], Step [3200/3236], Loss: 1.8978, Perplexity: 6.67105
Epoch [3/3], Step [100/3236], Loss: 1.9430, Perplexity: 6.979408
Epoch [3/3], Step [200/3236], Loss: 2.1278, Perplexity: 8.39668
Epoch [3/3], Step [300/3236], Loss: 1.9606, Perplexity: 7.10383
Epoch [3/3], Step [400/3236], Loss: 1.8707, Perplexity: 6.49252
Epoch [3/3], Step [500/3236], Loss: 1.9794, Perplexity: 7.23856
Epoch [3/3], Step [600/3236], Loss: 2.0009, Perplexity: 7.39608
Epoch [3/3], Step [700/3236], Loss: 1.8993, Perplexity: 6.68102
Epoch [3/3], Step [800/3236], Loss: 1.9123, Perplexity: 6.76854
Epoch [3/3], Step [900/3236], Loss: 2.7445, Perplexity: 15.5567
Epoch [3/3], Step [1000/3236], Loss: 1.8934, Perplexity: 6.6417
Epoch [3/3], Step [1100/3236], Loss: 2.1756, Perplexity: 8.80789
Epoch [3/3], Step [1200/3236], Loss: 1.9674, Perplexity: 7.15219
Epoch [3/3], Step [1300/3236], Loss: 1.8194, Perplexity: 6.16826
Epoch [3/3], Step [1400/3236], Loss: 2.2362, Perplexity: 9.35803
Epoch [3/3], Step [1500/3236], Loss: 1.8438, Perplexity: 6.32029
Epoch [3/3], Step [1600/3236], Loss: 2.0412, Perplexity: 7.70011
Epoch [3/3], Step [1700/3236], Loss: 1.8665, Perplexity: 6.46590
Epoch [3/3], Step [1800/3236], Loss: 1.9141, Perplexity: 6.78106
Epoch [3/3], Step [1900/3236], Loss: 1.9906, Perplexity: 7.31972
Epoch [3/3], Step [2000/3236], Loss: 1.8777, Perplexity: 6.53881
Epoch [3/3], Step [2100/3236], Loss: 1.9040, Perplexity: 6.71282
Epoch [3/3], Step [2200/3236], Loss: 1.9244, Perplexity: 6.85118
Epoch [3/3], Step [2300/3236], Loss: 1.8678, Perplexity: 6.47370
Epoch [3/3], Step [2400/3236], Loss: 2.1070, Perplexity: 8.22378
Epoch [3/3], Step [2500/3236], Loss: 1.8958, Perplexity: 6.65829
Epoch [3/3], Step [2600/3236], Loss: 1.7855, Perplexity: 5.96253
Epoch [3/3], Step [2700/3236], Loss: 1.9551, Perplexity: 7.06489
Epoch [3/3], Step [2800/3236], Loss: 2.0558, Perplexity: 7.81299
Epoch [3/3], Step [2900/3236], Loss: 2.1580, Perplexity: 8.65373
Epoch [3/3], Step [3000/3236], Loss: 1.9254, Perplexity: 6.85805
Epoch [3/3], Step [3100/3236], Loss: 1.8341, Perplexity: 6.25961
Epoch [3/3], Step [3200/3236], Loss: 2.0032, Perplexity: 7.41304
Epoch [3/3], Step [3236/3236], Loss: 1.9834, Perplexity: 7.26748
</code>
<a id='step3'></a>
## Step 3: (Optional) Validate your Model
To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset._____no_output_____
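If you do attempt this, below is a minimal, self-contained sketch of a corpus-level BLEU computation with NLTK (toy tokenised captions only; a real evaluation would compare your model's generated captions against the COCO reference annotations):

```python
from nltk.translate.bleu_score import corpus_bleu

# one list of reference captions per image, plus one candidate caption per image
references = [[['a', 'dog', 'runs', 'on', 'the', 'beach'],
               ['a', 'dog', 'is', 'running', 'along', 'the', 'beach']]]
candidates = [['a', 'dog', 'runs', 'along', 'the', 'beach']]

print(corpus_bleu(references, candidates))
```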
<code>
# (Optional) TODO: Validate your model._____no_output_____
</code>
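To make the suggested workflow more concrete, below is a rough sketch of what this optional cell could contain. The `get_loader_val` helper (from the **data_loader_val.py** file described above), the `transform_test` transform, the loader's `(image_id, image)` yield format, and the `clean_sentence` helper are assumptions or come from **3_Inference.ipynb**; only the trained `encoder`, `decoder`, and its `sample` method are taken from this notebook._____no_output_____
<code>
# (Optional) sketch only -- hypothetical names, assuming encoder/decoder are trained
# and data_loader_val.py exposes a get_loader_val() yielding (image_id, image) pairs.
import json
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
val_loader = get_loader_val(transform=transform_test, mode='valid')  # hypothetical helper

results = []
for image_id, image in val_loader:
    with torch.no_grad():
        features = encoder(image.to(device)).unsqueeze(1)   # image features for the decoder
        output = decoder.sample(features)                   # token ids via the sample method from 3_Inference
    caption = clean_sentence(output)                        # helper written in 3_Inference.ipynb
    results.append({"image_id": int(image_id), "caption": caption})

# Write predictions in the format expected by the COCO caption evaluation tools
# (e.g. https://github.com/tylin/coco-caption), which report BLEU, METEOR and CIDEr.
with open("captions_val2014_results.json", "w") as f:
    json.dump(results, f)_____no_output_____
</code>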
|
{
"repository": "jenchen/image-captioning",
"path": "2_Training.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 30030,
"hexsha": "cb963c2af01501866aafb04696f76f6e55e02288",
"max_line_length": 734,
"avg_line_length": 59.7017892644,
"alphanum_fraction": 0.6267399267
}
|
# Notebook from kevinrue/cgat-flow
Path: cgatpipelines/tools/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_filtering_Report_insert_sizes.ipynb
Peakcalling Bam Stats and Filtering Report - Insert Sizes
================================================================
This notebook is for the analysis of outputs from the peakcalling pipeline
There are several stats that you want collected and graphed (topics covered in this notebook in bold).
These are:
- how many reads input
- how many reads removed at each step (numbers and percentages)
- how many reads left after filtering
- insert size distribution pre-filtering for PE reads
- how many reads mapping to each chromosome before filtering?
- how many reads mapping to each chromosome after filtering?
- X:Y reads ratio
- **insert size distribution after filtering for PE reads**
- samtools flags - check how many reads are in categories they shouldn't be
- picard stats - check how many reads are in categories they shouldn't be
This notebook takes the sqlite3 database created by cgat peakcalling_pipeline.py and uses it for plotting the above statistics
It assumes a file directory of:
location of database = project_folder/csvdb
location of this notebook = project_folder/notebooks.dir/_____no_output_____Firstly lets load all the things that might be needed_____no_output_____Insert size distribution
------------------------
This section gets the size distribution of the fragments that have been sequenced in paired-end sequencing. The pipeline calculates the size distribution by calculating the distance between the most 5' positions of both reads: for reads mapping to the + strand this is the leftmost position, for those mapping to the - strand it is the rightmost coordinate.
This plot is especially useful for ATAC-Seq experiments, as good samples should show peaks with a period approximately equivalent to the length of a nucleosome (~146 bp). A lack of this phasing might indicate poor-quality samples, and either over-tagmentation (if lots of small fragments) or under-tagmentation (if an excess of large fragments) by the transposase. _____no_output_____
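As a purely illustrative aside (not pipeline code), the fragment-size definition above can be sketched for a single read pair as follows; the coordinates are made-up example values._____no_output_____
<code>
# Illustration only: fragment size as the distance between the most 5' positions
# of the two mates of a read pair. Each read is a (leftmost, rightmost, is_reverse) tuple.
def five_prime_end(leftmost, rightmost, is_reverse):
    """5' end of a mapped read: leftmost base on the + strand, rightmost on the - strand."""
    return rightmost if is_reverse else leftmost

def fragment_size(read1, read2):
    end1 = five_prime_end(*read1)
    end2 = five_prime_end(*read2)
    return abs(end1 - end2)

# forward mate mapped at 1000-1075, reverse mate mapped at 1100-1175
print(fragment_size((1000, 1075, False), (1100, 1175, True)))  # -> 175_____no_output_____
</code>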
<code>
import sqlite3
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
#import cgatcore.pipeline as P
import os
import statistics
#import collections
#load R and the R packages required
#%load_ext rpy2.ipython
#%R require(ggplot2)
# use these functions to display tables nicely as html
from IPython.display import display, HTML
plt.style.use('ggplot')
#plt.style.available_____no_output_____
</code>
This is where we are and when the notebook was run
_____no_output_____
<code>
!pwd
!date_____no_output_____
</code>
First lets set the output path for where we want our plots to be saved and the database path and see what tables it contains_____no_output_____
<code>
database_path = '../csvdb'
output_path = '.'
#database_path= "/ifs/projects/charlotteg/pipeline_peakcalling/csvdb"_____no_output_____
</code>
This code adds a button to see/hide code in html _____no_output_____
<code>
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
_____no_output_____
</code>
The code below provides functions for accessing the project database and extracting table names so you can see what tables have been loaded into the database and are available for plotting. It also has a function for getting a table from the database and indexing it by the track name_____no_output_____
<code>
def getTableNamesFromDB(database_path):
# Create a SQL connection to our SQLite database
con = sqlite3.connect(database_path)
cur = con.cursor()
# the result of a "cursor.execute" can be iterated over by row
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;")
available_tables = (cur.fetchall())
#Be sure to close the connection.
con.close()
return available_tables
db_tables = getTableNamesFromDB(database_path)
print('Tables contained by the database:')
for x in db_tables:
print('\t\t%s' % x[0])
#This function retrieves a table from sql database and indexes it with track name
def getTableFromDB(statement,database_path):
'''gets table from sql database depending on statement
and set track as index if contains track in column names'''
conn = sqlite3.connect(database_path)
df = pd.read_sql_query(statement,conn)
if 'track' in df.columns:
df.index = df['track']
return df_____no_output_____
</code>
Insert Size Summary
====================_____no_output_____1) Let's get the insert_sizes table from the database
Firstly, let's look at the summary statistics that give us the mean fragment size, sequencing type and mean read length. This table is produced using macs2 for PE data, or bamtools for SE data.
If IDR has been run the insert_size table will contain entries for the pooled and pseudo replicates too - we don't really want this as it will duplicate the data from the original samples so we subset this out _____no_output_____
<code>
insert_df = getTableFromDB('select * from insert_sizes;',database_path)
insert_df = insert_df[insert_df["filename"].str.contains('pseudo')==False].copy()
insert_df = insert_df[insert_df["filename"].str.contains('pooled')==False].copy()_____no_output_____def add_expt_to_insertdf(dataframe):
''' splits track name for example HsTh1-RATotal-R1.star into expt
featues, expt, sample_treatment and replicate and adds these as
collumns to the dataframe'''
expt = []
treatment = []
replicate = []
for value in dataframe.filename:
x = value.split('/')[-1]
x = x.split('_insert')[0]
# split into design features
y = x.split('-')
expt.append(y[-3])
treatment.append(y[-2])
replicate.append(y[-1])
if len(expt) == len(treatment) and len(expt)== len(replicate):
print ('all values in list correctly')
else:
print ('error in loading values into lists')
#add collums to dataframe
dataframe['expt_name'] = expt
dataframe['sample_treatment'] = treatment
dataframe['replicate'] = replicate
return dataframe
insert_df = add_expt_to_insertdf(insert_df)
insert_df_____no_output_____
</code>
Let's graph the mean fragment size and tag size grouped by sample treatment so we can see whether they differ much_____no_output_____
<code>
ax = insert_df.boxplot(column='fragmentsize_mean', by='sample_treatment')
ax.set_title('for mean fragment size',size=10)
ax.set_ylabel('mean fragment length')
ax.set_xlabel('sample treatment')
ax = insert_df.boxplot(column='tagsize', by='sample_treatment')
ax.set_title('for tag size',size=10)
ax.set_ylabel('tag size')
ax.set_xlabel('sample treatment')
ax.set_ylim(((insert_df.tagsize.min()-2),(insert_df.tagsize.max()+2)))_____no_output_____
</code>
OK, now let's get the fragment length distributions for each sample and plot them _____no_output_____
<code>
def getFraglengthTables(database_path):
'''Takes path to sqlite3 database and retrieves fraglengths tables for individual samples
, returns a dictionary where keys = sample table names, values = fraglengths dataframe'''
frag_tabs = []
db_tables = getTableNamesFromDB(database_path)
for table_name in db_tables:
if 'fraglengths' in str(table_name[0]):
tab_name = str(table_name[0])
statement ='select * from %s;' % tab_name
df = getTableFromDB(statement,database_path)
frag_tabs.append((tab_name,df))
print('detected fragment length distribution tables for %s files: \n' % len(frag_tabs))
for val in frag_tabs:
print(val[0])
return frag_tabs
def getDFofFragLengths(database_path):
''' this takes a path to database and gets a dataframe where length of fragments is the index,
each column is a sample and values are the number of reads that have that fragment length in that
sample
'''
fraglength_dfs_list = getFraglengthTables(database_path)
dfs=[]
for item in fraglength_dfs_list:
track = item[0].split('_filtered_fraglengths')[0]
df = item[1]
#rename collumns so that they are correct - correct this in the pipeline then delete this
#df.rename(columns={'frequency':'frag_length', 'frag_length':'frequency'}, inplace=True)
df.index = df.frag_length
df.drop('frag_length',axis=1,inplace=True)
df.rename(columns={'frequency':track},inplace=True)
dfs.append(df)
frag_length_df = pd.concat(dfs,axis=1)
frag_length_df.fillna(0, inplace=True)
return frag_length_df
#Note the frequency and fragment lengths are around the wrong way!
#frequency is actually fragment length, and fragement length is the frequency
#This gets the tables from db and makes master df of all fragment length frequencies
frag_length_df = getDFofFragLengths(database_path)
#plot fragment length frequencies
ax = frag_length_df.divide(1000).plot()
ax.set_ylabel('Number of fragments\n(thousands)')
ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. )
ax.set_title('fragment length distribution')
ax.set_xlabel('fragment length (bp)')
ax.set_xlim()
_____no_output_____
</code>
Now let's zoom in on the interesting region of the plot (by default the code below looks at fragment lengths from 0 to 800bp - you can change this by setting the tuple in the ax.set_xlim() function)_____no_output_____
<code>
ax = frag_length_df.divide(1000).plot(figsize=(9,9))
ax.set_ylabel('Number of fragments\n(thousands)')
ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. )
ax.set_title('fragment length distribution')
ax.set_xlabel('fragment length (bp)')
ax.set_xlim((0,800))_____no_output_____
</code>
It is a bit tricky to see differences between samples of different library sizes, so let's look at whether the proportion of reads at each fragment length is similar _____no_output_____
<code>
percent_frag_length_df = pd.DataFrame(index=frag_length_df.index)
for column in frag_length_df:
total_frags = frag_length_df[column].sum()
percent_frag_length_df[column] = frag_length_df[column].divide(total_frags)*100
ax = percent_frag_length_df.plot(figsize=(9,9))
ax.set_ylabel('Percentage of fragments')
ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. )
ax.set_title('percentage fragment length distribution')
ax.set_xlabel('fragment length (bp)')
ax.set_xlim((0,800))
_____no_output_____
</code>
SUMMARISE HERE
==============
From these plots you should be able to tell whether there are any distinctive patterns in the distribution of fragment lengths. This is especially important for ATAC-Seq data, as in successful experiments you should be able to detect nucleosome phasing - it can also indicate over-fragmentation or biases in cutting._____no_output_____Let's look at the Picard insert size metrics also _____no_output_____
<code>
insert_df = getTableFromDB('select * from picard_stats_insert_size_metrics;',database_path)
for c in insert_df.columns:
print (c)
insert_df_____no_output_____
</code>
These metrics are actually quite different to the ones we calculate ourselves - for some reason Picard seems to split the files into two and gives a distribution for smaller fragments and for larger fragments - not sure why at the moment _____no_output_____
|
{
"repository": "kevinrue/cgat-flow",
"path": "cgatpipelines/tools/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_filtering_Report_insert_sizes.ipynb",
"matched_keywords": [
"ATAC-seq"
],
"stars": 11,
"size": 16301,
"hexsha": "cb96ac93524f713579a02fcbe5e5bf3146938277",
"max_line_length": 365,
"avg_line_length": 33.5411522634,
"alphanum_fraction": 0.597693393
}
|
# Notebook from lafleur1/isolearn
Path: examples/splicing_cnn_perturbed_multicell.ipynb
<code>
import keras
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Flatten, Input, Lambda, Concatenate
from keras.layers import Conv1D, MaxPooling1D
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras import backend as K
import keras.losses
import tensorflow as tf
import pandas as pd
import os
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import isolearn.io as isoio
import isolearn.keras as iso
from scipy.stats import pearsonr
Using TensorFlow backend.
</code>
<h2>Load 5' Alternative Splicing Data</h2>
- Load a Pandas DataFrame + Matlab Matrix of measured Splicing Sequences<br/>
- isolearn.io loads all .csv and .mat files of a directory into memory as a dictionary<br/>
- The DataFrame has one column - padded_sequence - containing the splice donor sequence<br/>
- The Matrix contains RNA-Seq counts of measured splicing at each position across the sequence<br/>
_____no_output_____
<code>
#Load Splicing Data
splicing_dict = isoio.load('data/processed_data/splicing_5ss_data/splicing_5ss_data')
_____no_output_____
</code>
<h2>Create a Training and Test Set</h2>
- We create an index containing row numbers corresponding to training and test sequences<br/>
- Notice that we do not alter the underlying DataFrame, we only make lists of pointers to rows<br/>
_____no_output_____
<code>
#Generate training, validation and test set indexes
valid_set_size = 0.10
test_set_size = 0.10
data_index = np.arange(len(splicing_dict['df']), dtype=np.int)
train_index = data_index[:-int(len(data_index) * (valid_set_size + test_set_size))]
valid_index = data_index[train_index.shape[0]:-int(len(data_index) * test_set_size)]
test_index = data_index[train_index.shape[0] + valid_index.shape[0]:]
print('Training set size = ' + str(train_index.shape[0]))
print('Validation set size = ' + str(valid_index.shape[0]))
print('Test set size = ' + str(test_index.shape[0]))Training set size = 211718
Validation set size = 26465
Test set size = 26464
</code>
<h2>Create Data Generators</h2>
- In Isolearn, we always build data generators that will encode and feed us the data on the fly<br/>
- Here, for example, we create a training and test generator separately (using list comprehension)<br/>
- First argument: The list of row indices (of data points) for this generator<br/>
- Second argument: Dictionary or data sources<br/>
- Third argument: Batch size for the data generator
- Fourth argument: List of inputs, where each input is specified as a dictionary of attributes<br/>
- Fifth argument: List of outputs<br/>
- Sixth argument: List of any randomizers (see description below)<br/>
- Seventh argument: Shuffle the dataset or not<br/>
- Eighth argument: True if some data source matrices are in sparse format<br/>
- Ninth argument: In Keras, we typically want to specify the Outputs as Inputs when training. <br/>This argument achieves this by moving the outputs over to the input list and replacing the output with a dummy encoder.<br/>
In this example, we specify a One-Hot encoder as the input encoder for the entire splice donor sequence (centered on the splice donor).<br/>
We also specify the target output as the normalized RNA-Seq count at position 120 in the count matrix for each cell line (4 outputs).<br/>
Besides the canonical splice donor at position 120 in the sequence, there are many other splice donors inserted randomly at neighboring positions. If we wanted to learn a general model of splicing, it would be a lot better if we could stochastically "align" sequences on any of the possible splice donors, perturbing both the input sequence and the RNA-Seq count matrix that we estimate splice donor usage from.<br/>
This is achieved using the built-in CutAlignSampler class, which allows us to randomly sample a position in the sequence with supporting splice junction counts, and shift both the sequence and splice count vector to be centered around that position. In this example, we specify the sampling rate of splice donors to be 0.5 (p_pos) and the rate of sampling some other, non-splice-site, position at a rate of 0.5 (p_neg).<br/>
_____no_output_____
<code>
#Create a One-Hot data generator, to be used for a convolutional net to regress SD1 Usage
total_cuts = splicing_dict['hek_count'] + splicing_dict['hela_count'] + splicing_dict['mcf7_count'] + splicing_dict['cho_count']
shifter = iso.CutAlignSampler(total_cuts, 240, 120, [], 0.0, p_pos=0.5, p_neg=0.5, sparse_source=True)
splicing_gens = {
gen_id : iso.DataGenerator(
idx,
{
'df' : splicing_dict['df'],
'hek_count' : splicing_dict['hek_count'],
'hela_count' : splicing_dict['hela_count'],
'mcf7_count' : splicing_dict['mcf7_count'],
'cho_count' : splicing_dict['cho_count'],
},
batch_size=32,
inputs = [
{
'id' : 'seq',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : iso.SequenceExtractor('padded_sequence', start_pos=0, end_pos=240, shifter=shifter if gen_id == 'train' else None),
'encoder' : iso.OneHotEncoder(seq_length=240),
'dim' : (240, 4),
'sparsify' : False
}
],
outputs = [
{
'id' : cell_type + '_sd1_usage',
'source_type' : 'matrix',
'source' : cell_type + '_count',
'extractor' : iso.CountExtractor(start_pos=0, end_pos=240, static_poses=[-1], shifter=shifter if gen_id == 'train' else None, sparse_source=False),
'transformer' : lambda t: t[120] / np.sum(t)
} for cell_type in ['hek', 'hela', 'mcf7', 'cho']
],
randomizers = [shifter] if gen_id in ['train'] else [],
shuffle = True if gen_id in ['train'] else False,
densify_batch_matrices=True,
move_outputs_to_inputs=True if gen_id in ['train', 'valid'] else False
) for gen_id, idx in [('train', train_index), ('valid', valid_index), ('test', test_index)]
}
_____no_output_____
</code>
<h2>Keras Loss Functions</h2>
Here we specify a few loss functions (Cross-Entropy and KL-divergence) to be used when optimizing our Splicing CNN.<br/>
_____no_output_____
<code>
#Keras loss functions
def sigmoid_entropy(inputs) :
y_true, y_pred = inputs
y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
return -K.sum(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred), axis=-1)
def mean_sigmoid_entropy(inputs) :
y_true, y_pred = inputs
y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
return -K.mean(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred), axis=-1)
def sigmoid_kl_divergence(inputs) :
y_true, y_pred = inputs
y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
y_true = K.clip(y_true, K.epsilon(), 1. - K.epsilon())
return K.sum(y_true * K.log(y_true / y_pred) + (1.0 - y_true) * K.log((1.0 - y_true) / (1.0 - y_pred)), axis=-1)
def mean_sigmoid_kl_divergence(inputs) :
y_true, y_pred = inputs
y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
y_true = K.clip(y_true, K.epsilon(), 1. - K.epsilon())
return K.mean(y_true * K.log(y_true / y_pred) + (1.0 - y_true) * K.log((1.0 - y_true) / (1.0 - y_pred)), axis=-1)
_____no_output_____
</code>
<h2>Splicing Model Definition</h2>
Here we specify the Keras Inputs that we expect to receive from the data generators.<br/>
We also define the model architecture (2 convolutional-layer CNN with MaxPooling).<br/>_____no_output_____
<code>
#Splicing Model Definition (CNN)
#Inputs
seq_input = Input(shape=(240, 4))
#Outputs
true_usage_hek = Input(shape=(1,))
true_usage_hela = Input(shape=(1,))
true_usage_mcf7 = Input(shape=(1,))
true_usage_cho = Input(shape=(1,))
#Shared Model Definition (Applied to each randomized sequence region)
layer_1 = Conv1D(64, 8, padding='valid', activation='relu')
layer_1_pool = MaxPooling1D(pool_size=2)
layer_2 = Conv1D(128, 6, padding='valid', activation='relu')
def shared_model(seq_input) :
return Flatten()(
layer_2(
layer_1_pool(
layer_1(
seq_input
)
)
)
)
shared_out = shared_model(seq_input)
#Layers applied to the concatenated hidden representation
layer_dense = Dense(256, activation='relu')
layer_drop = Dropout(0.2)
dropped_dense_out = layer_drop(layer_dense(shared_out))
#Final cell-line specific regression layers
layer_usage_hek = Dense(1, activation='sigmoid', kernel_initializer='zeros')
layer_usage_hela = Dense(1, activation='sigmoid', kernel_initializer='zeros')
layer_usage_mcf7 = Dense(1, activation='sigmoid', kernel_initializer='zeros')
layer_usage_cho = Dense(1, activation='sigmoid', kernel_initializer='zeros')
pred_usage_hek = layer_usage_hek(dropped_dense_out)
pred_usage_hela = layer_usage_hela(dropped_dense_out)
pred_usage_mcf7 = layer_usage_mcf7(dropped_dense_out)
pred_usage_cho = layer_usage_cho(dropped_dense_out)
#Compile Splicing Model
splicing_model = Model(
inputs=[
seq_input
],
outputs=[
pred_usage_hek,
pred_usage_hela,
pred_usage_mcf7,
pred_usage_cho
]
)
_____no_output_____
</code>
<h2>Loss Model Definition</h2>
Here we specify our loss function, and we build it as a separate Keras Model.<br/>
In our case, our loss model averages the KL-divergence of predicted vs. true Splice Donor Usage across the 4 different cell types.<br/>_____no_output_____
<code>
#Loss Model Definition
loss_hek = Lambda(sigmoid_kl_divergence, output_shape = (1,))([true_usage_hek, pred_usage_hek])
loss_hela = Lambda(sigmoid_kl_divergence, output_shape = (1,))([true_usage_hela, pred_usage_hela])
loss_mcf7 = Lambda(sigmoid_kl_divergence, output_shape = (1,))([true_usage_mcf7, pred_usage_mcf7])
loss_cho = Lambda(sigmoid_kl_divergence, output_shape = (1,))([true_usage_cho, pred_usage_cho])
total_loss = Lambda(
lambda l: (l[0] + l[1] + l[2] + l[3]) / 4.,
output_shape = (1,)
)(
[
loss_hek,
loss_hela,
loss_mcf7,
loss_cho
]
)
loss_model = Model([
#Inputs
seq_input,
#Target SD Usages
true_usage_hek,
true_usage_hela,
true_usage_mcf7,
true_usage_cho
], total_loss)_____no_output_____
</code>
<h2>Optimize the Loss Model</h2>
Here we use SGD to optimize the Loss Model (defined in the previous notebook cell).<br/>
Since our Loss Model indirectly depends on predicted outputs from our CNN Splicing Model, SGD will optimize the weights of our CNN<br/>
<br/>
Note that we very easily pass the data generators, and run them in parallel, by simply calling Keras fit_generator.<br/>
_____no_output_____
<code>
#Optimize CNN with Keras using the Data Generators to stream genomic data features
opt = keras.optimizers.SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
loss_model.compile(loss=lambda true, pred: pred, optimizer=opt)
callbacks =[
EarlyStopping(monitor='val_loss', min_delta=0.001, patience=2, verbose=0, mode='auto')
]
loss_model.fit_generator(
generator=splicing_gens['train'],
validation_data=splicing_gens['valid'],
epochs=10,
use_multiprocessing=True,
workers=4,
callbacks=callbacks
)
Epoch 1/10
6615/6616 [============================>.] - ETA: 0s - loss: 0.0754
6616/6616 [==============================] - 470s 71ms/step - loss: 0.0754 - val_loss: 0.1041
Epoch 2/10
Epoch 1/10
6616/6616 [==============================] - 452s 68ms/step - loss: 0.0561 - val_loss: 0.0950
Epoch 3/10
6616/6616 [==============================] - 449s 68ms/step - loss: 0.0536 - val_loss: 0.0928
Epoch 4/10
6616/6616 [==============================] - 462s 70ms/step - loss: 0.0509 - val_loss: 0.0913
Epoch 5/10
6616/6616 [==============================] - 466s 70ms/step - loss: 0.0497 - val_loss: 0.0912
Epoch 6/10
6616/6616 [==============================] - 459s 69ms/step - loss: 0.0489 - val_loss: 0.0883
Epoch 7/10
6616/6616 [==============================] - 455s 69ms/step - loss: 0.0482 - val_loss: 0.0881
Epoch 8/10
6616/6616 [==============================] - 472s 71ms/step - loss: 0.0471 - val_loss: 0.0821
Epoch 9/10
6616/6616 [==============================] - 475s 72ms/step - loss: 0.0467 - val_loss: 0.0855
Epoch 10/10
6616/6616 [==============================] - 472s 71ms/step - loss: 0.0465 - val_loss: 0.0828
#Save model
save_dir = os.path.join(os.getcwd(), 'saved_models')
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_name = 'splicing_cnn_perturbed_multicell.h5'
model_path = os.path.join(save_dir, model_name)
splicing_model.save(model_path)
print('Saved trained model at %s ' % model_path)Saved trained model at /home/johli/isolearn/example/saved_models/splicing_cnn_perturbed_multicell.h5
#Load model
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'splicing_cnn_perturbed_multicell.h5'
model_path = os.path.join(save_dir, model_name)
splicing_model = load_model(model_path)/home/johli/anaconda3/envs/aparent/lib/python3.6/site-packages/keras/engine/saving.py:292: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
</code>
<h2>Evaluate the Splicing CNN</h2>
Here we run our Splicing CNN on the Test set data generator (using Keras predict_generator).<br/>
We then compare our predictions of splice donor usage against the true RNA-Seq measurements.<br/>
_____no_output_____
<code>
#Evaluate predictions on test set
predictions = splicing_model.predict_generator(splicing_gens['test'], workers=4, use_multiprocessing=True)
pred_usage_hek, pred_usage_hela, pred_usage_mcf7, pred_usage_cho = [np.ravel(prediction) for prediction in predictions]
targets = zip(*[splicing_gens['test'][i][1] for i in range(len(splicing_gens['test']))])
true_usage_hek, true_usage_hela, true_usage_mcf7, true_usage_cho = [np.concatenate(list(target)) for target in targets]
cell_lines = [
('hek', (pred_usage_hek, true_usage_hek)),
('hela', (pred_usage_hela, true_usage_hela)),
('mcf7', (pred_usage_mcf7, true_usage_mcf7)),
('cho', (pred_usage_cho, true_usage_cho))
]
for cell_name, (y_pred, y_true) in cell_lines :  # each tuple above is (predicted, true)
r_val, p_val = pearsonr(y_pred, y_true)
print("Test set R^2 = " + str(round(r_val * r_val, 2)) + ", p = " + str(p_val))
#Plot test set scatter
f = plt.figure(figsize=(4, 4))
plt.scatter(y_pred, y_true, color='black', s=5, alpha=0.05)
plt.xticks([0.0, 0.25, 0.5, 0.75, 1.0], fontsize=14)
plt.yticks([0.0, 0.25, 0.5, 0.75, 1.0], fontsize=14)
plt.xlabel('Predicted SD1 Usage', fontsize=14)
plt.ylabel('True SD1 Usage', fontsize=14)
plt.title(str(cell_name), fontsize=16)
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.tight_layout()
plt.show()
Test set R^2 = 0.86, p = 0.0
</code>
|
{
"repository": "lafleur1/isolearn",
"path": "examples/splicing_cnn_perturbed_multicell.ipynb",
"matched_keywords": [
"RNA-seq"
],
"stars": 5,
"size": 344836,
"hexsha": "cb96b70509722fac6f4edae6abcfb1025642c4ee",
"max_line_length": 81280,
"avg_line_length": 530.5169230769,
"alphanum_fraction": 0.9419869155
}
|
# Notebook from dongreenberg/rfcs
Path: 0003-Aqua_0.7_operator_redesign.ipynb
# Aqua 0.7 Operator Redesign
_17-Jan-19, donny@______no_output_____| **Status** | **Accepted** |
|:------------------|:----------------------------------------------|
| **RFC #** | 0003 |
| **Authors** | Donny Greenberg ([email protected]) |
| **Deprecates** | NA |
| **Submitted** | 2020-01-17 |
| **Updated** | 2020-01-23 |
## Purpose
To improve the transparency, ease of understanding, and programming power of Aqua’s operator logic and usage. Specifically, to reconcile with the Terra operator hierarchy and make the Aqua algorithmic flow more visible, explicit, and extensible.
Throughout this doc, we rely on definitions of Operators roughly derived from the first chapter of John Watrous's "The Theory of Quantum Information," with a focus on Square Operators over binary alphabets._____no_output_____## Background: Motivation and Opportunities
The representation of matrices sparsely as linear combinations of Pauli operators is critical in many quantum algorithms. As such, the Operator classes are the workhorses of Aqua today (0.6.2), containing both the expectation value and evolution logic used by most of its algorithms.
However, there are several opportunities for improvement:
* **Basic Construction & Rapid Protoyping:** Aqua's Operators were initially built as procedural infrastructure rather than first-class programming primitives. Improvements to syntax and interfaces can enable the succinctness and power typical of mathematical Operator language
* **Separation of Operator Math and Operator Algorithms**
* Ease of understanding: The "Operator algorithm" logic - the ExpectationValue, Evolution, grouping, and symmetry analysis - is mostly spread across the 3000-line operator hierarchy, and is very branchy for different modes of execution
* Ease of extension: Modification to the expectation value, evolution, grouping, and symmetry logic is a core use case (e.g. the [CVaR expectation](https://arxiv.org/abs/1907.04769), [linear combination evolution](https://arxiv.org/abs/1202.5822), or the many recent papers on [Pauli grouping](https://www.nature.com/articles/nature23879)), but not explicitly supported today
* **Smooth Borders with Broader Qiskit**
* Terra's `quantum_info` module also supports operator math, but is mostly matrix-based
* **Remote Operator Algorithms:** Aer's fast ExpectationValue is not transparently or cleanly interchangeable with Aqua's local ExpectationValue today. The concept of an Algorithm not provided by Aqua is not yet defined to support this type of interchangeability cleanly_____no_output_____### Present State of Operators in Qiskit
Both Aqua and Terra include suites of modules to support Operator math, but do so very differently.
* Aqua
* Operators are focused primarily on the procedural requirements of algorithmic execution
* Modules are very large and include hundreds of lines of procedural algorithm code
* Interfaces were not initially built for end-user usage as a programming primitive, and are therefore wordy and difficult for users to understand
* Syntax is not built for rapid prototyping and lacks syntactic power of mathematical Operator language
* Primarily focused on Pauli-basis Operators
* WeightedPauli - $2^n\times 2^n$ Operators sparsely represented as complex combination of Paulis
* MatrixOperator in the standard basis with $2^n\times 2^n$ elements was initially built for performance improvements which are no longer relevant
* Only dependency on Terra is through Pauli module, but this is largely symbolic (not an inexorable component)
* Terra
* Operator math is mostly built around QCVV (Quantum Characterization Verification & Validation) and open Quantum systems modelling use cases
* Support for Channel, Choi, Superoperator, Kraus, etc.
* Operators are largely matrix-based and therefore do not support the Pauli-basis operations necessary to non-exponentially execute quantum algorithms
* Used by:
* Aqua, 29 dependencies - Only Pauli module
* Aer, 10 dependencies
* Ignis, 2 dependencies
* Ignis includes a `clifford.py` module somewhat specific to characterization needs._____no_output_____### Aqua Present Usage (0.6.2)
Within Aqua, the primary uses of Operators are:
* Qubit Observable (Hamiltonian, Cost Function, etc.) Construction
* Used as sparse representations of large observables when constructing problems in Chemistry, Physics, Optimization, and Finance today
* Also often a translation step between domain-specific problems and Quantum hardware-addressable equivalents
* ExpectationValues
* Primarily used in VQE (and derivatives QAOA, UCCSD, etc.) as a device-executable cost function of the ansatz state
* Expectation values can only be taken of Operators in the Pauli basis on Quantum hardware
* Also present in the "Evolution of Hamiltonian" algorithm, which is simply state evolution by one operator followed by an expectation value by another operator
* State Evolution
* Used in QPE (and derivatives HHL, iQPE, etc.) as a Quantum circuit-representable matrix exponentiation
* Used in UCCSD and QAOA ansatze and EOH algorithm as representation of system dynamics to simulate time evolution of a system on quantum hardware
* Evolution can only be taken by Operators in the Pauli basis on Quantum hardware_____no_output_____#### Other Important Aqua Operator Features
* __Grouping__ - Grouping is a technique to reduce the number of circuit evaluations required to compute an ExpectationValue based on mutually commuting Paulis in the Operator decomposition (a small illustrative sketch follows this list).
* __Tapering__ - Tapering is a technique to remove qubits from a Hamiltonian of interest by identifying Z2 symmetries in the Hamiltonian.
* __Gradients__ - Many variational algorithms are improved dramatically when exact gradients of gate parameters with respect to the cost function observable are computed analytically rather than numerically. Aqua can compute these gradients and provide them to the optimizer directly._____no_output_____### Aqua Present (0.6.2) Operator Object Model and Hierarchy
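As a rough aside illustrating the Pauli grouping idea mentioned under "Other Important Aqua Operator Features" (a toy sketch of qubit-wise-commuting, TPB-style grouping; not Aqua's actual implementation):_____no_output_____
<code>
# Toy sketch: Pauli strings that qubit-wise commute (at every qubit the letters agree
# or one of them is the identity) can share a measurement basis, so greedily bucketing
# them reduces the number of circuits needed for an expectation value.
def qubit_wise_commute(p1, p2):
    return all(a == b or a == 'I' or b == 'I' for a, b in zip(p1, p2))

def greedy_tpb_groups(paulis):
    groups = []
    for p in paulis:
        for g in groups:
            if all(qubit_wise_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

print(greedy_tpb_groups(['IX', 'ZY', 'ZZ', 'XI', 'XX']))
# [['IX', 'XI', 'XX'], ['ZY'], ['ZZ']] -- 3 measurement circuits instead of 5_____no_output_____
</code>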
Aqua's Operators are organized as follows:
* `qiskit.aqua.operators`
* base_operator.py: `BaseOperator(ABC)`
* matrix_operator.py: `MatrixOperator(BaseOperator)`
* weighted_pauli_operator.py: `WeightedPauliOperator(BaseOperator)`, __and__ `Z2Symmetries`
* tpb_grouped_weighted_pauli_operator.py: `TPBGroupedWeightedPauliOperator(WeightedPauliOperator)`, essentially a wrapper around `WeightedPauliOperator` for backward compatibility.
* pauli_graph: `PauliGraph`
* op_converter.py: `to_weighted_pauli_operator(operator)`, `to_matrix_operator(operator)`, `to_tpb_grouped_weighted_pauli_operator(operator, grouping_func, **kwargs)`
* common.py: Utility functions, inc. `evolution_instruction`, `pauli_measurement(circuit, pauli, qr, cr, barrier=False)`, `measure_pauli_z(data, pauli)`, `covariance(data, pauli_1, pauli_2, avg_1, avg_2)`, etc.
* `qiskit.chemistry` __- OUT OF SCOPE OF THIS DOC__
* fermionic_operator.py: `FermionicOperator`, contains `jordan_wigner`, `parity`, `bravyi_kitaev` Fermion-to-qubit operator mappings.
* bksf.py: Another mapping
* `.core`
* chemistry_operator.py: `ChemistryOperator(ABC)`
* hamiltonian.py: `Hamiltonian(ChemistryOperator)`_____no_output_____### Terra Present (0.11.0) Operator Object Model and Hierarchy
Terra's Operators are organized as follows:
* `qiskit.quantum_info`
* `.operators`
* base_operator.py, pauli.py, operator.py (matrix operator), measures.py (`process_fidelity`), predicates.py (`is_unitary_matrix`, `is_hermitian_matrix`, `matrix_equal`, etc.), quaternion.py
* `.channel`
* quantum_channel.py (base), chi.py, choi.py, kraus.py, ptm.py, stinespring.py, superop.py, transformations.py
* `.states`
* quantum_state.py (base), densitymatrix.py, statevector.py, measures.py (`state_fidelity`), states.py (`basis_state`, `projector`, `purity`)
* `.analysis`
* average.py - ExpectationValue of diagonal operator
* make_observable.py - Convert an observable in matrix form to dictionary form
#### WeightedPauliOperator Not Available in Terra
Terra does not contain any of the logic for working in the Pauli-basis implemented in Aqua today, and is not interoptable with Aqua's operator algorithms. As such, these utilities are only accessible to Aqua users._____no_output_____### Operator Construction and Manipulation Present State
The center of Qiskit's algorithmic Operator logic is the WeightedPauli, being the only non-exponential scaling operator basis available today (the only other being the standard basis).
Qiskit supports several methods of WeightedPauli operator construction, none of which are self explanatory to a new user:_____no_output_____
<code>
# from qiskit.quantum_info.operators import WeightedPauliOperator
from qiskit.aqua.operators import WeightedPauliOperator, MatrixOperator, op_converter
from qiskit.quantum_info.operators import Pauli_____no_output_____pauli_op = WeightedPauliOperator([
[.5, Pauli.from_label('IX')],
[.2, Pauli.from_label('ZY')],
[.1j, Pauli.from_label('ZZ')],
])_____no_output_____pauli_op = WeightedPauliOperator.from_list(
paulis=[Pauli.from_label('IX'),
Pauli.from_label('ZY'),
Pauli.from_label('ZZ')],
weights=[.5, .2, .1j])_____no_output_____mat = [[0. +0.1j, 0.5-0.2j, 0. +0.j , 0. +0.j ],
[0.5+0.2j, 0. -0.1j, 0. +0.j , 0. +0.j ],
[0. +0.j , 0. +0.j , 0. -0.1j, 0.5+0.2j],
[0. +0.j , 0. +0.j , 0.5-0.2j, 0. +0.1j]]
mat_op = MatrixOperator(mat)
pauli_op_from_mat = op_converter.to_weighted_pauli_operator(mat_op)
pauli_op == pauli_op_from_mat_____no_output_____
</code>
Classical matrices can be exported for classical usage, again if the user already knows the Operator hierarchy somewhat well:_____no_output_____
<code>
op_converter.to_matrix_operator(pauli_op).matrix.toarray()_____no_output_____
</code>
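Once exported this way, the dense matrix can be handed straight to ordinary numpy routines - a brief illustration (reusing the `mat` list defined a few cells earlier):_____no_output_____
<code>
import numpy as np
# quick (but unscalable) classical eigensolution of the 4x4 matrix defined above
np.linalg.eigvals(np.array(mat)).round(3)_____no_output_____
</code>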
Composition uses the `*` operator, while Terra's operators and Python use `@`._____no_output_____
<code>
3*pauli_op + .2j*pauli_op == (3+.2j)*pauli_op_____no_output_____print((pauli_op * pauli_op).print_details())II (0.28+0j)
ZZ 0j
ZY 0j
IX 0j
</code>
### Aqua's ExpectationValue is Procedural and Inextensible
Aqua's ExpectationValue is not contained within a single function or module, but rather split into several functions without a clear interface or flow for user usage. This is due to structural constraints in Aqua which are no longer present, where the algorithm requiring the expectation value held the backend object and could run circuits, but the operator could not. We encourage the reader to scan lines [361-395 of Aqua 6.1 VQE’s](https://github.com/Qiskit/qiskit-aqua/blob/stable/qiskit/aqua/algorithms/adaptive/vqe/vqe.py#L361) ExpectationValue calculation to try to understand where and how the expectation is computed. We’ve been asked by numerous Aqua users to explain how this code works, and most do not attempt to use it on their own.
The following is the shortest possible way to write an expectation value in Aqua. Note that it fundamentally requires the user to understand a certain execution flow, the correct functions to use to do this, and how those functions work with their execution mode. This takes a few hours to understand at least, often days. Further, there are no hints that a change from the Z basis for each Pauli is being performed here, or matrix multiplication if the system chooses to do that instead._____no_output_____
<code>
from qiskit.aqua.operators import WeightedPauliOperator
from qiskit.aqua.components.variational_forms import RY
from qiskit.quantum_info import Pauli
from qiskit import BasicAer, execute, QuantumCircuit
from qiskit.circuit import Parameter
qasm_sim = BasicAer.get_backend('qasm_simulator')_____no_output_____op = WeightedPauliOperator([
[.5, Pauli.from_label('IX')],
[.2j, Pauli.from_label('ZY')],
])
circuit = QuantumCircuit(2)
circuit.h([0,1])
evaluation_circuits = op.construct_evaluation_circuit(wave_function=circuit, statevector_mode=False)
result = execute(evaluation_circuits, qasm_sim).result()
expect, std = op.evaluate_with_result(result=result, statevector_mode=False)
expect_____no_output_____
</code>
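As an aside, the change of basis that happens silently in the snippet above can be illustrated with a small, purely classical numpy sketch (the state values are assumed example numbers; this is not Aqua code):_____no_output_____
<code>
# Sketch: the expectation of X in |psi> equals the expectation of Z in H|psi>,
# because X = H Z H -- so hardware only ever needs Z-basis measurements.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = np.array([0.6, 0.8], dtype=complex)             # an arbitrary normalized state
expect_x = np.vdot(psi, X @ psi)                      # direct <psi|X|psi>
expect_z_after_h = np.vdot(H @ psi, Z @ (H @ psi))    # <psi|H Z H|psi>
print(np.allclose(expect_x, expect_z_after_h))        # True_____no_output_____
</code>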
#### Alternative Expectation Values and the Aer Expectation Value
Because the ExpectationValue logic is embedded directly in the Operator, modifications to the ExpectationValue (e.g. CVaR) are impossible without editing the Operator directly with heavy branching or duplicating the entire Operator. This branching is already in effect within Aqua, automatically choosing between several execution modes mostly opaquely to the user. This is also the case for grouping, evolution, and symmetry logic.
The most dramatic example of this is the Aer-provided fast ExpectationValue simulation, which is so buried into the Operator it is effectively a super-superuser feature today. It was introduced quickly to achieve critical performance gains, but must be formalized to become a true first-class feature.
* In Aqua, there is no simple way to specify which ExpectationValue algorithm the user wants, Aer or otherwise, and most users do not know that the Aer Expectation Exists
* Aer's ExpectationValue is woven throughout the core operator code in a way that is branchy, inexorable, and difficult for users to understand and control
* A new ExpectationValue, such as one provided by BasicAer or IBMQProvider, would simply introduce additional branches following the existing style_____no_output_____### Aqua's State Evolution is Inextensible and Difficult to Navigate
Evolution is somewhat more succinct, but more difficult to navigate in code. The logic for evolution is distributed over several branchy static modules, and the evolution is pre-compiled as a CNOT-chain circuit, which is often not the ideal evaluation format (e.g. matrix multiplication if simulating, or Swap Networks)._____no_output_____
<code>
from qiskit.circuit import Parameter
op = WeightedPauliOperator([
[.5, Pauli.from_label('IX')],
[.2, Pauli.from_label('ZY')],
])
circuit = QuantumCircuit(2)
θ = Parameter('θ')
instr = op.evolve_instruction(evo_time=θ)
circuit.append(instr, [0,1])
print(circuit.draw(fold=4000))
print('Decomposed:')
circuit.decompose().draw(fold=4000) ┌─────────────────┐
q_0: |0>┤0 ├
│ Evolution^1(θ) │
q_1: |0>┤1 ├
└─────────────────┘
Decomposed:
</code>
## Requirements and Design
1. Location and Ownership
1. Operators
1. Provider-specific Algorithms
1. Object Model
1. Operator Definition - Primitives and Composites
1. Algorithms Definition - Primitives and Composite Operations
1. Parameterization and Eagerness
1. Changes to Terra
1. Changes to Aqua
1. Algorithms as composite Operations
1. Circuit Execution Algorithms
1. Expectation Algorithms
1. Evolution Algorithms
1. Other Primitive Algorithms_____no_output_____### Location and Ownership in Qiskit
Given the presence of Operator logic in both Aqua and Terra, there are several options for their placement within Qiskit. The primary considerations here relate to which master branch tests them, who owns what in the case of breakage, and who owns what in the case of design.
In addition, some remote Operator algorithms are being discussed, with one already in production - the Aer Expectation Value. The location of these algorithms is also an important question._____no_output_____#### Operator Location Considerations
* The Operator's centrality to Aqua means relying on an external library is a big overhead
* Reliance on Terra has created frequent firedrills because behavior and interfaces change without integration testing
* Firedrills are very difficult to troubleshoot because presently there is no integration testing between Terra and Aqua or design review to check whether a change will have downstream implications
* Operator is so central to Aqua that it will require strong ownership by the Aqua team, constant maintenance and changes
* Centralized Operator primitives can simplify interfaces across Qiskit
* By accepting a common Operator format derived from Terra, methods in different areas of Qiskit can communicate in a consistent format without dependencies
* For example, Aer's expectation value can take a circuit and an Operator, rather than depend on Aqua to define its interface, or rely on an informal interface (e.g. lists) which must be validated
* Terra and Aqua's respective Operators can be delineated somewhat cleanly
* Aqua and Terra's operators are seemingly used by completely different users for very different tasks (QA&A vs. QCVV or circuit analysis)
* Terra's Operators are primarily matrix-based, while Aqua's are primarily composites of sparse representations (e.g. sums of Paulis or Circuits)
* Though some are definitely shared, such as Pauli
* Operators and Gates may need to be reconciled at some point
* The X, Y, and Z Paulis are not different from the X, Y, and Z Gates
* Both the gate and operator models include functionality for converting unitary matrices to circuit operations_____no_output_____#### Operator Location Options
**A.** Move Aqua Operators into Terra, with:
1. Joint ownership by Aqua team
2. Aqua integration tests run on Terra's master branch (e.g. pulling in Aqua's master branch to execute tests). _Unit tests alone are not sufficient, as they are usually modified along with breaking changes to pass._
3. Aligned release cycles so Aqua does not need to scramble to release when Terra does
**Big-A.** Combine Aqua and Terra into a single repo and jointly own Operators
**B.** Move all operators and states into Aqua, jointly owned by Terra team
**C.** Leave Operators split between Aqua and Terra, with dependency on Terra for some primitives (QuantumCircuit, Pauli), with joint ownership and Aqua integration testing
##### **Decision:** Following a discussion in Aqua Design Review, option **A** will be pursued for the remainder of this doc._____no_output_____#### Provider-Specific Algorithm Location Options (Decision)
**A.** Remote algorithms live in provider repo, and are tested and released at provider’s discretion
**B.** Remote algorithms live in Aqua, with Aqua integration testing of functionality in provider repo
**C.** Remote algorithms live in Aqua, with agreed upon interface to enforce consistency, and data interchange (e.g. an Operator format defined in Terra) tested in provider repo_____no_output_____### Object Model and Hierarchy
What is an Operator _to a QA&A (Quantum Algorithms & Applications) programmer?_
Ignoring the Physical definition of an Operator for a moment, as a _Quantum programming primitive,_ the Operator is:
* __Recursively defined__ - Operators can be one of several _primitives_ - e.g. Matrix, Pauli, Clifford, QuantumCircuit, or an arbitrary combination of these primitives, e.g. Addition, Tensor, Composition.
* It makes complete mathematical sense to add two primitives together, e.g. `(my_matrix+my_circuit)@my_pauli`. In classical programming, this would be like `5.7 + "pickle"`.
* __Both code and data__ - The Operator encodes both data (e.g. a matrix for eigensolution or a wavefunction being prepared) and computation (measure my wavefunction in this basis). There is little distinction between the two in Quantum programming.
* __Linear__ - The Operator is a recursively powerful construct, allowing algorithmic rearrangement not typically allowed in classical computation.
* `op1(op2(A, B)) == op2(op1(A), op1(B))` in many cases, e.g. `Expectation(A + B) == Expectation(A) + Expectation(B)`.
* The idea that `program(a*circuita + b*circuitb)` gives a mathematically valid result is highly surprising.
* __Algorithmically ubiquitous__ - Every quantum algorithm uses Operators. Algorithms are nearly always defined in literature by Operator operations. This language is rigorous, accepted, and compact.
* __Eagerly Computable__ - In most cases, Operator computation can be partially compiled as parameters become available, allowing improved performance, functional modularity (e.g. passing a ready-to-run algorithm), and inspection transparency. For example:
* A circuit can be compiled to a Qobj with parameters missing, to be filled in later
* The full list of circuits necessary to execute an algorithm can be prepared pending some operator coefficients
* A full algorithm can be prepared and passed to a user pending the insertion of some subcomponent (a choice of ExpectationValue algorithm) or parameters_____no_output_____#### Operator Definition: Primitives and Combinations
Operators can be _primitives_ or _combinations._ Primitives are base-level Operator representations which are not defined in terms of other primitives, but can be converted into one another with some computational work. Combinations are Operators which are constructed from functions of multiple primitives, such as sums and tensors. Combinations store the primitives from which they are constructed. Note that many Gates are present in other classes of primitives, and this must be reconciled as a follow-on to this redesign. The following should all be modules in the Operator hierarchy:
* Primitives
* Matrix
* Pauli - X, Y, Z, I
* QuantumCircuit, Gate
* Clifford
* Projector - Ze, O, P, M
* Stabilizer
* Graph State - Stored as a graph
* QuantumCircuit - Implicitly starts from |0⟩⟨0|
* Others (follow-on): ZX, MPS, Dirac Matrix, Gell-Mann matrix
* Combinations
* OpSum - Generalization of WeightedPauli. Stores a list of Operators of equal dimension and complex weights
* OpComposition - Stores a list of Operators which are all of equal dimension
* OpKron - Stores a list of Operators of any size
* OpVec - Stores a list of Operators of any size
* OpExp - Stores a single Operator, acting as a placeholder for some Evolution algorithm to replace later
* OpCombo - custom, user-defined recombination function
_____no_output_____
<code>
from qiskit.aqua.operators.pauli import X, Y, Z, I
op_new = .5*(I^X) + .2*(Z^Y) + .1j*(Z^Z)
op_new == pauli_op_____no_output_____
</code>
Note that to support the above, the existing Pauli in Terra would need to support Tensor, sum, and scalar multiplication which can return an OpSum and OpKron.
The following overload operations are also desirable:
* Operator composition using `@` overload
* __Decision:__ deprecate the `*` overload for composition?
* Power (`**3`), kronpower (`^3`)_____no_output_____
<code>
(pauli_op^2)**2 == (pauli_op^pauli_op)@(pauli_op^pauli_op)_____no_output_____from qiskit.aqua.ansatz import Ry
from qiskit.aqua.operators.projectors import Ze, O, P
ansatz = Ry(qubits=2, depth=3) @ (P^(-.1*O + 3*Ze))
# This is an OpSum of two circuits!_____no_output_____
</code>
#### Algorithms Definition: Primitives and Composites
Operations on Operators also can be described as primitives or combinations of such. Primitives are computations which can be performed directly on some available computation engine, such as Numpy or Quantum Hardware, while composites are constructed from piping primitives together. Algorithms accept only _specific primitives,_ so an algorithm taking a Pauli vs. one taking a matrix are fundamentally different, but are also defined over certain combinations of their input primitives. For example, a Change-of-Basis Expectation Value is defined to accept a Pauli and a Projector (or QuantumCircuit acting as one from Zero implicitly), but can also accept sums, tensors, and vectorizations of Paulis and Projectors. If an unsupported primitive, such as Matrix or OpComposition were passed in, an exception would be thrown.
* Primitives
* Classical sum, product, tensor, trace, etc.
* Z-Basis QuantumCircuit measurement / Trace (traditional QASM backend)
* Primitive Conversion - Pauli to matrix, matrix to Pauli, etc.
* Evolution Conversion - Trotter, Suzuki, etc.
* Pauli Sum, Composition, Tensor
* Change of Basis - Pauli, Fourier
* Optimizers
* External functions, such as Drivers or imports
* Composites
* ExpectationValue
* Existing Aqua Algorithms: VQE, QPE, HHL, etc.
* Gradients
Over time, we have found that it is easiest to describe the behavior of Algorithms in terms of the flow of Operators through various components and subroutines. This description is naturally recursive, and considerably easier to understand than the present presentation of algorithmic flow in Aqua._____no_output_____To demonstrate this, consider the following VQE coded from scratch in this model:_____no_output_____
<code>
ansatz = Ry(qubits=2, depth=3) @ (P^P)
# Ansatz state = Ry(θ)|++⟩
hamiltonian = 3*(I^Z) + .4j*(X^Z)
expectation = PauliExpectation(ansatz, hamiltonian, backend)
print(expectation.run({ansatz.params: np.zeros(len(ansatz.params))}))
# Print starting expectation
gradient = ParamShiftGradient(expectation)
my_vqe = AQGD(initial_point=np.zeros(len(ansatz.params)),
              cost_fn=expectation.run, grad_fn=gradient.run)
min_eig = my_vqe.run()_____no_output_____
</code>
#### Parameterization and Eagerness
Operators and algorithms can be _parameterized,_ or missing some key information in order to execute. For Operators these may be sum coefficients, evolution times, QuantumCircuit parameters, and more. For Algorithms these may be input operators, execution parameters, or instances of algorithms used in computation which cannot be inferred by default (e.g. backend on which to execute, optimizer, etc.).
##### Eager Parameterization+Execution Interface Options:
An algorithm should execute as soon as it has filled the parameters necessary to do so. This is called **Eager Execution.** In a similar vein, OpSum can be seen as eagerly waiting for the contained operators to be summable, e.g. replaced with scalars by an expectation value. (**Decision**) Some interface options for eagerness:
**Option A**: Algorithms should be **callable** with a parameter dictionary, triggering a breadth-first search to parameterize any sub-objects with the parameter dictionary. This may be too much hocus pocus and difficult for implementers of algorithms to understand. A user may want to parameterize without executing, so an `execute` parameter should be available in the parameterization function._____no_output_____
<code>
my_op = Parameter('t1')*(Z^Z) + .6*(X^I)
my_vqe = VQE(backend=Parameter('backend'),
operator=my_op,
ansatz=Ry(qubits=2, reps=3),
optimizer=SLSQP(initial_point=Parameter('initial_rotations')))
my_vqe({'t1': .2j, 'backend': Aer.get_backend('qasm_simulator')})
# Didn't return anything yet
rots = np.zeros(len(my_vqe.ansatz.params))
min_eig = my_vqe({'initial_rotations': rots})
# Now a value is returned, and other execution information can be found inside the object_____no_output_____ # Alternatively
my_vqe({'initial_rotations': rots}, execute=False)
min_eig = my_vqe()_____no_output_____
</code>
**Option B:** Algorithms should have a `.run(param_dict)` method which accepts parameters and performs the breadth-first parameterization. The form factor of this would be similar to the above, but with `run()` instead of direct function calls. This has the benefit of some backward compatibility.
**Option C:** Algorithms should support separate parameterization and execution functions. This is the most explicit, but is clunky in an eager execution regime, where execution is automatic if the algorithm is sufficiently parameterized.
All of an Algorithm or Operator's pending Parameters should be recursively returned by a `.params` function. _(Tentative)_ A `deepcopy` option should be available to return a deep copy of the algorithm with the desired parameterization, rather than parameterize the algorithm in-place (this is evaluated with `execute=False` by default)._____no_output_____##### Eager Partial Computation
Aqua should be **eager** in partial computation while some parameters necessary for execution are not yet available, to allow for inspection transparency and performance. For example, once backend information is available, circuits should be transpiled for the backend or otherwise prepared for execution. This can avoid many transpilations or preparations later if the circuits are duplicated for Operator composition, as in Change-of-Basis expectation values or gradients.
The choice of which partial computation to perform is left to the algorithm, so only worthwhile partial computations are performed. If parameters change, re-preparing the partial computation can be expensive, so a `lazy` parameter should be available in the callable function._____no_output_____### Changes to Terra
The `quantum_info` directory should be organized as follows:
* channel
* ...
* matrix.py **- Decision: Rename operator.py to matrix.py or matrix_op.py?**
* pauli.py
* clifford.py **- Decision: Use the Ignis's Clifford?**
* projector.py
* stabilizer.py
* Composites
* op_sum.py, op_composite.py, op_kron.py, op_vec.py, op_exp.py
In addition to the functionality detailed in [Object Model and Hierarchy](#Object-Model-and-Hierarchy) above, Terra should support the following for all of the above Non-matrix-based operators:
* `to_matrix()` - Method to allow quick access to unscalable classical tools, e.g. numpy eigensolution
* `to_quantum_circuits()` - returns a single or list of quantum circuits and coefficients representing the full Operator, including any distributive composition, tensors, etc.
* Trace, Partial Trace, Determinant, Norms, Adjoints - Where possible, linear algebra should be easily accessible _____no_output_____##### Follow-on: Terra Reconciliation Between Operators and Gates
Terra's Operators and Gates are currently fully distinct from one another. The X, Y, Z, Clifford Gates, Evolution by a matrix-specified Unitary (UnitaryGate), and more are direct overlaps between the two, but not interoperable. At some point, Terra should address this difference to allow Operators to be inserted onto a circuit, maintain only a single set of primitive unitaries, allow Gates to be composed with Operators, etc._____no_output_____### Changes to Aqua
The changes to Aqua are basically just to
* deprecate the Operators after moving their logic into Terra,
* change the Aqua algorithms to rely on the new Terra operators,
* break up the Expectation, Evolution, circuit execution, and gradient code to be first-class algorithms users can extend and understand,
* and change the exsting Aqua algorithms to rely on these new algorithms._____no_output_____##### Change Algorithms to rely on Terra operators and new Operator algorithms
In particular, algorithms should be accessible with only Terra-defined inputs (meaning constructed using Terra alone) to provide a seamless experience between Terra and Aqua usage, and extensible interfaces. For example, a VQE should be runnable by passing only a parameterized QuantumCircuit and Terra-defined Operator, allowing a provider or collaborator to share a custom VQE without an unnecessary dependency on Aqua. In particular, this allows the Aer Expectation Value to be defined with the same interface as Aqua's Pauli Expectation, without a dependency on Aqua._____no_output_____##### Circuit Execution Algorithms - **Decision: Name - CircuitExecution? QCExecute? QuantumMeasureZ? RunCircuit?**
Circuit execution is a utility in Aqua today, mediated by the QuantumInstance, which most users do not understand and which is growing increasingly branchy to accommodate more and more execution variants. Measurement error mitigation, noisy simulation setup, hardware API fault handling, and more all fall into the same execution flow in various branches.
Circuit execution is an algorithm for sampling a circuit's expectation in exponentially many $\{Z, I\}^{\otimes n}$ bases, but it is not reflected as an algorithm today. It should be promoted to a first-class algorithm to be more transparent and compartmental, wherein, for example, code for simulation and code for execution on hardware can be kept distinct. A CircuitExecution Algorithm accepts a backend and interacts with it in some well-defined way, breaking up and organizing the functionality of the QuantumInstance. Some examples of CircuitExecution algorithms are listed below, followed by a minimal interface sketch:
1. QuantumHardware - An Execution algorithm tailored for execution on remote hardware, including fault handling, slicing to limit job sizes, etc. Can stack up a queue of circuits for batch execution, or accept a list of jobids to use as the first n results objects, allowing the user to reuse results from a terminated execution.
1. IdealSimulator - Algorithm tailored for execution in ideal simulation.
1. NoisySimulator - Utility for querying a Hardware backend's properties and providing a noisy simulator using Aer's "noise config from device" functionality.
1. ErrorMitigatedExecutor - OUT OF SCOPE, BEING COVERED IN ANOTHER DOC.
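A minimal sketch of what a shared CircuitExecution interface could look like is given below. The names are placeholders chosen for illustration and are not existing Aqua classes; retry logic, job-id reuse, and result post-processing are omitted.

<code>
from abc import ABC, abstractmethod

class CircuitExecutionSketch(ABC):
    """Illustrative base interface: accept a backend, run circuits, return results."""

    def __init__(self, backend, shots=1024):
        self.backend = backend
        self.shots = shots

    @abstractmethod
    def execute(self, circuits):
        """Run a list of circuits and return the corresponding result objects."""

class QuantumHardwareSketch(CircuitExecutionSketch):
    """Hardware flavour: slices large submissions into smaller jobs."""

    def __init__(self, backend, shots=1024, max_circuits_per_job=75):
        super().__init__(backend, shots)
        self.max_circuits_per_job = max_circuits_per_job

    def execute(self, circuits):
        results = []
        for i in range(0, len(circuits), self.max_circuits_per_job):
            chunk = circuits[i:i + self.max_circuits_per_job]
            # Assumes a provider backend exposing run(...).result(); fault
            # handling and job bookkeeping are left out of this sketch.
            results.append(self.backend.run(chunk, shots=self.shots).result())
        return results
</code>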
If none is explicitly specified, Aqua should aggressively guess the preferred execution algorithm for the user given the backend and other execution parameters._____no_output_____##### Expectation Algorithms
Aqua should support the following ExpectationValue algorithms. An `ExpectationBase` class should allow automatic selection of an expectation algorithm by default if none is specified - e.g. if the user has Aer installed, VQE will use the AerExpectation by default instead of QASM execution. The possible expectation value algorithms include the following (a rough default-selection sketch follows the list):
1. PauliExpectation (Change-of-Basis)
1. CVaRExpectation
1. AerExpectation - relies on Aer's fast expectation feature
1. MatrixExpectation
1. (Tentative) BasicAerExpectation
1. RichardsonExpectation - OUT OF SCOPE, BEING COVERED IN ANOTHER DOC.
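A rough sketch of the intended default selection is shown below. It is purely illustrative: the returned names are placeholders rather than real classes, and the Aer availability check is an assumption about the package layout.

<code>
import importlib.util

def _aer_available():
    # Hypothetical availability check; the module path is an assumption.
    try:
        return importlib.util.find_spec('qiskit.providers.aer') is not None
    except ImportError:
        return False

def select_expectation(backend=None, expectation=None):
    """Toy default-selection logic for an ExpectationBase-style factory."""
    if expectation is not None:
        return expectation               # an explicit user choice always wins
    if _aer_available():
        return 'AerExpectation'          # fast snapshot-based expectation values
    if backend is not None:
        return 'PauliExpectation'        # change-of-basis sampling on the given backend
    return 'MatrixExpectation'           # small problems: exact matrix arithmetic
</code>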
##### Grouping
Grouping is an important feature used with Pauli expectation values in Aqua today, but it is not used by default and its interface is not obvious. Grouping should be moved into the PauliExpectation, with a simple interface for the user to specify whether to group the Paulis, or how aggressively to do so. By default, the PauliExpectation should group Paulis as aggressively as is performant on the given execution backend._____no_output_____##### Circuit Evolution Algorithms
Similarly for Evolution, a variety of algorithms should be available for converting an OpExp composite operator into a sum, composition, etc. More specifically, circuit evolution algorithms take an OpExp placeholder and return operators which approximate the value of the exponentiation. For example, the PauliEvolution accepts a Pauli and returns a QuantumCircuit representing the unitary evolution of that Pauli. An `EvolutionBase` class should allow automatic selection of an evolution algorithm by default if none is specified. The candidate algorithms are listed below, followed by a small numerical illustration of Trotterization.
1. PauliEvolution (Change-of-Basis)
1. SumEvolution
1. Trotter
1. Suzuki
1. MatrixEvolution
1. (Tentative) [LinCombEvolution](https://arxiv.org/abs/1202.5822)
1. (Tentative) AerEvolution
1. (Tentative) BasicAerEvolution
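The product-formula entries above (Trotter, Suzuki) can be illustrated with plain numpy, independently of any Qiskit API. For $H = Z\otimes Z + X\otimes I$, whose terms do not commute, the first-order Trotter error shrinks roughly as $1/n$ in the number of steps:

<code>
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
ZZ = np.kron(Z, Z)
XI = np.kron(X, I2)

def pauli_exp(P, theta):
    # exp(-i*theta*P) = cos(theta)*I - i*sin(theta)*P for any Pauli string P (P @ P = I)
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

H = ZZ + XI
t = 1.0

# Exact evolution exp(-i*H*t) via eigendecomposition of the Hermitian H
evals, evecs = np.linalg.eigh(H)
exact = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# First-order Trotter approximation: (exp(-i*ZZ*t/n) exp(-i*XI*t/n))^n
for n in (1, 4, 16, 64):
    step = pauli_exp(ZZ, t / n) @ pauli_exp(XI, t / n)
    trotter = np.linalg.matrix_power(step, n)
    print(n, np.linalg.norm(trotter - exact))
</code>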
##### Other algorithms to build out into first-class Algorithm groups
1. Converters - convert lazily between Operator types
1. Gradient
1. Optimization_____no_output_____## Timeline and Gameplan
Stage 1: Implement new Operators in Terra with thorough unit and integration tests.
Stage 2: Implement Operator algorithms in Aqua, relying on Terra Operators
Stage 3: Migrate Aqua algorithms to rely on new Operator algorithms and new Terra Operators
Stage 4: Deprecate Present Aqua Operators (0.7 release)
Stage 5: Delete Present Aqua Operators (0.8 release)_____no_output_____## ⚰️⚰️⚰️⚰️⚰️⚰️ Graveyard ⚰️⚰️⚰️⚰️⚰️⚰️_____no_output_____### Other Benefits of OperatorFlow
* Obedient Eager Evaluation - Best of Eager and Lazy evaluation:
* Partially evaluate whatever you can with the parameters you have
* Allows transparency, inspection, and rapid prototyping (e.g. users couldn't find circuits or operators when working through JSON dictionaries)
* Performance - partially compiled algorithms save massive amounts of compilation and deepcopy time
* But not too early, not compiling preemptively for a possible parameter value
* Objects can be returned before being fully constructed, avoiding building them incorrectly for the next step or engine (e.g. building massive CNOT chains for UCCSD simulations)
* Intractable but possible computations (e.g. convert to matrix and solve) are avoided
* Natural, Powerful, and Self-defining Programming Interfaces
* __An algorithm's behavior is simply defined by the operator primitives it accepts and returns__
* Nesting of algorithms is identical to user algorithm execution
* Ubiquitous parameters, and obvious interface for Optimization
* OpCombo coefficients, primitive parameters, and algorithm parameters can all be parameterized
* Algorithms of any level of completeness can be returned
* Optimization is universal - simply pass a nearly-complete algorithm to an optimizer and the callable interface executes when the optimizer provides the parameters_____no_output_____#### Grouping
Aqua's grouping functionality is only relevant to ExpectationValues today._____no_output_____
<code>
qaoa_cost_op = WeightedPauliOperator([
[.5, Pauli.from_label('ZIZ')],
[.2, Pauli.from_label('ZZI')],
[.1j, Pauli.from_label('IZZ')],
])
grouped_cost_op = TPBGroupedWeightedPauliOperator.sorted_grouping(qaoa_cost_op)
grouped_cost_op._basis_____no_output_____class VQE(QuantumAlgorithm):
def __init__(self, operator, var_form, optimizer,
initial_point=None, backend=backend, callback=None, ...):
...
self._expectation_value = ExpectationValue(self._operator, self._backend)
def _energy_evaluation(self, params):
circuits = self._var_form.construct_circuit(params)
energy, stdev = self._expectation_value.run(circuits)
return energy_____no_output_____
</code>
|
{
"repository": "dongreenberg/rfcs",
"path": "0003-Aqua_0.7_operator_redesign.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 54283,
"hexsha": "cb98c9b7e12ba30bbeb326a7449ade50a3d26d64",
"max_line_length": 833,
"avg_line_length": 43.4264,
"alphanum_fraction": 0.638155592
}
|
# Notebook from Ccx55/n2v
Path: TEM_training.ipynb
# Noise2Void - 2D Example for SEM data_____no_output_____
<code>
# We import all our dependencies.
from n2v.models import N2VConfig, N2V
import numpy as np
from csbdeep.utils import plot_history
from n2v.utils.n2v_utils import manipulate_val_data
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator
from matplotlib import pyplot as plt
import urllib
import os
import zipfileUsing TensorFlow backend.
</code>
# Download Example Data
Data by Reza Shahidi and Gaspar Jekely, Living Systems Institute, Exeter<br>
Thanks!
_____no_output_____# Training Data Preparation_____no_output_____For training we load __one__ set of low-SNR images and use the <code>N2V_DataGenerator</code> to extract training <code>X</code> and validation <code>X_val</code> patches._____no_output_____
<code>
# We create our DataGenerator-object.
# It will help us load data and extract patches for training and validation.
datagen = N2V_DataGenerator()_____no_output_____# We load all the '.tif' files from the 'data' directory.
# If you want to load other types of files see the RGB example.
# The function will return a list of images (numpy arrays).
imgs = datagen.load_imgs_from_directory(directory = "C:/Users/ccx55/OneDrive/Documents/GitHub/Phd/Single-nanoparticle-catalysis/CO_OX_TEM/Data/200420/all_data/")
# Let's look at the shape of the images.
print(imgs[0].shape,imgs[1].shape)
# The function automatically added two extra dimensions to the images:
# One at the beginning, is used to hold a potential stack of images such as a movie.
# One at the end, represents channels.(1, 2048, 2048, 1) (1, 2048, 2048, 1)
# Lets' look at the images.
# We have to remove the added extra dimensions to display them as 2D images.
plt.imshow(imgs[0][0,...,0], cmap='magma')
plt.show()
plt.imshow(imgs[1][0,...,0], cmap='magma')
plt.show()_____no_output_____# We will use the first image to extract training patches and store them in 'X'
patch_shape = (96,96)
X = datagen.generate_patches_from_list(imgs[:1], shape=patch_shape)
# We will use the second image to extract validation patches.
X_val = datagen.generate_patches_from_list(imgs[1:], shape=patch_shape)
# Patches are created so they do not overlap.
# (Note: this is not the case if you specify a number of patches. See the docstring for details!)
# Non-overlapping patches would also allow us to split them into a training and validation set
# per image. This might be an interesting alternative to the split we performed above.Generated patches: (3528, 96, 96, 1)
# Just in case you don't know how to access the docstring of a method:
datagen.generate_patches_from_list?_____no_output_____# Let's look at one of our training and validation patches.
plt.figure(figsize=(14,7))
plt.subplot(1,2,1)
plt.imshow(X[0,...,0], cmap='magma')
plt.title('Training Patch');
plt.subplot(1,2,2)
plt.imshow(X_val[0,...,0], cmap='magma')
plt.title('Validation Patch');_____no_output_____
</code>
# Configure_____no_output_____Noise2Void comes with a special config-object, where we store network-architecture and training specific parameters. See the docstring of the <code>N2VConfig</code> constructor for a description of all parameters.
When creating the config-object, we provide the training data <code>X</code>. From <code>X</code> we extract <code>mean</code> and <code>std</code> that will be used to normalize all data before it is processed by the network. We also extract the dimensionality and number of channels from <code>X</code>.
Compared to supervised training (i.e. traditional CARE), we recommend using N2V with an increased <code>train_batch_size</code> and with <code>batch_norm</code> enabled.
To keep the network from learning the identity we have to manipulate the input pixels during training. For this we have the parameter <code>n2v_manipulator</code> with default value <code>'uniform_withCP'</code>. Most pixel manipulators will compute the replacement value based on a neighborhood. With <code>n2v_neighborhood_radius</code> we can control its size.
Other pixel manipulators:
* normal_withoutCP: samples the neighborhood according to a normal gaussian distribution, but without the center pixel
* normal_additive: adds a random number to the original pixel value. The random number is sampled from a gaussian distribution with zero-mean and sigma = <code>n2v_neighborhood_radius</code>
* normal_fitted: uses a random value from a gaussian normal distribution with mean equal to the mean of the neighborhood and standard deviation equal to the standard deviation of the neighborhood.
* identity: performs no pixel manipulation
For faster training multiple pixels per input patch can be manipulated. In our experiments we manipulated about 0.198% of the input pixels per patch. For a patch size of 64 by 64 pixels this corresponds to about 8 pixels. This fraction can be tuned via <code>n2v_perc_pix</code>.
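As a quick sanity check of that number (plain arithmetic, nothing model-specific):

<code>
perc_pix = 0.198              # percent of pixels manipulated per patch
patch_pixels = 64 * 64        # pixels in a 64 x 64 patch
print(perc_pix / 100 * patch_pixels)   # about 8.1 manipulated pixels per patch
</code>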
For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size <code>n2v_patch_shape</code> are extracted during training. Default patch shape is set to (64, 64).
In the past we experienced bleedthrough artifacts between channels if training was terminated too early. To counter bleedthrough we added the `single_net_per_channel` option, which is turned on by default. Behind the scenes, a single U-Net for each channel is created and trained independently, thereby removing the possibility of bleedthrough. <br/>
__Note:__ Essentially the network gets multiplied by the number of channels, which increases the memory requirements. If you run out of GPU memory, you can always split the channels manually and train a network for each channel one after another._____no_output_____<font color='red'>Warning:</font> to make this example notebook execute faster, we have set <code>train_epochs</code> to only 10. <br>For better results we suggest 100 to 200 <code>train_epochs</code>._____no_output_____
<code>
# train_steps_per_epoch is set to (number of training patches)/(batch size), like this each training patch
# is shown once per epoch.
config = N2VConfig(X, unet_kern_size=3,
train_steps_per_epoch=int(X.shape[0]/128), train_epochs=10, train_loss='mse', batch_norm=True,
train_batch_size=128, n2v_perc_pix=0.198, n2v_patch_shape=(64, 64),
n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5)
# Let's look at the parameters stored in the config-object.
vars(config)_____no_output_____# a name used to identify the model
model_name = 'n2v_2D'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
model = N2V(config, model_name, basedir=basedir)C:\Users\ccx55\OneDrive\Documents\GitHub\n2v\n2v\models\n2v_standard.py:430: UserWarning: output path for model already exists, files may be overwritten: C:\Users\ccx55\OneDrive\Documents\GitHub\n2v\models\n2v_2D
warnings.warn('output path for model already exists, files may be overwritten: %s' % str(self.logdir.resolve()))
</code>
# Training
Training the model will likely take some time. We recommend monitoring the progress with TensorBoard, which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can be helpful for recognizing problems early on.
You can start TensorBoard in a terminal from the current working directory with <code>tensorboard --logdir=.</code> and then connect to http://localhost:6006/ with your browser._____no_output_____
<code>
# We are ready to start training now.
history = model.train(X, X_val)Preparing validation data: 28%|██▊ | 153/544 [00:00<00:00, 1523.55it/s]
</code>
### After training, lets plot training and validation loss._____no_output_____
<code>
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss']);['loss', 'lr', 'n2v_abs', 'n2v_mse', 'val_loss', 'val_n2v_abs', 'val_n2v_mse']
</code>
## Export Model in BioImage ModelZoo Format
See https://imagej.net/N2V#Prediction for details._____no_output_____
<code>
model.export_TF(name='Noise2Void - 2D SEM Example',
description='This is the 2D Noise2Void example trained on SEM data in python.',
authors=["Tim-Oliver Buchholz", "Alexander Krull", "Florian Jug"],
test_img=X_val[0,...,0], axes='YX',
patch_shape=patch_shape)INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: /tmp/tmp2p3nbvb3/model/saved_model.pb
Model exported in BioImage ModelZoo format:
/home/tbuchhol/Gitrepos/n2v/examples/2D/denoising2D_SEM/models/n2v_2D/export.bioimage.io.zip
</code>
|
{
"repository": "Ccx55/n2v",
"path": "TEM_training.ipynb",
"matched_keywords": [
"ImageJ"
],
"stars": null,
"size": 381510,
"hexsha": "cb9a6d7e36019ab3ae16857f8787f0f2a9fdb2b8",
"max_line_length": 103592,
"avg_line_length": 606.5341812401,
"alphanum_fraction": 0.9258866085
}
|
# Notebook from jbhagan/jwst_validation_notebooks
Path: jwst_validation_notebooks/resample/jwst_resample_miri_test/jwst_resample_miri_testing.ipynb
<a id="title_ID"></a>
# JWST Pipeline Validation Testing Notebook: Calwebb_Image3, Resample step
<span style="color:red"> **Instruments Affected**</span>: FGS, MIRI, NIRCam, NIRISS, NIRSpec
Tested on MIRI Simulated data
### Table of Contents
<div style="text-align: left">
<br> [Introduction](#intro_ID) <br> [Run JWST Pipelines](#pipeline_ID) <br> [Imports](#imports_ID) <br> [Create an association table for your cal files and run them through calwebb_image3](#runpipeline_ID) <br> [Find Stars in Image and Determine their Coordinates](#runscript_ID) <br> [Compare RA and Dec to expected Values](#residual_ID) <br> [About This Notebook](#about_ID) <br>
</div>_____no_output_____<a id="intro_ID"></a>
# Introduction
This test is designed to test the resample step in the calwebb_image3 pipeline. At the end of the calwebb_image3 pipeline, the set of files defined in an association table will be distortion corrected and combined. Resample is the step that applies the distortion correction using the drizzling algorithm (as defined in the DrizzlePac handbook) and combines the listed files. For more information on the pipeline step visit the links below.
Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/resample/main.html
Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/resample
The data for this test were created with the MIRI Data Simulator, and the documentation for that code can be found here: http://miri.ster.kuleuven.be/bin/view/Public/MIRISim_Public
### Calibration WG Requested Algorithm:
A short description and link to the page: https://outerspace.stsci.edu/display/JWSTCC/Vanilla+Image+Combination
### Defining Terms
Definitions of terms and acronyms.
JWST: James Webb Space Telescope
MIRI: Mid-Infrared Instrument
MIRISim: MIRI Data Simulator
### Description of test
This test is performed by creating a set of simulated data with multiple point sources located at specified coordinates. The simulator puts in the expected distortion, so the initial output data comes out of the simulator in distorted coordinates. When this data is then run through calwebb_detector1, calwebb_image2 and calwebb_image3, the combined, undistorted image should have the point sources registered at the expected locations. In flight, this test can be repeated with known stars that should be found at their expected coordinates.
### Create the data for testing
The set of data used in this particular test were created with the MIRI Data Simulator (MIRISim). Referring to the MIRISim link, you can see how to set up and run the simulator to re-create the input files if you wish. The data was run with a scene.ini file that specified what the scene should look like, with coordinates for the stars given in units of arcsecond offsets from the center of the field of view. The scene.ini file, as well as the setup files simulation.ini and simulator.ini, are needed to run the simulation.
Once in the mirisim conda environment, the simulation is run with the command line:
> mirisim simulation.ini
The simulator created four files, two exposures each at two different dither positions, using the specified filter. Make sure the WCSAXES header keyword in the SCI extension is set to 2 and not 4. If it is set to 4, change it to 2.
[Top of Page](#title_ID)_____no_output_____<a id="pipeline_ID"></a>
## Run JWST Pipelines
The four files were then run individually through the calwebb_detector1 and calwebb_image2 pipelines. When running the calwebb_detector1 pipeline, increase the threshold for a detection in the jump step from 4 sigma to 10 sigma to avoid a current issue where the jump detection step flags a large percentage of pixels as jumps. This can be done on the command line. (commands to be typed start with $)
The pipelines can be run on the command line with the following commands or put into a script while using the pipeline conda environment.
$ strun calwebb_detector1.cfg filename --steps.jump.rejection_threshold 10.0
The output of the calwebb_detector1 pipeline is a set of four *rate.fits files which will then be run through the calwebb_image2 pipeline.
$ strun calwebb_image2.cfg filename
The output of the calwebb_image2 pipeline was then a set of four *cal.fits files. An association table was created that included these four files as input, and then the files and the association table were run through the calwebb_image3 pipeline.
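For reference, the same detector1/image2 processing and the association table can also be scripted in Python rather than on the command line. The sketch below is not executed in this notebook: the file names are placeholders, and it assumes the documented `Pipeline.call` parameter-passing interface and the `asn_from_list` helper imported later in this notebook.

<code>
from jwst.pipeline import Detector1Pipeline, Image2Pipeline
from jwst.associations import asn_from_list
from jwst.associations.lib.rules_level3_base import DMS_Level3_Base

raw_files = ['det_image_exp1.fits', 'det_image_exp2.fits']   # placeholder names

cal_files = []
for raw in raw_files:
    # calwebb_detector1 with the jump rejection threshold raised from 4 to 10 sigma
    Detector1Pipeline.call(raw, steps={'jump': {'rejection_threshold': 10.0}},
                           save_results=True)                 # writes *_rate.fits
    rate = raw.replace('.fits', '_rate.fits')
    # calwebb_image2 on the resulting rate file
    Image2Pipeline.call(rate, save_results=True)              # writes *_cal.fits
    cal_files.append(rate.replace('_rate.fits', '_cal.fits'))

# Build a Level-3 association from the cal files for calwebb_image3
asn = asn_from_list.asn_from_list(cal_files, rule=DMS_Level3_Base,
                                  product_name='starfield_74_combined')
name, serialized = asn.dump()
with open('starfield_74_asnfile.json', 'w') as fh:
    fh.write(serialized)
</code>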
The cal files are stored in artifactory, and this notebook is meant to pull those files for the test of resample. Step through the cells of this notebook to run calwebb_image3 and then check the alignment.
[Top of Page](#title_ID)_____no_output_____
<a id="imports_ID"></a>
# Imports
The following packages will need to be imported for the scripts to work.
* astropy.io for opening files
* astropy.stats for sigma clipping routine
* astropy.visualization for image plotting
* ci_watson.artifactory_helpers to read in data from artifactory
* jwst.datamodels for opening files as a JWST Datamodel
* jwst.pipeline to run the pipeline step/module
* jwst.associations to create association table
* numpy for calculations
* matplotlib.pyplot.plt to generate plot
* os for path information
* photutils for star finding and aperture photometry
* regtest to retrieve data from artifactory needed to run notebook
[Top of Page](#title_ID)_____no_output_____
<code>
from astropy.io import ascii, fits
from astropy.stats import sigma_clipped_stats
from astropy.table import Column
from astropy.visualization import SqrtStretch
from astropy.visualization.mpl_normalize import ImageNormalize
from ci_watson.artifactory_helpers import get_bigdata
from jwst.datamodels import DrizProductModel, ImageModel
from jwst.pipeline import Image3Pipeline
from jwst import associations
from jwst.associations.lib.rules_level3_base import DMS_Level3_Base
from jwst.associations import asn_from_list
import matplotlib.pyplot as plt
import numpy as np
import os
from photutils import CircularAperture, DAOStarFinder, CircularAnnulus, aperture_photometry
from jwst.regtest.regtestdata import RegtestData_____no_output_____
</code>
<a id="runpipeline_ID"></a>
# Open an association table for your cal files and run them through calwebb_image3
Load the association table to use the .cal files that were output from calwebb_image2. That will be the input for calwebb_image3 that uses the resample step to combine each of the individual images.
[Top of Page](#title_ID)_____no_output_____
<code>
# Use regtest infrastructure to access all input files associated with the association file
rtdata = RegtestData(inputs_root="jwst_validation_notebooks", env="validation_data")
rtdata.get_asn("resample/resample_miri_test/starfield_74_asnfile.json")
rtdata.input #this should be the list of files associated with the asn_____no_output_____# Run Calwebb_image3 on the association table
# set any specific parameters
# tweakreg parameters to allow data to run
fwhm=2.5 # Gaussian kernel FWHM of objects expected, default=2.5
minobj=5 # minimum number of objects needed to match positions for a good fit, default=15
snr= 250 # signal to noise threshold, default=5
sigma= 3 # clipping limit, in sigma units, used when performing fit, default=3
fit_geom='shift' # type of affine transformation to be considered when fitting catalogs, default='general'
use2dhist=False # boolean indicating whether to use 2D histogram to find initial offset, default=True
pipe3=Image3Pipeline()
pipe3.tweakreg.kernel_fwhm = fwhm
pipe3.tweakreg.snr_threshold = snr
pipe3.tweakreg.minobj = minobj
pipe3.tweakreg.sigma = sigma
pipe3.tweakreg.fitgeometry = fit_geom
pipe3.tweakreg.use2dhist = use2dhist
#pipe3.skymatch.skip = True # test to see if this affects the final output
pipe3.source_catalog.save_results = True
pipe3.save_results = True
# run Image3
im = pipe3.run(rtdata.input)
_____no_output_____
</code>
<a id="runscript_ID"></a>
# Find stars in image and determine their coordinates
The output of the pipeline command in the previous step (given our association table) is an i2d.fits file. This file is a JWST data model (historically a DrizProductModel; it is opened below as an ImageModel) and should be read in as such. It is this file that we will use for source finding and to determine whether the stars are found in the expected locations. The i2d file and the associated text file containing the input coordinates of the stars can be found in artifactory.
[Top of Page](#title_ID)_____no_output_____#### Read in combined i2d data file and list of coordinates_____no_output_____
<code>
# Read in the combined data file and list of coordinates
im = ImageModel('starfield_74_combined_i2d.fits')  # raises an exception if the file is not a valid model
coords = get_bigdata('jwst_validation_notebooks',
'validation_data',
'resample',
'resample_miri_test',
'radec_coords.txt')
# read in text file with RA and Dec input coordinates
RA_in, Dec_in = np.loadtxt( coords, dtype=str, unpack=True)
# put RA and Dec into floats
RA_sim = RA_in.astype(float)
Dec_sim = Dec_in.astype(float)
# pull out data portion of input file
data = im.data
# print stats on input image
mean, median, std = sigma_clipped_stats(data, sigma=200.0, maxiters=5) # default sigma=3
print(mean, median, std)
_____no_output_____
</code>
#### Run DAOStar finder to find sources in the image and examine the image and positions marked.
The block of code below will find the sources in the image, create apertures for each source found, and output the table of x, y coordinates along with the peak pixel value. It will also show a scaled version of the image and mark in blue the positions of sources found.
_____no_output_____
<code>
# Run DAOStarFinder to find sources in image
ap_radius = 4. # radius for aperture for centroiding and photometry
daofind = DAOStarFinder(fwhm=3.0, threshold=10.*std) # default threshold=5*std, fwhm=3
sources = daofind(data)
print(sources['xcentroid','ycentroid','peak'])
# create apertures for sources
positions = (sources['xcentroid'], sources['ycentroid'])
apertures = CircularAperture(positions, r=ap_radius)
# mark sources on image frame to see if the correct sources were found
norm = ImageNormalize(stretch=SqrtStretch())
# keep image stretch in mind for plotting. sky subtracted range ~ (-15, 10), single sample ~ (0, 20)
plt.imshow(data, cmap='Greys', origin='lower', vmin=-15,vmax=10, norm=norm)
apertures.plot(color='blue', lw=1.5, alpha=0.5)
plt.show()
_____no_output_____
</code>
#### Run photometry on apertures (with a specified annulus for background subtraction)
Set a specified annulus (inner and outer radii for the annulus).
Run photometry on aperture and annuli.
Subtract background values in annulus from aperture photometry.
Output should be a table of photometry values printed to the screen (full table has columns id, xcenter, ycenter, aperture_sum and the added columns annulus_median, aperture_bkg and aperture_sum_bkgsub). You can choose which columns you wish to see printed._____no_output_____
<code>
# set values for inner and outer annuli to collect background counts
inner_annulus = 10.
outer_annulus = 15.
# set up annulus for background
background_aper = CircularAnnulus(positions, r_in=inner_annulus, r_out=outer_annulus)
# perform photometry on apertures for targets and background annuli
phot_table = aperture_photometry(im.data, apertures)
# perform background subtraction with outlier rejection
bkg_median = []
bkg_mask = background_aper.to_mask(method='center')
for mask in bkg_mask:
    aper_data = mask.multiply(data)
aper_data = aper_data[mask.data > 0]
# perform sigma-clipped median
_, median_sigclip, _ = sigma_clipped_stats(aper_data)
bkg_median.append(median_sigclip)
bkg_median = np.array(bkg_median)
# do calculations on background regions found in annuli
# Get average background per pixel
phot_table['annulus_median'] = bkg_median
# Get total background in the science aperture (per pixel * area in aperture)
phot_table['aperture_bkg'] = bkg_median * apertures.area
# subtract background in aperture from flux in aperture
phot_table['aperture_sum_bkgsub'] = phot_table['aperture_sum'] - phot_table['aperture_bkg']
print(phot_table['aperture_sum','annulus_median','aperture_bkg','aperture_sum_bkgsub'])
_____no_output_____
</code>
#### Put x, y coordinates into RA and Dec using the wcs information from the files.
The output of the next block of code should be a table showing the x and y centroid positions as well as the associated RA and Dec values._____no_output_____
<code>
# using wcs info from images, put coordinates into RA, Dec
ra, dec = im.meta.wcs(sources['xcentroid'], sources['ycentroid'])
# add RA, Dec to sources table
ra_col = Column(name='RA', data=ra)
dec_col = Column(name='Dec', data=dec)
sources.add_column(ra_col)
sources.add_column(dec_col)
# print RA, Dec for each x, y position found
print(sources['xcentroid', 'ycentroid', 'RA', 'Dec'])
# add option to print out list of sources with flux values
outtable = 'sourcelist_phot_rate.txt'
sources.add_column(phot_table['aperture_sum'])
sources.add_column(phot_table['aperture_sum_bkgsub'])
_____no_output_____
</code>
#### Compare the RA and Dec positions used to create the simulated data to the values found in the output image.
Difference each set of RA and Dec coordinates in both the input list and the found coordinates, taking into account any angles close to 360/0 degrees. If the difference for both the RA and Dec are below a set tolerance, then the positions match. Take the matched positions and convert the differences from degrees to milli arcseconds, and output the RA and Dec positions as well as the differences. _____no_output_____
<code>
# Compare input RA, Dec to found RA, Dec
print(' RA found Dec found RA_Diff (mas) Dec_diff (mas) Bkg sub flux pass/fail')
for i in np.arange(0,len(RA_sim)):
for j in np.arange(0,len(ra)):
ra_diff = 180 - abs(abs(RA_sim[i] - ra[j])-180)
dec_diff = 180 - abs(abs(Dec_sim[i] - dec[j])-180)
if ra_diff < 1e-5 and dec_diff < 1e-5:
# put differences in milliarcseconds
ra_diff = ra_diff * 3600000
dec_diff = dec_diff * 3600000
if ra_diff < 30 and dec_diff < 30:
test = 'pass'
else:
test = 'fail'
print('{:15.6f} {:15.6f} {:15.6f} {:15.6f} {:15.6f} {}'.format(ra[j], dec[j], ra_diff, dec_diff,
phot_table['aperture_sum_bkgsub'][j], test))
_____no_output_____
</code>
<a id="residual_ID"></a>
# Compare output RA and Dec to expected values
The output RA and Dec coordinates should match the input RA and Dec coordinates to within 1/10 of a PSF FWHM (~0.03 arcsec for F770W).
Output RA_Diff and Dec_diff above should be on order of 30 or fewer milliarcseconds.
Check to see if your input flux is roughly what you expected based on the input data.
[Top of Page](#title_ID)_____no_output_____<a id="about_ID"></a>
## About this Notebook
**Author:** M. Cracraft, Research and Instrument Scientist II, INS/MIRI
<br>**Updated On:** 08/09/2019 to add in aperture photometry_____no_output_____An extra optional test that can be done is to plot the flux values against x or y values. Previous testing has shown a spatial dependence of the flux with y values, so a quick plot can show whether this problem is fixed or not. Prior to the resample step, there is no pattern, after the step, a pattern is clear. Just do this as a last check. If the scatter is not random, there may be a problem that needs to be checked. (Of course, this only works if you give an equivalent if not equal input count level to each input star.)_____no_output_____
<code>
plt.title('Surface brightness vs. y position on detector')
plt.ylim(35500,37500) # help weed out sources that were erroneously 'hits' (bad pixels, cosmic rays, etc)
plt.xlabel('y centroid position')
plt.ylabel('Surface brightness')
plt.plot(sources['ycentroid'], phot_table['aperture_sum_bkgsub'], marker='o',linestyle='') #ylim=(30000,40000))
plt.show()_____no_output_____
</code>
[Top of Page](#title_ID)
<img style="float: right;" src="./stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="stsci_pri_combo_mark_horizonal_white_bkgd" width="200px"/> _____no_output_____
|
{
"repository": "jbhagan/jwst_validation_notebooks",
"path": "jwst_validation_notebooks/resample/jwst_resample_miri_test/jwst_resample_miri_testing.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 21543,
"hexsha": "cb9a9892d0d3ec52f0789c3340d8415aa2ef5dea",
"max_line_length": 552,
"avg_line_length": 42.2411764706,
"alphanum_fraction": 0.6385832985
}
|
# Notebook from jagkagd/MIT-6.S191
Path: lab2/Part1_MNIST.ipynb
<table align="center">
<td align="center"><a target="_blank" href="http://introtodeeplearning.com">
<img src="http://introtodeeplearning.com/images/colab/mit.png" style="padding-bottom:5px;" />
Visit MIT Deep Learning</a></td>
<td align="center"><a target="_blank" href="https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab2/Part1_MNIST.ipynb">
<img src="http://introtodeeplearning.com/images/colab/colab.png?v2.0" style="padding-bottom:5px;" />Run in Google Colab</a></td>
<td align="center"><a target="_blank" href="https://github.com/aamini/introtodeeplearning/blob/master/lab2/Part1_MNIST.ipynb">
<img src="http://introtodeeplearning.com/images/colab/github.png" height="70px" style="padding-bottom:5px;" />View Source on GitHub</a></td>
</table>
# Copyright Information_____no_output_____
<code>
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#_____no_output_____
</code>
# Laboratory 2: Computer Vision
# Part 1: MNIST Digit Classification
In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.
First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab._____no_output_____
<code>
# Import Tensorflow 2.0
#%tensorflow_version 2.x
import tensorflow as tf
#!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0_____no_output_____
</code>
## 1.1 MNIST dataset
Let's download and load the dataset and display a few random samples from it:_____no_output_____
<code>
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 2s 0us/step
</code>
Our training set is made up of 28x28 grayscale images of handwritten digits.
Let's visualize what some of these images and their corresponding training labels look like._____no_output_____
<code>
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])_____no_output_____
</code>
## 1.2 Neural Network for Handwritten Digit Classification
We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:

_____no_output_____### Fully connected neural network architecture
To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model.
In this next block, you'll define the fully connected layers of this simple network._____no_output_____
<code>
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
# '''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()_____no_output_____
</code>
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.**_____no_output_____Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.
That defines our fully connected model! _____no_output_____
### Compile the model
Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:
* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.
* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.
* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.
We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).
You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model. _____no_output_____
<code>
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])_____no_output_____
</code>
### Train the model
We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training.
In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model
_____no_output_____
<code>
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)Epoch 1/5
938/938 [==============================] - 1s 1ms/step - loss: 0.3685 - accuracy: 0.8964
Epoch 2/5
938/938 [==============================] - 1s 1ms/step - loss: 0.2011 - accuracy: 0.9425
Epoch 3/5
938/938 [==============================] - 1s 1ms/step - loss: 0.1516 - accuracy: 0.9571
Epoch 4/5
938/938 [==============================] - 1s 1ms/step - loss: 0.1231 - accuracy: 0.9654
Epoch 5/5
938/938 [==============================] - 1s 1ms/step - loss: 0.1038 - accuracy: 0.9709
</code>
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data._____no_output_____### Evaluate accuracy on the test dataset
Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array.
Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!_____no_output_____
<code>
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)313/313 [==============================] - 0s 1ms/step - loss: 0.1053 - accuracy: 0.9692
Test accuracy: 0.9692000150680542
</code>
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data.
What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...
_____no_output_____## 1.3 Convolutional Neural Network (CNN) for handwritten digit classification_____no_output_____As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:
_____no_output_____### Define the CNN model
We'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model._____no_output_____
<code>
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3, 3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed a probability
# output
# '''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 900) 0
_________________________________________________________________
dense_7 (Dense) (None, 128) 115328
_________________________________________________________________
dense_8 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
</code>
### Train and test the CNN model
Now, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:_____no_output_____
<code>
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO_____no_output_____
</code>
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API._____no_output_____
<code>
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)Epoch 1/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0242 - accuracy: 0.9930
Epoch 2/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0170 - accuracy: 0.9948
Epoch 3/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0136 - accuracy: 0.9956
Epoch 4/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0115 - accuracy: 0.9963
Epoch 5/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0099 - accuracy: 0.9967
</code>
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:_____no_output_____
<code>
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)313/313 [==============================] - 1s 2ms/step - loss: 0.0528 - accuracy: 0.9878
Test accuracy: 0.9878000020980835
</code>
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? _____no_output_____### Make predictions with the CNN model
With the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
_____no_output_____
<code>
predictions = cnn_model.predict(test_images)_____no_output_____
</code>
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:_____no_output_____
<code>
predictions[0]_____no_output_____
</code>
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits.
Let's look at the digit that has the highest confidence for the first image in the test dataset:_____no_output_____
<code>
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # TODO
print(prediction)7
</code>
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:_____no_output_____
<code>
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)Label of this digit is: 7
</code>
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:_____no_output_____
<code>
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)_____no_output_____
</code>
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!_____no_output_____
<code>
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
_____no_output_____
</code>
## 1.4 Training the model 2.0
Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, so we have less control over the training of the model; having that finer control could be useful in other contexts.
As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.
We'll use this framework to train our `cnn_model` using stochastic gradient descent._____no_output_____
<code>
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images) # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
_____no_output_____
</code>
## 1.5 Conclusion
In this part of the lab, you had the chance to play with different MNIST classifiers with different architectures (fully-connected layers only, CNN), and experiment with how different hyperparameters affect accuracy (learning rate, etc.). The next part of the lab explores another application of CNNs, facial detection, and some drawbacks of AI systems in real world applications, like issues of bias. _____no_output_____
# Notebook from FGDBTKD/DeepLearningProject
Path: Deep_Learning_Project.ipynb
<h1 align='center' style="margin-bottom: 0px"> An end to end implementation of a Machine Learning pipeline </h1>
<h4 align='center' style="margin-top: 0px"> SPANDAN MADAN</h4>
<h4 align='center' style="margin-top: 0px"> Visual Computing Group, Harvard University</h4>
<h4 align='center' style="margin-top: 0px"> Computer Science and Artificial Intelligence Laboratory, MIT</h4>_____no_output_____<h2 align='center' style="margin-top: 0px"><a href='https://github.com/Spandan-Madan/DeepLearningProject'>Link to Github Repo</a></h2>_____no_output_____# Section 1. Introduction
### Background
In the fall of 2016, I was a Teaching Fellow (Harvard's version of TA) for the graduate class on "Advanced Topics in Data Science (CS209/109)" at Harvard University. I was in-charge of designing the class project given to the students, and this tutorial has been built on top of the project I designed for the class.
### Why write yet another Tutorial on Machine Learning and Deep Learning?
As a researcher on Computer Vision, I come across new blogs and tutorials on ML (Machine Learning) every day. However, most of them just focus on introducing the syntax and the terminology relevant to the field. For example - a 15 minute tutorial on Tensorflow using the MNIST dataset, or a 10 minute intro to Deep Learning in Keras on Imagenet.
While people are able to copy-paste and run the code in these tutorials and feel that working in ML is really not that hard, it doesn't help them at all in using ML for their own purposes. For example, they never introduce you to how you can run the same algorithm on your own dataset. Or, how do you get the dataset if you want to solve a problem? Or, which algorithms do you use - conventional ML, or Deep Learning? How do you evaluate your model's performance? How do you write your own model, as opposed to choosing a ready-made architecture? All these form fundamental steps in any Machine Learning pipeline, and it is these steps that take most of our time as ML practitioners.
This tutorial breaks down the whole pipeline, and leads the reader through it step by step in the hope of empowering you to actually use ML, and not just feel that it was not too hard. Needless to say, this will take much longer than 15-30 minutes. I believe a weekend would be a good enough estimate.
### About the Author
I am <a href="http://spandanmadan.com/">Spandan Madan</a>, a graduate student at Harvard University working on Computer Vision. My research work is supervised collaboratively by Professor Hanspeter Pfister at Harvard, and Professor Aude Oliva at MIT. My current research focusses on using Computer Vision and Natural Language Techniques in tandem to build systems capable of reasoning using text and visual elements simultaneusly._____no_output_____# Section 2. Project Outline : Multi-Modal Genre Classification for Movies _____no_output_____## Wow, that title sounds like a handful, right? Let's break it down step by step.
### Q.1. What do we mean by Classification?
In machine learning, the task of classification means to use the available data to learn a <i>function</i> which can assign a category to a data point. For example, assign a genre to a movie, like "Romantic Comedy", "Action", "Thriller". Another example could be automatically assigning a category to news articles, like "Sports" and "Politics".
### More Formally
#### Given:
- A data point $x_i$
- A set of categories $y_1,y_2...y_n$ that $x_i$ can belong to. <br>
#### Task :
Predict the correct category $y_k$ for a new data point $x_k$ not present in the given dataset.
#### Problem :
We don't know how the $x$ and $y$ are related mathematically.
#### Assumption :
We assume there exists a function $f$ relating $x$ and $y$ i.e. $f(x_i)=y_i$
#### Approach :
Since $f$ is not known, we learn a function $g$, which approximates $f$.
#### Important consideration :
- If $f(x_i)=g(x_i)=y_i$ for all $x_i$, then the two functions $f$ and $g$ are exactly equal. Needless to say, this won't realistically ever happen, and we'll only be able to approximate the true function $f$ using $g$. This means, sometimes the prediction $g(x_i)$ will not be correct. And essentially, our whole goal is to find a $g$ which makes a really low number of such errors. That's basically all that we're trying to do.
- For the sake of completeness, I should mention that this is a specific kind of learning problem which we call "Supervised Learning". Also, the idea that $g$ approximates $f$ well for data not present in our dataset is called "Generalization". It is absolutely paramount that our model generalizes, or else all our claims will only be true about data we already have and our predictions will not be correct.
- We will look into generalization in a bit more detail a little later in the tutorial.
- Finally, There are several other kinds, but supervised learning is the most popular and well studied kind._____no_output_____### Q.2. What's Multi-Modal Classification then?
In the machine learning community, the term Multi-Modal is used to refer to multiple <i>kinds</i> of data. For example, consider a YouTube video. It can be thought to contain 3 different modalities -
- The video frames (visual modality)
- The audio clip of what's being spoken (audio modality)
- Some videos also come with the transcription of the words spoken in the form of subtitles (textual modality)
Consider, that I'm interested in classifying a song on YouTube as pop or rock. You can use any of the above 3 modalities to predict the genre - The video, the song itself, or the lyrics. But, needless to say, you can predict it much better if you could use all three simultaneously. This is what we mean by multi-modal classification. _____no_output_____# For this project, we will be using visual and textual data to classify movie genres._____no_output_____# Project Outline
- **Scraping a dataset** : The first step is to build a rich data set. We will collect textual and visual data for each movie.
- **Data pre-processing**
- **Non-deep Machine Learning models : Probabilistic and Max-Margin Classifiers.**
- **Intuitive theory behind Deep Learning**
- **Deep Models for Visual Data**
- **Deep Models for Text**
- **Potential Extensions**
- **Food for Thought**
_____no_output_____# Section 3. Building your very own DataSet.
_____no_output_____For any machine learning algorithm to work, it is imperative that we collect data which is "representative". Now, let's take a moment to discuss what the word representative mean.
### What data is good data? OR What do you mean by data being "representative"?
Let's look at this from first principles. Mathematically, the premise of machine learning (to be precise, the strand of machine learning we'll be working with here) is that given an input variable X, and an output variable y, **IF** there is a function such that g(X)=y, then if g is unknown, we can "learn" a function f which approximates g. At the very heart, it's not at all different from what you may have earlier studied as "curve fitting". For example, if you're trying to predict someone's movie preferences, then X can be information about the person's gender, age, nationality and so on, while y can be the genre they most like to watch!
Let's do a thought experiment. Consider the same example - I'm trying to predict people's movie preferences. I walk into a classroom today, and collect information about some students and their movie preferences. Now, I use that data to build a model. How well do you think I can predict my father's movie preferences? The answer is - probably not very well. Why? Intuitively, there was probably no one in the classroom who was my father's age. My model can tell me that as people go from age 18 to 30, they have a higher preference for documentaries over superhero movies. But does this trend continue at 55? Probably, they may start liking family dramas more. Probably they don't. In a nutshell, we cannot say with certainty, as our data tells us nothing about it. So, if the task was to make predictions about ANYONE's movie preferences, then the data collected from just undergraduates is NOT representative.
Now, let's see why this makes sense Mathematically. Look at the graph below._____no_output_____<img src="files/contour.png">
<center>Fig.1: Plot of a function we are trying to approximate(<a href="http://www.jzy3d.org/js/slider/images/ContourPlotsDemo.png">source</a>)</center>_____no_output_____If we consider that the variable plotted on the vertical axis is $y$, and the values of the 2 variables on the horizontal axes make the input vector $X$, then, our hope is that we are able to find a function $f$ which can approximate the function plotted here. If all the data I collect is such that $x_1$ belongs to (80,100) and $x_2$ belongs to (80,100), the learned function will only be able to learn the "yellow-green dipping bellow" part of the function. Our function will never be able to predict the behavior in the "red" regions of the true function. So, in order to be able to learn a good function, we need data sampled from a diverse set of values of $x_1$ and x2. That would be representative data to learn this contour._____no_output_____Therefore, we want to collect data which is representative of all possible movies that we want to make predictions about. Or else (which is often the case), we need to be aware of the limitations of the model we have trained, and the predictions we can make with confidence. The easiest way to do this is to only make predictions about the domain of data we collected the training data from. For example, in our case, let us start by assuming that our model will predict genres for only English movies. Now, the task is to collect data about a diverse collection of movies.
So how do we get this data then? Neither google, nor any university has released such a dataset. We want to collect visual and textual data about these movies. The simple answer is to scrape it from the internet to build our own dataset. For the purpose of this project, we will use movie posters as our visual data, and movie plots as textual data. Using these, we will build a model that can predict movie genres! _____no_output_____# We will be scraping data from 2 different movie sources - IMDB and TMDB_____no_output_____<h3>IMDB:http://www.imdb.com/</h3>
For those unaware, IMDB is the primary source of information about movies on the internet. It is immensely rich with posters, reviews, synopsis, ratings and many other information on every movie. We will use this as our primary data source.
<h3>TMDB:https://www.themoviedb.org/</h3>
TMDB, or The Movie DataBase, is an open source version of IMDB, with a free to use API that can be used to collect information. You do need an API key, but it can be obtained for free by just making a request after making a free account._____no_output_____#### Note -
IMDB gives some information for free through the API, but doesn't release other information about movies. Here, we will keep it legal and only use information given to us for free and legally. However, scraping does reside on the moral fence, so to say. People often scrape data which isn't exactly publicly available for use from websites. _____no_output_____
<code>
import torchvision
import urllib2
import requests
import json
import imdb
import time
import itertools
import wget
import os
import tmdbsimple as tmdb
import numpy as np
import random
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import pickle/anaconda3/envs/deeplearningproject/lib/python2.7/site-packages/scipy/special/__init__.py:640: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._ufuncs import *
(the same "numpy.dtype size changed" / "numpy.ufunc size changed" RuntimeWarning is repeated for many other scipy and pandas submodules; the remaining warning lines are omitted)
</code>
# Here is a broad outline of technical steps to be done for data collection
* Sign up for TMDB (themoviedatabase.org), and set up API to scrape movie posters for above movies.
* Set up and work with TMDb to get movie information from their database
* Do the same for IMDb
* Compare the entries of IMDb and TMDb for a movie
* Get a listing and information of a few movies
* Think and ponder over the potential challenges that may come our way, and think about interesting questions we can answer given the API's we have in our hands.
* Get data from the TMDb
Let's go over each one of these one by one._____no_output_____## Signing up for TMDB and getting set up for getting movie metadata.
* Step 1. Head over to [tmdb.org] (https://www.themoviedb.org/?language=en) and create a new account there by signing up.
* Step 2. Click on your account icon on the top right, then from drop down menu select "Settings".
* Step 3. On the settings page, you will see the option "API" on the left pane. Click on that.
* Step 4. Apply for a new developer key. Fill out the form as required. The fields "Application Name" and "Application URL" are not important. Fill anything there.
* Step 5. It should generate a new API key for you and you should also receive a mail.
Now that you have the API key for TMDB, you can query using TMDB. Remember, it allows only 40 queries per 10 seconds.
An easy way to respect this is to just have a call to <i>time.sleep(1)</i> after each iteration. This is also being very nice to the server.
If you want to try and maximize your throughput you can embed every TMDB request in a nested try except block. If the first try fails, the second try first uses python's sleep function to give it a little rest, and then try again to make a request. Something like this -
~~~~
try:
search.movie(query=movie) #An API request
except:
try:
time.sleep(10) #sleep for a bit, to give API requests a rest.
        search.movie(query=movie_name) #Make second API request
except:
print "Failed second attempt too, check if there's any error in request"
~~~~_____no_output_____## Using TMDB using the obtained API Key to get movie information_____no_output_____I have made these functions which make things easy. Basically, I'm making use of a library called tmdbsimple which makes TMDB using even easier. This library was installed at the time of setup.
However, if you want to avoid the library, it is also easy enough to load the API output directly into a dictionary like this without using tmdbsimple:
~~~
url = 'https://api.themoviedb.org/3/movie/1581?api_key=' + api_key
data = urllib2.urlopen(url).read()
# create dictionary from JSON
dataDict = json.loads(data)
~~~_____no_output_____
<code>
# set here the path where you want the scraped folders to be saved!
poster_folder='posters_final/'
if poster_folder.split('/')[0] in os.listdir('./'):
print('Folder already exists')
else:
os.mkdir('./'+poster_folder)Folder already exists
poster_folder_____no_output_____# For the purpose of this example, i will be working with the 1999 Sci-Fi movie - "The Matrix"!
api_key = 'a237bfff7e08d0e6902c623978183be0' #Enter your own API key here to run the code below.
# Generate your own API key as explained above :)
tmdb.API_KEY = api_key #This sets the API key setting for the tmdb object
search = tmdb.Search() #this instantiates a tmdb "search" object which allows your to search for the movie
import os.path
# These functions take in a string movie name i.e. like "The Matrix" or "Interstellar"
# What they return is pretty much clear in the name - Poster, ID , Info or genre of the Movie!
def grab_poster_tmdb(movie):
response = search.movie(query=movie)
id=response['results'][0]['id']
movie = tmdb.Movies(id)
posterp=movie.info()['poster_path']
title=movie.info()['original_title']
url='image.tmdb.org/t/p/original'+posterp
title='_'.join(title.split(' '))
strcmd='wget -O '+poster_folder+title+'.jpg '+url
os.system(strcmd)
def get_movie_id_tmdb(movie):
response = search.movie(query=movie)
movie_id=response['results'][0]['id']
return movie_id
def get_movie_info_tmdb(movie):
response = search.movie(query=movie)
id=response['results'][0]['id']
movie = tmdb.Movies(id)
info=movie.info()
return info
def get_movie_genres_tmdb(movie):
response = search.movie(query=movie)
id=response['results'][0]['id']
movie = tmdb.Movies(id)
genres=movie.info()['genres']
return genres_____no_output_____
</code>
While the above functions have been made to make it easy to get genres, posters and IDs, all the information that can be accessed can be seen by calling the function get_movie_info_tmdb() as shown below_____no_output_____
<code>
print get_movie_genres_tmdb("The Matrix")[{u'id': 28, u'name': u'Action'}, {u'id': 878, u'name': u'Science Fiction'}]
info=get_movie_info_tmdb("The Matrix")
print "All the Movie information from TMDB gets stored in a dictionary with the following keys for easy access -"
info.keys()All the Movie information from TMDB gets stored in a dictionary with the following keys for easy access -
</code>
So, to get the tagline of the movie we can use the above dictionary key - _____no_output_____
<code>
info=get_movie_info_tmdb("The Matrix")
print info['tagline']Welcome to the Real World.
</code>
## Getting movie information from IMDB_____no_output_____Now that we know how to get information from TMDB, here's how we can get information about the same movie from IMDB. This makes it possible for us to combine more information, and get a richer dataset. I urge you to try and see what dataset you can make, and go above and beyond the basic things I've done in this tutorial. Due to the differences between the two datasets, you will have to do some cleaning, however both of these datasets are extremely clean and it will be minimal._____no_output_____
<code>
# Create the IMDB object that will be used to access the IMDb's database.
imbd_object = imdb.IMDb() # by default access the web.
# Search for a movie (get a list of Movie objects).
results = imbd_object.search_movie('The Matrix')
# As this returns a list of all movies containing the word "The Matrix", we pick the first element
movie = results[0]
imbd_object.update(movie)
print "All the information we can get about this movie from IMDB-"
movie.keys()All the information we can get about this movie from IMDB-
print "The genres associated with the movie are - ",movie['genres']The genres associated with the movie are - [u'Action', u'Sci-Fi']
</code>
## A small comparison of IMDB and TMDB_____no_output_____Now that we have both systems running, let's do a very short comparison for the same movie?_____no_output_____
<code>
print "The genres for The Matrix pulled from IMDB are -",movie['genres']
print "The genres for The Matrix pulled from TMDB are -",get_movie_genres_tmdb("The Matrix")The genres for The Matrix pulled from IMDB are - [u'Action', u'Sci-Fi']
The genres for The Matrix pulled from TMDB are - [{u'id': 28, u'name': u'Action'}, {u'id': 878, u'name': u'Science Fiction'}]
</code>
As we can see, both the systems are correct, but the way they package information is different. TMDB calls it "Science Fiction" and has an ID for every genre. While IMDB calls it "Sci-Fi". Thus, it is important to keep track of these things when making use of both the datasets simultaneously._____no_output_____Now that we know how to scrape information for one movie, let's take a bigger step towards scraping multiple movies?_____no_output_____## Working with multiple movies : Obtaining Top 20 movies from TMDB_____no_output_____We first instantiate an object that inherits from class Movies from TMDB. Then We use the **popular()** class method (i.e. function) to get top movies. To get more than one page of results, the optional page argument lets us see movies from any specified page number._____no_output_____
<code>
all_movies=tmdb.Movies()
top_movies=all_movies.popular()
# This is a dictionary, and to access results we use the key 'results' which returns info on 20 movies
print(len(top_movies['results']))
top20_movs=top_movies['results']20
</code>
Let's look at one of these movies. It's in the same format as the information we pulled above for "The Matrix", as you can see below. It's a dictionary which can be queried for specific information on that movie_____no_output_____
<code>
first_movie=top20_movs[0]
print "Here is all the information you can get on this movie - "
print first_movie
print "\n\nThe title of the first movie is - ", first_movie['title']Here is all the information you can get on this movie -
{u'poster_path': u'/3IGbjc5ZC5yxim5W0sFING2kdcz.jpg', u'title': u'Solo: A Star Wars Story', u'overview': u'Through a series of daring escapades deep within a dark and dangerous criminal underworld, Han Solo meets his mighty future copilot Chewbacca and encounters the notorious gambler Lando Calrissian.', u'release_date': u'2018-05-15', u'popularity': 214.308, u'original_title': u'Solo: A Star Wars Story', u'backdrop_path': u'/96B1qMN9RxrAFu6uikwFhQ6N6J9.jpg', u'vote_count': 1804, u'video': False, u'adult': False, u'vote_average': 6.7, u'genre_ids': [28, 12, 878], u'id': 348350, u'original_language': u'en'}
The title of the first movie is - Solo: A Star Wars Story
</code>
Let's print out top 5 movie's titles! _____no_output_____
<code>
for i in range(len(top20_movs)):
mov=top20_movs[i]
title=mov['title']
print title
if i==4:
breakSolo: A Star Wars Story
The Nun
Avengers: Infinity War
The Predator
Jurassic World: Fallen Kingdom
</code>
### Yes, I know. I'm a little upset too seeing Beauty and the Beast above Logan in the list!_____no_output_____Moving on, we can get their genres the same way._____no_output_____
<code>
for i in range(len(top20_movs)):
mov=top20_movs[i]
genres=mov['genre_ids']
print genres
if i==4:
break[28, 12, 878]
[27, 9648, 53]
[12, 878, 28]
[27, 878, 28, 35]
[28, 12, 878]
</code>
So, TMDB doesn't want to make your job as easy as you thought. Why these random numbers? Want to see their genre names? Well, there's the Genre() class for it. Let's get this done!_____no_output_____
<code>
# Create a tmdb genre object!
genres=tmdb.Genres()
# the list() method of the Genres() class returns a listing of all genres in the form of a dictionary.
list_of_genres=genres.list()['genres']_____no_output_____
</code>
Let's convert this list into a nice dictionary to look up genre names from genre IDs!_____no_output_____
<code>
Genre_ID_to_name={}
for i in range(len(list_of_genres)):
genre_id=list_of_genres[i]['id']
genre_name=list_of_genres[i]['name']
Genre_ID_to_name[genre_id]=genre_name_____no_output_____
</code>
Now, let's re-print the genres of top 20 movies? _____no_output_____
<code>
for i in range(len(top20_movs)):
mov=top20_movs[i]
title=mov['title']
genre_ids=mov['genre_ids']
genre_names=[]
for id in genre_ids:
genre_name=Genre_ID_to_name[id]
genre_names.append(genre_name)
print title,genre_names
if i==4:
breakSolo: A Star Wars Story [u'Action', u'Adventure', u'Science Fiction']
The Nun [u'Horror', u'Mystery', u'Thriller']
Avengers: Infinity War [u'Adventure', u'Science Fiction', u'Action']
The Predator [u'Horror', u'Science Fiction', u'Action', u'Comedy']
Jurassic World: Fallen Kingdom [u'Action', u'Adventure', u'Science Fiction']
</code>
# Section 4 - Building a dataset to work with : Let's take a look at the top 1000 movies from the database_____no_output_____Making use of the same API as before, we will just pull results from the top 50 pages. As mentioned earlier, the "page" attribute of the command top_movies=all_movies.popular() can be used for this purpose._____no_output_____Please note: Some of the code below will store the data into python "pickle" files so that it can be read back directly from disk, as opposed to being downloaded every time. Once done, you should comment out any code which generated an object that was pickled and is no longer needed._____no_output_____
<code>
all_movies=tmdb.Movies()
top_movies=all_movies.popular()
# This is a dictionary, and to access results we use the key 'results' which returns info on 20 movies
len(top_movies['results'])
top20_movs=top_movies['results']_____no_output_____# Comment out this cell once the data is saved into pickle file.
all_movies=tmdb.Movies()
top1000_movies=[]
print('Pulling movie list, Please wait...')
for i in range(1,51):
if i%15==0:
time.sleep(7)
movies_on_this_page=all_movies.popular(page=i)['results']
top1000_movies.extend(movies_on_this_page)
len(top1000_movies)
f3=open('movie_list.pckl','wb')
pickle.dump(top1000_movies,f3)
f3.close()
print('Done!')Pulling movie list, Please wait...
f3=open('movie_list.pckl','rb')
top1000_movies=pickle.load(f3)
f3.close()_____no_output_____
</code>
# Pairwise analysis of Movie Genres_____no_output_____As our dataset is multi label, simply looking at the distribution of genres is not sufficient. It might be beneficial to see which genres co-occur, as it might shed some light on inherent biases in our dataset. For example, it would make sense if romance and comedy occur together more often than documentary and comedy. Such inherent biases tell us that the underlying population we are sampling from itself is skewed and not balanced. We may then take steps to account for such problems. Even if we don't take such steps, it is important to be aware that we are making the assumption that an unbalanced dataset is not hurting our performance and if need be, we can come back to address this assumption. Good old scientific method, eh?
So for the top 1000 movies let's do some pairwise analysis for genre distributions. Our main purpose is to see which genres occur together in the same movie. So, we first define a function which takes a list and makes all possible pairs from it. Then, we pull the list of genres for a movie and run this function on the list of genres to get all pairs of genres which occur together_____no_output_____
<code>
# This function just generates all possible pairs from the items of a list
def list2pairs(l):
# itertools.combinations(l,2) makes all pairs of length 2 from list l.
pairs = list(itertools.combinations(l, 2))
# then the one item pairs, as duplicate pairs aren't accounted for by itertools
for i in l:
pairs.append([i,i])
return pairs_____no_output_____
</code>
As mentioned, we will now pull the genres for each movie, and use the above function to count how often each pair of genres occurs together in the same movie_____no_output_____
<code>
# get all genre lists pairs from all movies
allPairs = []
for movie in top1000_movies:
allPairs.extend(list2pairs(movie['genre_ids']))
nr_ids = np.unique(allPairs)
visGrid = np.zeros((len(nr_ids), len(nr_ids)))
for p in allPairs:
visGrid[np.argwhere(nr_ids==p[0]), np.argwhere(nr_ids==p[1])]+=1
if p[1] != p[0]:
visGrid[np.argwhere(nr_ids==p[1]), np.argwhere(nr_ids==p[0])]+=1_____no_output_____
</code>
Let's take a look at the structure we just made. It is a 19X19 structure, as shown below. Also, see that we had 19 Genres. Needless to say, this structure counts the number of simultaneous occurrences of genres in same movie._____no_output_____
<code>
print visGrid.shape
print len(Genre_ID_to_name.keys())_____no_output_____annot_lookup = []
for i in xrange(len(nr_ids)):
annot_lookup.append(Genre_ID_to_name[nr_ids[i]])
sns.heatmap(visGrid, xticklabels=annot_lookup, yticklabels=annot_lookup)_____no_output_____
</code>
The above image shows how often the genres occur together, as a heatmap_____no_output_____Important thing to notice in the above plot is the diagonal. The diagonal corresponds to self-pairs, i.e. number of times a genre, say Drama occurred with Drama. Which is basically just a count of the total times that genre occurred!
As we can see, there are a lot of dramas in the data set, and "Drama" is also a very unspecific label. There are nearly no documentaries or TV Movies. Horror is a very distinct label, and romance is also not too widely spread.
To account for this unbalanced data, there are multiple things we can try to explore what interesting relationships can be found._____no_output_____## Delving Deeper into co-occurrence of genres_____no_output_____What we want to do now is to look for nice groups of genres that co-occur, and see if it makes sense to us logically? Intuitively speaking, wouldn't it be fun if we saw nice boxes on the above plot - boxes of high intensity i.e. genres that occur together and don't occur much with other genres. In some ways, that would isolate the co-occurrence of some genres, and heighten the co-occurrence of others.
While the data may not show that directly, we can play with the numbers to see if that's possible. The technique used for that is called biclustering._____no_output_____
<code>
from sklearn.cluster import SpectralCoclustering_____no_output_____model = SpectralCoclustering(n_clusters=5)
model.fit(visGrid)
fit_data = visGrid[np.argsort(model.row_labels_)]
fit_data = fit_data[:, np.argsort(model.column_labels_)]
annot_lookup_sorted = []
for i in np.argsort(model.row_labels_):
annot_lookup_sorted.append(Genre_ID_to_name[nr_ids[i]])
sns.heatmap(fit_data, xticklabels=annot_lookup_sorted, yticklabels=annot_lookup_sorted, annot=False)
plt.title("After biclustering; rearranged to show biclusters")
plt.show()_____no_output_____
</code>
Looking at the above figure, "boxes" or groups of movie genres automatically emerge!
Intuitively - Crime, Sci-Fi, Mystery, Action, Horror, Drama, Thriller, etc co-occur.
AND, Romance, Fantasy, Family, Music, Adventure, etc co-occur.
That makes a lot of intuitive sense, right?
One challenge is the broad range of the drama genre. It makes the two clusters highly overlapping. If we merge it together with action, thriller, etc., we will end up with nearly all movies just having that label. _____no_output_____**Based on playing around with the stuff above, we can sort the data into the following genre categories - "Drama, Action, ScienceFiction, exciting(thriller, crime, mystery), uplifting(adventure, fantasy, animation, comedy, romance, family), Horror, History"**
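To make that concrete, here is a minimal sketch of what such a regrouping could look like as a plain Python dictionary. The bucket names and their members below are just the illustrative, subjective choice described above, not something defined by TMDB:
~~~~
# Illustrative regrouping of TMDB genre names into coarser buckets (subjective choice)
genre_groups = {
    'Drama': ['Drama'],
    'Action': ['Action'],
    'ScienceFiction': ['Science Fiction'],
    'exciting': ['Thriller', 'Crime', 'Mystery'],
    'uplifting': ['Adventure', 'Fantasy', 'Animation', 'Comedy', 'Romance', 'Family'],
    'Horror': ['Horror'],
    'History': ['History'],
}
~~~~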
Note: that this categorization is subjective and by no means the only right solution. One could also just stay with the original labels and only exclude the ones with not enough data. Such tricks are important to balance the dataset, it allows us to increase or decrease the strength of certain signals, making it possible to improve our inferences :)_____no_output_____# Interesting Questions
This really should be a place for you to get creative and hopefully come up with better questions than me.
Here are some of my thoughts:
- Which actors are bound to a genre, and which can easily hop genres?
- Is there a trend in genre popularity over the years?
- Can you use sound tracks to identify the genre of a movie?
- Are top romance actors higher paid than top action actors?
- If you look at release date vs popularity score, which movie genres have a longer shelf life?
Ideas to explore specifically for feature correlations:
- Is title length correlated with movie genre?
- Are movie posters darker for horror than for romance and comedy?
- Are some genres specifically released more often at a certain time of year?
- Is the RPG rating correlated with the genre?_____no_output_____# Based on this new category set, we will now pull posters from TMDB as our training data!_____no_output_____
<code>
# Done before, reading from pickle file now to maintain consistency of data!
# We now sample 100 movies per genre. Problem is that the sorting is by popular movies, so they will overlap.
# Need to exclude movies that were already sampled.
movies = []
baseyear = 2017
print('Starting pulling movies from TMDB. If you want to debug, uncomment the print command. This will take a while, please wait...')
done_ids=[]
for g_id in nr_ids:
#print('Pulling movies for genre ID '+g_id)
baseyear -= 1
for page in xrange(1,6,1):
time.sleep(0.5)
url = 'https://api.themoviedb.org/3/discover/movie?api_key=' + api_key
url += '&language=en-US&sort_by=popularity.desc&year=' + str(baseyear)
url += '&with_genres=' + str(g_id) + '&page=' + str(page)
data = urllib2.urlopen(url).read()
dataDict = json.loads(data)
movies.extend(dataDict["results"])
done_ids.append(str(g_id))
print("Pulled movies for genres - "+','.join(done_ids))_____no_output_____# f6=open("movies_for_posters",'wb')
# pickle.dump(movies,f6)
# f6.close()_____no_output_____f6=open("movies_for_posters",'rb')
movies=pickle.load(f6)
f6.close()_____no_output_____
</code>
Let's remove any duplicates that we have in the list of movies_____no_output_____
<code>
movie_ids = [m['id'] for m in movies]
print "originally we had ",len(movie_ids)," movies"
movie_ids=np.unique(movie_ids)
print len(movie_ids)
seen_before=[]
no_duplicate_movies=[]
for i in range(len(movies)):
movie=movies[i]
id=movie['id']
if id in seen_before:
continue
# print "Seen before"
else:
seen_before.append(id)
no_duplicate_movies.append(movie)
print "After removing duplicates we have ",len(no_duplicate_movies), " movies"_____no_output_____
</code>
Also, let's remove movies for which we have no posters!_____no_output_____
<code>
poster_movies=[]
counter=0
movies_no_poster=[]
print("Total movies : ",len(movies))
print("Started downloading posters...")
for movie in movies:
id=movie['id']
title=movie['title']
if counter==1:
print('Downloaded first. Code is working fine. Please wait, this will take quite some time...')
if counter%300==0 and counter!=0:
print "Done with ",counter," movies!"
print "Trying to get poster for ",title
try:
#grab_poster_tmdb(title)
poster_movies.append(movie)
except:
try:
time.sleep(7)
grab_poster_tmdb(title)
poster_movies.append(movie)
except:
movies_no_poster.append(movie)
counter+=1
print("Done with all the posters!")_____no_output_____print len(movies_no_poster)
print len(poster_movies)_____no_output_____# f=open('poster_movies.pckl','w')
# pickle.dump(poster_movies,f)
# f.close()_____no_output_____f=open('poster_movies.pckl','r')
poster_movies=pickle.load(f)
f.close()_____no_output_____# f=open('no_poster_movies.pckl','w')
# pickle.dump(movies_no_poster,f)
# f.close()_____no_output_____f=open('no_poster_movies.pckl','r')
movies_no_poster=pickle.load(f)
f.close()_____no_output_____
</code>
# Congratulations, we are done scraping!_____no_output_____# Building a dataset out of the scraped information!_____no_output_____This task is simple, but **extremely** important. It's basically what will set the stage for the whole project. Given that you have the freedom to cast their own project within the framework I am providing, there are many decisions that you must make to finalize **your own version** of the project._____no_output_____As we are working on a **classification** problem, we need to make two decisions given the data at hand -
* What do we want to predict, i.e. what's our Y?
* What features to use for predicting this Y, i.e. what X should we use?_____no_output_____There are many different options possible, and it comes down to you to decide what's most exciting. I will be picking my own version for the example, **but it is imperative that you think this through, and come up with a version which excites you!**_____no_output_____As an example, here are some possible ways to frame Y, while still sticking to the problem of genre prediction -
* Assume every movie can have multiple genres, and then it becomes a multi-label classification problem. For example, a movie can be Action, Horror and Adventure simultaneously. Thus, every movie can be more than one genre.
* Make clusters of genres as we did in Milestone 1 using biclustering, and then every movie can have only 1 genre. This way, the problem becomes a simpler, multi-class problem. For example, a movie could have the class - Uplifting (refer Milestone 1), or Horror or History. No movie get's more than one class.
For the purposes of this implementation, I'm going with the first case explained above - i.e. a multi-label classification problem._____no_output_____Similarly, for designing our input features i.e. X, you may pick any features you think make sense, for example, the Director of a movie may be a good predictor for genre. OR, they may choose any features they design using algorithms like PCA. Given the richness of IMDB, TMDB and alternate sources like Wikipedia, there is a plethora of options available. **Be creative here!**_____no_output_____Another important thing to note is that in doing so, we must also make many more small implementation decisions on the way. For example, what genres are we going to include? what movies are we going to include? All these are open ended!_____no_output_____## My Implementation_____no_output_____Implementation decisions made -
* The problem is framed here as a multi-label problem explained above.
* We will try to predict multiple genres associated with a movie. This will be our Y.
* We will use 2 different kinds of X - text and images.
* For the text part - Input features being used to predict the genre is a form of the movie's plot available from TMDB using the property 'overview'. This will be our X.
* For the image part - we will use the scraped poster images as our X.
NOTE : We will first look at some conventional machine learning models, which were popular before the recent rise of neural networks and deep learning. For the poster image to genre prediction, I have avoided using this for the reason that conventional ML models are simply not used anymore without using deep learning for feature extraction (all discussed in detail ahead, don't be scared by the jargon). For the movie overview to genre prediction problem we will look at both conventional models and deep learning models.
Now, let's build our X and Y!_____no_output_____First, let's identify movies that have overviews. **Next few steps are going to be a good example on why data cleaning is important!**_____no_output_____
<code>
movies_with_overviews=[]
for i in range(len(no_duplicate_movies)):
movie=no_duplicate_movies[i]
id=movie['id']
overview=movie['overview']
if len(overview)==0:
continue
else:
movies_with_overviews.append(movie)
len(movies_with_overviews)_____no_output_____
</code>
Now let's store the genre's for these movies in a list that we will later transform into a binarized vector.
Binarized vector representation is a very common and important way data is stored/represented in ML. Essentially, it's a way to reduce a categorical variable with n possible values to n binary indicator variables. What does that mean? For example, let [(1,3),(4)] be the list saying that sample A has two labels 1 and 3, and sample B has one label 4. For every sample, for every possible label, the representation is simply 1 if it has that label, and 0 if it doesn't have that label. So the binarized version of the above list will be -
~~~~~
[[1,0,1,0],
 [0,0,0,1]]
~~~~~_____no_output_____
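For concreteness, here is a minimal sketch of how that toy example could be binarized with scikit-learn's MultiLabelBinarizer (the same class the notebook uses below); the explicit `classes` argument just pins the column order to labels 1-4 as in the example above:
~~~~
from sklearn.preprocessing import MultiLabelBinarizer

toy_labels = [(1, 3), (4,)]  # sample A has labels 1 and 3, sample B has label 4
mlb_toy = MultiLabelBinarizer(classes=[1, 2, 3, 4])  # fix the column order to labels 1,2,3,4
print(mlb_toy.fit_transform(toy_labels))
# [[1 0 1 0]
#  [0 0 0 1]]
~~~~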
<code>
# genres=np.zeros((len(top1000_movies),3))
genres=[]
all_ids=[]
for i in range(len(movies_with_overviews)):
movie=movies_with_overviews[i]
id=movie['id']
genre_ids=movie['genre_ids']
genres.append(genre_ids)
all_ids.extend(genre_ids)_____no_output_____from sklearn.preprocessing import MultiLabelBinarizer
mlb=MultiLabelBinarizer()
Y=mlb.fit_transform(genres)_____no_output_____genres[1]_____no_output_____print Y.shape
print np.sum(Y, axis=0)_____no_output_____len(list_of_genres)_____no_output_____
</code>
This is interesting. We started with only 19 genre labels if you remember. But the shape for Y is 1666,20 while it should be 1666,19 as there are only 19 genres? Let's explore._____no_output_____Let's find genre IDs that are not present in our original list of genres!_____no_output_____
<code>
# Create a tmdb genre object!
genres=tmdb.Genres()
# the list() method of the Genres() class returns a listing of all genres in the form of a dictionary.
list_of_genres=genres.list()['genres']
Genre_ID_to_name={}
for i in range(len(list_of_genres)):
genre_id=list_of_genres[i]['id']
genre_name=list_of_genres[i]['name']
Genre_ID_to_name[genre_id]=genre_name_____no_output_____for i in set(all_ids):
if i not in Genre_ID_to_name.keys():
print i_____no_output_____
</code>
Well, this genre ID wasn't given to us by TMDB when we asked it for all possible genres. How do we go about this now? We can either neglect all samples that have this genre. But if you look up you'll see there's too many of these samples. So, I googled more and went into their documentation and found that this ID corresponds to the genre "Foreign". So, we add it to the dictionary of genre names ourselves. Such problems are ubiquitous in machine learning, and it is up to us to diagnose and correct them. We must always make a decision about what to keep, how to store data and so on. _____no_output_____
<code>
Genre_ID_to_name[10769]="Foreign" #Adding it to the dictionary_____no_output_____len(Genre_ID_to_name.keys())_____no_output_____
</code>
Now, we turn to building the X matrix i.e. the input features! As described earlier, we will be using the overview of movies as our input vector! Let's look at a movie's overview for example!_____no_output_____
<code>
sample_movie=movies_with_overviews[5]
sample_overview=sample_movie['overview']
sample_title=sample_movie['title']
print "The overview for the movie",sample_title," is - \n\n"
print sample_overview_____no_output_____
</code>
## So, how do we store this movie overview in a matrix?
#### Do we just store the whole string? We know that we need to work with numbers, but this is all text. What do we do?!_____no_output_____The way we will be storing the X matrix is called a "Bag of words" representation. The basic idea of this representation in our context is that we can think of all the distinct words that are possible in the movies' reviews as a distinct object. And then every movie overview can be thought as a "Bag" containing a bunch of these possible objects.
For example, in the case of Zootopia the movie above - The "Bag" contains the words ("Determined", "to", "prove", "herself"......"the", "mystery"). We make such lists for all movie overviews. Finally, we binarize again like we did above for Y. scikit-learn makes our job easy here by simply using a function CountVectorizer() because this representation is so often used in Machine Learning._____no_output_____What this means is that, for all the movies that we have the data on, we will first count all the unique words. Say, there's 30,000 unique words. Then we can represent every movie overview as a 30000x1 vector, where each position in the vector corresponds to the presence or absence of a particular word. If the word corresponding to that position is present in the overview, that position will have 1, otherwise it will be 0.
Ex - if our vocabulary had 5 words - "I","am","a","good","boy", then the representation for the sentence "I am a boy" would be [1 1 1 0 1], and for the sentence "I am good" would be [1 1 0 1 0]._____no_output_____
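Here is a minimal sketch of that toy example with scikit-learn's CountVectorizer (the same class used in the cells below). Two small details to note: binary=True gives presence/absence rather than raw counts, and the columns come out in alphabetical order of the vocabulary rather than in the order the words were listed above; the custom token_pattern is only there so that one-letter words like "I" and "a" are not silently dropped by the default tokenizer:
~~~~
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["I am a boy", "I am good"]
toy_vectorizer = CountVectorizer(binary=True, token_pattern=r"(?u)\b\w+\b")
toy_X = toy_vectorizer.fit_transform(toy_docs)
print(toy_vectorizer.get_feature_names())  # ['a', 'am', 'boy', 'good', 'i']
print(toy_X.toarray())
# [[1 1 1 0 1]
#  [0 1 0 1 1]]
~~~~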
<code>
from sklearn.feature_extraction.text import CountVectorizer
import re_____no_output_____content=[]
for i in range(len(movies_with_overviews)):
movie=movies_with_overviews[i]
id=movie['id']
overview=movie['overview']
overview=overview.replace(',','')
overview=overview.replace('.','')
content.append(overview)_____no_output_____print content[0]
print len(content)_____no_output_____
</code>
# Are all words equally important?_____no_output_____#### At the cost of sounding "Animal Farm" inspired, I would say not all words are equally important.
For example, let's consider the overview for the Matrix - _____no_output_____
<code>
get_movie_info_tmdb('The Matrix')['overview']_____no_output_____
</code>
For "The Matrix" a word like "computer" is a stronger indicators of it being a Sci-Fi movie, than words like "who" or "powerful" or "vast". One way computer scientists working with natural language tackled this problem in the past (and it is still used very popularly) is what we call TF-IDF i.e. Term Frequence, Inverse Document Frequency. The basic idea here is that words that are strongly indicative of the content of a single document (every movie overview is a document in our case) are words that occur very frequently in that document, and very infrequently in all other documents. For example, "Computer" occurs twice here but probably will not in most other movie overviews. Hence, it is indicative. On the other hand, generic words like "a","and","the" will occur very often in all documents. Hence, they are not indicative.
So, can we use this information to reduce our insanely high 30,000 dimensional vector representation to a smaller, more handle-able number? But first up, why should we even care? The answer is probably one of the most used phrases in ML - "The Curse of Dimensionality"._____no_output_____# The Curse of Dimensionality_____no_output_____#### This section is strongly borrowing from one of the greatest <a href="https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf">ML papers I've ever read.</a>
This expression was coined by Bellman in 1961 to refer to the fact that many algorithms that work fine in low dimensions become intractable when the input is high-dimensional. The reason for them not working in high dimensions is very strongly linked to what we discussed earlier - having a representative dataset. Consider this, you have a function $f$ dependent only one dependent variable $x$, and $x$ can only integer values from 1 to 100. Since it's one dimensional, it can be plotted on a line. To get a representative sample, you'd need to sample something like - $f(1),f(20),f(40),f(60),f(80),f(100)$_____no_output_____Now, let's increase the dimensionality i.e. number of dependent variables and see what happens. Say, we have 2 variables $x_1$ and $x_2$, same possible as before - integers between 1 and 100. Now, instead of a line, we'll have a plane with $x_1$ and $x_2$ on the two axes. The interesting bit is that instead of 100 possible values of dependent variables like before, we now have 100,000 possible values! Basically, we can make 100x100 table of possible values of $x_1$ and $x_2$. Wow, that increased exponentially. Not just figuratively, but mathematically exponentially. Needless to say, to cover 5% of the space like we did before, we'd need to sample $f$ at 5000 values. _____no_output_____For 3 variables, it would be 100,000,000, and we'd need to sample at 500,000 points. That's already more than the number of data points we have for most training problems we will ever come across._____no_output_____Basically, as the dimensionality (number of features) of the examples grows, because a fixed-size training set covers a dwindling fraction of the input space. Even with a moderate dimension of 100 and a huge training set of a trillion examples, the latter covers only a fraction of about $10^{−18}$ of the input space. This is what makes machine learning
both necessary and hard._____no_output_____So, yes, if some words are unimportant, we want to get rid of them and reduce the dimensionality of our X matrix. And the way we will do it is using TF-IDF to identify un-important words. Python let's us do this with just one line of code (And this is why you should spend more time reading maths, than coding!)_____no_output_____
<code>
# The min_df parameter makes sure we exclude words that only occur very rarely
# The default also is to exclude any words that occur in every movie description
vectorize=CountVectorizer(max_df=0.95, min_df=0.005)
X=vectorize.fit_transform(content)_____no_output_____
</code>
We are excluding all words that occur in too many or too few documents, as these are very unlikely to be discriminative. Words that only occur in one document are most probably names, and words that occur in nearly all documents are probably stop words. Note that the values here were not tuned using a validation set - they are just guesses. That is acceptable here because we are not formally evaluating the effect of these particular values. In a stricter setting, for example for a publication, it would be better to tune these as well. _____no_output_____
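If you want a quick sanity check on what survived the filtering, the fitted CountVectorizer keeps its vocabulary in the `vocabulary_` attribute (a dict mapping each surviving word to its column index in X). A minimal sketch, assuming the `vectorize` object fitted above:

<code>
# Peek at the vocabulary that survived the min_df/max_df filtering
print(len(vectorize.vocabulary_))           # number of columns in X
print(sorted(vectorize.vocabulary_)[:20])   # a few of the surviving words
</code>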
<code>
X.shape_____no_output_____
</code>
So, each movie's overview gets represented by a 1x1365 dimensional vector.
Now, we are ready for the kill. Our data is cleaned, hypothesis is set (Overview can predict movie genre), and the feature/output vectors are prepped. Let's train some models!_____no_output_____
<code>
import pickle
f4=open('X.pckl','wb')
f5=open('Y.pckl','wb')
pickle.dump(X,f4)
pickle.dump(Y,f5)
f6=open('Genredict.pckl','wb')
pickle.dump(Genre_ID_to_name,f6)
f4.close()
f5.close()
f6.close()_____no_output_____
</code>
# Congratulations, we have our data set ready!_____no_output_____A note : As we are building our own dataset, and I didn't want you to spend all your time waiting for poster image downloads to finish, I am working with an EXTREMELY small dataset. That is why the results we will see for the deep learning portion will not be as spectacular as those from conventional machine learning methods. If you want to see the real power, you should spend some more time scraping something of the order of 100,000 images, as opposed to the 1000 odd like I am doing here. Quoting the paper I mentioned above - MORE DATA BEATS A CLEVERER ALGORITHM.
#### As the TA, I saw that most teams working on the project had data of the order of 100,000 movies. So, if you want to extract the power of these models, consider scraping a larger dataset than me._____no_output_____# Section 5 - Non-deep, Conventional ML models with above data_____no_output_____Here is a layout of what we will be doing -
- We will implement two different models
- We will decide on a performance metric i.e. a quantitative method to be sure about how well different models are doing.
- Discussion of the differences between the models, their strengths, weaknesses, etc. _____no_output_____As discussed earlier, there are a LOT of implementation decisions to be made. Between feature engineering, hyper-parameter tuning, model selection and how interpretable do you want your model to be (Read : Bayesian vs Non-Bayesian approaches) a lot is to be decided. For example, some of these models could be:
- Generalized Linear Models
- SVM
- Shallow (1 Layer, i.e. not deep) Neural Network
- Random Forest
- Boosting
- Decision Tree
Or go more bayesian:
- Naive Bayes
- Linear or Quadratic Discriminant Analysis
- Bayesian Hierarchical models_____no_output_____The list is endless, and not all models will make sense for the kind of problem you have framed for yourself. ** Think about which model best fits for your purpose.**_____no_output_____For our purposes here, I will be showing the example of 2 very simple models, one picked from each category above -
1. SVM
2. Multinomial Naive Bayes_____no_output_____A quick overview of the whole pipeline coming below:
- A little bit of feature engineering
- 2 different Models
- Evaluation Metrics chosen
- Model comparisons_____no_output_____### Let's start with some feature engineering. _____no_output_____Engineering the right features depends on 2 key ideas. Firstly, what is it that you are trying to solve? For example, if you want to guess my music preferences and you try to train a super awesome model while giving it what my height is as input features, you're going to have no luck. On the other hand, giving it my Spotify playlist will solve the problem with any model. So, CONTEXT of the problem plays a role.
Second, you can only represent based on the data at hand. Meaning, if you didn't have access to my Spotify playlist, but to my Facebook statuses - You know all my statuses about Harvard may not be useful. But if you represent me as my Facebook statuses which are YouTube links, that would also solve the problem. So, AVAILABILITY OF DATA at hand is the second factor.
#### A nice way to think of it is to think that you start with the problem at hand, but design features constrained by the data you have available. If you have many independent features that each correlate well with the class, learning is easy. On the other hand, if the class is a very complex function of the features, you may not be able to learn it.
In the context of this problem, we would like to predict the genre of a movie. what we have access to - movie overviews, which are text descriptions of the movie plot. The hypothesis makes sense, overview is a short description of the story and the story is clearly important in assigning genres to movies.
So, let's improve our features by playing with the words in the overviews in our data. One interesting way is to go back to what we discussed earlier - TF-IDF. We originally used it to filter words, but we can also assign the tf-idf values as "importance" values to words, as opposed to treating them all equally. TF-IDF simply tries to assign a weightage to each word in the bag of words. _____no_output_____Once again, the way it works is - most movie descriptions have the word "The" in them. Obviously, it doesn't tell you anything special about them. So the weightage should be inversely proportional to how many movies have the word in their description. This is the IDF part.
On the other hand, for the movie Interstellar, if the description has the word Space 5 times, and wormhole 2 times, then it's probably more about Space than about wormholes. Thus, space should have a high weightage. This is the TF part.
We simply use TF-IDF to assign a weightage to every word in the bag of words. Which makes sense, right? :)_____no_output_____
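For reference, one common formulation of the weighting (scikit-learn's TfidfTransformer, used below, applies a smoothed and normalized variant of this) is:

$ \text{tf-idf}(w, d) = \text{tf}(w, d) \times \log\frac{N}{\text{df}(w)} $

where $\text{tf}(w, d)$ is the count of word $w$ in overview $d$, $\text{df}(w)$ is the number of overviews containing $w$, and $N$ is the total number of overviews.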
<code>
from sklearn.feature_extraction.text import TfidfTransformer_____no_output_____tfidf_transformer = TfidfTransformer()
X_tfidf = tfidf_transformer.fit_transform(X)
X_tfidf.shape_____no_output_____
</code>
Let's divide our X and Y matrices into train and test split. We train the model on the train split, and report the performance on the test split. Think of this like the questions you do in the problem sets v/s the exam. Of course, they are both (assumed to be) from the same population of questions. And doing well on Problem Sets is a good indicator that you'll do well in exams, but really, you must test before you can make any claims about you knowing the subject._____no_output_____
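A quick note on the split below: it uses a random boolean mask rather than scikit-learn's `train_test_split`, so that the very same mask can also be applied to other aligned arrays (here, the positions of the test movies). A minimal, self-contained sketch of that idea on toy arrays (not the real data):

<code>
import numpy as np
toy_X = np.arange(10).reshape(5, 2)          # 5 toy samples, 2 features each
toy_Y = np.arange(5)                         # 5 toy labels
toy_msk = np.random.rand(len(toy_Y)) < 0.8   # ~80% of entries are True
print(toy_X[toy_msk].shape, toy_X[~toy_msk].shape)
print(toy_Y[toy_msk].shape, toy_Y[~toy_msk].shape)
</code>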
<code>
msk = np.random.rand(X_tfidf.shape[0]) < 0.8_____no_output_____X_train_tfidf=X_tfidf[msk]
X_test_tfidf=X_tfidf[~msk]
Y_train=Y[msk]
Y_test=Y[~msk]
positions=range(len(movies_with_overviews))
# print positions
test_movies=np.asarray(positions)[~msk]
# test_movies_____no_output_____from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import f1_score
from sklearn.metrics import make_scorer
from sklearn.metrics import classification_report_____no_output_____parameters = {'kernel':['linear'], 'C':[0.01, 0.1, 1.0]}
gridCV = GridSearchCV(SVC(class_weight='balanced'), parameters, scoring=make_scorer(f1_score, average='micro'))
classif = OneVsRestClassifier(gridCV)
classif.fit(X_train_tfidf, Y_train)_____no_output_____predstfidf=classif.predict(X_test_tfidf)
print(classification_report(Y_test, predstfidf))_____no_output_____
</code>
As you can see, the performance is by and large poorer for movies which are less represented like War and animation, and better for categories like Drama._____no_output_____Numbers aside, let's look at our model's predictions for a small sample of movies from our test set._____no_output_____
<code>
genre_list=sorted(list(Genre_ID_to_name.keys()))_____no_output_____predictions=[]
for i in range(X_test_tfidf.shape[0]):
pred_genres=[]
movie_label_scores=predstfidf[i]
# print movie_label_scores
for j in range(19):
#print j
if movie_label_scores[j]!=0:
genre=Genre_ID_to_name[genre_list[j]]
pred_genres.append(genre)
predictions.append(pred_genres)_____no_output_____import pickle
f=open('classifer_svc','wb')
pickle.dump(classif,f)
f.close()_____no_output_____for i in range(X_test_tfidf.shape[0]):
if i%50==0 and i!=0:
        print('MOVIE: ',movies_with_overviews[i]['title'],'\tPREDICTION: ',','.join(predictions[i]))_____no_output_____
</code>
Let's try our second model - the Naive Bayes model._____no_output_____
<code>
from sklearn.naive_bayes import MultinomialNB
classifnb = OneVsRestClassifier(MultinomialNB())
classifnb.fit(X[msk].toarray(), Y_train)
predsnb=classifnb.predict(X[~msk].toarray())_____no_output_____import pickle
f2=open('classifer_nb','wb')
pickle.dump(classifnb,f2)
f2.close()_____no_output_____predictionsnb=[]
for i in range(X_test_tfidf.shape[0]):
pred_genres=[]
movie_label_scores=predsnb[i]
for j in range(19):
#print j
if movie_label_scores[j]!=0:
genre=Genre_ID_to_name[genre_list[j]]
pred_genres.append(genre)
predictionsnb.append(pred_genres)_____no_output_____for i in range(X_test_tfidf.shape[0]):
if i%50==0 and i!=0:
        print('MOVIE: ',movies_with_overviews[i]['title'],'\tPREDICTION: ',','.join(predictionsnb[i]))_____no_output_____
</code>
As can be seen above, the results seem promising, but how do we really compare the two models? We need to quantify our performance so that we can say which one's better. Takes us back to what we discussed right in the beginning - we're learning a function $g$ which can approximate the original unknown function $f$. For some values of $x_i$, the predictions will be wrong for sure, and we want to minimize it.
For multi label systems, we often keep track of performance using "Precision" and "Recall". These are standard metrics, and you can google to read up more about them if you're new to these terms._____no_output_____# Evaluation Metrics_____no_output_____We will use the standard precision recall metrics for evaluating our system._____no_output_____
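Concretely, for a single movie with a set of ground-truth genres and a set of predicted genres, the helper function below computes

$ \text{precision} = \frac{TP}{TP+FP} \qquad \text{recall} = \frac{TP}{TP+FN} $

where $TP$ is the number of predicted genres that are actually in the ground truth, $FP$ the number of predicted genres that are not, and $FN$ the number of ground-truth genres that were missed. These per-movie values are then averaged over the test set.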
<code>
def precision_recall(gt,preds):
TP=0
FP=0
FN=0
for t in gt:
if t in preds:
TP+=1
else:
FN+=1
for p in preds:
if p not in gt:
FP+=1
if TP+FP==0:
precision=0
else:
precision=TP/float(TP+FP)
if TP+FN==0:
recall=0
else:
recall=TP/float(TP+FN)
return precision,recall_____no_output_____precs=[]
recs=[]
for i in range(len(test_movies)):
if i%1==0:
pos=test_movies[i]
test_movie=movies_with_overviews[pos]
gtids=test_movie['genre_ids']
gt=[]
for g in gtids:
g_name=Genre_ID_to_name[g]
gt.append(g_name)
# print predictions[i],movies_with_overviews[i]['title'],gt
a,b=precision_recall(gt,predictions[i])
precs.append(a)
recs.append(b)
print(np.mean(np.asarray(precs)),np.mean(np.asarray(recs)))_____no_output_____precs=[]
recs=[]
for i in range(len(test_movies)):
if i%1==0:
pos=test_movies[i]
test_movie=movies_with_overviews[pos]
gtids=test_movie['genre_ids']
gt=[]
for g in gtids:
g_name=Genre_ID_to_name[g]
gt.append(g_name)
# print predictions[i],movies_with_overviews[i]['title'],gt
a,b=precision_recall(gt,predictionsnb[i])
precs.append(a)
recs.append(b)
print(np.mean(np.asarray(precs)),np.mean(np.asarray(recs)))_____no_output_____
</code>
The average precision and recall scores for our samples are pretty good! Models seem to be working! Also, we can see that the Naive Bayes model outperforms the SVM. **I strongly suggest you go read about Multinomial Bayes and think about why it works so well for "Document Classification", which is very similar to our case as every movie overview can be thought of as a document we are assigning labels to.**_____no_output_____# Section 6 - Deep Learning : an intuitive overview_____no_output_____The above results were good, but it's time to bring out the big guns. So first and foremost, let's get a very short idea about what deep learning is. This is for people who don't have a background in this - it's high level and gives just the intuition. _____no_output_____As described above, the two most important concepts in doing good classification (or regression) are to 1) use the right representation which captures the right information about the data which is relevant to the problem at hand 2) use the right model which has the capability of making sense of the representation fed to it. _____no_output_____While for the second part we have complicated and powerful models that we have studied at length, we don't seem to have a principled, mathematical way of doing the first part - i.e. representation. What we did above was to see "what makes sense", and go from there. That is not a good approach for complex data/complex problems. Is there some way to automate this? Deep Learning does just this._____no_output_____To just emphasize the importance of representation in the complex tasks we usually attempt with Deep Learning, let me talk about the original problem which made it famous. The paper is often referred to as the "Imagenet Challenge Paper", and it was basically working on object recognition in images. Let's try to think about an algorithm that tries to detect a chair.
## If I ask you to "Define" a chair, how would you? - Something with 4 legs?_____no_output_____<img src="files/chair1.png" height="400" width="400">
<h3><center>All are chairs, none with 4 legs. (Pic Credit: Zoya Bylinskii)</center></h3>_____no_output_____## How about some surface that we sit on then?_____no_output_____<img src="files/chair2.png" height="400" width="400">
<h3><center>All are surfaces we sit on, none are chairs. (Pic Credit: Zoya Bylinskii)</center></h3>_____no_output_____Clearly, these definitions won't work and we need something more complicated. Sadly, we can't come up with a simple text rule that our computer can search for! So we take a more principled approach._____no_output_____The "Deep" in deep learning comes from the fact that it was conventionally applied to Neural Networks. Neural Networks, as we all know, are structures organized in layers. Layers of computations. Why do we need layers? Because these layers can be seen as sub-tasks that we do in the complicated task of identifying a chair. It can be thought of as a hierarchical breakdown of a complicated job into smaller sub-tasks.
Mathematically, each layer acts like a space transformation which takes the pixel values to a high dimensional space. When we start out, every pixel in the image is given equal importance in our matrix. With each layer, convolution operations give some parts more importance, and some lesser importance. In doing so, we transform our images to a space in which similar looking objects/object parts are closer (We are basically learning this space transformation in deep learning, nothing else)
_____no_output_____What exactly was learnt by these neural networks is hard to know, and an active area of research. But one very crude way to visualize what it does is to think like - It starts by learning very generic features in the first layer. Something as simple as vertical and horizontal lines. In the next layer, it learns that if you combine the vectors representing vertical and horizontal vectors in different ratios, you can make all possible slanted lines. Next layer learns to combine lines to form curves - Say, something like the outline of a face. These curves come together to form 3D objects. And so on. Building sub-modules, combining them in the right way which can give it semantics._____no_output_____**So, in a nutshell, the first few layers of a "Deep" network learn the right representation of the data, given the problem (which is mathematically described by your objective function trying to minimize difference between ground truth and predicted labels). The last layer simply looks how close or far apart things are in this high dimensional space.**_____no_output_____Hence, we can give any kind of data a high dimensional representation using neural networks. Below we will see high dimensional representations of both words in overviews (text) and posters (image). Let's get started with the posters i.e. extracting visual features from posters using deep learning._____no_output_____# Section 7 - Deep Learning for predicting genre from poster
Once again, we must make an implementation decision. This time, it has more to do with how much time we are willing to spend in return for added accuracy. We are going to use a technique here that is commonly referred to as Pre-Training in the Machine Learning literature.
Instead of me trying to re-invent the wheel here, I am going to borrow this short section on pre-training from Stanford University's lecture on <a href='http://cs231n.github.io/transfer-learning/'> CNN's</a>. To quote -
''In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or a fixed feature extractor for the task of interest. ''
There are three broad ways in which transfer learning or pre-training can be done. (The 2 concepts are different and to understand the difference clearly, I suggest you read the linked lecture thoroughly). The way we are going to go about it is by using a pre-trained, released ConvNet as a feature extractor. Take a ConvNet pretrained on ImageNet (a popular object detection dataset), and remove the last fully-connected layer. After removing the last layer, what we have is just another neural network i.e. a stack of space transformations. But, originally the output of this stack can be pumped into a single layer which can classify the image into categories like Car, Dog, Cat and so on.
What this means, is that in the space this stack transforms the images to, all images which contain a "dog" are closer to each other, and all images containing a "cat" are closer. Thus, it is a meaningful space where images with similar objects are closer.
Think about it, now if we pump our posters through this stack, it will embed them in a space where posters which contain similar objects are closer. This is a very meaningful feature engineering method! While this may not be ideal for genre prediction, it might be quite meaningful. For example, all posters with a gun or a car are probably action, while a smiling couple would point to romance or drama. The alternative would be to train the CNN from scratch, which is fairly computationally intensive and involves a lot of tricks to get the CNN training to converge to the optimal space transformation.
This way, we can start off with something strong, and then build on top. We pump our images through the pre-trained network to extract the visual features from the posters. Then, using these features as descriptors for the image, and genres as the labels, we train a simpler neural network from scratch which learns to do simply classification on this dataset. These 2 steps are exactly what we are going to do for predicting genres from movie posters._____no_output_____## Deep Learning to extract visual features from posters_____no_output_____The basic problem here we are answering is that can we use the posters to predict genre. First check - Does this hypothesis make sense? Yes. Because that's what graphic designers do for a living. They leave visual cues to semantics. They make sure that when we look at the poster of a horror movie, we know it's not a happy image. Things like that. Can our deep learning system infer such subtleties? Let's find out!_____no_output_____For Visual features, either we can train a deep neural network ourselves from scratch, or we can use a pre-trained one made available to us from the Visual Geometry Group at Oxford University, one of the most popular methods. This is called the VGG-net. Or as they call it, we will extract the VGG features of an image. Mathematically, as mentioned, it's just a space transformation in the form of layers. So, we simply need to perform this chain of transformations on our image, right? Keras is a library that makes it very easy for us to do this. Some other common ones are Tensorflow and PyTorch. While the latter two are very powerful and customizable and used more often in practice, Keras makes it easy to prototype by keeping the syntax simple.
We will be working with Keras to keep things simple in code, so that we can spend more time understanding and less time coding. Some common ways people refer to this step are - "Getting the VGG features of an image", or "Forward Propagating the image through VGG and chopping off the last layer". In Keras, this is as easy as writing 4 lines. 
<code>
# Loading the list of movies we had downloaded posters for earlier -
f=open('poster_movies.pckl','rb')
poster_movies=pickle.load(f)
f.close()_____no_output_____from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
import pickle
model = VGG16(weights='imagenet', include_top=False)_____no_output_____allnames=os.listdir(poster_folder)
imnames=[j for j in allnames if j.endswith('.jpg')]
feature_list=[]
genre_list=[]
file_order=[]
print "Starting extracting VGG features for scraped images. This will take time, Please be patient..."
print "Total images = ",len(imnames)
failed_files=[]
succesful_files=[]
i=0
for mov in poster_movies:
i+=1
mov_name=mov['original_title']
mov_name1=mov_name.replace(':','/')
poster_name=mov_name.replace(' ','_')+'.jpg'
if poster_name in imnames:
img_path=poster_folder+poster_name
#try:
img = image.load_img(img_path, target_size=(224, 224))
succesful_files.append(poster_name)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
features = model.predict(x)
        print(features.shape)
        # print(model.predict(x))  # uncomment to inspect the raw VGG feature tensor
file_order.append(img_path)
feature_list.append(features)
genre_list.append(mov['genre_ids'])
if np.max(np.asarray(feature_list))==0.0:
print('problematic',i)
if i%250==0 or i==1:
print "Working on Image : ",i
# except:
# failed_files.append(poster_name)
# continue
else:
continue
print "Done with all features, please pickle for future use!"_____no_output_____len(genre_list)_____no_output_____len(feature_list)_____no_output_____print type(feature_list[0])
feature_list[0].shape
_____no_output_____# If you are loading the features from the pickle file below, this dumping cell does not need to be re-run.
list_pickled=(feature_list,file_order,failed_files,succesful_files,genre_list)
f=open('posters_new_features.pckl','wb')
pickle.dump(list_pickled,f)
f.close()
print("Features dumped to pickle file")_____no_output_____f7=open('posters_new_features.pckl','rb')
list_pickled=pickle.load(f7)
f7.close()
# (feature_list2,file_order2)=list_pickled_____no_output_____
</code>
### Training a simple neural network model using these VGG features._____no_output_____
<code>
(feature_list,files,failed,succesful,genre_list)=list_pickled
_____no_output_____
</code>
Let's first get the labels for our 1342 samples! As image download fails in a few instances, the best way to work with the right samples is to read the poster names that were actually downloaded, and work from there. These posters cannot be uploaded to Github as they are too large, and so are being downloaded and read from my local computer. If you do re-do it, you might have to check and edit the paths in the code to make sure it runs._____no_output_____
<code>
(a,b,c,d)=feature_list[0].shape
feature_size=a*b*c*d
feature_size_____no_output_____
</code>
This looks odd - why are we re-running a loop over the features again below? The reason is simple: the most important thing to know about numpy is that stacking arrays with vstack() and hstack() in a loop is highly sub-optimal. When a numpy array is created, a fixed size is allocated in memory, and when we stack, a new array is copied and created in a new location. This makes the code really, really slow. The best way to do it (and this remains the same with MATLAB matrices if you work with them) is to create a numpy array of zeros, and over-write it row by row. The above code was just to see what size numpy array we will need!_____no_output_____The final movie poster set for which we have all the information we need is 1265 movies. In the code below we are making an X numpy array containing the visual features of one image per row. So, the VGG features are reshaped to be of shape (1,25088) and we finally obtain a matrix of shape (1265,25088)_____no_output_____
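A minimal, self-contained sketch of that difference (toy sizes, and the exact timings will vary by machine):

<code>
# Stacking inside a loop re-allocates and copies the whole array on every iteration,
# while writing into a pre-allocated array does not.
import time
import numpy as np
rows, cols = 1000, 1000
start = time.time()
stacked = np.zeros((0, cols))
for _ in range(rows):
    stacked = np.vstack([stacked, np.ones((1, cols))])
print('vstack in a loop :', round(time.time() - start, 2), 'seconds')
start = time.time()
prealloc = np.zeros((rows, cols))
for i in range(rows):
    prealloc[i] = np.ones(cols)
print('pre-allocated    :', round(time.time() - start, 2), 'seconds')
</code>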
<code>
np_features=np.zeros((len(feature_list),feature_size))
for i in range(len(feature_list)):
feat=feature_list[i]
reshaped_feat=feat.reshape(1,-1)
np_features[i]=reshaped_feat_____no_output_____# np_features[-1]_____no_output_____X=np_features_____no_output_____from sklearn.preprocessing import MultiLabelBinarizer
mlb=MultiLabelBinarizer()
Y=mlb.fit_transform(genre_list)_____no_output_____Y.shape_____no_output_____
</code>
Our binarized Y numpy array contains the binarized labels corresponding to the genre IDs of the 1277 movies_____no_output_____
<code>
visual_problem_data=(X,Y)
f8=open('visual_problem_data_clean.pckl','wb')
pickle.dump(visual_problem_data,f8)
f8.close()_____no_output_____f8=open('visual_problem_data_clean.pckl','rb')
visual_features=pickle.load(f8)
f8.close()_____no_output_____(X,Y)=visual_features_____no_output_____X.shape_____no_output_____mask = np.random.rand(len(X)) < 0.8_____no_output_____X_train=X[mask]
X_test=X[~mask]
Y_train=Y[mask]
Y_test=Y[~mask]_____no_output_____X_test.shape
Y_test.shape_____no_output_____
</code>
Now, we create our own keras neural network to use the VGG features and then classify movie genres. Keras makes this super easy.
Neural network architectures have gotten complex over the years. But the simplest ones contain very standard computations organized in layers, as described above. Given the popularity of some of these, Keras makes it as easy as writing out the names of these operations in a sequential order. This way you can make a network while completely avoiding the Mathematics (HIGHLY RECOMMENDED SPENDING MORE TIME ON THE MATH THOUGH)_____no_output_____Sequential() allows us to make models the follow this sequential order of layers. Different kinds of layers like Dense, Conv2D etc can be used, and many activation functions like RELU, Linear etc are also available._____no_output_____# Important Question : Why do we need activation functions?
#### Copy pasting the answer I wrote for this question on <a href='https://www.quora.com/Why-do-neural-networks-need-an-activation-function/answer/Spandan-Madan?srid=5ydm'>Quora</a> Feel free to leave comments there.
""Sometimes, we tend to get lost in the jargon and confuse things easily, so the best way to go about this is getting back to our basics.
Don’t forget what the original premise of machine learning (and thus deep learning) is - IF the input and output are related by a function y=f(x), then if we have x, there is no way to exactly know f unless we know the process itself. However, machine learning gives you the ability to approximate f with a function g, and the process of trying out multiple candidates to identify the function g best approximating f is called machine learning.
Ok, that was machine learning, and how is deep learning different? Deep learning simply tries to expand the possible kind of functions that can be approximated using the above mentioned machine learning paradigm. Roughly speaking, if the previous model could learn say 10,000 kinds of functions, now it will be able to learn say 100,000 kinds (in actuality both are infinite spaces but one is larger than the other, because maths is cool that ways.)
If you want to know the mathematics of it, go read about VC dimension and how more layers in a network affect it. But I will avoid the mathematics here and rely on your intuition to believe me when I say that not all data can be classified correctly into categories using a linear function. So, we need our deep learning model to be able to approximate more complex functions than just a linear function.
Now, let’s come to your non linearity bit. Imagine a linear function y=2x+3, and another one y=4x+7. What happens if I pool them and take an average? I get another linear function y= 3x+5. So instead of doing those two computations separately and then averaging it out, I could have just used the single linear function y=3x+5. Obviously, this logic holds good if I have more than 2 such linear functions. This is exactly what will happen if you don’t have have non-linearities in your nodes, and also what others have written in their answers.
It simply follows from the definition of a linear function -
(i) If you take two linear functions, AND
(ii)Take a linear combination of them (which is how we combine the outputs of multiple nodes of a network)
You are BOUND to get a linear function because f(x)+g(x)=mx+b+nx+c=(m+n)x+(b+c)= say h(x).
And you could in essence replace your whole network by a simple matrix transformation which accounts for all linear combinations and up/downsamplings.
In a nutshell, you’ll only be trying to learn a linear approximation for original function f relating the input and the output. Which as we discussed above, is not always the best approximation. Adding non-linearities ensures that you can learn more complex functions by approximating every non-linear function as a LINEAR combination of a large number of non-linear functions.
Still new to the field, so if there’s something wrong here please comment below! Hope it helps""_____no_output_____#### Let's train our model then, using the features we extracted from VGG net
The model we will use has just 1 hidden layer between the VGG features and the final output layer. The simplest neural network you can get. An image goes into this network with the dimensions (1,25088), the first layer's output is 1024 dimensional. This hidden layer output undergoes a pointwise RELU activation. This output gets transformed into the output layer of 20 dimensions. It goes through a sigmoid.
The sigmoid, or the squashing function as it is often called, is a function which squashes numbers between 0 and 1. What are you reminded of when you think of numbers between 0 and 1? Right, probability.
By squashing the score of each of the 20 output labels between 0 and 1, sigmoid lets us interpret their scores as probabilities. Then, we can just pick the classes with the top 3 or 5 probability scores as the predicted genres for the movie poster! Simple! _____no_output_____
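A tiny, self-contained illustration of that last step - made-up scores for 5 classes instead of 20, squashed by a sigmoid and the top 3 picked with np.argsort:

<code>
import numpy as np
raw_scores = np.array([2.0, -1.5, 0.3, 4.1, -0.2])   # made-up outputs for 5 classes
probs = 1 / (1 + np.exp(-raw_scores))                 # sigmoid squashes scores into (0, 1)
top_3 = np.argsort(probs)[-3:]                        # indices of the 3 highest-scoring classes
print(probs)
print(top_3)
</code>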
<code>
# Y_train[115]_____no_output_____from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import optimizers
model_visual = Sequential([
Dense(1024, input_shape=(25088,)),
Activation('relu'),
Dense(256),
Activation('relu'),
Dense(19),
Activation('sigmoid'),
])
opt = optimizers.rmsprop(lr=0.0001, decay=1e-6)
#sgd = optimizers.SGD(lr=0.05, decay=1e-6, momentum=0.4, nesterov=False)
model_visual.compile(optimizer=opt,
loss='binary_crossentropy',
metrics=['accuracy'])_____no_output_____
</code>
We train the model using the fit() function. The parameters it takes are - training features and training labels, epochs, batch_size and verbose.
Simplest one - verbose. 0="don't print anything as you work", 1="Inform me as you go".
Often the data set is too large to be loaded into the RAM. So, we load data in batches. For batch_size=32 and epochs=10, the model loads rows from X in batches of 32 every time it calculates the loss and updates the model. It keeps going until it has covered all the samples 10 times.
So, the no. of times model is updated = (Total Samples/Batch Size) * (Epochs)_____no_output_____
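For example, with a hypothetical training set of 1,000 movies, batch_size=64 and epochs=10, that works out to roughly (1000 / 64) x 10, i.e. about 160 parameter updates.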
<code>
model_visual.fit(X_train, Y_train, epochs=10, batch_size=64,verbose=1)_____no_output_____model_visual.fit(X_train, Y_train, epochs=50, batch_size=64,verbose=0)_____no_output_____
</code>
For the first 10 epochs I trained the model in a verbose fashion to show you what's happening. After that, in the below cell you can see I turned off the verbosity to keep the code cleaner. _____no_output_____
<code>
Y_preds=model_visual.predict(X_test)_____no_output_____sum(sum(Y_preds))_____no_output_____
</code>
### Let's look at some of our predictions? _____no_output_____
<code>
f6=open('Genredict.pckl','rb')
Genre_ID_to_name=pickle.load(f6)
f6.close()_____no_output_____sum(Y_preds[1])_____no_output_____sum(Y_preds[2])_____no_output_____genre_list=sorted(list(Genre_ID_to_name.keys()))_____no_output_____precs=[]
recs=[]
for i in range(len(Y_preds)):
row=Y_preds[i]
gt_genres=Y_test[i]
gt_genre_names=[]
for j in range(19):
if gt_genres[j]==1:
gt_genre_names.append(Genre_ID_to_name[genre_list[j]])
top_3=np.argsort(row)[-3:]
predicted_genres=[]
for genre in top_3:
predicted_genres.append(Genre_ID_to_name[genre_list[genre]])
(precision,recall)=precision_recall(gt_genre_names,predicted_genres)
precs.append(precision)
recs.append(recall)
if i%50==0:
print "Predicted: ",','.join(predicted_genres)," Actual: ",','.join(gt_genre_names)_____no_output_____print np.mean(np.asarray(precs)),np.mean(np.asarray(recs))_____no_output_____
</code>
So, even with just the poster i.e. visual features we are able to make great predictions! Sure, text outperforms the visual features, but the important thing is that it still works. In more complicated models, we can combine the two to make even better predictions. That is precisely what I work on in my research._____no_output_____These models were trained on CPU's, and a simple 1 layer model was used to show that there is a lot of information in this data that the models can extract. With a larger dataset, and more training I was able to bring these numbers to as high as 70%, which is the similar to textual features. Some teams in my class outperformed this even more. More data is the first thing you should try if you want better results. Then, you can start playing with training on GPUs, learning rate schedules and other hyperparameters. Finally, you can consider using ResNet, a much more powerful neural network model than VGG. All of these can be tried once you have a working knowledge of machine learning._____no_output_____# Section 8 - Deep Learning to get Textual Features_____no_output_____Let's do the same thing as above with text now?_____no_output_____We will use an off the shelf representation for words - Word2Vec model. Just like VGGnet before, this is a model made available to get a meaningful representation. As the total number of words is small, we don't even need to forward propagate our sample through a network. Even that has been done for us, and the result is stored in the form of a dictionary. We can simply look up the word in the dictionary and get the Word2Vec features for the word._____no_output_____You can download the dictionary from here - https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit <br>
Download it to the directory of this tutorial i.e. in the same folder as this ipython notebook.
_____no_output_____
<code>
from gensim import models
# model2 = models.Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model2 = models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)_____no_output_____
</code>
Now, we can simply look up for a word in the above loaded model. For example, to get the Word2Vec representation of the word "King" we just do - model2['king']_____no_output_____
<code>
print(model2['king'].shape)
print(model2['dog'].shape)_____no_output_____
</code>
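As a quick sanity check that this space really is semantic, gensim's `most_similar` method returns the nearest words by cosine similarity (the exact output depends on the downloaded model, and this assumes `model2` was loaded as above):

<code>
print(model2.most_similar('king', topn=3))
print(model2.most_similar('computer', topn=3))
</code>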
This way, we can represent the words in our overviews using this word2vec model. And then, we can use that as our X representations. So, instead of a count of words, we are using a representation which is based on the semantic meaning of each word. Mathematically, each word, which was previously just a single count in our bag-of-words vector, is now a dense 300 dimensional vector!_____no_output_____For the same set of movies above, let's try and predict the genres from the deep representation of their overviews!_____no_output_____
<code>
final_movies_set = movies_with_overviews
len(final_movies_set)_____no_output_____from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
tokenizer = RegexpTokenizer(r'\w+')
# create English stop words list
en_stop = get_stop_words('en')_____no_output_____movie_mean_wordvec=np.zeros((len(final_movies_set),300))
movie_mean_wordvec.shape_____no_output_____
</code>
Text needs some pre-processing before we can train the model. The only preprocessing we do here is - we delete commonly occurring words which we know are not informative about the genre. Think of it as the clutter in some sense. These words are often removed and are referred to as "stop words". You can look them up online. These include simple words like "a", "and", "but", "how", "or" and so on. They can be easily removed using the python package NLTK.
From the above dataset, movies whose overviews contain only stop words, or whose overviews contain no words with a word2vec representation, are neglected. The others are used to build our mean word2vec representation. Simply put, for every movie overview -
* Take movie overview
* Throw out stop words
* For non stop words:
- If in word2vec - take it's word2vec representation which is 300 dimensional
- If not - throw word
* For each movie, calculate the arithmetic mean of the 300 dimensional vector representations for all words in the overview which weren't thrown out
This mean becomes the 300 dimensional representation for the movie. For all movies, these are stored in a numpy array. So the X matrix becomes (1263,300). And, Y is (1263,20) i.e. binarized 20 genres, as before_____no_output_____**Why do we take the arithmetic mean?**
If you feel that we should have kept all the words separately - Then you're thinking correct, but sadly we're limited by the way current day neural networks work. I will not mull over this for the fear of stressing too much on an otherwise irrelevant detail. But if you're interested, read this awesome paper -
https://jiajunwu.com/papers/dmil_cvpr.pdf_____no_output_____
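Before the full loop, here is a minimal sketch of the per-movie computation it performs, on a single made-up overview (it assumes the `tokenizer`, `en_stop` and `model2` objects created above):

<code>
# Sketch: mean word2vec vector for one overview (the loop below does this for every movie)
sample_overview = "a computer hacker learns the truth about reality"   # made-up text
tokens = tokenizer.tokenize(sample_overview)
kept = [t for t in tokens if t not in en_stop and t.lower() in model2.vocab]
mean_vector = np.mean([model2[t.lower()] for t in kept], axis=0)
print(len(kept), mean_vector.shape)   # expected shape: (300,)
</code>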
<code>
genres=[]
rows_to_delete=[]
for i in range(len(final_movies_set)):
mov=final_movies_set[i]
movie_genres=mov['genre_ids']
genres.append(movie_genres)
overview=mov['overview']
tokens = tokenizer.tokenize(overview)
stopped_tokens = [k for k in tokens if not k in en_stop]
count_in_vocab=0
s=0
if len(stopped_tokens)==0:
rows_to_delete.append(i)
genres.pop(-1)
# print overview
# print "sample ",i,"had no nonstops"
else:
for tok in stopped_tokens:
if tok.lower() in model2.vocab:
count_in_vocab+=1
s+=model2[tok.lower()]
if count_in_vocab!=0:
movie_mean_wordvec[i]=s/float(count_in_vocab)
else:
rows_to_delete.append(i)
genres.pop(-1)
# print overview
# print "sample ",i,"had no word2vec"_____no_output_____len(genres)_____no_output_____mask2=[]
for row in range(len(movie_mean_wordvec)):
if row in rows_to_delete:
mask2.append(False)
else:
mask2.append(True)_____no_output_____X=movie_mean_wordvec[mask2]_____no_output_____X.shape_____no_output_____Y=mlb.fit_transform(genres)_____no_output_____Y.shape_____no_output_____textual_features=(X,Y)
f9=open('textual_features.pckl','wb')
pickle.dump(textual_features,f9)
f9.close()_____no_output_____# textual_features=(X,Y)
f9=open('textual_features.pckl','rb')
textual_features=pickle.load(f9)
f9.close()_____no_output_____(X,Y)=textual_features_____no_output_____X.shape_____no_output_____Y.shape_____no_output_____mask_text=np.random.rand(len(X))<0.8_____no_output_____X_train=X[mask_text]
Y_train=Y[mask_text]
X_test=X[~mask_text]
Y_test=Y[~mask_text]_____no_output_____
</code>
Once again, we use a very similar, super simple architecture as before._____no_output_____
<code>
from keras.models import Sequential
from keras.layers import Dense, Activation
model_textual = Sequential([
Dense(300, input_shape=(300,)),
Activation('relu'),
Dense(19),
Activation('softmax'),
])
model_textual.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])_____no_output_____model_textual.fit(X_train, Y_train, epochs=10, batch_size=500)_____no_output_____model_textual.fit(X_train, Y_train, epochs=10000, batch_size=500,verbose=0)_____no_output_____score = model_textual.evaluate(X_test, Y_test, batch_size=249)_____no_output_____print("%s: %.2f%%" % (model_textual.metrics_names[1], score[1]*100))_____no_output_____Y_preds=model_textual.predict(X_test)_____no_output_____genre_list.append(10769)_____no_output_____print "Our predictions for the movies are - \n"
precs=[]
recs=[]
for i in range(len(Y_preds)):
row=Y_preds[i]
gt_genres=Y_test[i]
gt_genre_names=[]
for j in range(19):
if gt_genres[j]==1:
gt_genre_names.append(Genre_ID_to_name[genre_list[j]])
top_3=np.argsort(row)[-3:]
predicted_genres=[]
for genre in top_3:
predicted_genres.append(Genre_ID_to_name[genre_list[genre]])
(precision,recall)=precision_recall(gt_genre_names,predicted_genres)
precs.append(precision)
recs.append(recall)
if i%50==0:
print "Predicted: ",predicted_genres," Actual: ",gt_genre_names_____no_output_____print np.mean(np.asarray(precs)),np.mean(np.asarray(recs))_____no_output_____
</code>
Even without much tuning of the above model, these results are able to beat our previous results.
Note - I got accuracies as high as 78% when doing classification using plots scraped from Wikipedia. The large amount of information was very suitable for movie genre classification with a deep model. Strongly suggest you to try playing around with architectures._____no_output_____# Section 9 - Upcoming Tutorials and Acknowledgements
Congrats! This is the end of our pilot project! Needless to say, a lot of the above content may be new to you, or may be things that you know very well. If it's the former, I hope this tutorial would have helped you. If it is the latter and you think I wrote something incorrect or that my understanding can be improved, feel free to create a github issue so that I can correct it!
Writing tutorials can take a lot of time, but it is a great learning experience. I am currently working on a tutorial focussing on word embeddings, which will explore word2vec and other word embeddings in detail. While it will take some time to be up, I will post a link to it's repository on the README for this project so that interested readers can find it.
I would like to thank a few of my friends who had an indispensable role to play in my making this tutorial. Firstly, Professor Hanspeter Pfister and Verena Kaynig at Harvard, who helped guide this tutorial/project and scope it. Secondly, my friends Sahil Loomba and Matthew Tancik for their suggestions and for editing the material and the presentation of the storyline. Thirdly, Zoya Bylinskii at MIT for constantly motivating me to put my effort into this tutorial. Finally, all others who helped me feel confident enough to take up this task and to see it till the end. Thanks all of you!_____no_output_____
|
{
"repository": "FGDBTKD/DeepLearningProject",
"path": "Deep_Learning_Project.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 4625,
"size": 156937,
"hexsha": "cba092639e6d035837b24b6ad3933e74cc6e5529",
"max_line_length": 1450,
"avg_line_length": 44.3199661113,
"alphanum_fraction": 0.642741992
}
|
# Notebook from mbohling/spiking-neuron-model
Path: Hodgkin-Huxley/SpikingNeuronModel_HH.ipynb
<a href="https://colab.research.google.com/github/mbohling/spiking-neuron-model/blob/main/Hodgkin-Huxley/SpikingNeuronModel_HH.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____#The Spiking Neuron Model - Coding Challenge Problems (Part 3)
_____no_output_____# Hodgkin-Huxley Spiking Neuron Model
This interactive document is meant to be followed as the reader makes their way through chapter: *The Spiking Neuron Model*. Each model presented in the chapter will have a section consisting of a step-by-step walkthrough of a simple Python implementation. This is followed by an interface to run simulations with different parameter values to answer the Coding Challenge Problems.
For each model covered in the chapter, there is a section called **Coding Challenge Problems.** This is where you will find user-interface components such as value sliders for various parameters. Use these controls to answer the questions from the text.
**Content Creator**: Maxwell E. Bohling
**Content Reviewer**: Lawrence C. Udeigwe_____no_output_____## How It Works
Google Colab Notebooks have both *Content* cells and *Code* cells. As you progress through the notebook, you MUST make sure to run each code cell as you come to them. Otherwise, you may run into errors when executing a code cell. Each code cell has a Play button next to it which will execute the code. (Some code may be hidden by default. This is generally because the code is more complex and is not necessary to understand in order to complete the model implementations or to answer the chapter Coding Challenge Problems).
**IMPORTANT**: You have been provided a link to view a **copy** of the original notebooks. You will find that you can edit the content of any cell. If you accidently change a cell, such as a line of code and/or run into errors as you try to run subsequent blocks, simply refresh the page, OR go to the *Runtime menu* and select *Restart runtime*. It is also suggested that you go to the *Edit menu* and select *Clear all outputs*. This will always allow you to revert the notebook to the original version (though you will have to run each code block again.)
_____no_output_____ Execute the code block. **Initialize Setup**_____no_output_____
<code>
#@title Initialize Setup
#@markdown **(No need to understand this code, simply make sure you run this first).**
import sys
import functools as ft
import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import ipywidgets as widgets
import scipy as sc
# [BLOCK TAG: INIT]
try:
blockSet = [ ]
except:
print('Something went wrong! Try Refreshing the page.')
blockTags = ['INIT','VP1','NP1','SS1','SS2','SS3','CS1','CS2','CS3','VR1']
def pushBlockStack(tag):
if tag in blockSet:
return 1
indx = blockTags.index(tag)
if len(blockSet) != indx:
print('ERROR: BLOCK TAG:',tag,'executed out of sequence. Missing BLOCK TAG:', blockTags[indx-1])
return 0
else:
blockSet.append(tag)
return 1
def printError():
message = 'Something went wrong!\n\n'
message = message + 'Check for the following:\n\n'
message = message + '\t1. All previous code blocks have been run the order they appear and output a success message.\n'
message = message + '\t2. No other code has been altered.\n\n'
message = message + 'and then try running the code block again.'
message = message + ' If there is still an error when executing the code block, try the following:\n\n'
message = message + '\t1. Go to the \'Runtime\' menu and select \'Restart Runtime\', then in the \'Edit\' menu, select \'Clear all outputs\'.\n'
message = message + '\t2. Refresh the page.\n\n'
message = message + 'and be sure to run each of the previous code blocks again beginning with \'Initialize Setup\'.\n'
print(message)
return 0
def printSuccess(block):
success = 0
if len(block) == 0 or pushBlockStack(block) != 0:
message = 'Success! Move on to the next section.'
print(message)
success = 1
return success
def checkVoltageParameters(Vrest):
print('Checking Voltage Parameters... ')
try:
check_Vrest = Vrest
except:
return 0
else:
vals = [Vrest]
correct_vals = [-65]
if ft.reduce(lambda i, j : i and j, map(lambda m, k: m == k, vals, correct_vals), False):
return 0
return 1
def checkNeuronProperties(A, Ie, GL, GK, GNa, EL, EK, ENa):
print('Checking Neuron Properties... ')
try:
check_A = A
check_Ie = Ie
check_GL = GL
check_GK = GK
check_GNa = GNa
check_EL = EL
check_EK = EK
check_ENa = ENa
except:
return 0
else:
vals = [A, Ie, GL, GK, GNa, EL, EK, ENa]
correct_vals = [0.1, 1.75, 0.03, 3.6, 12, -54.4, -77, 50]
if ft.reduce(lambda i, j : i and j, map(lambda m, k: m == k, vals, correct_vals), False):
return 0
return 1
def checkSimulationSetup(Vrest, Vinitial, t0, dt, t_final, time, n_initial, m_initial, h_initial, start_current, end_current):
print('Checking Simulation Setup... ')
try:
check_Vrest = Vrest
check_Vinitial = Vinitial
check_t0 = t0
check_dt = dt
check_t_final = t_final
check_time = time
check_n_initial = n_initial
check_m_initial = m_initial
check_h_initial = h_initial
check_start_current = start_current
check_end_current = end_current
except:
return 0
else:
vals = [Vrest, Vinitial, t0, dt, t_final, time, n_initial, m_initial, h_initial, start_current, end_current]
correct_vals = [-65, -65, 0, 0.01, 20, 0.1399, 0.0498, 0.6225, 5, 10]
if ft.reduce(lambda i, j : i and j, map(lambda m, k: m == k, vals, correct_vals), False):
if len(time) != 2000 or time[0] != 0 or time[-1] != 20:
return 0
return 1
def checkValues():
chk = 3
if checkVoltageParameters(Vrest) < 1:
print('FAIL\n')
chk = chk - 1
else:
print('PASS\n')
if checkNeuronProperties(A, Ie, GL, GK, GNa, EL, EK, ENa) < 1:
print('FAIL\n')
chk = chk - 1
else:
print('PASS\n')
if checkSimulationSetup(Vrest, Vinitial, t0, dt, t_final, time, n_initial, m_initial, h_initial, start_current, end_current) < 1:
print('FAIL\n')
chk = chk - 1
else:
print('PASS\n')
return chk
try:
check_sys = sys
except:
printError()
else:
modulename = 'functools'
if modulename not in sys.modules:
printError()
else:
printSuccess('INIT')_____no_output_____
</code>
## Walkthrough
The goal of this section is to write a Python implementation of the Hodgkin-Huxley model. Recall from the chapter text that we need to account for both activation and inactivation gating variables in order to simulate the persistent and transient conductances involved in the membrane current equation.
### Membrane Current
The Hodgkin-Huxley model is expressed as a membrane current equation, given as:
> $ \displaystyle i_{m} = \overline{g}_{L}(V-E_{L}) + \overline{g}_{K}n^4(V-E_{K}) + \overline{g}_{Na}m^3h(V-E_{Na})$
with maximal conductances $\overline{g}_{L},\;$ $\overline{g}_{K}\;$ $\overline{g}_{Na}\;$ and reversal potentials $E_{L},\;$ $E_{K},\;$ $E_{Na}$.
As with the previous models, Euler's method is used to compute the time evolution of the membrane potential $V$. For this model, we use the same numerical integration method to compute the evolution of the gating variables $n$, $m$, and $h$.
### Membrane Equation
Recall that the membrane equation is expressed as follows:
> $ \displaystyle \frac{dV}{dt} = -i_m+ \frac{I_{e}}{A} $_____no_output_____### Voltage Parameters
As opposed to the integrate-and-fire model, the Hodgkin-Huxley model does not utilize a spiking mechanism. Therefore, we only need to define the *voltage parameter* that determines the *resting* membrane potential value.
* $ V_{rest} = -65\;$*mV*
_____no_output_____
<code>
# [BLOCK TAG: VP1]
try:
check_BlockSet = blockSet
except:
print('ERROR: BLOCK TAG: VP1 executed out of sequence. Missing BLOCK TAG: INIT')
else:
try:
##################################################################################
# Voltage Parameters - Units mV (1 mV = 1e-3 Volts)
Vrest = -65
##################################################################################
except:
printError()
else:
printSuccess('VP1')_____no_output_____
</code>
### Neuron Properties
The membrane equation is described by a total membrane current $i_{m}$ as a sum of:
1. A *leakage current*: $ \displaystyle\; \overline{g}_{L}(V-E_{L}) $
2. A *persistent current*: $\displaystyle\; \overline{g}_{K}n^4(V-E_{K}) $
3. A *transient current*: $\displaystyle\; \overline{g}_{Na}m^3h(V-E_{Na})$
Thus, the persistent conductance is modeled as a K$^+$ conductance and the transient conductance is modeled as a Na$^+$ conductance. For each current, we define the maximimal conductances:
* $ \displaystyle\; \overline{g}_{L} = 0.03\;$nS / mm$^2$
* $ \displaystyle\; \overline{g}_{K} = 3.6\;$nS / mm$^2$
* $ \displaystyle\; \overline{g}_{Na} = 12\;$nS / mm$^2$
and reversal potentials:
* $ \displaystyle\; E_{L} = -54.4\;$mV
* $ \displaystyle\; E_{K} = -77\;$mV
* $ \displaystyle\; E_{Na} = 50\;$mV
Lastly, as seen in the membrane equation for the model, we must define the value of the injected current, and the neuronal surface area:
* $ \displaystyle\; I_{e} = 1.75\;$nA
* $ \displaystyle\; A = 0.1\;$mm$^2$_____no_output_____
<code>
# [BLOCK TAG: NP1]
try:
##################################################################################
#Maximal Conductances - Units nS/mm^2
GL = 0.03
GK = 3.6
GNa = 12
# Reversal Potentials - Units mV
EL = -54.4
EK = -77
ENa = 50
# Input current: Ie - Units nA (1 nA = 10-9 Amperes)
Ie = 1.75
# Neuron Surface Area - Units mm^2
A = 0.1
##################################################################################
except:
printError()
else:
printSuccess('NP1')_____no_output_____
</code>
### Simulation Setup
To setup our simulation, we need initial values of each variable: $V$, $n$, $m$, and $h$ as well as a list to hold the values over time.
Set initial values as:
* $V_{initial}= V_{rest} = -65\;$*mV*
* $n_{initial} = 0.1399$
* $m_{initial} = 0.0498$
* $h_{initial} = 0.6225$
With each value defined at time $t = 0$, let $V_0 = V_{initial}, n_0 = n_{initial}, m_0 = m_{initial}, h_0 = h_{initial} $.
The initial membrane current is then:
* $\displaystyle i_{initial} = \overline{g}_{L}(V_0-E_{L}) + \overline{g}_{K}n_0^4(V_0-E_{K}) + \overline{g}_{Na}m_0^3h_0(V_0-E_{Na})$
Here we make use of the **numpy** library (to learn more about how to use this library, go to https://numpy.org/doc/stable/)._____no_output_____
<code>
# [BLOCK TAG: SS1]
try:
##################################################################################
# Initial voltage
Vinitial = Vrest
# Initial gating variable values (Probability [0, 1])
n_initial = 0.1399
m_initial = 0.0498
h_initial = 0.6225
# Initial membrane current
im_initial = GL*(Vinitial-EL)+GK*np.power(n_initial,4)*(Vinitial-EK)+GNa*np.power(m_initial,3)*h_initial*(Vinitial-ENa)
##################################################################################
except:
printError()
else:
printSuccess('SS1')_____no_output_____
</code>
We will be running a 20 ms simulation. The following lines of code setup a time span for the simulation. This is simply a matter of defining the start time $t_{0} = 0$ and the total length (in ms) of the simulation: $t_{final} = 20$.
Throughout the simulation, we calculate the membrane potential $V$ at each *time-step*. The time-step is the change in time for each iteration of the simulation, for example if $t_{0} = 0$, the next computation of $V$ is performed at $t_{0} + dt$.
Thus, by setting $dt = 0.01$ (in ms), the simulation will compute $V$, $n$, $m$, and $h$ at times $t = dt, 2dt, \ldots, t_{final}$, i.e. every 0.01 ms up to 20 ms. _____no_output_____
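At each of these time-steps, Euler's method advances the membrane potential using the membrane equation given earlier - a sketch of the update the simulation loop will perform (the gating variables are advanced in the same way):

> $ \displaystyle V(t + dt) = V(t) + dt\left(-i_{m} + \frac{I_{e}}{A}\right) $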
<code>
# [BLOCK TAG: SS2]
try:
##################################################################################
# Simulation Time Span (0 to 20ms, dt = 0.01ms)
t0 = 0
dt = 0.01
t_final = 20
# What does the linspace() function do?
time = np.linspace(t0, t_final, 2000)
##################################################################################
except:
printError()
else:
printSuccess('SS2')_____no_output_____
</code>
Next, we define the time $t$ at which the injected current $I_{e}$ is *switched on* and applied to the neuron, and the time $t$ at which the injected current is *switched off*.
For the Hodgkin-Huxley model, we run a shorter simulation and we apply the current from $t = 5\;$ms to $t = 10\;$ms._____no_output_____
<code>
# [BLOCK TAG: SS3]
try:
##################################################################################
# Time at which the current is applied - Units ms
start_current = 5
# Time at which the current is switched off - Units ms
end_current = 10
##################################################################################
except:
printError()
else:
printSuccess('SS3')_____no_output_____
</code>
### Computing and Storing $\frac{dV}{dt}$, $\frac{dn}{dt}$, $\frac{dm}{dt}$, $\frac{dh}{dt}$
We are about ready to finish the code implementation for simulating a Hodgkin-Huxley model neuron.
We need some way to store the values of the membrane potential $V, n, m, h$ at each time step. To do this, we simply create empty lists $V[t], n[t], m[t], h[t]$ with a length equal to the number of time-steps of our simulation._____no_output_____
<code>
# [BLOCK TAG: CS1]
try:
##################################################################################
# Create a list V(t) to store the value of V at each time-step dt
V = [0] * len(time)
# Set the initial value at time t = t0 to the initial value Vinitial
V[0] = Vinitial
# Create lists to store the value of each gating variable at each time-step dt
n = [0] * len(time)
m= [0] * len(time)
h = [0] * len(time)
# Set the initial value at time t = t0 to the initial values
n[0] = n_initial
m[0] = m_initial
h[0] = h_initial
# Create list to store value of membrane current at each time-step dt
im = [0] * len(time)
# Set the initial value at time t = t0 to the initial value im_initial
im[0] = im_initial
##################################################################################
except:
printError()
else:
printSuccess('CS1')_____no_output_____
</code>
### Opening and Closing Rate Functions for Gating Variables
The gating variables $n$, $m$, and $h$ represent **probabilities** that the gating mechanisms in the persistent and transient ion-conducting channels are open or *activated*.
For any arbitrary gating variable $z$, the open probability of a channel at any time $t$ is computed using an *opening* rate function $\alpha_{z}(V)$ and a *closing* rate function $\beta_{z}(V)$, both of which are functions of the membrane potential $V$.
Each gating variable is numerically integrated using Euler's method throughout the simulation, where for any arbitrary gating variable $z$ the dynamics are given as follows:
> $ \displaystyle \tau_{z}(V)\frac{dz}{dt} = z_{\infty}(V) - z $
where
> $ \displaystyle \tau_{z}(V) = \frac{1}{\alpha_{z}(V) + \beta_{z}(V)} $
and
> $ \displaystyle z_{\infty}(V) = \frac{\alpha_{z}(V) }{\alpha_{z}(V) + \beta_{z}(V)} $
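Discretized with a fixed time-step $dt$ (this is exactly the Euler update implemented in the helper functions below), each gating variable is advanced as
> $ \displaystyle z[t+1] = z[t] + \frac{dt}{\tau_{z}(V[t])}\left(z_{\infty}(V[t]) - z[t]\right) $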
_____no_output_____#### Fitted Rate Functions
Hodgkin and Huxley had fit the opening and closing rate functions using experimental data. These are given as follows:
---
For activation variable $n$
> $ \displaystyle \alpha_{n}(V) = \frac{0.01(V+55)}{ 1 - \exp(-0.1(V+55))} $
> $ \displaystyle \beta_{n}(V) = 0.125\exp(-0.0125(V+65)) $
---
For activation variable $m$
> $ \displaystyle \alpha_{m}(V) = \frac{0.1(V+40)}{1 - \exp(-0.1(V+40))}$
> $ \displaystyle \beta_{m}(V) = 4\exp(-0.0556(V+65)) $
---
For inactivation variable $h$
> $ \displaystyle \alpha_{h}(V) = 0.07\exp(-0.05(V+65))$
> $ \displaystyle \beta_{h}(V) = \frac{1}{1 + \exp(-0.1(V+35))} $
We define separate functions for each gating variable. These take the membrane potential, $V$, as input and output $dz$, where $z = n, m, h$.
Using the functional forms and fitted rate functions above, these functions compute the changes $dn$, $dm$, and $dh$ at each time-step $dt$, which depend on the membrane potential $V$ at time $t$.
_____no_output_____ Execute the code block. **Initialize Helper Functions**_____no_output_____
<code>
#@title Initialize Helper Functions
#@markdown **(Double-Click the cell to show the code)**
# [BLOCK TAG: CS2]
##################################################################################
# Function: compute_dn
def compute_dn(v, n):
alpha_n = (0.01*(v + 55))/(1 - np.exp(-0.1*(v+55)))
beta_n = 0.125*np.exp(-0.0125*(v+65))
n_inf = alpha_n/(alpha_n + beta_n)
tau_n = 1/(alpha_n + beta_n)
dn = (dt/tau_n)*(n_inf - n)
return dn
# Function: compute_dm
def compute_dm(v, m):
alpha_m = (0.1*(v + 40))/(1 - np.exp(-0.1*(v+40)))
beta_m = 4*np.exp(-0.0556*(v+65))
m_inf = alpha_m/(alpha_m + beta_m)
tau_m = 1/(alpha_m + beta_m)
dm = (dt/tau_m)*(m_inf - m)
return dm
# Function: compute_dh
def compute_dh(v, h):
alpha_h = 0.07*np.exp(-0.05*(v+65))
beta_h = 1/(1 + np.exp(-0.1*(v+35)))
h_inf = alpha_h/(alpha_h + beta_h)
tau_h = 1/(alpha_h + beta_h)
dh = (dt/tau_h)*(h_inf - h)
return dh
##################################################################################
x = printSuccess('CS2')_____no_output_____
</code>
Finally, we run our simulation according to the updated *pseudocode*
---
*for each time-step from $t = t_{0}$ to $t = t_{final}$*
> *If the current time $t \geq start_{current}\ $ and $\ t \leq end_{current}$*
>> $I_{e} = 1.75\;$nA
> *otherwise*
>> $I_{e} = 0\;$nA
> *First compute the open probabilities for each gating variable*
> $ \displaystyle dn = $ **compute_dn**$(V[t], n[t])$
> *Update* $ n[t+1] = n[t] + dn $
> $ \displaystyle dm = $ **compute_dm**$(V[t], m[t])$
> *Update* $ m[t+1] = m[t] + dm $
> $ \displaystyle dh = $ **compute_dh**$(V[t], h[t])$
> *Update* $ h[t+1] = h[t] + dh $
> $ \displaystyle i_{m}[t+1] = \overline{g}_{L}(V[t]-E_{L}) + \overline{g}_{K}n[t+1]^4(V[t]-E_{K}) + \overline{g}_{Na}m[t+1]^3h[t+1](V[t]-E_{Na})$
> *Use Euler's Method of Numerical Integration*
> $ \displaystyle dV= dt\left(-i_m[t+1]+ \frac{I_{e}}{A}\right) $
> *Update* $V[t+1] = V[t] + dV$
*end*
---
This translates to the following Python code._____no_output_____
<code>
# [BLOCK TAG: CS3]
try:
chk = checkValues()
except:
printError()
else:
try:
##################################################################################
# For each timestep we compute V and store the value
for t in range(len(time)-1):
# If time t >= 5 ms and t <= 10 ms, switch Injected Current ON
if time[t] >= start_current and time[t] <= end_current:
ie = Ie
# Otherwise, switch Injected Current OFF
else:
ie = 0
# For each timestep we compute n, m and h and store the value
dn = compute_dn(V[t], n[t])
n[t+1] = n[t] + dn
dm = compute_dm(V[t], m[t])
m[t+1] = m[t] + dm
dh = compute_dh(V[t], h[t])
h[t+1] = h[t] + dh
# Use these values to compute the updated membrane current
im[t+1] = GL*(V[t]-EL)+GK*np.power(n[t+1],4)*(V[t]-EK)+GNa*np.power(m[t+1],3)*h[t+1]*(V[t]-ENa)
# Using Euler's Method for Numerical Integration (See Chapter Text)
# we compute the change in voltage dV as follows (using the model equation)
dV = dt*(-1*im[t+1] + ie/A)
# Store this new value into our list
V[t+1] = V[t] + dV
##################################################################################
except:
printError()
else:
if chk == 3:
printSuccess('CS3')
else:
printError()_____no_output_____
</code>
### Visualizing Results
Now that we have values of $V$, $i_m$, $n$, $m$, and $h$ for each time-step of the simulation, we can visualize the results by using Python to plot the data. This makes use of another widely used library, **plotly** (to learn more about plotting data with this library, go to https://plotly.com/python/reference/index/)._____no_output_____
<code>
# [BLOCK TAG: VR1]
try:
    if 'CS3' not in blockSet:
print('ERROR: BLOCK TAG: VR1 executed out of sequence. Missing BLOCK TAG: CS3')
else:
try:
##################################################################################
# Data
x = list(time[0:-2])
im = [x / 100 for x in im]
# Plot data
fig = make_subplots(
rows=3, cols=1, shared_xaxes = True, vertical_spacing=0.1,
subplot_titles=('V over Time', 'i_m over Time', 'n, m, h over Time')
)
# Add traces
fig.add_trace(go.Scatter(name='V', x=x, y=V), row=1, col=1)
fig.add_trace(go.Scatter(name='i_m', x=x, y=im), row=2, col=1)
fig.add_trace(go.Scatter(name='n', x=x, y=n), row=3, col=1)
fig.add_trace(go.Scatter(name='m', x=x, y=m), row=3, col=1)
fig.add_trace(go.Scatter(name='h', x=x, y=h), row=3, col=1)
# Update xaxis properties
fig.update_xaxes(title_text="Time t (ms)", row=3, col=1)
# Update yaxis properties
fig.update_yaxes(title_text="Membrane Potential V (mV)", row=1, col=1)
fig.update_yaxes(title_text="Current i_m (microA / mm^2)", row=2, col=1)
fig.update_yaxes(title_text="n, m, h (Probability)",range=[0,1], row=3, col=1)
# Update title and size
fig.update_layout(height=800, width=700,
title_text='Hodgkin-Huxley Model Neuron',
showlegend = True)
# Update theme
fig.layout.template = 'plotly_dark'
# Show figure
fig.show()
##################################################################################
printSuccess('VR1')
except:
printError()
except:
printError()_____no_output_____
</code>
## Hodgkin-Huxley Spiking Neuron Model - Full Code_____no_output_____
<code>
import numpy as np
from plotly.subplots import make_subplots
import plotly.graph_objects as go
# Voltage Parameters - Units mV (1 mV = 1e-3 Volts)
Vrest = -65
#Maximal Conductances - Units nS/mm^2
GL = 0.03
GK = 3.6
GNa = 12
# Reversal Potentials - Units mV
EL = -54.4
EK = -77
ENa = 50
# Input current: Ie - Units nA (1 nA = 1e-9 Amperes)
Ie = 1.75
# Neuron Surface Area - Units mm^2
A = 0.1
# Simulation Time Span (0 to 20ms, dt = 0.01ms)
t0 = 0
dt = 0.01
t_final = 20
time = np.linspace(t0, t_final, 2000)
# Time at which the current is applied - Units ms
start_current = 5
# Time at which the current is switched off - Units ms
end_current = 10
# Initial voltage
Vinitial = Vrest
# Create a list V(t) to store the value of V at each time-step dt
V = [0] * len(time)
# Set the initial value at time t = t0 to the initial value Vinitial
V[0] = Vinitial
# Initial gating variable values (Probability [0, 1])
n_initial = 0.1399
m_initial = 0.0498
h_initial = 0.6225
# Create lists to store the value of each gating variable at each time-step dt
n = [0] * len(time)
m = [0] * len(time)
h = [0] * len(time)
# Set the initial value at time t = t0 to the initial values
n[0] = n_initial
m[0] = m_initial
h[0] = h_initial
# Initial membrane current
im_initial = GL*(V[0]-EL)+GK*np.power(n[0],4)*(V[0]-EK)+GNa*np.power(m[0],3)*h[0]*(V[0]-ENa)
# Create list to store value of membrane current at each time-step dt
im = [0] * len(time)
# Set the initial value at time t = t0 to the initial value im_initial
im[0] = im_initial
# Function: compute_dn
def compute_dn(v, n):
alpha_n = (0.01*(v + 55))/(1 - np.exp(-0.1*(v+55)))
beta_n = 0.125*np.exp(-0.0125*(v+65))
n_inf = alpha_n/(alpha_n + beta_n)
tau_n = 1/(alpha_n + beta_n)
dn = (dt/tau_n)*(n_inf - n)
return dn
# Function: compute_dm
def compute_dm(v, m):
alpha_m = (0.1*(v + 40))/(1 - np.exp(-0.1*(v+40)))
beta_m = 4*np.exp(-0.0556*(v+65))
m_inf = alpha_m/(alpha_m + beta_m)
tau_m = 1/(alpha_m + beta_m)
dm = (dt/tau_m)*(m_inf - m)
return dm
# Function: compute_dh
def compute_dh(v, h):
alpha_h = 0.07*np.exp(-0.05*(v+65))
beta_h = 1/(1 + np.exp(-0.1*(v+35)))
h_inf = alpha_h/(alpha_h + beta_h)
tau_h = 1/(alpha_h + beta_h)
dh = (dt/tau_h)*(h_inf - h)
return dh
# For each timestep we compute V and store the value
for t in range(len(time)-1):
# For each timestep we compute n, m and h and store the value
dn = compute_dn(V[t], n[t])
n[t+1] = n[t] + dn
dm = compute_dm(V[t], m[t])
m[t+1] = m[t] + dm
dh = compute_dh(V[t], h[t])
h[t+1] = h[t] + dh
    # If time t >= 5 ms and t <= 10 ms, switch Injected Current ON
if time[t] >= start_current and time[t] <= end_current:
ie = Ie
# Otherwise, switch Injected Current OFF
else:
ie = 0
# Use these values to compute the updated membrane current
im[t+1] = GL*(V[t]-EL)+GK*np.power(n[t+1],4)*(V[t]-EK)+GNa*np.power(m[t+1],3)*h[t+1]*(V[t]-ENa)
# Using Euler's Method for Numerical Integration (See Chapter Text)
# we compute the change in voltage dV as follows (using the model equation)
dV = dt*(-im[t+1] + ie/A)
# Store this new value into our list
V[t+1] = V[t] + dV
# Data
x = list(time[0:-2])
im = [x / 100 for x in im]
# Plot data
fig = make_subplots(
rows=3, cols=1, shared_xaxes = True, vertical_spacing=0.1,
subplot_titles=('V over Time', 'i_m over Time', 'n, m, h over Time')
)
# Add traces
fig.add_trace(go.Scatter(name='V', x=x, y=V), row=1, col=1)
fig.add_trace(go.Scatter(name='i_m', x=x, y=im), row=2, col=1)
fig.add_trace(go.Scatter(name='n', x=x, y=n), row=3, col=1)
fig.add_trace(go.Scatter(name='m', x=x, y=m), row=3, col=1)
fig.add_trace(go.Scatter(name='h', x=x, y=h), row=3, col=1)
# Update xaxis properties
fig.update_xaxes(title_text="Time t (ms)", row=3, col=1)
# Update yaxis properties
fig.update_yaxes(title_text="Membrane Potential V (mV)", row=1, col=1)
fig.update_yaxes(title_text="Current i_m (microA / mm^2)", row=2, col=1)
fig.update_yaxes(title_text="n, m, h (Probability)",range=[0,1], row=3, col=1)
# Update title and size
fig.update_layout(height=800, width=700,
title_text='Hodgkin-Huxley Model Neuron',
showlegend = True)
# Update theme
fig.layout.template = 'plotly_dark'
# Show figure
fig.show()_____no_output_____
</code>
## Coding Challenge Problems_____no_output_____
<code>
#@title Run Simulation
#@markdown Execute the code block and use the sliders to set values in order to answer the Coding Challenge Problems in the chapter text.
#@markdown (Tip: Select a slider and use the left and right arrow keys to slide to the desired value.)
import numpy as np
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import ipywidgets as widgets
# Voltage Parameters - Units mV (1 mV = 1e-3 Volts)
Vrest = -65
#Maximal Conductances - Units nS/mm^2
GL = 0.03
GK = 3.6
GNa = 12
# Reversal Potentials - Units mV
EL = -54.4
EK = -77
ENa = 50
# Input current: Ie - Units nA (1 nA = 1e-9 Amperes)
Ie = 1.75
# Neuron Surface Area - Units mm^2
A = 0.1
# Time at which the current is applied - Units ms
start_current = 5
# Time at which the current is switched off - Units ms
end_current = 10
# Initial voltage
Vinitial = Vrest
# Simulation Time Span (0 to 20ms, dt = 0.01ms)
t0 = 0
dt = 0.01
t_final = 20
time = np.linspace(t0, t_final, int(t_final/dt))
# Create a list V(t) to store the value of V at each time-step dt
V = [0] * len(time)
# Set the initial value at time t = t0 to the initial value Vinitial
V[0] = Vinitial
# Initial gating variable values (Probability [0, 1])
n_initial = 0.1399
m_initial = 0.0498
h_initial = 0.6225
# Create lists to store the value of each gating variable at each time-step dt
n = [0] * len(time)
m = [0] * len(time)
h = [0] * len(time)
# Set the initial value at time t = t0 to the initial values
n[0] = n_initial
m[0] = m_initial
h[0] = h_initial
# Initial membrane current
im_initial = GL*(V[0]-EL)+GK*np.power(n[0],4)*(V[0]-EK)+GNa*np.power(m[0],3)*h[0]*(V[0]-ENa)
# Create list to store value of membrane current at each time-step dt
im = [0] * len(time)
# Set the initial value at time t = t0 to the initial value im_initial
im[0] = im_initial
# Function: compute_dn
def compute_dn(v, n):
alpha_n = (0.01*(v + 55))/(1 - np.exp(-0.1*(v+55)))
beta_n = 0.125*np.exp(-0.0125*(v+65))
n_inf = alpha_n/(alpha_n + beta_n)
tau_n = 1/(alpha_n + beta_n)
dn = (dt/tau_n)*(n_inf - n)
return dn
# Function: compute_dm
def compute_dm(v, m):
alpha_m = (0.1*(v + 40))/(1 - np.exp(-0.1*(v+40)))
beta_m = 4*np.exp(-0.0556*(v+65))
m_inf = alpha_m/(alpha_m + beta_m)
tau_m = 1/(alpha_m + beta_m)
dm = (dt/tau_m)*(m_inf - m)
return dm
# Function: compute_dh
def compute_dh(v, h):
alpha_h = 0.07*np.exp(-0.05*(v+65))
beta_h = 1/(1 + np.exp(-0.1*(v+35)))
h_inf = alpha_h/(alpha_h + beta_h)
tau_h = 1/(alpha_h + beta_h)
dh = (dt/tau_h)*(h_inf - h)
return dh
def simulate_iaf_neuron(Ie, c):
# Time at which the current is applied - Units ms
start_current = c[0]
# Time at which the current is switched off - Units ms
end_current = c[1]
# For each timestep we compute V and store the value
for t in range(len(time)-1):
# For each timestep we compute n, m and h and store the value
dn = compute_dn(V[t], n[t])
n[t+1] = n[t] + dn
dm = compute_dm(V[t], m[t])
m[t+1] = m[t] + dm
dh = compute_dh(V[t], h[t])
h[t+1] = h[t] + dh
        # If start_current <= t <= end_current, switch Injected Current ON
if time[t] >= start_current and time[t] <= end_current:
ie = Ie
# Otherwise, switch Injected Current OFF
else:
ie = 0
# Use these values to compute the updated membrane current
im[t+1] = GL*(V[t]-EL)+GK*np.power(n[t+1],4)*(V[t]-EK)+GNa*np.power(m[t+1],3)*h[t+1]*(V[t]-ENa)
# Using Euler's Method for Numerical Integration (See Chapter Text)
# we compute the change in voltage dV as follows (using the model equation)
dV = dt*(-im[t+1] + ie/A)
# Store this new value into our list
V[t+1] = V[t] + dV
return [V, im, n, m, h, time]
def plot_iaf_neuron(V, im, n, m, h, time):
# Data
x = list(time[0:-2])
im = [x / 100 for x in im]
# Plot data
fig = make_subplots(
rows=3, cols=1, shared_xaxes = True, vertical_spacing=0.1,
subplot_titles=('V over Time', 'i_m over Time', 'n, m, h over Time')
)
# Add traces
fig.add_trace(go.Scatter(name='V', x=x, y=V), row=1, col=1)
fig.add_trace(go.Scatter(name='i_m', x=x, y=im), row=2, col=1)
fig.add_trace(go.Scatter(name='n', x=x, y=n), row=3, col=1)
fig.add_trace(go.Scatter(name='m', x=x, y=m), row=3, col=1)
fig.add_trace(go.Scatter(name='h', x=x, y=h), row=3, col=1)
# Update xaxis properties
fig.update_xaxes(title_text="Time t (ms)", row=3, col=1)
# Update yaxis properties
fig.update_yaxes(title_text="Membrane Potential V (mV)", row=1, col=1)
fig.update_yaxes(title_text="Current i_m (microA / mm^2)", row=2, col=1)
fig.update_yaxes(title_text="n, m, h (Probability)",range=[0,1], row=3, col=1)
# Update title and size
fig.update_layout(height=800, width=700,
title_text='Hodgkin-Huxley Model Neuron',
showlegend = True)
# Update theme
fig.layout.template = 'plotly_dark'
# Show figure
fig.show()
style = {'description_width':'auto'}
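# widgets.interact wires the sliders below to compute_iaf_neuron: the simulation and plot
# are re-run each time a slider value changes, with the current values passed as Ie and c.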
@widgets.interact(
Ie = widgets.FloatSlider(
value=1.75,
min=0.00,
max=5.00,
step=0.05,
description='Ie',
style = style,
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='1.2f'
),
c = widgets.FloatRangeSlider(
value=[5.00, 10.00],
min=1.00,
max=15.00,
step=0.10,
description='Ie: On/Off',
style = style,
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='1.2f'
)
)
def compute_iaf_neuron(Ie=1.75, c=[5.00, 10.00]):
[V, im, n, m, h, time] = simulate_iaf_neuron(Ie, c)
plot_iaf_neuron(V, im, n, m, h, time)_____no_output_____
</code>
|
{
"repository": "mbohling/spiking-neuron-model",
"path": "Hodgkin-Huxley/SpikingNeuronModel_HH.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 50878,
"hexsha": "cba0c7f8f7f95c5f19491af03298ce7ff746352a",
"max_line_length": 570,
"avg_line_length": 40.4757358791,
"alphanum_fraction": 0.459668226
}
|
# Notebook from balopat/Cirq
Path: docs/tutorials/google/floquet.ipynb
<code>
##### Copyright 2021 The Cirq Developers_____no_output_____#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License._____no_output_____
</code>
# Floquet calibration_____no_output_____<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/google/floquet"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>_____no_output_____This notebook demonstrates the Floquet calibration API, a tool for characterizing $\sqrt{\text{iSWAP}}$ gates and inserting single-qubit $Z$ phases to compensate for errors. This characterization is done by the Quantum Engine and the insertion of $Z$ phases for compensation/calibration is completely client-side with the help of Cirq utilities. At the highest level, the tool inputs a quantum circuit of interest (as well as a backend to run on) and outputs a calibrated circuit for this backend which can then be executed to produce better results._____no_output_____## Details on the calibration tool_____no_output_____In more detail, assuming we have a number-conserving two-qubit unitary gate, Floquet calibration (FC) returns fast, accurate estimates for the relevant angles to be calibrated. The `cirq.PhasedFSimGate` has five angles $\theta$, $\zeta$, $\chi$, $\gamma$, $\phi$ with unitary matrix
$$
\left[ \begin{matrix}
1 & 0 & 0 & 0 \\
0 & \exp(-i \gamma - i \zeta) \cos( \theta ) & -i \exp(-i \gamma + i \chi) \sin( \theta ) & 0 \\
0 & -i \exp(-i \gamma - i \chi) \sin( \theta ) & \exp(-i \gamma + i \zeta) \cos( \theta ) & 0 \\
0 & 0 & 0 & \exp(-2 i \gamma -i \phi )
\end{matrix} \right]
$$
With Floquet calibration, every angle but $\chi$ can be calibrated. In experiments, we have found these angles change when gates are run in parallel. Because of this, we perform FC on entire moments of two-qubit gates and return different characterized angles for each.
After characterizing a set of angles, one needs to adjust the circuit to compensate for the offset. The simplest adjustment is for $\zeta$ and $\gamma$ and works by adding $R_z$ gates before and after the two-qubit gates in question. For many circuits, even this simplest compensation can lead to a significant improvement in results. We provide methods for doing this in this notebook and analyze results for an example circuit.
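To get a concrete feel for these angles (a standalone illustration, separate from the calibration workflow used later in this notebook), the sketch below compares the unitary of an ideal $\sqrt{\text{iSWAP}}$ with that of a `cirq.PhasedFSimGate` carrying small $\zeta$, $\gamma$, and $\phi$ offsets; the offset values are made-up, illustrative assumptions rather than characterized data._____no_output_____
<code>
# Illustration only: compare the ideal sqrt(iSWAP) unitary with a PhasedFSimGate
# whose zeta, gamma, and phi are slightly offset (hypothetical values, not characterized).
import numpy as np
import cirq

ideal = cirq.FSimGate(theta=np.pi / 4, phi=0.0)
perturbed = cirq.PhasedFSimGate(theta=np.pi / 4, zeta=0.10, chi=0.0, gamma=0.05, phi=0.02)

print("Ideal sqrt(iSWAP):")
print(np.round(cirq.unitary(ideal), 3))
print("\nPhasedFSimGate with zeta/gamma/phi offsets:")
print(np.round(cirq.unitary(perturbed), 3))_____no_output_____
</code>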
We do not attempt to correct the misaligned iSWAP rotation or the additional two-qubit phase in this notebook. This is a non-trivial task and we do not currently have simple tools to achieve this. It is up to the user to correct for these as best as possible._____no_output_____Note: The Floquet calibration API and this documentation are ongoing work. The amount by which errors are reduced may vary from run to run and from circuit to circuit._____no_output_____## Setup_____no_output_____
<code>
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install cirq --quiet
print("installed cirq.")_____no_output_____from typing import Iterable, List, Optional, Sequence
import matplotlib.pyplot as plt
import numpy as np
import cirq
import cirq_google as cg # Contains the Floquet calibration tools._____no_output_____
</code>
Note: In order to run on Google's Quantum Computing Service, an environment variable `GOOGLE_CLOUD_PROJECT` must be present and set to a valid Google Cloud Platform project identifier. If this is not satisfied, we default to an engine simulator._____no_output_____Running the next cell will prompt you to authenticate Google Cloud SDK to use your project. See the [Getting Started Guide](../tutorials/google/start.ipynb) for more information._____no_output_____Note: Leave `project_id` blank to use a noisy simulator._____no_output_____
<code>
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
if project_id == '':
import os
if 'GOOGLE_CLOUD_PROJECT' not in os.environ:
print("No processor_id provided and environment variable "
"GOOGLE_CLOUD_PROJECT not set, defaulting to noisy simulator.")
processor_id = None
engine = cg.PhasedFSimEngineSimulator.create_with_random_gaussian_sqrt_iswap(
mean=cg.SQRT_ISWAP_PARAMETERS,
sigma=cg.PhasedFSimCharacterization(
theta=0.01, zeta=0.10, chi=0.01, gamma=0.10, phi=0.02
),
)
sampler = engine
device = cg.Bristlecone
line_length = 20
else:
import os
os.environ['GOOGLE_CLOUD_PROJECT'] = project_id
def authenticate_user():
"""Runs the user through the Colab OAuth process.
Checks for Google Application Default Credentials and runs interactive login
if the notebook is executed in Colab. In case the notebook is executed in Jupyter notebook
or other IPython runtimes, no interactive login is provided, it is assumed that the
`GOOGLE_APPLICATION_CREDENTIALS` env var is set or `gcloud auth application-default login`
was executed already.
For more information on using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
in_colab = False
try:
from IPython import get_ipython
in_colab = 'google.colab' in str(get_ipython())
except:
# Notebook is not executed within IPython. Assuming external authentication.
return
if in_colab:
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
print("Authentication complete.")
else:
print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.")
authenticate_user()
print("Successful authentication to Google Cloud.")
processor_id = "" #@param {type:"string"}
engine = cg.get_engine()
device = cg.get_engine_device(processor_id)
sampler = cg.get_engine_sampler(processor_id, gate_set_name="sqrt_iswap")
line_length = 35_____no_output_____
</code>
## Minimal example for a single $\sqrt{\text{iSWAP}}$ gate_____no_output_____To see how the API is used, we first show the simplest usage of Floquet calibration for a minimal example of one $\sqrt{\text{iSWAP}}$ gate. After this section, we show detailed usage with a larger circuit and analyze the results._____no_output_____The gates that are calibrated by Floquet calibration are $\sqrt{\text{iSWAP}}$ gates:_____no_output_____
<code>
sqrt_iswap = cirq.FSimGate(np.pi / 4, 0.0)
print(cirq.unitary(sqrt_iswap).round(3))_____no_output_____
</code>
First we get two connected qubits on the selected device and define a circuit._____no_output_____
<code>
"""Define a simple circuit to use Floquet calibration on."""
qubits = cg.line_on_device(device, length=2)
circuit = cirq.Circuit(sqrt_iswap.on(*qubits))
# Display it.
print("Circuit to calibrate:\n")
print(circuit)_____no_output_____
</code>
The simplest way to use Floquet calibration is as follows._____no_output_____
<code>
"""Simplest usage of Floquet calibration."""
calibrated_circuit, *_ = cg.run_zeta_chi_gamma_compensation_for_moments(
circuit,
engine,
processor_id=processor_id,
gate_set=cg.SQRT_ISWAP_GATESET
)_____no_output_____
</code>
Note: Additional returned arguments, omitted here for simplicity, are described below._____no_output_____When we print out the returned `calibrated_circuit.circuit` below, we see the added $Z$ rotations to compensate for errors._____no_output_____
<code>
print("Calibrated circuit:\n")
calibrated_circuit.circuit_____no_output_____
</code>
This `calibrated_circuit` can now be executed on the processor to produce better results._____no_output_____## More detailed example with a larger circuit_____no_output_____We now use Floquet calibration on a larger circuit which models the evolution of a fermionic particle on a linear spin chain. The physics of this problem for a closed chain (here we use an open chain) has been studied in [Accurately computing electronic properties of materials using eigenenergies](https://arxiv.org/abs/2012.00921), but for the purposes of this notebook we can treat this just as an example to demonstrate Floquet calibration on._____no_output_____First we use the function `cirq_google.line_on_device` to return a line of qubits of a specified length._____no_output_____
<code>
line = cg.line_on_device(device, line_length)
print(line)_____no_output_____
</code>
This line is now broken up into a number of segments of a specified length (number of qubits)._____no_output_____
<code>
segment_length = 5
segments = [line[i: i + segment_length]
for i in range(0, line_length - segment_length + 1, segment_length)]_____no_output_____
</code>
For example, the first segment consists of the following qubits._____no_output_____
<code>
print(*segments[0])_____no_output_____
</code>
We now implement a number of Trotter steps on each segment in parallel. The middle qubit on each segment is put into the $|1\rangle$ state, then each Trotter step consists of staggered $\sqrt{\text{iSWAP}}$ gates. All qubits are measured in the $Z$ basis at the end of the circuit.
For convenience, this code is wrapped in a function._____no_output_____
<code>
def create_example_circuit(
segments: Sequence[Sequence[cirq.Qid]],
num_trotter_steps: int,
) -> cirq.Circuit:
"""Returns a linear chain circuit to demonstrate Floquet calibration on."""
circuit = cirq.Circuit()
# Initial state preparation.
for segment in segments:
circuit += [cirq.X.on(segment[len(segment) // 2])]
# Trotter steps.
for step in range(num_trotter_steps):
offset = step % 2
moment = cirq.Moment()
for segment in segments:
moment += cirq.Moment(
[sqrt_iswap.on(a, b) for a, b in zip(segment[offset::2],
segment[offset + 1::2])])
circuit += moment
# Measurement.
circuit += cirq.measure(*sum(segments, ()), key='z')
return circuit_____no_output_____
</code>
As an example, we show this circuit on the first segment of the line from above._____no_output_____
<code>
"""Example of the linear chain circuit on one segment of the line."""
num_trotter_steps = 20
circuit_on_segment = create_example_circuit(
segments=[segments[0]],
num_trotter_steps=num_trotter_steps,
)
print(circuit_on_segment.to_text_diagram(qubit_order=segments[0]))_____no_output_____
</code>
The circuit we will use for Floquet calibration is this same pattern repeated on all segments of the line._____no_output_____
<code>
"""Circuit used to demonstrate Floquet calibration."""
circuit = create_example_circuit(
segments=segments,
num_trotter_steps=num_trotter_steps
)_____no_output_____
</code>
### Execution on a simulator_____no_output_____To establish a "ground truth," we first simulate a segment on a noiseless simulator._____no_output_____
<code>
"""Simulate one segment on a simulator."""
nreps = 20_000
sim_result = cirq.Simulator().run(circuit_on_segment, repetitions=nreps)_____no_output_____
</code>
### Execution on the processor without Floquet calibration_____no_output_____We now execute the full circuit on a processor without using Floquet calibration._____no_output_____
<code>
"""Execute the full circuit on a processor without Floquet calibration."""
raw_results = sampler.run(circuit, repetitions=nreps)_____no_output_____
</code>
### Comparing raw results to simulator results_____no_output_____For comparison we will plot densities (average measurement results) on each segment. Such densities are in the interval $[0, 1]$ and more accurate results are closer to the simulator results.
To visualize results, we define a few helper functions._____no_output_____#### Helper functions_____no_output_____Note: The functions in this section are just utilities for visualizing results and not essential for Floquet calibration. As such this section can be safely skipped or skimmed._____no_output_____The next cell defines two functions for returning the density (average measurement results) on a segment or on all segments. We can optionally post-select for measurements with a specific filling (particle number) - i.e., discard measurement results which don't obey this expected particle number._____no_output_____
<code>
def z_density_from_measurements(
measurements: np.ndarray,
post_select_filling: Optional[int] = 1
) -> np.ndarray:
"""Returns density for one segment on the line."""
counts = np.sum(measurements, axis=1, dtype=int)
if post_select_filling is not None:
errors = np.abs(counts - post_select_filling)
counts = measurements[(errors == 0).nonzero()]
return np.average(counts, axis=0)
def z_densities_from_result(
result: cirq.Result,
segments: Iterable[Sequence[cirq.Qid]],
post_select_filling: Optional[int] = 1
) -> List[np.ndarray]:
"""Returns densities for each segment on the line."""
measurements = result.measurements['z']
z_densities = []
offset = 0
for segment in segments:
z_densities.append(z_density_from_measurements(
measurements[:, offset: offset + len(segment)],
post_select_filling)
)
offset += len(segment)
return z_densities_____no_output_____
</code>
Now we define functions to plot the densities for the simulator, processor without Floquet calibration, and processor with Floquet calibration (which we will use at the end of this notebook). The first function is for a single segment, and the second function is for all segments._____no_output_____
<code>
#@title
def plot_density(
ax: plt.Axes,
sim_density: np.ndarray,
raw_density: np.ndarray,
cal_density: Optional[np.ndarray] = None,
raw_errors: Optional[np.ndarray] = None,
cal_errors: Optional[np.ndarray] = None,
title: Optional[str] = None,
show_legend: bool = True,
show_ylabel: bool = True,
) -> None:
"""Plots the density of a single segment for simulated, raw, and calibrated
results.
"""
colors = ["grey", "orange", "green"]
alphas = [0.5, 0.8, 0.8]
labels = ["sim", "raw", "cal"]
# Plot densities.
for i, density in enumerate([sim_density, raw_density, cal_density]):
if density is not None:
ax.plot(
range(len(density)),
density,
"-o" if i == 0 else "o",
markersize=11,
color=colors[i],
alpha=alphas[i],
label=labels[i]
)
# Plot errors if provided.
errors = [raw_errors, cal_errors]
densities = [raw_density, cal_density]
for i, (errs, dens) in enumerate(zip(errors, densities)):
if errs is not None:
ax.errorbar(
range(len(errs)),
dens,
errs,
linestyle='',
color=colors[i + 1],
capsize=8,
elinewidth=2,
markeredgewidth=2
)
# Titles, axes, and legend.
ax.set_xticks(list(range(len(sim_density))))
ax.set_xlabel("Qubit index in segment")
if show_ylabel:
ax.set_ylabel("Density")
if title:
ax.set_title(title)
if show_legend:
ax.legend()
def plot_densities(
sim_density: np.ndarray,
raw_densities: Sequence[np.ndarray],
cal_densities: Optional[Sequence[np.ndarray]] = None,
rows: int = 3
) -> None:
"""Plots densities for simulated, raw, and calibrated results on all segments.
"""
if not cal_densities:
cal_densities = [None] * len(raw_densities)
cols = (len(raw_densities) + rows - 1) // rows
fig, axes = plt.subplots(
rows, cols, figsize=(cols * 4, rows * 3.5), sharey=True
)
if rows == 1 and cols == 1:
axes = [axes]
elif rows > 1 and cols > 1:
axes = [axes[row, col] for row in range(rows) for col in range(cols)]
for i, (ax, raw, cal) in enumerate(zip(axes, raw_densities, cal_densities)):
plot_density(
ax,
sim_density,
raw,
cal,
title=f"Segment {i + 1}",
show_legend=False,
show_ylabel=i % cols == 0
)
# Common legend for all subplots.
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels)
plt.tight_layout(pad=0.1, w_pad=1.0, h_pad=3.0)_____no_output_____
</code>
#### Visualizing results_____no_output_____Note: This section uses helper functions from the previous section to plot results. The code can be safely skimmed: emphasis should be on the plots._____no_output_____To visualize results, we first extract densities from the measurements._____no_output_____
<code>
"""Extract densities from measurement results."""
# Simulator density.
sim_density, = z_densities_from_result(sim_result, [segments[0]])
# Processor densities without Floquet calibration.
raw_densities = z_densities_from_result(raw_results, segments)_____no_output_____
</code>
We first plot the densities on each segment. Note that the simulator densities ("sim") are repeated on each segment and the lines connecting them are just visual guides._____no_output_____
<code>
plot_densities(sim_density, raw_densities, rows=int(np.sqrt(line_length / segment_length)))_____no_output_____
</code>
We can also look at the average and variance over the segments._____no_output_____
<code>
"""Plot mean density and variance over segments."""
raw_avg = np.average(raw_densities, axis=0)
raw_std = np.std(raw_densities, axis=0, ddof=1)
plot_density(
plt.gca(),
sim_density,
raw_density=raw_avg,
raw_errors=raw_std,
title="Average over segments"
)_____no_output_____
</code>
In the next section, we will use Floquet calibration to produce better average results. After running the circuit with Floquet calibration, we will use these same visualizations to compare results._____no_output_____### Execution on the processor with Floquet calibration_____no_output_____There are two equivalent ways to use Floquet calibration which we outline below. A rough estimate for the time required for Floquet calibration is about 16 seconds per 10 qubits, plus 30 seconds of overhead, per calibrated moment._____no_output_____#### Simple usage_____no_output_____The first way to use Floquet calibration is via the single function call used at the start of this notebook. Here, we describe the remaining returned values in addition to `calibrated_circuit`._____no_output_____Note: We comment out this section so Floquet calibration on the larger circuit is only executed once in the notebook._____no_output_____
<code>
# (calibrated_circuit, calibrations
# ) = cg.run_zeta_chi_gamma_compensation_for_moments(
# circuit,
# engine,
# processor_id=processor_id,
# gate_set=cg.SQRT_ISWAP_GATESET
# )_____no_output_____
</code>
The returned `calibrated_circuit.circuit` can then be run on the engine. The full list of returned arguments is as follows:
* `calibrated_circuit.circuit`: The input `circuit` with added $Z$ rotations around each $\sqrt{\text{iSWAP}}$ gate to compensate for errors.
* `calibrated_circuit.moment_to_calibration`: Provides an index of the matching characterization (index in calibrations list) for each moment of the `calibrated_circuit.circuit`, or `None` if the moment was not characterized (e.g., for a measurement outcome).
* `calibrations`: List of characterization results for each characterized moment. Each characterization contains angles for each qubit pair._____no_output_____#### Step-by-step usage_____no_output_____Note: This section is provided to see the Floquet calibration API at a lower level, but the results are identical to the "simple usage" in the previous section._____no_output_____The above function `cirq_google.run_zeta_chi_gamma_compensation_for_moments` performs the following three steps:
1. Find moments within the circuit that need to be characterized.
2. Characterize them on the engine.
3. Apply corrections to the original circuit.
To find moments that need to be characterized, we can do the following._____no_output_____
<code>
"""Step 1: Find moments in the circuit that need to be characterized."""
(characterized_circuit, characterization_requests
) = cg.prepare_floquet_characterization_for_moments(
circuit,
options=cg.FloquetPhasedFSimCalibrationOptions(
characterize_theta=False,
characterize_zeta=True,
characterize_chi=False,
characterize_gamma=True,
characterize_phi=False
)
)_____no_output_____
</code>
The `characterization_requests` contain information on the operations (gate + qubit pairs) to characterize._____no_output_____
<code>
"""Show an example characterization request."""
print(f"Total {len(characterization_requests)} moment(s) to characterize.")
print("\nExample request")
request = characterization_requests[0]
print("Gate:", request.gate)
print("Qubit pairs:", request.pairs)
print("Options: ", request.options)_____no_output_____
</code>
We now characterize them on the engine using `cirq_google.run_calibrations`._____no_output_____
<code>
"""Step 2: Characterize moments on the engine."""
characterizations = cg.run_calibrations(
characterization_requests,
engine,
processor_id=processor_id,
gate_set=cg.SQRT_ISWAP_GATESET,
max_layers_per_request=1,
)_____no_output_____
</code>
The `characterizations` store characterization results for each pair in each moment, for example._____no_output_____
<code>
print(f"Total: {len(characterizations)} characterizations.")
print()
(pair, parameters), *_ = characterizations[0].parameters.items()
print(f"Example pair: {pair}")
print(f"Example parameters: {parameters}")_____no_output_____
</code>
Finally, we apply corrections to the original circuit._____no_output_____
<code>
"""Step 3: Apply corrections to the circuit to get a calibrated circuit."""
calibrated_circuit = cg.make_zeta_chi_gamma_compensation_for_moments(
characterized_circuit,
characterizations
)_____no_output_____
</code>
The calibrated circuit can now be run on the processor. We first inspect the calibrated circuit to compare to the original._____no_output_____
<code>
print("Portion of calibrated circuit:")
print("\n".join(
calibrated_circuit.circuit.to_text_diagram(qubit_order=line).splitlines()[:9] +
["..."]))_____no_output_____
</code>
Note again that $\sqrt{\text{iSWAP}}$ gates are padded by $Z$ phases to compensate for errors. We now run this calibrated circuit._____no_output_____
<code>
"""Run the calibrated circuit on the engine."""
cal_results = sampler.run(calibrated_circuit.circuit, repetitions=nreps)_____no_output_____
</code>
### Comparing raw results to calibrated results_____no_output_____We now compare results with and without Floquet calibration, again using the simulator results as a baseline for comparison. First we extract the calibrated densities._____no_output_____
<code>
"""Extract densities from measurement results."""
cal_densities = z_densities_from_result(cal_results, segments)_____no_output_____
</code>
Now we reproduce the same density plots from above on each segment, this time including the calibrated ("cal") results._____no_output_____
<code>
plot_densities(
sim_density, raw_densities, cal_densities, rows=int(np.sqrt(line_length / segment_length))
)_____no_output_____
</code>
We also visualize the mean and variance of results over segments as before._____no_output_____
<code>
"""Plot mean density and variance over segments."""
raw_avg = np.average(raw_densities, axis=0)
raw_std = np.std(raw_densities, axis=0, ddof=1)
cal_avg = np.average(cal_densities, axis=0)
cal_std = np.std(cal_densities, axis=0, ddof=1)
plot_density(
plt.gca(),
sim_density,
raw_avg,
cal_avg,
raw_std,
cal_std,
title="Average over segments"
)_____no_output_____
</code>
Last, we can look at density errors between raw/calibrated results and simulated results._____no_output_____
<code>
"""Plot errors of raw vs calibrated results."""
fig, axes = plt.subplots(ncols=2, figsize=(15, 4))
axes[0].set_title("Error of the mean")
axes[0].set_ylabel("Density")
axes[1].set_title("Data standard deviation")
colors = ["orange", "green"]
labels = ["raw", "cal"]
for index, density in enumerate([raw_densities, cal_densities]):
color = colors[index]
label = labels[index]
average_density = np.average(density, axis=0)
sites = list(range(len(average_density)))
error = np.abs(average_density - sim_density)
std_dev = np.std(density, axis=0, ddof=1)
axes[0].plot(sites, error, color=color, alpha=0.6)
axes[0].scatter(sites, error, color=color)
axes[1].plot(sites, std_dev, label=label, color=color, alpha=0.6)
axes[1].scatter(sites, std_dev, color=color)
for ax in axes:
ax.set_xticks(sites)
ax.set_xlabel("Qubit index in segment")
plt.legend();_____no_output_____
</code>
|
{
"repository": "balopat/Cirq",
"path": "docs/tutorials/google/floquet.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 1,
"size": 40681,
"hexsha": "cba20bfadd11f3e99c5f06afe28a1b0a69a766be",
"max_line_length": 558,
"avg_line_length": 29.9565537555,
"alphanum_fraction": 0.5667756446
}
|
# Notebook from jaypatel31/ML-For-Beginners
Path: 2-Regression/1-Tools/solution/lesson_1-R.ipynb
# Build a regression model: Get started with R and Tidymodels for regression models_____no_output_____## Introduction to Regression - Lesson 1
#### Putting it into perspective
✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height for a person of a given age, you'd use `linear regression`, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment** so you would use `logistic regression`. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate.
In this section, you will work with a [small dataset about diabetes](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials.
That said, let's get started on this task!
<br>Artwork by @allison_horst_____no_output_____## 1. Loading up our tool set
For this task, we'll require the following packages:
- `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to makes data science faster, easier and more fun!
- `tidymodels`: The [tidymodels](https://www.tidymodels.org/) framework is a [collection of packages](https://www.tidymodels.org/packages/) for modeling and machine learning.
You can have them installed as:
`install.packages(c("tidyverse", "tidymodels"))`
The script below checks whether you have the packages required to complete this module and installs them for you in case some are missing._____no_output_____
<code>
if (!require("pacman")) install.packages("pacman")
pacman::p_load(tidyverse, tidymodels)Loading required package: pacman
</code>
Now, let's load these awesome packages and make them available in our current R session.(This is for mere illustration, `pacman::p_load()` already did that for you)_____no_output_____
<code>
# load the core Tidyverse packages
library(tidyverse)
# load the core Tidymodels packages
library(tidymodels)
_____no_output_____
</code>
## 2. The diabetes dataset
In this exercise, we'll put our regression skills into display by making predictions on a diabetes dataset. The [diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt) includes `442 samples` of data around diabetes, with 10 predictor feature variables, `age`, `sex`, `body mass index`, `average blood pressure`, and `six blood serum measurements` as well as an outcome variable `y`: a quantitative measure of disease progression one year after baseline.
|Number of observations|442|
|----------------------|:---|
|Number of predictors|First 10 columns are numeric predictive|
|Outcome/Target|Column 11 is a quantitative measure of disease progression one year after baseline|
|Predictor Information|- age in years
||- sex
||- bmi body mass index
||- bp average blood pressure
||- s1 tc, total serum cholesterol
||- s2 ldl, low-density lipoproteins
||- s3 hdl, high-density lipoproteins
||- s4 tch, total cholesterol / HDL
||- s5 ltg, possibly log of serum triglycerides level
||- s6 glu, blood sugar level|
> 🎓 Remember, this is supervised learning, and we need a named 'y' target.
Before you can manipulate data with R, you need to import the data into R's memory, or build a connection to the data that R can use to access the data remotely.
> The [readr](https://readr.tidyverse.org/) package, which is part of the Tidyverse, provides a fast and friendly way to read rectangular data into R.
Now, let's load the diabetes dataset provided in this source URL: <https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html>
Also, we'll perform a sanity check on our data using `glimpse()` and display the first 5 rows using `slice()`.
Before going any further, let's also introduce something you will encounter often in R code 🥁🥁: the pipe operator `%>%`
The pipe operator (`%>%`) performs operations in logical sequence by passing an object forward into a function or call expression. You can think of the pipe operator as saying "and then" in your code._____no_output_____
<code>
# Import the data set
diabetes <- read_table2(file = "https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt")
# Get a glimpse and dimensions of the data
glimpse(diabetes)
# Select the first 5 rows of the data
diabetes %>%
slice(1:5)_____no_output_____
</code>
`glimpse()` shows us that this data has 442 rows and 11 columns with all the columns being of data type `double`
<br>
> `glimpse()` and `slice()` are functions in [`dplyr`](https://dplyr.tidyverse.org/). Dplyr, part of the Tidyverse, is a grammar of data manipulation that provides a consistent set of verbs that help you solve the most common data manipulation challenges.
<br>
Now that we have the data, let's narrow down to one feature (`bmi`) to target for this exercise. This will require us to select the desired columns. So, how do we do this?
[`dplyr::select()`](https://dplyr.tidyverse.org/reference/select.html) allows us to *select* (and optionally rename) columns in a data frame._____no_output_____
<code>
# Select predictor feature `bmi` and outcome `y`
diabetes_select <- diabetes %>%
select(c(bmi, y))
# Print the first 10 rows
diabetes_select %>%
slice(1:10)_____no_output_____
</code>
## 3. Training and Testing data
It's common practice in supervised learning to *split* the data into two subsets; a (typically larger) set with which to train the model, and a smaller "hold-back" set with which to see how the model performed.
Now that we have data ready, we can see if a machine can help determine a logical split between the numbers in this dataset. We can use the [rsample](https://tidymodels.github.io/rsample/) package, which is part of the Tidymodels framework, to create an object that contains the information on *how* to split the data, and then two more rsample functions to extract the created training and testing sets:
_____no_output_____
<code>
set.seed(2056)
# Split 67% of the data for training and the rest for testing
diabetes_split <- diabetes_select %>%
initial_split(prop = 0.67)
# Extract the resulting train and test sets
diabetes_train <- training(diabetes_split)
diabetes_test <- testing(diabetes_split)
# Print the first 10 rows of the training set
diabetes_train %>%
slice(1:10)_____no_output_____
</code>
## 4. Train a linear regression model with Tidymodels
Now we are ready to train our model!
In Tidymodels, you specify models using `parsnip()` by specifying three concepts:
- Model **type** differentiates models such as linear regression, logistic regression, decision tree models, and so forth.
- Model **mode** includes common options like regression and classification; some model types support either of these while some only have one mode.
- Model **engine** is the computational tool which will be used to fit the model. Often these are R packages, such as **`"lm"`** or **`"ranger"`**
This modeling information is captured in a model specification, so let's build one!_____no_output_____
<code>
# Build a linear model specification
lm_spec <-
# Type
linear_reg() %>%
# Engine
set_engine("lm") %>%
# Mode
set_mode("regression")
# Print the model specification
lm_spec_____no_output_____
</code>
After a model has been *specified*, the model can be `estimated` or `trained` using the [`fit()`](https://parsnip.tidymodels.org/reference/fit.html) function, typically using a formula and some data.
`y ~ .` means we'll fit `y` as the predicted quantity/target, explained by all the predictors/features ie, `.` (in this case, we only have one predictor: `bmi` )_____no_output_____
<code>
# Build a linear model specification
lm_spec <- linear_reg() %>%
set_engine("lm") %>%
set_mode("regression")
# Train a linear regression model
lm_mod <- lm_spec %>%
fit(y ~ ., data = diabetes_train)
# Print the model
lm_mod_____no_output_____
</code>
From the model output, we can see the coefficients learned during training. They represent the coefficients of the line of best fit that gives us the lowest overall error between the actual and predicted variable.
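Concretely, with `bmi` as the single predictor the fitted line has the form
$$\hat{y} = \beta_0 + \beta_1 \cdot \text{bmi}$$
where the `"lm"` engine chooses the intercept $\beta_0$ and slope $\beta_1$ that minimize the sum of squared errors $\sum_i (y_i - \hat{y}_i)^2$ over the training data (ordinary least squares).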
<br>
## 5. Make predictions on the test set
Now that we've trained a model, we can use it to predict the disease progression y for the test dataset using [parsnip::predict()](https://parsnip.tidymodels.org/reference/predict.model_fit.html). This will be used to draw the line between data groups._____no_output_____
<code>
# Make predictions for the test set
predictions <- lm_mod %>%
predict(new_data = diabetes_test)
# Print out some of the predictions
predictions %>%
slice(1:5)_____no_output_____
</code>
Woohoo! 💃🕺 We just trained a model and used it to make predictions!
When making predictions, the tidymodels convention is to always produce a tibble/data frame of results with standardized column names. This makes it easy to combine the original data and the predictions in a usable format for subsequent operations such as plotting.
`dplyr::bind_cols()` efficiently binds multiple data frames by column._____no_output_____
<code>
# Combine the predictions and the original test set
results <- diabetes_test %>%
bind_cols(predictions)
results %>%
slice(1:5)_____no_output_____
</code>
## 6. Plot modelling results
Now, its time to see this visually 📈. We'll create a scatter plot of all the `y` and `bmi` values of the test set, then use the predictions to draw a line in the most appropriate place, between the model's data groupings.
R has several systems for making graphs, but `ggplot2` is one of the most elegant and most versatile. This allows you to compose graphs by **combining independent components**._____no_output_____
<code>
# Set a theme for the plot
theme_set(theme_light())
# Create a scatter plot
results %>%
ggplot(aes(x = bmi)) +
# Add a scatter plot
geom_point(aes(y = y), size = 1.6) +
# Add a line plot
geom_line(aes(y = .pred), color = "blue", size = 1.5)_____no_output_____
</code>
> ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relationship to the plot's y axis? Try to put into words the practical use of this model.
Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!
_____no_output_____
|
{
"repository": "jaypatel31/ML-For-Beginners",
"path": "2-Regression/1-Tools/solution/lesson_1-R.ipynb",
"matched_keywords": [
"clinical trials"
],
"stars": 5,
"size": 17003,
"hexsha": "cba2408151f69347239c3e05d4c8ce849068a92d",
"max_line_length": 606,
"avg_line_length": 38.9084668192,
"alphanum_fraction": 0.5604305123
}
|
# Notebook from NelsonBilber/py.thehardway
Path: PythonTheHardway.ipynb
<code>
# python the hardway https://learnpythonthehardway.org/book/index.html
# Exercise 1 - Hello world
import sys
print ("Hello Snake")
Hello Snake
# Exercise 2 - simple math operations
print ("5 + 2 = ", 5 + 2)
print ("5 > 2 ? ", 5 > 2)
print ("7 / 4 = ", 7/4)
# print ("7 % 4 = ", 7%4)5 + 2 = 7
5 > 2 ? True
7 / 4 = 1.75
7 % 4 = 3
# Exercise 3 - variables and names
my_name = 'Zed A. Shaw'
my_age = 35 # not a lie
my_height = 74 # inches
my_weight = 180 # lbs
my_eyes = 'Blue'
my_teeth = 'White'
my_hair = 'Brown'
print ("Let's talk about %s." % my_name)
print ("He's %d inches tall." % my_height)
print ("He's %d pounds heavy." % my_weight)
print ("Actually that's not too heavy.")
print ("He's got %s eyes and %s hair." % (my_eyes, my_hair))
print ("His teeth are usually %s depending on the coffee." % my_teeth)
# this line is tricky, try to get it exactly right
print ("If I add %d, %d, and %d I get %d." % ( my_age, my_height, my_weight, my_age + my_height + my_weight))Let's talk about Zed A. Shaw.
He's 74 inches tall.
He's 180 pounds heavy.
Actually that's not too heavy.
He's got Blue eyes and Brown hair.
His teeth are usually White depending on the coffee.
If I add 35, 74, and 180 I get 289.
# Exercise 4 - input from console
age = input()
print ("What's your age ? ", age)
34
What's your age ? 34
#Exercise 13 using parameters
from sys import argv
program_name = argv
print ("second = ", program_name)second = ['F:\\Anaconda3\\lib\\site-packages\\ipykernel\\__main__.py', '-f', 'C:\\Users\\NelsonRodrigues\\AppData\\Roaming\\jupyter\\runtime\\kernel-9576f747-9618-4c5d-bbd2-3530d8483a7a.json']
# Exercise 15 - Read files
from sys import argv
file = open("t.txt","r")
for line in file:
print(line.rstrip())
file.close()This is stuff I typed into a file.
It is really cool stuff.
Lots and lots of fun to have in here.
# Exercise 16 - Write file
from sys import argv
file = open("t.txt","r")
out = open("t2.txt","w")
for line in file:
out_line = line.rstrip()
print (out_line)
out.write(out_line)
file.close()This is stuff I typed into a file.
It is really cool stuff.
Lots and lots of fun to have in here.
#Exercise 18 - functions
def print_two(*args):
arg1, arg2 = args
print ("arg1: %r, arg2: %r" % (arg1, arg2))
def print_twoV2(arg1, arg2):
print ("arg1: %r, arg2: %r" % (arg1, arg2))
print_two("Zed", "Ned")
print_twoV2("Zed", "Ned")arg1: 'Zed', arg2: 'Ned'
arg1: 'Zed', arg2: 'Ned'
# Exercise 19 - functions continue ...
def add(num1, num2):
return (num1 + num2)
print (add(1,2))
print (add(5+5,10+10))
3
30
# Exercise 28 - Booleans
True and True
False and True
"test" == "test"
3 == 3 and (not ("testing" == "testing" or "Python" == "Fun"))_____no_output_____#Exercise 29 - If conditions
people = 30
cars = 40
trucks = 15
if cars > people:
print ("We should take the cars.")
elif cars < people:
print ("We should not take the cars.")
else:
print ("We can't decide.")
We should take the cars.
# Exercise 32 - Loop and List
numbers = [1, 2, 3]
change = [1, 'pennies', 2, 'dimes', 3, 'quarters']
for num in numbers:
print (num)
for i in change:
print("I got %r" % i)
1
2
3
I got 1
I got 'pennies'
I got 2
I got 'dimes'
I got 3
I got 'quarters'
# Exercise 33 - while loops
i = 0
numbers = []
while i < 6:
print ("At the top i is %d" % i)
numbers.append(i)
i = i + 1
print ("Numbers now: ", numbers)
print ("At the bottom i is %d" % i)
print ("The numbers: ")
for num in numbers:
print (num)At the top i is 0
Numbers now: [0]
At the bottom i is 1
At the top i is 1
Numbers now: [0, 1]
At the bottom i is 2
At the top i is 2
Numbers now: [0, 1, 2]
At the bottom i is 3
At the top i is 3
Numbers now: [0, 1, 2, 3]
At the bottom i is 4
At the top i is 4
Numbers now: [0, 1, 2, 3, 4]
At the bottom i is 5
At the top i is 5
Numbers now: [0, 1, 2, 3, 4, 5]
At the bottom i is 6
The numbers:
0
1
2
3
4
5
# Exercise 34 - access list elems
animals = ['bear', "wolf"]
animals[1]
_____no_output_____# Exercise 39 - Dictionaries
stuff = {'name': 'Nelson', 'age' : 33}
print (stuff['name'])
# create a mapping of state to abbreviation
states = {
'Oregon': 'OR',
'Florida': 'FL',
'California': 'CA',
'New York': 'NY',
'Michigan': 'MI'
}
print (states)
# create a basic set of states and some cities in them
cities = {
'CA': 'San Francisco',
'MI': 'Detroit',
'FL': 'Jacksonville'
}
print ("Testing ....", cities[states['California']])
# print every state abbreviation
for state, abbrev in states.items():
print ("%s is abbreviated %s" % (state, abbrev))
Nelson
{'Oregon': 'OR', 'Florida': 'FL', 'New York': 'NY', 'California': 'CA', 'Michigan': 'MI'}
Testing .... San Francisco
Oregon is abbreviated OR
Florida is abbreviated FL
New York is abbreviated NY
California is abbreviated CA
Michigan is abbreviated MI
# Exercise 40 - Object Oriented Programming
class Song(object):
def __init__(self,lyrics):
self.lyrics = lyrics
def sing_me_a_song(self):
for line in self.lyrics:
print(line)
happy_bday = Song(["tada ttttt tada ..."])
happy_bday.sing_me_a_song()
tada ttttt tada ...
## Animal is-a object (yes, sort of confusing) look at the extra credit
class Animal(object):
pass
## ??
class Dog(Animal):
def __init__(self, name):
## ??
self.name = name
## ??
class Cat(Animal):
def __init__(self, name):
## ??
self.name = name
## ??
class Person(object):
def __init__(self, name):
## ??
self.name = name
## Person has-a pet of some kind
self.pet = None
## ??
class Employee(Person):
def __init__(self, name, salary):
## ?? hmm what is this strange magic?
super(Employee, self).__init__(name)
## ??
self.salary = salary
## ??
class Fish(object):
pass
## ??
class Salmon(Fish):
pass
## ??
class Halibut(Fish):
pass
## rover is-a Dog
rover = Dog("Rover")
## ??
satan = Cat("Satan")
## ??
mary = Person("Mary")
## ??
mary.pet = satan
## ??
frank = Employee("Frank", 120000)
## ??
frank.pet = rover
## ??
flipper = Fish()
## ??
crouse = Salmon()
## ??
harry = Halibut()_____no_output_____# Exercise 44 - Inheritance vs Composition
#Implicit ineritance
class Parent(object):
def implicit(self):
print ("PARENT implicit()")
class Child(Parent):
pass
dad = Parent()
son = Child()
dad.implicit()
son.implicit()
#Override explicit
class Parent(object):
def override(self):
print ("PARENT override()")
class Child(Parent):
def override(self):
print ("CHILD override()")
dad = Parent()
son = Child()
dad.override()
son.override()
# Altered
class Parent(object):
def altered(self):
print ("PARENT altered()")
class Child(Parent):
def altered(self):
print ("CHILD, BEFORE PARENT altered()")
super(Child, self).altered()
print ("CHILD, AFTER PARENT altered()")
dad = Parent()
son = Child()
dad.altered()
son.altered()PARENT implicit()
PARENT implicit()
PARENT override()
CHILD override()
PARENT altered()
CHILD, BEFORE PARENT altered()
PARENT altered()
CHILD, AFTER PARENT altered()
# Exercise 47 - Automated Testing
from nose.tools import *
class Room(object):
def __init__(self, name, description):
self.name = name
self.description = description
self.paths = {}
def go(self, direction):
return self.paths.get(direction, None)
def add_paths(self, paths):
self.paths.update(paths)
def test_room():
gold = Room("GoldRoom",
"""This room has gold in it you can grab. There's a
door to the north.""")
assert_equal(gold.name, "GoldRoom")
assert_equal(gold.paths, {})
_____no_output_____
</code>
|
{
"repository": "NelsonBilber/py.thehardway",
"path": "PythonTheHardway.ipynb",
"matched_keywords": [
"Salmon"
],
"stars": null,
"size": 16951,
"hexsha": "cba35252fba8bbe6984c93f409ff39bd6a6914ec",
"max_line_length": 227,
"avg_line_length": 21.9857328145,
"alphanum_fraction": 0.4416848564
}
|
# Notebook from OpenSourceEconomics/soepy
Path: doc/source/data_frame_sim_test_analysis.ipynb
Simulation Demonstration
=====================_____no_output_____
<code>
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import soepy_____no_output_____
</code>
In this notebook we present descriptive statistics of a series of simulated samples with the soepy toy model.
soepy is closely aligned with the model in Blundell et al. (2016). Yet, we wish to use the soepy package for estimation based on the German SOEP. In this simulation demonstration, some parameter values are set close to the parameters estimated in the seminal paper of Blundell et al. (2016). The remaining parameter values are altered such that simulated wage levels and employment choice probabilities (roughly) match the statistics observed in the SOEP data.
- the constants in the wage process gamma_0 are set to ensure alignment with SOEP data.
- the returns to experience in the wage process gamma_1 are set close to the coefficient values on gamma0, Blundell Table VIII, p. 1733
- the part-time experience accumulation parameter is set close to the coefficient on g(P), Blundell Table VIII, p. 1733,
- the experience depreciation parameter delta is set close to the coefficient values on delta, Blundell Table VIII, p. 1733,
- the disutility of part-time work parameter theta_p is set to ensure alignment with SOEP data,
- the disutility of full-time work parameter theta_f is set to ensure alignment with SOEP data.
To ensure that some individuals also choose to be non-employed, we set the period wage for nonemployed to be equal to some fixed value, constant over all periods. We call this income in unemployment "benefits"._____no_output_____
<code>
data_frame_baseline = soepy.simulate('toy_model_init_file_01_1000.yml')_____no_output_____data_frame_baseline.head(20)_____no_output_____#Determine the observed wage given period choice
def get_observed_wage (row):
if row['Choice'] == 2:
return row['Period Wage F']
elif row['Choice'] ==1:
return row['Period Wage P']
elif row['Choice'] ==0:
return row['Period Wage N']
else:
return np.nan
# Add to data frame
data_frame_baseline['Wage Observed'] = data_frame_baseline.apply(
lambda row: get_observed_wage (row),axis=1
)
# Determine the education level
def get_educ_level(row):
if row["Years of Education"] >= 10 and row["Years of Education"] < 12:
return 0
elif row["Years of Education"] >= 12 and row["Years of Education"] < 16:
return 1
elif row["Years of Education"] >= 16:
return 2
else:
return np.nan
data_frame_baseline["Educ Level"] = data_frame_baseline.apply(
lambda row: get_educ_level(row), axis=1
)_____no_output_____
</code>
Descriptive statistics to look at:
- average part-time, full-time and nonemployment rate - ideally close to population rates
- frequency of each choice per period - ideally more often part-time in early periods, more full-time in later periods
- frequency of each choice over all periods for individuals with different levels of education - ideally, lower educated more often unemployed and in part-time jobs
- average period wages over all individuals - series for all periods
- average period individuals over all individuals - series for all periods_____no_output_____
<code>
# Average non-employment, part-time, and full-time rates over all periods and individuals
data_frame_baseline['Choice'].value_counts(normalize=True).plot(kind = 'bar')
data_frame_baseline['Choice'].value_counts(normalize=True)_____no_output_____# Average non-employment, part-time, and full-time rates per period
data_frame_baseline.groupby(['Period'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True)_____no_output_____
</code>
As far as the evolution of choices over all agents and periods is concerned, we first observe a declining tendency of individuals to be nonemployed, as would be desired in a perfectly calibrated simulation. Second, individuals in our simulation tend to choose full-time work and nonemployment less often in the later periods of the model, while rates of part-time employment increase over the same periods. _____no_output_____
<code>
# Average non-employment, part-time, and full-time rates for individuals with different level of education
data_frame_baseline.groupby(['Years of Education'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True)_____no_output_____
</code>
As should be expected, the higher the education level of the individuals, the lower the observed rate of nonemployment._____no_output_____
<code>
# Average wage for each period and choice
fig,ax = plt.subplots()
# Generate x axes values
period = np.arange(1,31)
# Generate plot lines
ax.plot(period,
data_frame_baseline[data_frame_baseline['Choice'] == 2].groupby(['Period'])['Period Wage F'].mean(),
color='green', label = 'F')
ax.plot(period,
data_frame_baseline[data_frame_baseline['Choice'] == 1].groupby(['Period'])['Period Wage P'].mean(),
color='orange', label = 'P')
ax.plot(period,
data_frame_baseline[data_frame_baseline['Choice'] == 0].groupby(['Period'])['Period Wage N'].mean(),
color='blue', label = 'N')
# Plot settings
ax.set_xlabel("period")
ax.set_ylabel("wage")
ax.legend(loc='best')_____no_output_____
</code>
The period wage of non-employment actually refers to the unemployment benefits individuals receive. The amount of the benefits is constant over time. Part-time and full-time wages rise as individuals gather more experience._____no_output_____
<code>
# Average wages by period
data_frame_baseline.groupby(['Period'])['Wage Observed'].mean().plot()_____no_output_____
</code>
Comparative Statics
------------------------_____no_output_____In the following, we discuss some comparative statics of the model.
While changing other parameter values, we assume that the parameters central to the part-time penalty phenomenon studied in Blundell (2016) stay the same as in the benchmark specification:
- part-time experience accumulation g_s1,2,3
- experience depreciation delta
Comparative statics:
Parameters in the systematic wage govern the choice between employment (either part-time, or full-time) and nonemployment. They do not determine the choice between part-time and full-time employment since the systematic wage is equal for both options.
- constant in the wage process gamma_0: a lower/higher value of the coefficient implies that other components such as accumulated work experience and the productivity shock are relatively more/less important in determining the choice between employment and nonemployment. Decreasing the constant for individuals of a certain education level, e.g., low, results in these individuals choosing nonemployment more often.
- return to experience gamma_1: lower value of the coefficient implies that accumulated work experience is less relevant in determining the wage in comparison to other factors such as the constant or the productivity shock. Higher coefficients should lead to agents persistently choosing employment versus non-employment.
The productivity shock:
- productivity shock variances - the higher the variances, the more switching between occupational alternatives.
Risk aversion:
- risk aversion parameter mu: the more negative the risk aversion parameter, the more eager agents are to insure themselves against productivity shocks through accumulation of experience. Therefore, lower values of the parameter are associated with higher rates of full-time employment.
The labor disutility parameters directly influence:
- benefits - for higher benefits individuals of all education levels would choose non-employment more often
- labor disutility for part-time theta_p - for a higher coefficient, individuals of all education levels would choose to work part-time more often
- labor disutility for full-time theta_f - for a higher coefficient, individuals of all education levels would choose to work part-time more often_____no_output_____Finally, we illustrate one of the changes discussed above. In the alternative specification, the return to experience coefficient gamma_1 for individuals with a medium level of education is increased from 0.157 to 0.195. As a result, experience accumulation matters more in the utility maximization. Therefore, individuals with a medium level of education choose to be employed more often. Consequently, aggregate levels of nonemployment are also lower in the model._____no_output_____
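To put a number on such a comparison, one could look at how the choice shares differ between the two runs. The sketch below is only a suggestion: it assumes `data_frame_baseline` and `data_frame_alternative` have been simulated as in the surrounding cells, and it uses the `Years of Education` and `Choice` columns already introduced in this notebook.
```
# Difference in choice shares (by education group) between the alternative and
# baseline simulations: positive entries mark choices that become more frequent
# under the alternative parameterization.
def choice_shares(df):
    return (df.groupby('Years of Education')['Choice']
              .value_counts(normalize=True)
              .unstack(fill_value=0))

share_diff = choice_shares(data_frame_alternative) - choice_shares(data_frame_baseline)
print(share_diff.round(3))
```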
<code>
data_frame_alternative = soepy.simulate('toy_model_init_file_01_1000.yml')_____no_output_____# Average non-employment, part-time, and full-time rates for individuals with different level of education
[data_frame_alternative.groupby(['Years of Education'])['Choice'].value_counts(normalize=True),
data_frame_baseline.groupby(['Years of Education'])['Choice'].value_counts(normalize=True)]_____no_output_____# Average non-employment, part-time, and full-time rates for individuals with different level of education
data_frame_alternative.groupby(['Years of Education'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True)_____no_output_____# Average non-employment, part-time, and full-time rates over all periods and individuals
data_frame_alternative['Choice'].value_counts(normalize=True).plot(kind = 'bar')
data_frame_alternative['Choice'].value_counts(normalize=True)_____no_output_____
</code>
|
{
"repository": "OpenSourceEconomics/soepy",
"path": "doc/source/data_frame_sim_test_analysis.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 12,
"size": 103738,
"hexsha": "cba4b573ef7c6cba2c46de454725c3a9a50ef4a2",
"max_line_length": 16636,
"avg_line_length": 104.891809909,
"alphanum_fraction": 0.780186624
}
|
# Notebook from usgs-bcb/phenology-baps
Path: annual-indices-of-spring.ipynb
# Table of Contents
1. Purpose
2. Requirements
    2.1 Abstract Stakeholder
    2.2 Actual Stakeholder
3. Dependencies
    3.1 R installation
    3.2 An R kernel for Jupyter notebooks
    3.3 Load R libraries for the analyses
4. Analyses
    4.1 First Leaf
        4.1.1 Inputs
        4.1.2 Outputs (Histogram, Boxplots, Ridgeline Plots)
    4.2 First Bloom
5. Code
6. Provenance
7. Citations_____no_output_____# Purpose
This [biogeographical analysis package](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst) (BAP) uses the [USA National Phenology Network](https://www.usanpn.org/usa-national-phenology-network) (USA-NPN)'s modeled information on phenological changes to inform and support management decisions on the timing and coordination of season-specific activities within the boundaries of a user-specified management unit. While various categories of phenological information are applicable to the seasonal allocation of resources, this package focuses on one of those, USA-NPN's modeled spring indices of first leaf and first bloom. The use case for design and development of the BAP was that of a resource manager using this analysis package and USA-NPN's Extended Spring Indices to guide the timing and location of treatments within their protected area. _____no_output_____# Requirements_____no_output_____## Abstract Stakeholder
Stakeholders for the information produced by this analysis package are people making decisions based on the timing of seasonal events at a specific location. Examples include resource managers, health professionals, and recreationalists.
Note: For more on the concept of "Abstract Stakeholder" please see this [reference](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst#abstract-stakeholder)._____no_output_____## Actual Stakeholder
To be determined
Note: For more on the concept of "Actual Stakeholder" see this [reference](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst#actual-stakeholder)._____no_output_____
# Dependencies
This notebook was developed using the R software environment. Several R software packages are required to run this scientific code in a Jupyter notebook. An R kernel for Jupyter notebooks is also required.
## R installation
Guidance on installing the R software environment is available at the [R Project](https://www.r-project.org). Several R libraries, listed below, are used for the analyses and visualizations in this notebook. General instructions for finding and installing libraries are also provided at the [R Project](https://www.r-project.org) website.
## An R kernel for Jupyter notebooks
This notebook uses [IRkernel](https://irkernel.github.io). At the time of this writing (2018-05-06), Karlijn Willems provides excellent guidance on installing the IRkernel and running R in a Jupyter notebook in her article entitled ["Jupyter And R Markdown: Notebooks With R"](https://www.datacamp.com/community/blog/jupyter-notebook-r#markdown) _____no_output_____## Load R libraries for the analyses_____no_output_____
<code>
library(tidyverse)
library(ggplot2)
library(ggridges)
library(jsonlite)
library(viridis)_____no_output_____
</code>
# Analyses
An understanding of the USA National Phenology Network's suite of [models and maps](https://www.usanpn.org/data/maps) is required to properly use this analysis package and to assess the results.
The Extended Spring Indices, the model used to estimate the timing of "first leaf" and "first bloom" events for early spring indicator species at a specific location, are detailed on this [page](https://www.usanpn.org/data/spring_indices) of the USA-NPN website. Note both indices are based on the 2013 version of the underlying predictive model (Schwartz et al. 2013). The current model and its antecedents are described on the USA-NPN site and in peer-reviewed literature (Ault et al. 2015, Schwartz 1997, Schwartz et al. 2006, Schwartz et al. 2013). Crimmins et al. (2017) documents the USA National Phenology Network gridded data products used in this analysis package. USA-NPN also provides an assessment of Spring Index uncertainty and error with their [Spring Index and Plausibility Dashboard](https://www.usanpn.org/data/si-x_plausibility).
_____no_output_____## First Leaf
This analysis looks at the timing of First Leaf or leaf out for a specific location as predicted by the USA-NPN Extended Spring Indices models (https://www.usanpn.org/data/spring_indices, accessed 2018-01-27). The variable *average_leaf_prism* which is based on [PRISM](http://www.prism.oregonstate.edu) temperature data was used for this analysis. _____no_output_____### Inputs
The operational BAP prototype retrieves data in real-time from the [USA National Phenology Network](https://www.usanpn.org)'s Web Processing Service (WPS) using a developer key issued by USA-NPN. Their WPS allows a key holder to request and retrieve model output values for a specified model, area of interest and time period. Model output for the variable *average_leaf_prism* was retrieved 2018-01-27. The area of interest, Yellowstone National Park, was analyzed using information from the [Spatial Feature Registry](https://github.com/usgs-bis/nbmdocs/blob/master/docs/bis.rst). The specified time period was 1981 to 2016. This notebook provides a lightly processed version of that retrieval, [YellowstoneNP-1981-2016-processed-numbers.json](./YellowstoneNP-1981-2016-processed-numbers.json), for those who do not have a personal developer key._____no_output_____
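The analysis cells that follow are written in R. For readers who would rather inspect the provided file in Python, a rough counterpart of the read-and-reshape step is sketched below. It assumes the processed JSON parses to columns of modeled day-of-year values keyed by year, mirroring what `read_json` plus `gather` do in the R cell below; the exact structure of the file is an assumption worth verifying.
```
# Rough Python counterpart of the R read_json + gather step (file structure assumed:
# a mapping from year to the modeled day-of-year values for the park's grid cells).
import pandas as pd

yell = pd.read_json("YellowstoneNP-1981-2016-processed-numbers.json")
yell_long = yell.melt(var_name="Year", value_name="DOY")   # one row per value and year
print(yell_long.head())
```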
<code>
# transform the BIS emitted JSON into something ggplot2 can work with
yell <- read_json("YellowstoneNP-1981-2016-processed-numbers.json", simplifyDataFrame = TRUE, simplifyVector = TRUE, flatten = TRUE)
yelldf <- as_tibble(yell)
yellt <- gather(yelldf, Year, DOY)_____no_output_____
</code>
### Outputs_____no_output_____#### Histogram
Produce a histogram of modeled results for Yellowstone National Park for all years within the specified period of interest (1981 to 2016). The visualization allows the user to assess the range and distribution of all the modeled values for the user-selected area for the entire, user-specified time period. Here, the modeled Leaf Spring Index values for each of the grid cells that fall within the boundary of Yellowstone National Park are binned by Day of Year for the entire period of interest (1981 to 2016 inclusive). Dotted vertical lines indicating the minimum (green), mean (red), and maximum (green) values of the dataset are also shown._____no_output_____
<code>
# produce a histogram for all years
ggplot(yellt, aes(DOY)) +
geom_histogram(binwidth = 1, color = "grey", fill = "lightblue") +
ggtitle("Histogram of First Leaf Spring Index, Yellowstone National Park (1981 - 2016)") +
geom_vline(aes(xintercept=mean(DOY, na.rm=T)), color = "red", linetype = "dotted", size = 0.5) +
geom_vline(aes(xintercept = min(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) +
geom_vline(aes(xintercept = max(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5)_____no_output_____
</code>
This notebook uses the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the above histogram. Operationalized, online versions of this visualization should be based on the guidance provided by the ggplot2 developers. See their section entitled [*Histograms and frequency polygons*](https://ggplot2.tidyverse.org/reference/geom_histogram.html) for details and approaches. The webpage provides links to their source code. Also, note the modeled grid cell values are discrete and should be portrayed as such in an operationalized graphic._____no_output_____#### Boxplots
Produce a multiple boxplot display of the modeled results for Yellowstone National Park for each year within the specified time period. Each individual boxplot portrays that year's median, hinges, whiskers and "outliers". The multiple boxplot display allows the user to explore the distribution of modeled spring index values through time._____no_output_____
<code>
# Produce a mulitple boxplot display with a boxplot for each year
ggplot(yellt, aes(y = DOY, x = Year, group = Year)) +
geom_boxplot() +
geom_hline(aes(yintercept = median(DOY, na.rm=T)), color = "blue", linetype = "dotted", size = 0.5) +
ggtitle("DRAFT: Boxplot of Spring Index, Yellowstone National Park (1981 to 2016)")_____no_output_____
</code>
This notebook uses the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the multiple boxplot above. Base any operationalized, online versions of this visualization on the guidance provided by the ggplot2 developers. See their section entitled [*A box and whiskers plot (in the style of Tukey)*](https://ggplot2.tidyverse.org/reference/geom_boxplot.html) for details and approaches. Links to their source code are available at that web location. _____no_output_____#### Ridgeline Plots
Produce ridgeline plots for each year to better visualize changes in the distributions over time._____no_output_____
<code>
# ridgeline plot with gradient coloring based on day of year for each available year
ggplot(yellt, aes(x = DOY, y = Year, group = Year, fill = ..x..)) +
geom_density_ridges_gradient(scale = 3, rel_min_height = 0.01, gradient_lwd = 1.0, from = 80, to = 180) +
scale_x_continuous(expand = c(0.01, 0)) +
scale_y_continuous(expand = c(0.01, 0)) +
scale_fill_viridis(name = "Day of\nYear", option = "D", direction = -1) +
labs(title = 'DRAFT: Spring Index, Yellowstone National Park',
subtitle = 'Annual Spring Index by Year for the Period 1981 to 2016\nModel Results from the USA National Phenology Network',
y = 'Year',
x = 'Spring Index (Day of Year)',
caption = "(model results retrieved 2018-01-26)") +
theme_ridges(font_size = 12, grid = TRUE) +
geom_vline(aes(xintercept = mean(DOY, na.rm=T)), color = "red", linetype = "dotted", size = 0.5) +
geom_vline(aes(xintercept = min(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) +
geom_vline(aes(xintercept = max(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) Picking joint bandwidth of 1.06
</code>
This notebook used the [ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) R package to produce the ridgeline above. Base any operationalized, online versions of this visualization on the guidance provided by the ggridges developer. See their R package vignette [Introduction to ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) for details and approaches. Source code is available at their [GitHub repo](https://github.com/clauswilke/ggridges). _____no_output_____## First Bloom
This analysis looks at the timing of First Bloom for a specific location as predicted by the USA-NPN Extended Spring Indices models (https://www.usanpn.org/data/spring_indices, accessed 2018-01-27). The variable *average_bloom_prism* which is based on [PRISM](http://www.prism.oregonstate.edu) temperature data was used for this analysis.
Output visualizations and implementation notes follow the approach and patterns used for First Leaf: histograms, multiple boxplots and ridgeline plots._____no_output_____# Code
Code used for this notebook is available at the [usgs-bcb/phenology-baps](https://github.com/usgs-bcb/phenology-baps) GitHub repository. _____no_output_____# Provenance
This prototype analysis package was a collaborative development effort between USGS [Core Science Analytics, Synthesis, and Libraries](https://www.usgs.gov/science/mission-areas/core-science-systems/csasl?qt-programs_l2_landing_page=0#qt-programs_l2_landing_page) and the [USA National Phenology Network](https://www.usanpn.org). Members of the scientific development team met and discussed use cases, analyses, and visualizations during the third quarter of 2016. Model output choices as well as accessing the information by means of the USA-NPN Web Processing Service were also discussed at that time.
This notebook was based upon those group discussions and Tristan Wellman's initial ideas for processing and visualizing the USA-NPN spring index data. That initial body of work and other supporting code is available at his GitHub repository, [TWellman/USGS_BCB-NPN-Dev-Space](https://github.com/TWellman/USGS_BCB-NPN-Dev-Space). This notebook used the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the histograms, boxplots, and ridgeline plots. The ggplot2 developers provide online guidance and links to their source code for these at [*Histograms and frequency polygons*](https://ggplot2.tidyverse.org/reference/geom_histogram.html) and [*A box and whiskers plot (in the style of Tukey)*](https://ggplot2.tidyverse.org/reference/geom_boxplot.html). The [ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) R package is used to produce the ridgeline plot. Usage is described in the R package vignette [Introduction to ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html). The underlying source code is available at the author Claus O. Wilke's [GitHub repo](https://github.com/clauswilke/ggridges). Software developers at the Fort Collins Science Center worked with members of the team to operationalize the scientific code and make it publicly available on the web. An initial prototype application is available at (https://my-beta.usgs.gov/biogeography/).
_____no_output_____# Citations
Ault, T. R., M. D. Schwartz, R. Zurita-Milla, J. F. Weltzin, and J. L. Betancourt (2015): Trends and natural variability of North American spring onset as evaluated by a new gridded dataset of spring indices. Journal of Climate 28: 8363-8378.
Crimmins, T.M., R.L. Marsh, J. Switzer, M.A. Crimmins, K.L. Gerst, A.H. Rosemartin, and J.F. Weltzin. 2017. USA National Phenology Network gridded products documentation. U.S. Geological Survey Open-File Report 2017–1003. DOI: 10.3133/ofr20171003.
Monahan, W. B., A. Rosemartin, K. L. Gerst, N. A. Fisichelli, T. Ault, M. D. Schwartz, J. E. Gross, and J. F. Weltzin. 2016. Climate change is advancing spring onset across the U.S. national park system. Ecosphere 7(10):e01465. 10.1002/ecs2.1465
Schwartz, M. D. 1997. Spring index models: an approach to connecting satellite and surface phenology. Phenology in seasonal climates I, 23-38.
Schwartz, M.D., R. Ahas, and A. Aasa, 2006. Onset of spring starting earlier across the Northern Hemisphere. Global Change Biology, 12, 343-351.
Schwartz, M. D., T. R. Ault, and J. L. Betancourt, 2013: Spring onset variations and trends in the continental United States: past and regional assessment using temperature-based indices. International Journal of Climatology, 33, 2917–2922, 10.1002/joc.3625.
_____no_output_____
|
{
"repository": "usgs-bcb/phenology-baps",
"path": "annual-indices-of-spring.ipynb",
"matched_keywords": [
"biology"
],
"stars": 1,
"size": 472733,
"hexsha": "cba4efa00def010ade4378ee93985fb1eae76bd3",
"max_line_length": 245108,
"avg_line_length": 1114.9363207547,
"alphanum_fraction": 0.9419185883
}
|
# Notebook from timothyas/aste
Path: aste_llcreader_example.ipynb
# ASTE Release 1: Accessing the output with xmitgcm's llcreader module
The Arctic Subpolar gyre sTate Estimate (ASTE) is a medium resolution, dynamically consistent, data constrained
simulation of the ocean and sea ice state in the Arctic and subpolar gyre, spanning 2002-2017.
See details on Release 1 in [Nguyen et al, 2020].
This notebook serves as an example for accessing the output from this state estimate using xmitgcm's
[llcreader module](https://xmitgcm.readthedocs.io/en/latest/llcreader.html)
to get the output in an [xarray](http://xarray.pydata.org/en/stable/) dataset.
These capabilities heavily rely on [dask](https://dask.org/)
to lazily grab the data as we need it.
Users are strongly encouraged to check out [dask's best practices](https://docs.dask.org/en/latest/best-practices.html)
regarding memory management before performing more advanced calculations.
Any problems due to connections with the server can be reported as a
[GitHub Issue](https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/creating-an-issue)
on the [ASTE repo](https://github.com/crios-ut/aste).
Finally, we are grateful to the Texas Advanced Computing Center (TACC) for providing storage on Amazon Web Services
(AWS) through cloud services integration on Frontera [Stanzione et al, 2020].
---
---
Nguyen, An T., Ocaña, V., Pillar, H., Bigdeli, A., Smith, T. A., & Heimbach, P. (2021). The Arctic Subpolar gyre sTate Estimate: a data-constrained and dynamically consistent ocean-sea ice estimate for 2002–2017. Submitted to Journal of Advances in Modeling Earth Systems.
Dan Stanzione, John West, R. Todd Evans, Tommy Minyard, Omar Ghattas, and Dhabaleswar K. Panda. 2020. Frontera: The Evolution of Leadership Computing at the National Science Foundation. In Practice and Experience in Advanced Research Computing (PEARC ’20), July 26–30, 2020, Portland, OR, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3311790.3396656_____no_output_____
<code>
import numpy as np
import warnings
import matplotlib.pyplot as plt
import xarray as xr
import cmocean
from dask.distributed import Client
from xmitgcm import llcreader
import ecco_v4_py_____no_output_____
</code>
## Get an xarray dataset with *all* ASTE_R1 variables, depth levels, and time steps
The function `get_dataset` by default grabs all available output, at all depth levels and all time steps, where each
time step represents a monthly mean for that field.
This may be suboptimal when operating on a machine with limited memory, for instance on a laptop.
See the [llcreader documentation](https://xmitgcm.readthedocs.io/en/latest/llcreader.html) for more examples
on how to subset the data with the [get_dataset method](https://xmitgcm.readthedocs.io/en/latest/llcreader.html#api-documentation),
including how to grab specific variables, vertical slices, or time steps._____no_output_____
<code>
aste = llcreader.CRIOSPortalASTE270Model()_____no_output_____ds = aste.get_dataset()_____no_output_____
</code>
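If the full dataset is more than your machine can comfortably hold, `get_dataset` can also subset at load time, as discussed above. The sketch below is a suggestion only; the argument names follow the llcreader documentation linked above and should be checked against your installed xmitgcm version.
```
# Load only selected variables and the surface level instead of the full dataset
# (argument names per the llcreader documentation; verify against your xmitgcm version).
ds_small = aste.get_dataset(
    varnames=['THETA', 'SALT'],   # only the variables needed
    k_levels=[0],                 # surface level only
)
```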
### Grab a single monthly average
Here we subset the dataset to show the ASTE ocean state during a single month, September 2012.
Alternatively, one can provide the corresponding iteration to the `iters` option in `get_dataset`
to achieve the same behavior._____no_output_____
<code>
ds = ds.sel(time='2012-09')_____no_output_____
</code>
We are grabbing a single time slice to make this demo quick.
Of course, [xarray](http://xarray.pydata.org/en/stable/) makes it easy to compute full time mean quantities,
for example SST averaged from 2006 through 2017:
```
sst = ds['THETA'].sel(k=0,time=slice('2006','2017')).mean(dim='time')
```
but note that this will take longer than the plots below because `llcreader` has to grab
all of the 2006-2017 data from the cloud._____no_output_____### Some housekeeping_____no_output_____- rename the `face` dimension to `tile` (shown and discussed below)
- split the coordinate and data variables to speed things up a bit_____no_output_____
<code>
ds = ds.rename({'face':'tile'})
cds = ds.coords.to_dataset().reset_coords()
ds = ds.reset_coords(drop=True)_____no_output_____
</code>
#### A list of all the data variables_____no_output_____
<code>
ncols=10
for i,f in enumerate(list(ds.data_vars),start=1):
end = '\n' if i%ncols==0 else ', '
print(f,end=end)ADVr_SLT, ADVr_TH, ADVxHEFF, ADVxSNOW, ADVx_SLT, ADVx_TH, ADVyHEFF, ADVySNOW, ADVy_SLT, ADVy_TH
DETADT2, DFrE_SLT, DFrE_TH, DFrI_SLT, DFrI_TH, DFxEHEFF, DFxESNOW, DFxE_SLT, DFxE_TH, DFyEHEFF
DFyESNOW, DFyE_SLT, DFyE_TH, ETAN, ETANSQ, GM_PsiX, GM_PsiY, KPPg_SLT, KPPg_TH, MXLDEPTH
PHIBOT, SALT, SFLUX, SIaaflux, SIacSubl, SIarea, SIatmFW, SIatmQnt, SIheff, SIhsnow
SIsnPrcp, SItflux, SIuice, SIvice, SRELAX, TFLUX, THETA, TRELAX, UVELMASS, VVELMASS
WSLTMASS, WTHMASS, WVELMASS, oceFWflx, oceQnet, oceQsw, oceSPDep, oceSPflx, oceSPtnd, oceSflux
oceTAUX, oceTAUY, sIceLoad,
</code>
#### and all the variables describing the underlying grid_____no_output_____
<code>
ncols=10
for i,f in enumerate(list(cds.data_vars),start=1):
end = '\n' if i%ncols==0 else ', '
print(f,end=end)niter, CS, SN, drC, drF, dxC, dxG, dyC, dyG, Depth
PHrefC, PHrefF, rA, rAs, rAw, rAz, Z, Zp1, rhoRef, XC
XG, YC, YG, hFacC, hFacS, hFacW, maskC, maskCtrlC, maskCtrlS, maskCtrlW
maskInC, maskInS, maskInW, maskS, maskW, Zl, Zu,
</code>
and we can get some nice meta data to explain what this means thanks to `xmitgcm`+`xarray`_____no_output_____
<code>
ds.ADVx_TH_____no_output_____
</code>
### A quick plot
This is just a sanity check - we have the output!_____no_output_____
<code>
%%time
ds.THETA.sel(k=0).plot(col='tile',col_wrap=3)CPU times: user 917 ms, sys: 148 ms, total: 1.07 s
Wall time: 1.96 s
</code>
## Use ECCOv4-py to make a nicer plot: average SST and SSS during September, 2012
The plot above shows the "tiled" LLC grid topology of ASTE, which can be cumbersome to work with.
This grid is familiar to anyone used to the global ECCO state estimate, which
[ecco_v4_py](https://github.com/ECCO-GROUP/ECCOv4-py) is
designed to deal with.
As of `ecco_v4_py` version 1.3.0, we can now use all the same functions with ASTE as well.
See below for an example of a nicer plot.
See [here](https://ecco-v4-python-tutorial.readthedocs.io/fields.html#geographical-layout)
to read more about the LLC grid._____no_output_____
<code>
sst = ds['THETA'].sel(k=0)
sss = ds['SALT'].sel(k=0)_____no_output_____%%time
fig = plt.figure(figsize=(18,6))
for i,(fld,cmap,cmin,cmax) in enumerate(zip([sst,sss],
['cmo.thermal','cmo.haline'],
[-1,30],[30,38]),start=1):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
fig,ax,p,cbar,*_=ecco_v4_py.plot_proj_to_latlon_grid(cds.XC,cds.YC,fld,
show_colorbar=True,
projection_type='ortho',
user_lon_0=-45,user_lat_0=50,
subplot_grid=[1,2,i],
cmap=cmap,cmin=cmin,cmax=cmax);CPU times: user 33.5 s, sys: 1.66 s, total: 35.2 s
Wall time: 23.4 s
</code>
## Use ECCOv4-py to get velocities in the "expected" direction
The first plot showed how the rotated fields in the ASTE domain can be difficult to visualize.
This is especially true for any vector field (e.g. zonal, meridional velocity), where the
vector components are also rotated with each "tile".
In order to visualize vector components, we can use the [vector_calc](https://github.com/ECCO-GROUP/ECCOv4-py/blob/master/ecco_v4_py/vector_calc.py)
module to perform the necessary interpolation and rotation operations.
Note that these routines are essentially simple wrappers around [xgcm](https://xgcm.readthedocs.io/en/latest/)
Grid operations, which make all of this possible while working with [xarray](http://xarray.pydata.org/en/stable/)
and [dask](https://dask.org/)._____no_output_____
<code>
# get an xgcm Grid object
grid = ecco_v4_py.get_llc_grid(cds,domain='aste')_____no_output_____%%time
uvel,vvel = ecco_v4_py.vector_calc.UEVNfromUXVY(ds['UVELMASS'].sel(k=0),
ds['VVELMASS'].sel(k=0),
coords=cds,
grid=grid)CPU times: user 43.6 ms, sys: 553 µs, total: 44.1 ms
Wall time: 43.3 ms
uvel.attrs = ds.UVELMASS.attrs
vvel.attrs = ds.VVELMASS.attrs_____no_output_____%%time
fig = plt.figure(figsize=(18,6))
vmax = .6
for i,(fld,cmap,cmin,cmax) in enumerate(zip([uvel,vvel],
['cmo.balance','cmo.balance'],
[-vmax]*2,[vmax]*2),start=1):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
fig,ax,p,cbar,*_=ecco_v4_py.plot_proj_to_latlon_grid(cds.XC,cds.YC,fld,
show_colorbar=True,
projection_type='ortho',
user_lon_0=-45,user_lat_0=50,
subplot_grid=[1,2,i],
cmap=cmap,cmin=cmin,cmax=cmax);CPU times: user 45.7 s, sys: 3.5 s, total: 49.2 s
Wall time: 52.8 s
</code>
## Use ECCOv4-py to compute volumetric transports: Fram Strait example
Compare to Fig. 14 of [Nguyen et al., 2020], showing the time mean:
- inflow of Atlantic waters to the Arctic = 6.2$\pm$2.3 Sv
- outflow of modified waters = -8.3$\pm$2.5 Sv
where positive indicates "toward the Arctic".
Again, we compute this quantity for a single time slice as a quick example, but this can be easily extended
to compute for example the time series of volumetric transport._____no_output_____
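As a rough sketch of that extension: skip the `.sel(time='2012-09')` subsetting above so the dataset keeps its full time dimension, and the section transport should come back with a time axis that can be plotted directly. Whether `calc_section_vol_trsp` preserves the time dimension of its input in this way is an assumption worth verifying, and pulling all the monthly fields from the cloud will be considerably slower than the single-month example.
```
# Sketch: Fram Strait (western portion) volumetric transport as a monthly time series.
# Assumes the dataset has NOT been subset to a single month.
ds_full = aste.get_dataset().rename({'face': 'tile'})
cds_full = ds_full.coords.to_dataset().reset_coords()
ds_full = ds_full.reset_coords(drop=True)

fs_west_ts = ecco_v4_py.calc_section_vol_trsp(
    ds_full, grid=grid, pt1=[-18.5, 80.37], pt2=[1, 80.14], coords=cds_full)

fs_west_ts.vol_trsp.plot()   # expect this to take a while (cloud data access)
```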
<code>
fsW = ecco_v4_py.calc_section_vol_trsp(ds,grid=grid,pt1=[-18.5,80.37],pt2=[1,80.14],coords=cds)
fsE = ecco_v4_py.calc_section_vol_trsp(ds,grid=grid,pt1=[1,80.14],pt2=[11.39,79.49],coords=cds)_____no_output_____fsW = fsW.swap_dims({'k':'Z'})
fsE = fsE.swap_dims({'k':'Z'})_____no_output_____plt.rcParams.update({'font.size':14})_____no_output_____fig,ax = plt.subplots(1,1,figsize=(6,8),constrained_layout=True)
for vds,lbl in zip([fsW,fsE],['Outflow','Inflow']):
mylbl = f'Total {lbl} %2.2f {vds.vol_trsp.units}' % vds.vol_trsp.values
vds.vol_trsp_z.plot(y='Z',ax=ax,label=mylbl)
ax.grid(True)
ax.set(ylim=[-3000,0],
xlabel=f'Volumetric Transport [{fsW.vol_trsp.units}]',
title=f'Fram Strait Volumetric Transport, Sep. 2012\nPositive into Arctic [{vds.vol_trsp.units}]')
ax.legend()_____no_output_____
</code>
|
{
"repository": "timothyas/aste",
"path": "aste_llcreader_example.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 3,
"size": 806077,
"hexsha": "cba71c140d3481efff69072cba4c7973c03b6599",
"max_line_length": 344536,
"avg_line_length": 740.1992653811,
"alphanum_fraction": 0.9394722837
}
|
# Notebook from arinmuk/python_apis
Path: 3/Activities/02-Ins_Google_Places/Solved/Google_Places.ipynb
<code>
# Dependencies
import requests
import json
# Google developer API key
from config import gkey_____no_output_____# geocoordinates
target_coordinates = "43.6187102, -116.2146068"
target_search = "Chinese"
target_radius = 8000
target_type = "restaurant"
# set up a parameters dictionary
params = {
"location": target_coordinates,
"keyword": target_search,
"radius": target_radius,
"type": target_type,
"key": gkey
}
# base url
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
# run a request using our params dictionary
response = requests.get(base_url, params=params)_____no_output_____# print the response url (avoid doing this in public GitHub repos so the API key is not exposed)
#print(response.url)_____no_output_____# convert response to json
places_data = response.json()
# Print the json (pretty printed)
print(json.dumps(places_data, indent=4, sort_keys=True)){
"html_attributions": [],
"next_page_token": "CrQCLAEAABXBq82dpPa_Ss2qSyXm_Qc4pC6qh8idI86N41Zb0hmK7nDuVVZrIpkCcb3YuZ2dEdEmGfWsi2qA3fXVBk_NgjeiwVbv6hLZwO9GVxl5DyO6vA1NuNQdIWD0hK7NNHIRFbfdjnHbtCnG8pzeL3RGM4g8WhXP_BKk8415wvoey51gKF2piSpGIn2d1AMrws7VYmrBebjyIZ4t9yS5pgWGzkoBwkDO0KIZ4nRZXk8uAPaAD9iCh0roIVmH2yUK6FojZxtWIvLZ8kdpkO_uJnWuWvd7Frsjz5jjpIMv02MQfZ5gULyftfzlEaEsRBO_-OMxBquAtd7aPgTt1SeUsHAGdvt-gPWrOb2-lIEge_q99No8dOdnIQe65mAn1R43jYIUJ5GM7WYIs_rne2L6zd7PmmYSEGHg1KIE6mJ63bee_DmAeRYaFI9HqWEjeac8NcpUdJMuwBvcl_S3",
"results": [
{
"geometry": {
"location": {
"lat": 43.6180841,
"lng": -116.2031626
},
"viewport": {
"northeast": {
"lat": 43.61938607989273,
"lng": -116.2017191701073
},
"southwest": {
"lat": 43.61668642010729,
"lng": -116.2044188298928
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "e525d78c947c5527fadf72ee54777d3a3517c3a1",
"name": "Yen Ching Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 3456,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/111898372840758744049/photos\">Robert Stucker</a>"
],
"photo_reference": "CmRaAAAAy4TObX-raN2LntfwUeoJ7rb1ljmBPHDC7bM6VipsCRCRdAakLi6YTegBMYZayqY0O72QXC7okLMYihuDPFKv3Slg3S3wyLhdaJZu0M-cmpybqJuLQIv3hz_THEecffdFEhCoIDK1BAolSMK3ksC4yeKkGhR8cbM5TAsKwnoOAxKs8FQyF3SExw",
"width": 4608
}
],
"place_id": "ChIJD1s29-P4rlQR0QVGxC9AWY0",
"plus_code": {
"compound_code": "JQ9W+6P Boise, Idaho",
"global_code": "85M5JQ9W+6P"
},
"price_level": 2,
"rating": 4.2,
"reference": "ChIJD1s29-P4rlQR0QVGxC9AWY0",
"scope": "GOOGLE",
"types": [
"bar",
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "305 N 9th St, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6251218,
"lng": -116.2123933
},
"viewport": {
"northeast": {
"lat": 43.62631952989272,
"lng": -116.2111869701072
},
"southwest": {
"lat": 43.62361987010728,
"lng": -116.2138866298927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "b1910608eed20a9557d6d9fb29675484932ad9ce",
"name": "North End Chinese Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 1080,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/103039634556116345378/photos\">Duane Hughes</a>"
],
"photo_reference": "CmRaAAAAYfYuHBCyEeYIOHFztWve-9EFLKHYhs95-U1h4CH70CFJXX_8sQJUetoAl8LwqZpisrfYcmiU5yZsNIDPvT3BXW5FoeWI94dqvZsOkl6Ef_89kiksW9KTdCa98J2qfdieEhDuZnhw_41ejwXTxc3xz8bzGhTy6Fw9r0S-VuhafsU93NOcYOaY4w",
"width": 1920
}
],
"place_id": "ChIJP5lMQNz4rlQRPRigHmorMWw",
"plus_code": {
"compound_code": "JQGQ+22 Boise, Idaho",
"global_code": "85M5JQGQ+22"
},
"rating": 3.9,
"reference": "ChIJP5lMQNz4rlQRPRigHmorMWw",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "3955, 1806 W State St, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6189873,
"lng": -116.2912796
},
"viewport": {
"northeast": {
"lat": 43.62033712989272,
"lng": -116.2900414701073
},
"southwest": {
"lat": 43.61763747010728,
"lng": -116.2927411298927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "61f5604ae4b98f4178f367df4a80b246abebe85f",
"name": "Confucius Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 1960,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/114834946914295989143/photos\">Michael Hirst</a>"
],
"photo_reference": "CmRaAAAAlP308LfVnNthklzJIdyWwTRRiMkP7tYGFVjfh2kPpfKB5VUoNsHi3R-axbi0y8tAcRx8ALhkms4mdiqFjT_O-kN9Vm4LENgk1E6tgXvw1ZM3UpadazWzMsPP_diEHpBVEhCUQdq6s6szs15doLsHMqsAGhSKT7ZChRI06-gYNJt_OxizowSw3A",
"width": 4032
}
],
"place_id": "ChIJxXnvGXVWrlQRdHTY4TuLvMo",
"plus_code": {
"compound_code": "JP95+HF Boise, Idaho",
"global_code": "85M5JP95+HF"
},
"price_level": 2,
"rating": 4.1,
"reference": "ChIJxXnvGXVWrlQRdHTY4TuLvMo",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "8775 W Fairview Ave, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6148417,
"lng": -116.2427971
},
"viewport": {
"northeast": {
"lat": 43.61619172989273,
"lng": -116.2415368701073
},
"southwest": {
"lat": 43.61349207010728,
"lng": -116.2442365298927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "1cf6836746006f010fa362260831d258ddc3d8b7",
"name": "Golden Star Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 1920,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/113168016592032142748/photos\">John Maulin</a>"
],
"photo_reference": "CmRaAAAACCeuZVHZO2at1erxpBJT1fTqUQLfd9hUynE0AD1zQbkyTCUKXTtTZyiUswF5cE_HuoVLqg1Pub-gQOfse4PfR_vKrx_fpD3wm0qfeyj3C3OdYtm849Gl9TWptoyGh1oBEhB1wIhLEAD5NtDfxQHjO7jIGhQVuxlVkq3D75YbWPrUTR0luUaBlA",
"width": 1080
}
],
"place_id": "ChIJTctsXaL4rlQRbC1U996y4-0",
"plus_code": {
"compound_code": "JQ74+WV Boise, Idaho",
"global_code": "85M5JQ74+WV"
},
"price_level": 2,
"rating": 4.3,
"reference": "ChIJTctsXaL4rlQRbC1U996y4-0",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "1142 N Orchard St, Boise"
},
{
"geometry": {
"location": {
"lat": 43.5888893,
"lng": -116.2776039
},
"viewport": {
"northeast": {
"lat": 43.59031987989272,
"lng": -116.2762912701072
},
"southwest": {
"lat": 43.58762022010728,
"lng": -116.2789909298927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "286cea6abbf1b8ca63e5b1d4223cf74c9998e69e",
"name": "Guang Zhou Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 3024,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/114780042035600807806/photos\">Mikali Beaumont</a>"
],
"photo_reference": "CmRaAAAAZUxz3jnSN3Orq_v11c96M3GFy0uGVUbjAAwM1f9An-LGMJl9mN2fK6as_QW8A3rvxDJil4OCOdhrf7_syLj47i9KXXdzRXYAdK8CDoRMGpiKcTF6KxoakX-svpOqzFhTEhCjCOfyY-stf8ny6YNq2oSlGhQ_xLJz1jRhgopMYOfVUgjH64rYwQ",
"width": 4032
}
],
"place_id": "ChIJDT-7DlJWrlQRO7rtBTzZf6s",
"plus_code": {
"compound_code": "HPQC+HX Boise, Idaho",
"global_code": "85M5HPQC+HX"
},
"price_level": 1,
"rating": 4,
"reference": "ChIJDT-7DlJWrlQRO7rtBTzZf6s",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "7609 W Overland Rd, Boise"
},
{
"geometry": {
"location": {
"lat": 43.5908727,
"lng": -116.2901825
},
"viewport": {
"northeast": {
"lat": 43.59222267989272,
"lng": -116.2888915701073
},
"southwest": {
"lat": 43.58952302010728,
"lng": -116.2915912298928
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "fd94ac3594bbff6c11b913eacb10273d996e48b0",
"name": "Lucky Palace Chinese Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 2952,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/107131937753504663981/photos\">Carter Wigley</a>"
],
"photo_reference": "CmRaAAAAoMBPBUsG6_s5uDWriDkUWen09T8sHIGiB3cdzfgdJECKfLyXNAVRcdUCxoeM1Ts2ZhRyUBHfIkez6cOrGOqTwGtr3qA_UHQtqNoEvnciZ14XKdTMiMf3RvGTVZFyfL-CEhCmV2BO2lyLPYf_7pgmujQtGhSg_m1HY0jjkqokd0e2SaR4GJjr2g",
"width": 5248
}
],
"place_id": "ChIJ5RKyTVZWrlQRY-t-LLVVQ9M",
"plus_code": {
"compound_code": "HPR5+8W Boise, Idaho",
"global_code": "85M5HPR5+8W"
},
"price_level": 2,
"rating": 3.6,
"reference": "ChIJ5RKyTVZWrlQRY-t-LLVVQ9M",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "8630 W Overland Rd, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6487196,
"lng": -116.246199
},
"viewport": {
"northeast": {
"lat": 43.65013982989272,
"lng": -116.2449386701073
},
"southwest": {
"lat": 43.64744017010728,
"lng": -116.2476383298927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "1d24153141ea5604e7f247978036ec0b8f45d75b",
"name": "New Garden Chinese Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 1920,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/107509501816582528738/photos\">Emily Siess Donnellan</a>"
],
"photo_reference": "CmRaAAAAgcFMWpfuYFZaiQ7MwLUMB6S7UFnCsW0hncQg0uGA9B6IS3oTdv_GMRpYd9un-sNyBOCsEsEy6VmOEP6mThKUsnvO-Ec9XzlxfolJUMbe4BbwjSNdqoJ-txANDPpJNTK4EhBLurd6Z87REVyyYrK0eJCTGhRTKFQeaurCpcgoo6fQbIdkb3dY6A",
"width": 1080
}
],
"place_id": "ChIJvwYXUDz_rlQRKpVyY618JMM",
"plus_code": {
"compound_code": "JQX3+FG Boise, Idaho",
"global_code": "85M5JQX3+FG"
},
"rating": 4,
"reference": "ChIJvwYXUDz_rlQRKpVyY618JMM",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "4624 W State St, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6136713,
"lng": -116.2057373
},
"viewport": {
"northeast": {
"lat": 43.61496122989272,
"lng": -116.2042701201073
},
"southwest": {
"lat": 43.61226157010728,
"lng": -116.2069697798927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "b718466fdc95f8586e92b72cac5f0632bf5a24a4",
"name": "P.F. Chang's",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 3072,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/113205105119011446064/photos\">Zaiga Morris</a>"
],
"photo_reference": "CmRaAAAAKr4kiXvWhD054NcsD8if7XU6HAZIDvNPMb06IoQHQ94asapUfjywVL9De9H326SeIy4MY1weSkRVQ0LhOQneCSKkZy_UrBX44BRqLowsG36CJ_ZhN56Gtfx1SUeg2BQ9EhD7T65JQdvTtsDbrXf4WhS_GhSC6R5EssNcdteNkhAgoGNdt26YwQ",
"width": 4096
}
],
"place_id": "ChIJEVx3TeX4rlQR4QkPYThohEg",
"plus_code": {
"compound_code": "JQ7V+FP Boise, Idaho",
"global_code": "85M5JQ7V+FP"
},
"price_level": 2,
"rating": 4,
"reference": "ChIJEVx3TeX4rlQR4QkPYThohEg",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "391 S 8th St, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6201152,
"lng": -116.312547
},
"viewport": {
"northeast": {
"lat": 43.62139177989273,
"lng": -116.3111985701073
},
"southwest": {
"lat": 43.61869212010728,
"lng": -116.3138982298927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "4929e977dc1f7858efe1b37024d1fa6db3684772",
"name": "China Grand Buffet",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 3120,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/101287468922822261096/photos\">A Google User</a>"
],
"photo_reference": "CmRaAAAAoLgszF4FCXoF5OiB3Kb8HBFRTfQOREgfmZF7XXSSvxVVXX14djbHohRH9IpQwXDF21VnOjn3FmeBD3xHces73kFkfXcfklw5ZD3cLBFGl_7WuBFmVYexevXcsWFUuzYXEhCn868UcawaPMdYyClun2TpGhTzMhRuKNllEHG3E6qTtwYo-sNEnw",
"width": 4160
}
],
"place_id": "ChIJTSFfGNRVrlQRuTYkj0zGqMM",
"plus_code": {
"compound_code": "JMCP+2X Boise, Idaho",
"global_code": "85M5JMCP+2X"
},
"price_level": 1,
"rating": 4.1,
"reference": "ChIJTSFfGNRVrlQRuTYkj0zGqMM",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "10498 W Fairview Ave, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6028923,
"lng": -116.2443818
},
"viewport": {
"northeast": {
"lat": 43.60420812989273,
"lng": -116.2430330701073
},
"southwest": {
"lat": 43.60150847010728,
"lng": -116.2457327298928
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "81e7b34efcf3b72a23a6b3223f413b90aa792d04",
"name": "Mandarin Palace",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 2988,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/112784219864839598751/photos\">laurel clark</a>"
],
"photo_reference": "CmRaAAAA_YX-8QKPeLlD0_8_gTcXNcjW-4Pi4Kmj1iQOu_rnyr0f-YKHik4NY80eFuH7s6yDYavMvTRURrAlnUK0Nubp98A2cHYjt1yOoXOjLV_qTSvsxQ5d0KL70yVXtAQ7IUAsEhDVxkaBKlgjiwJnuihIbpdSGhT65BJeiwbvSQHPuZ16r02Kbx8xTg",
"width": 5312
}
],
"place_id": "ChIJecwebShWrlQRo_MtO2S0rdA",
"plus_code": {
"compound_code": "JQ34+56 Boise, Idaho",
"global_code": "85M5JQ34+56"
},
"rating": 3.7,
"reference": "ChIJecwebShWrlQRo_MtO2S0rdA",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "5020 Franklin Rd, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6215454,
"lng": -116.2180356
},
"viewport": {
"northeast": {
"lat": 43.62301827989272,
"lng": -116.2166851201073
},
"southwest": {
"lat": 43.62031862010728,
"lng": -116.2193847798927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "af8363c6aa7b756c9c0ebc6251030024a8067dd5",
"name": "Sushi Joy Asian Cuisine",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 1920,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/112554802017139705501/photos\">Jose Perez</a>"
],
"photo_reference": "CmRaAAAAU44zJeOFhwxquE2GNjZOLc_ZB8UITo7Gmd5EOw6hZ5ugNUvR4h_m7jTuNyuCKOQTUm9HXbbFqJY-4lkgrEqvSYXJXBw4dlqr89_JpG7E4Y2BTHtc-PKKdHsM-Y7XRTazEhBXH8oFhlLXB8dMmAgM4D-cGhQzdXk743GPE2FJFPU1NdCbGd7sIw",
"width": 1080
}
],
"place_id": "ChIJCavi88L4rlQR8mDWej59VVc",
"plus_code": {
"compound_code": "JQCJ+JQ Boise, Idaho",
"global_code": "85M5JQCJ+JQ"
},
"price_level": 2,
"rating": 4.6,
"reference": "ChIJCavi88L4rlQR8mDWej59VVc",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "2275 W Main St, Boise"
},
{
"geometry": {
"location": {
"lat": 43.577295,
"lng": -116.195009
},
"viewport": {
"northeast": {
"lat": 43.57901292989273,
"lng": -116.1936632701073
},
"southwest": {
"lat": 43.57631327010728,
"lng": -116.1963629298928
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "f545045ba2cd56701c0a4aac1994c9814ae16ba4",
"name": "Season Wok",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 2952,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/104156735198363326200/photos\">David Ku</a>"
],
"photo_reference": "CmRaAAAAwr8BUaeY_FmmUIHnfSmnQ6jR51_GQoCPBNKxg16z4K7N1qO8jYfeOhkTt3KyMDguchUBDE6GUh5x1-nL77akrAwJHczdEr1hTymIqYCegYxlyZs8IUOrcJ1ND3AImOcUEhDAO6gPFyHbRzDQzGgNE_ioGhQPT0-RBgjC-9nLrHorr5CtkhGx6g",
"width": 5248
}
],
"place_id": "ChIJ_9TdpxD4rlQRdg8ROoJbNSw",
"plus_code": {
"compound_code": "HRG3+WX Boise, Idaho",
"global_code": "85M5HRG3+WX"
},
"rating": 3.4,
"reference": "ChIJ_9TdpxD4rlQRdg8ROoJbNSw",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "2775 Broadway Ave, Boise"
},
{
"geometry": {
"location": {
"lat": 43.618054,
"lng": -116.282076
},
"viewport": {
"northeast": {
"lat": 43.61940457989272,
"lng": -116.2805948201073
},
"southwest": {
"lat": 43.61670492010728,
"lng": -116.2832944798927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "f81ca5c2cbaac12e40248363e8e791490f61b119",
"name": "City Buffet",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 2988,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/103425469668596867526/photos\">dee shuck</a>"
],
"photo_reference": "CmRaAAAAJIvJdzgOLwh1NdhvtZf3Ma_PvHfIZGB728_rM1ERSyjkUdXl42pSNvceqNWLLDofb2ppXCMveuYWaUXlsSCDDGSMV2U57-uFHa3kFsJwh-gfddVYlD4Xan2C1zyzbVJ_EhBGAQv9C-av_kJcsNgaAITvGhSV01fTkimLQvr8Sy5SpaWX0h_BBQ",
"width": 5312
}
],
"place_id": "ChIJDU3kvQtWrlQRqEpYSPtJ_5I",
"plus_code": {
"compound_code": "JP99+65 Boise, Idaho",
"global_code": "85M5JP99+65"
},
"price_level": 2,
"rating": 3.9,
"reference": "ChIJDU3kvQtWrlQRqEpYSPtJ_5I",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "8049 W Fairview Ave, Boise"
},
{
"geometry": {
"location": {
"lat": 43.5906315,
"lng": -116.2801472
},
"viewport": {
"northeast": {
"lat": 43.59193827989272,
"lng": -116.2788408701073
},
"southwest": {
"lat": 43.58923862010727,
"lng": -116.2815405298928
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "ae08003ee569010604f4fa91f90e3d341be10ece",
"name": "Panda Express",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 3024,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/104665938415869848703/photos\">Steven Sloan</a>"
],
"photo_reference": "CmRaAAAAMDBvf3ZHkj_rnLsrBF4RG5xTxIdwlHCBQCGEXAVpYt2RV14iJRZkOovJkwqozZOlnbr4v4K_EMsgM1-VghhYfy8Y-BkrxlnUO1AysAmHPuIWnJdc90CMBZp9DfZDyer1EhCp7ll5aZeyJ7vMZc7w8AwdGhQwqhzSJGpelWk3UucWjpBcsTsJ5A",
"width": 4032
}
],
"place_id": "ChIJb15nIlJWrlQRcvGGNq_0WC8",
"plus_code": {
"compound_code": "HPR9+7W Boise, Idaho",
"global_code": "85M5HPR9+7W"
},
"price_level": 1,
"rating": 4.1,
"reference": "ChIJb15nIlJWrlQRcvGGNq_0WC8",
"scope": "GOOGLE",
"types": [
"meal_takeaway",
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "7804 W Overland Rd, Boise"
},
{
"geometry": {
"location": {
"lat": 43.5915577,
"lng": -116.3116912
},
"viewport": {
"northeast": {
"lat": 43.59287852989272,
"lng": -116.3103414201073
},
"southwest": {
"lat": 43.59017887010728,
"lng": -116.3130410798927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "f2b62644707695960bbe5842c39bf0941643231f",
"name": "Great Wall Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 3464,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/106248445007066630052/photos\">Waylon Pearson</a>"
],
"photo_reference": "CmRZAAAAYrSY4fVpYY-T8QgxS2PHtzmiPnLwZPNPyoB9OypzMy7XNYzvvVfEhcjmuAdQ3-IEBY-3hKUUPxBZ8kqsUWwJ6He3wGjl26OOBiT_1Uxrvpd3Ruy4Cq8ufQzQOre34KB6EhD79XZ3v6jL3geQ-xl4WUqbGhTdQqufNcZ2Q-LNs-dIF-X0KyTK3A",
"width": 4618
}
],
"place_id": "ChIJle9LyO5WrlQRyBqJqPmCLQI",
"plus_code": {
"compound_code": "HMRQ+J8 Boise, Idaho",
"global_code": "85M5HMRQ+J8"
},
"price_level": 2,
"rating": 3.5,
"reference": "ChIJle9LyO5WrlQRyBqJqPmCLQI",
"scope": "GOOGLE",
"types": [
"meal_delivery",
"meal_takeaway",
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "10398 W Overland Rd, Boise"
},
{
"geometry": {
"location": {
"lat": 43.590018,
"lng": -116.2418638
},
"viewport": {
"northeast": {
"lat": 43.59136677989272,
"lng": -116.2405863201073
},
"southwest": {
"lat": 43.58866712010728,
"lng": -116.2432859798927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "2a3be484e3d636b3b1bb36a2cdb40d25e7768fe5",
"name": "Quik-Wok Restaurant",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 1080,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/114471324025049916480/photos\">Jared Scofield</a>"
],
"photo_reference": "CmRaAAAA4fEZrHq3PoLYvktUPPVxwaoH7Z-mUJ2x5c-bstI4sTQerKS6l9ZcrsDrePQdk5aUvPtBiwZR3C14riUPr1cYlE87Q4Eqk4o_PMyJXC-AXWG-1_S3NkfRFU8pDFD7hJfiEhDEcxy9L59nmZB8iLRUPdxxGhRvae5_DKMKPsiiCkpeIaHxx4rXdQ",
"width": 1920
}
],
"place_id": "ChIJbRshL9JXrlQRDa61aB-RK2E",
"plus_code": {
"compound_code": "HQR5+27 Boise, Idaho",
"global_code": "85M5HQR5+27"
},
"price_level": 1,
"rating": 3.2,
"reference": "ChIJbRshL9JXrlQRDa61aB-RK2E",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "4858 W Overland Rd, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6128923,
"lng": -116.2031389
},
"viewport": {
"northeast": {
"lat": 43.61431167989272,
"lng": -116.2017124701073
},
"southwest": {
"lat": 43.61161202010728,
"lng": -116.2044121298927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "19d766843750bbddb9001eae857795d08938d66d",
"name": "Panda Express",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 2340,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/113887723557452427093/photos\">A Google User</a>"
],
"photo_reference": "CmRaAAAAlnqir_F45k889VdKz0e3tBqUwk6govj6OMqWgwVQ4YmEyCnUwbDhOgC0KG6ipHIWjsoMTmnoanLwuSEaPr6F4ar-Rn-zLcRvcMBPWcJogmQXHFMQhAhkFcYpdZV35qiGEhBTbvjcCnFsexkAKeWzODK2GhRK_vBP7ccu-MXL0DmKGN6r7lJ-1w",
"width": 4160
}
],
"place_id": "ChIJkevL-_r4rlQRdCX2O8ZXDsM",
"plus_code": {
"compound_code": "JQ7W+5P Boise, Idaho",
"global_code": "85M5JQ7W+5P"
},
"price_level": 1,
"rating": 3.7,
"reference": "ChIJkevL-_r4rlQRdCX2O8ZXDsM",
"scope": "GOOGLE",
"types": [
"meal_takeaway",
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "601 W Front St, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6149188,
"lng": -116.2839412
},
"viewport": {
"northeast": {
"lat": 43.61633502989272,
"lng": -116.2825102701073
},
"southwest": {
"lat": 43.61363537010728,
"lng": -116.2852099298927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "94659697197e48daedd43bbf53d68e40e1baf476",
"name": "Panda Express",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 4032,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/117642557751992879553/photos\">Vincenzo Andrea Pennisi</a>"
],
"photo_reference": "CmRaAAAA4OZlV9cKAkT7RF4u46mGw7RvQdlbjEKi2d7zpS71unqYd6WxXjSkxQ8JlBwPofHeVH1FKd1JAzCPs9Lg-jXjk4WLZYAsjGWwJue7OHNHvrvK7JcfOUHM2KUEByiusBGfEhApQ_s22C1tcLMzBJ8GXs9EGhS6zk1ABftmXT3sxOKXFXdLw0eHDQ",
"width": 3024
}
],
"place_id": "ChIJbdZEMQ1WrlQRcVfVf-Hl8g8",
"plus_code": {
"compound_code": "JP78+XC Boise, Idaho",
"global_code": "85M5JP78+XC"
},
"price_level": 1,
"rating": 4,
"reference": "ChIJbdZEMQ1WrlQRcVfVf-Hl8g8",
"scope": "GOOGLE",
"types": [
"meal_takeaway",
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "1124 N Milwaukee St, Boise"
},
{
"geometry": {
"location": {
"lat": 43.58963800000001,
"lng": -116.216586
},
"viewport": {
"northeast": {
"lat": 43.59098947989272,
"lng": -116.2151422201073
},
"southwest": {
"lat": 43.58828982010728,
"lng": -116.2178418798927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "fb20fca1a524f40bfea6af8ba52d1bd870bb8e7a",
"name": "Panda Garden",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 3024,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/103973203825542789666/photos\">Ben Studer</a>"
],
"photo_reference": "CmRaAAAAkqA32pNV-ZAVEyPC358rzMdmlB-6YfwKIsgFAwnn-tFnk79HsiJZ4kYuZElEWKFWYBKQoE_tj875ahmxb6UGSD5M9tOLWVMIqPasU65NXDnNI_EouhHIVNKnjVQkQEX8EhBmI0GBTxqSZoC8NBDkjDfgGhTRocp3dEuBcNpPQ9gtYEAbh1YFqg",
"width": 4032
}
],
"place_id": "ChIJ2VoTQ3D4rlQR3UKIMJlQFeI",
"plus_code": {
"compound_code": "HQQM+V9 Boise, Idaho",
"global_code": "85M5HQQM+V9"
},
"price_level": 2,
"rating": 3.8,
"reference": "ChIJ2VoTQ3D4rlQR3UKIMJlQFeI",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "2801 W Overland Rd, Boise"
},
{
"geometry": {
"location": {
"lat": 43.6073482,
"lng": -116.2787701
},
"viewport": {
"northeast": {
"lat": 43.60863472989272,
"lng": -116.2775668201073
},
"southwest": {
"lat": 43.60593507010728,
"lng": -116.2802664798927
}
}
},
"icon": "https://maps.gstatic.com/mapfiles/place_api/icons/restaurant-71.png",
"id": "2ed6cc10f5ade931ec1fecc1f07910be5091b074",
"name": "Panda Express",
"opening_hours": {
"open_now": true
},
"photos": [
{
"height": 3024,
"html_attributions": [
"<a href=\"https://maps.google.com/maps/contrib/104961771114871630072/photos\">Bobbie Owen</a>"
],
"photo_reference": "CmRaAAAAsK7jaAAxIU3gojPeblal4rwthIMegDLCvBWi__CjGkQz2HI049udYXXCFJzAx732DghwIFUETzk3WKEUwGN_kzCHum03rkgP7iCu9AKOeFwlS9BrdUqkqTR1dq-6YTAOEhDVdmZTfvCU-OoLIy0Goh_-GhR0jur5-zaba-S0XpoZE6jBvJqJHA",
"width": 4032
}
],
"place_id": "ChIJfYBLkWpWrlQRjig0x7oVFfk",
"plus_code": {
"compound_code": "JP4C+WF Boise, Idaho",
"global_code": "85M5JP4C+WF"
},
"price_level": 1,
"rating": 3,
"reference": "ChIJfYBLkWpWrlQRjig0x7oVFfk",
"scope": "GOOGLE",
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"vicinity": "350 N Milwaukee St, Boise"
}
],
"status": "OK"
}
# Print the name and address of the first restaurant that appears
print(places_data["results"][0]["name"])
print(places_data["results"][0]["vicinity"])Yen Ching Restaurant
305 N 9th St, Boise
</code>
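Beyond the first entry, the same parsed response can be scanned in full. A minimal sketch, assuming `places_data` still holds the JSON shown above (note that `rating` and `vicinity` are not guaranteed for every place, hence the `.get` calls):
<code>
# List every restaurant in this page of results with its rating and address
for place in places_data["results"]:
    name = place["name"]
    rating = place.get("rating", "n/a")       # some places report no rating
    vicinity = place.get("vicinity", "n/a")   # rough address
    print(f"{name} ({rating} stars) - {vicinity}")
</code>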
|
{
"repository": "arinmuk/python_apis",
"path": "3/Activities/02-Ins_Google_Places/Solved/Google_Places.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 55348,
"hexsha": "cba77e01439aa4a4ba9a3376bcd3b7fcec2427a0",
"max_line_length": 509,
"avg_line_length": 48.8077601411,
"alphanum_fraction": 0.3587844186
}
|
# Notebook from pentagramswheel/DataX15
Path: Final Project/Archives/Manual_annotation_ESG.ipynb
# **0) Imports**_____no_output_____
<code>
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import re
import pathlib
import glob
import os
!git clone https://github.com/loier13/IEOR235.git
# set option below so Pandas dataframe can output readable text, not truncated
pd.set_option('display.max_colwidth', 0)Cloning into 'IEOR235'...
remote: Enumerating objects: 40, done.[K
remote: Counting objects: 100% (40/40), done.[K
remote: Compressing objects: 100% (38/38), done.[K
remote: Total 40 (delta 10), reused 0 (delta 0), pack-reused 0[K
Unpacking objects: 100% (40/40), done.
</code>
# **1) Open reviews**_____no_output_____
<code>
all_reviews = pd.read_csv('IEOR235/all_reviews/all_reviews.csv', sep = ';')
def clean_reviews(data):
data['pros'] = data['pros'].astype(str).apply(lambda x: '. '.join(x.split('\n')))
data['cons'] = data['cons'].astype(str).apply(lambda x: '. '.join(x.split('\n')))
data.drop_duplicates(inplace = True)
data = data[['pros']].reset_index()
data.columns = ['Id', 'text']
return data
all_reviews = clean_reviews(all_reviews)
display(all_reviews.head())
display(all_reviews.info())_____no_output_____
</code>
# **2) Naive topic detection**_____no_output_____Topic detection by keywords._____no_output_____
<code>
E_keywords = ['renewable', 'recycl', 'reuse', 'compost', 'recovery', 'ecocide', 'bio', 'carbon', 'forest',
'sustainable', 'renewable', 'pollut', 'emissions', 'green', 'co2', 'ch4', 'n2o',
'hfcs', 'pfcs', 'sf6', 'nf3', 'cfc-11', 'nox', 'sox', 'warming', 'climate',
'waste', 'garbage', 'trash', 'disposal', 'landfill', 'chemicals', 'acidification', 'fossil',
'eutrophication', 'environmental', 'consumption', 'water', 'resource', 'ecosystem', 'ecology', 'incineration',
'ozone', 'natural', 'solar', 'biomass', 'air', 'soil', 'dioxide', 'footprint', 'geoengineering']
S_keywords = ['labor', 'health', 'safe', 'human', 'standards', 'quality', 'life', 'privacy', 'private', 'responsib', 'insur', 'risk', 'care', 'opportunit', 'resource']
G_keywords = ['corrupt', 'management', 'board', 'pay', 'fair', 'owner', 'account', 'ethics', 'competit', 'practice', 'stable', 'stabilit', 'system', 'transparen']
def naive_topic_detection(data, topic, keywords):
"""
    For this prototype we use a naive keyword-based topic detection algorithm; precision and recall may therefore be suboptimal.
"""
output = data.copy()
output[f'{topic}_naive'] = data['text'].apply(lambda x: any([x.lower().find(word) >=0 for word in keywords])).astype(int)
return output
all_reviews_E = naive_topic_detection(all_reviews, 'E', E_keywords)
all_reviews_S = naive_topic_detection(all_reviews, 'S', S_keywords)
all_reviews_G = naive_topic_detection(all_reviews, 'G', G_keywords)
display(all_reviews_E.head())_____no_output_____
</code>
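As a quick sanity check on the keyword filters, we can count how many reviews each naive detector flags; a small sketch using the dataframes created above:
<code>
# Count how many reviews are flagged by each naive keyword detector
for topic, df in [("E", all_reviews_E), ("S", all_reviews_S), ("G", all_reviews_G)]:
    flagged = int(df[f"{topic}_naive"].sum())
    print(f"{topic}: {flagged} of {len(df)} reviews flagged")
</code>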
# **3) Manual annotation**_____no_output_____We deliberately label E, S and G separately rather than as a single multiclass problem, so that a dedicated, better-trained classifier can be built for each of these overlapping classes. We therefore proceed to E, S and G labelling in the following sections._____no_output_____
<code>
%%capture --no-display
!pip install superintendent
from superintendent.distributed import ClassLabeller_____no_output_____
</code>
### **a. Environment**_____no_output_____
<code>
all_reviews_E['E'] = np.nan
active_E = all_reviews_E[all_reviews_E.E_naive == 1]
widget_E = ClassLabeller(
features=active_E[active_E['E'].isnull()]['text'].tolist(),
options=[
"E", "Non-E"
]
)
widget_E_____no_output_____active_E['E'] = active_E['E'].map({"E": 1, "Non-E": 0})  # keys must match the widget options above
active_E.head()_____no_output_____non_E = all_reviews_E[all_reviews_E.E_naive == 0]
non_E['E'] = 0
final_E = pd.concat([non_E, active_E])
final_E.to_csv('reviews_E_labeled.csv', sep = ';')/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
</code>
The final dataset final_E is saved and will be used in the next notebook for classification purposes._____no_output_____### **b. Social**_____no_output_____
<code>
all_reviews_S['S'] = np.nan
active_S = all_reviews_S[all_reviews_S.S_naive == 1]
widget_S = ClassLabeller(
features=active_S[active_S['S'].isnull()]['text'].tolist(),
options=[
"S", "Non-S"
]
)
widget_S_____no_output_____active_S['S'] = active_S['S'].map({"S": 1, "Non-S": 0})
active_S.head()_____no_output_____non_S = all_reviews_S[all_reviews_S.S_naive == 0]
non_S['S'] = 0
final_S = pd.concat([non_S, active_S])
final_S.to_csv('reviews_S_labeled.csv', sep = ';')/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
</code>
### **c. Governance**_____no_output_____
<code>
all_reviews_G['G'] = np.nan
active_G = all_reviews_G[all_reviews_G.G_naive == 1]
widget_G = ClassLabeller(
features=active_G[active_G['G'].isnull()]['text'].tolist(),
options=[
"G", "Non-G"
]
)
widget_G_____no_output_____active_G['G'] = active_G['G'].map({"G": 1, "Non-G": 0})
active_G.head()_____no_output_____non_G = all_reviews_G[all_reviews_G.G_naive == 0]
non_G['G'] = 0
final_G = pd.concat([non_G, active_G])
final_G.to_csv('reviews_G_labeled.csv', sep = ';')/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
</code>
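For the follow-up classification notebook, the three labelled files can be read back and recombined on the shared `Id` column; a minimal sketch, assuming the CSVs written above are in the working directory:
<code>
# Reload the labelled datasets and merge the E, S and G labels per review
labels_E = pd.read_csv('reviews_E_labeled.csv', sep=';')
labels_S = pd.read_csv('reviews_S_labeled.csv', sep=';')
labels_G = pd.read_csv('reviews_G_labeled.csv', sep=';')
merged = (
    labels_E[['Id', 'text', 'E']]
    .merge(labels_S[['Id', 'S']], on='Id')
    .merge(labels_G[['Id', 'G']], on='Id')
)
merged.head()
</code>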
|
{
"repository": "pentagramswheel/DataX15",
"path": "Final Project/Archives/Manual_annotation_ESG.ipynb",
"matched_keywords": [
"ecology"
],
"stars": null,
"size": 254073,
"hexsha": "cba78787764288821127679c7e0ff720f8a1c422",
"max_line_length": 329,
"avg_line_length": 35.8202453123,
"alphanum_fraction": 0.4763906436
}
|
# Notebook from CQCL/pytket
Path: examples/ucc_vqe.ipynb
# VQE for Unitary Coupled Cluster using tket_____no_output_____In this tutorial, we will focus on:<br>
- building parameterised ansätze for variational algorithms;<br>
- compilation tools for UCC-style ansätze._____no_output_____This example assumes the reader is familiar with the Variational Quantum Eigensolver and its application to electronic structure problems through the Unitary Coupled Cluster approach.<br>
<br>
To run this example, you will need `pytket` and `pytket-qiskit`, as well as `openfermion`, `scipy`, and `sympy`.<br>
<br>
We will start with a basic implementation and then gradually modify it to make it faster, more general, and less noisy. The final solution is given in full at the bottom of the notebook.<br>
<br>
Suppose we have some electronic configuration problem, expressed via a physical Hamiltonian. (The Hamiltonian and excitations in this example were obtained using `qiskit-aqua` version 0.5.2 and `pyscf` for H2, bond length 0.75A, sto3g basis, Jordan-Wigner encoding, with no qubit reduction or orbital freezing.)_____no_output_____
<code>
from openfermion import QubitOperator_____no_output_____hamiltonian = (
-0.8153001706270075 * QubitOperator("")
+ 0.16988452027940318 * QubitOperator("Z0")
+ -0.21886306781219608 * QubitOperator("Z1")
+ 0.16988452027940323 * QubitOperator("Z2")
+ -0.2188630678121961 * QubitOperator("Z3")
+ 0.12005143072546047 * QubitOperator("Z0 Z1")
+ 0.16821198673715723 * QubitOperator("Z0 Z2")
+ 0.16549431486978672 * QubitOperator("Z0 Z3")
+ 0.16549431486978672 * QubitOperator("Z1 Z2")
+ 0.1739537877649417 * QubitOperator("Z1 Z3")
+ 0.12005143072546047 * QubitOperator("Z2 Z3")
+ 0.04544288414432624 * QubitOperator("X0 X1 X2 X3")
+ 0.04544288414432624 * QubitOperator("X0 X1 Y2 Y3")
+ 0.04544288414432624 * QubitOperator("Y0 Y1 X2 X3")
+ 0.04544288414432624 * QubitOperator("Y0 Y1 Y2 Y3")
)
nuclear_repulsion_energy = 0.70556961456_____no_output_____
</code>
We would like to define our ansatz for arbitrary parameter values. For simplicity, let's start with a Hardware Efficient Ansatz._____no_output_____
<code>
from pytket import Circuit_____no_output_____
</code>
Hardware efficient ansatz:_____no_output_____
<code>
def hea(params):
ansatz = Circuit(4)
for i in range(4):
ansatz.Ry(params[i], i)
for i in range(3):
ansatz.CX(i, i + 1)
for i in range(4):
ansatz.Ry(params[4 + i], i)
return ansatz_____no_output_____
</code>
We can use this to build the objective function for our optimisation._____no_output_____
<code>
from pytket.extensions.qiskit import AerBackend
from pytket.utils import expectation_from_counts_____no_output_____backend = AerBackend()_____no_output_____
</code>
Naive objective function:_____no_output_____
<code>
def objective(params):
energy = 0
for term, coeff in hamiltonian.terms.items():
if not term:
energy += coeff
continue
circ = hea(params)
circ.add_c_register("c", len(term))
for i, (q, pauli) in enumerate(term):
if pauli == "X":
circ.H(q)
elif pauli == "Y":
circ.V(q)
circ.Measure(q, i)
backend.compile_circuit(circ)
counts = backend.run_circuit(circ, n_shots=4000).get_counts()
energy += coeff * expectation_from_counts(counts)
return energy + nuclear_repulsion_energy_____no_output_____
</code>
This objective function is then run through a classical optimiser to find the set of parameter values that minimise the energy of the system. For the sake of example, we will just evaluate it once at a single, fixed set of parameter values._____no_output_____
<code>
arg_values = [
-7.31158201e-02,
-1.64514836e-04,
1.12585591e-03,
-2.58367544e-03,
1.00006068e00,
-1.19551357e-03,
9.99963988e-01,
2.53283285e-03,
]_____no_output_____energy = objective(arg_values)
print(energy)_____no_output_____
</code>
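To complete the picture, this objective would normally be handed to a classical optimiser; a minimal sketch using scipy's Nelder-Mead, as in the full solution at the end of this notebook. With 4000 shots per Hamiltonian term per evaluation this is slow, so the iteration budget is deliberately kept small here:
<code>
from scipy.optimize import minimize

initial_params = [0.0] * 8  # one value per Ry angle in the hardware efficient ansatz
result = minimize(objective, initial_params, method="Nelder-Mead", options={"maxiter": 50})
print("Optimised parameters:", result.x)
print("Estimated energy:", result.fun)
</code>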
The HEA is designed to cram as many orthogonal degrees of freedom as possible into a small circuit, so that a large region of the Hilbert space can be explored while the circuits themselves run with minimal noise. These ansätze give virtually optimal circuits by design, but suffer from an excessive number of variational parameters (making convergence slow), from barren plateaus where the classical optimiser fails to make progress, and from spanning a space in which most states lack a physical interpretation. These drawbacks can necessitate adding penalties and may mean that the ansatz cannot actually express the true ground state.<br>
<br>
The UCC ansatz, on the other hand, is derived from the electronic configuration. It sacrifices efficiency of the circuit for the guarantee of physical states and the variational parameters all having some meaningful effect, which helps the classical optimisation to converge.<br>
<br>
This starts by defining the terms of our single and double excitations. These would usually be generated using the orbital configurations, so we will just use a hard-coded example here for the purposes of demonstration._____no_output_____
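Schematically, the state prepared by a UCC ansatz is the (first-order trotterised) exponential of an anti-Hermitian cluster operator acting on a reference state,

$$|\psi(\vec{\theta})\rangle \;=\; e^{T(\vec{\theta}) - T^{\dagger}(\vec{\theta})}\,|\Phi_{\mathrm{ref}}\rangle \;\approx\; \prod_j e^{\,i\,c_j \theta_j P_j}\,|\Phi_{\mathrm{ref}}\rangle ,$$

where each $P_j$ is one of the Pauli strings defined below, the $c_j$ are the accompanying $\pm 1$ and $\pm\tfrac{1}{4}$ coefficients, and $|\Phi_{\mathrm{ref}}\rangle$ is the Hartree-Fock reference prepared by the initial X gates (the exact phase and scaling follow pytket's half-turn angle convention).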
<code>
from pytket.pauli import Pauli, QubitPauliString
from pytket.circuit import Qubit_____no_output_____q = [Qubit(i) for i in range(4)]
xyii = QubitPauliString([q[0], q[1]], [Pauli.X, Pauli.Y])
yxii = QubitPauliString([q[0], q[1]], [Pauli.Y, Pauli.X])
iixy = QubitPauliString([q[2], q[3]], [Pauli.X, Pauli.Y])
iiyx = QubitPauliString([q[2], q[3]], [Pauli.Y, Pauli.X])
xxxy = QubitPauliString(q, [Pauli.X, Pauli.X, Pauli.X, Pauli.Y])
xxyx = QubitPauliString(q, [Pauli.X, Pauli.X, Pauli.Y, Pauli.X])
xyxx = QubitPauliString(q, [Pauli.X, Pauli.Y, Pauli.X, Pauli.X])
yxxx = QubitPauliString(q, [Pauli.Y, Pauli.X, Pauli.X, Pauli.X])
yyyx = QubitPauliString(q, [Pauli.Y, Pauli.Y, Pauli.Y, Pauli.X])
yyxy = QubitPauliString(q, [Pauli.Y, Pauli.Y, Pauli.X, Pauli.Y])
yxyy = QubitPauliString(q, [Pauli.Y, Pauli.X, Pauli.Y, Pauli.Y])
xyyy = QubitPauliString(q, [Pauli.X, Pauli.Y, Pauli.Y, Pauli.Y])_____no_output_____singles_a = {xyii: 1.0, yxii: -1.0}
singles_b = {iixy: 1.0, iiyx: -1.0}
doubles = {
xxxy: 0.25,
xxyx: -0.25,
xyxx: 0.25,
yxxx: -0.25,
yyyx: -0.25,
yyxy: 0.25,
yxyy: -0.25,
xyyy: 0.25,
}_____no_output_____
</code>
Building the ansatz circuit itself is often done naively by defining the map from each term down to basic gates and then applying it to each term._____no_output_____
<code>
def add_operator_term(circuit: Circuit, term: QubitPauliString, angle: float):
qubits = []
for q, p in term.map.items():
if p != Pauli.I:
qubits.append(q)
if p == Pauli.X:
circuit.H(q)
elif p == Pauli.Y:
circuit.V(q)
    # Entangle along the chain of qubits that actually appear in this term,
    # so the Rz rotation lands on the correct qubit even when the term skips qubit 0
    for i in range(len(qubits) - 1):
        circuit.CX(qubits[i], qubits[i + 1])
    circuit.Rz(angle, qubits[-1])
    for i in reversed(range(len(qubits) - 1)):
        circuit.CX(qubits[i], qubits[i + 1])
for q, p in term.map.items():
if p == Pauli.X:
circuit.H(q)
elif p == Pauli.Y:
circuit.Vdg(q)_____no_output_____
</code>
Unitary Coupled Cluster Singles & Doubles ansatz:_____no_output_____
<code>
def ucc(params):
ansatz = Circuit(4)
# Set initial reference state
ansatz.X(1).X(3)
# Evolve by excitations
for term, coeff in singles_a.items():
add_operator_term(ansatz, term, coeff * params[0])
for term, coeff in singles_b.items():
add_operator_term(ansatz, term, coeff * params[1])
for term, coeff in doubles.items():
add_operator_term(ansatz, term, coeff * params[2])
return ansatz_____no_output_____
</code>
This is already quite verbose, but `pytket` has a neat shorthand construction for these operator terms using the `PauliExpBox` construction. We can then decompose these into basic gates using the `DecomposeBoxes` compiler pass._____no_output_____
<code>
from pytket.circuit import PauliExpBox
from pytket.passes import DecomposeBoxes_____no_output_____def add_excitation(circ, term_dict, param):
for term, coeff in term_dict.items():
qubits, paulis = zip(*term.map.items())
pbox = PauliExpBox(paulis, coeff * param)
circ.add_pauliexpbox(pbox, qubits)_____no_output_____
</code>
UCC ansatz with syntactic shortcuts:_____no_output_____
<code>
def ucc(params):
ansatz = Circuit(4)
ansatz.X(1).X(3)
add_excitation(ansatz, singles_a, params[0])
add_excitation(ansatz, singles_b, params[1])
add_excitation(ansatz, doubles, params[2])
DecomposeBoxes().apply(ansatz)
return ansatz_____no_output_____
</code>
The objective function can also be simplified using a utility method for constructing the measurement circuits and processing for expectation value calculations._____no_output_____
<code>
from pytket.utils.operators import QubitPauliOperator
from pytket.utils import get_operator_expectation_value_____no_output_____hamiltonian_op = QubitPauliOperator.from_OpenFermion(hamiltonian)_____no_output_____
</code>
Simplified objective function using utilities:_____no_output_____
<code>
def objective(params):
circ = ucc(params)
return (
get_operator_expectation_value(circ, hamiltonian_op, backend, n_shots=4000)
+ nuclear_repulsion_energy
)_____no_output_____arg_values = [-3.79002933e-05, 2.42964799e-05, 4.63447157e-01]_____no_output_____energy = objective(arg_values)
print(energy)_____no_output_____
</code>
This is now the simplest form that this operation can take, but it isn't necessarily the most effective. When we decompose the ansatz circuit into basic gates, it is still very expensive. We can employ some of the circuit simplification passes available in `pytket` to reduce its size and improve fidelity in practice.<br>
<br>
A good example is to decompose each `PauliExpBox` into basic gates and then apply `FullPeepholeOptimise`, which defines a compilation strategy utilising all of the simplifications in `pytket` that act locally on small regions of a circuit. We can examine the effectiveness by looking at the number of two-qubit gates before and after simplification, which tends to be a good indicator of fidelity for near-term systems where these gates are often slow and inaccurate._____no_output_____
<code>
from pytket import OpType
from pytket.passes import FullPeepholeOptimise_____no_output_____test_circuit = ucc(arg_values)_____no_output_____print("CX count before", test_circuit.n_gates_of_type(OpType.CX))
print("CX depth before", test_circuit.depth_by_type(OpType.CX))_____no_output_____FullPeepholeOptimise().apply(test_circuit)_____no_output_____print("CX count after FPO", test_circuit.n_gates_of_type(OpType.CX))
print("CX depth after FPO", test_circuit.depth_by_type(OpType.CX))_____no_output_____
</code>
These simplification techniques are very general and are almost always beneficial to apply to a circuit if you want to eliminate local redundancies. But UCC ansätze have extra structure that we can exploit further. They are defined entirely out of exponentiated tensors of Pauli matrices, giving the regular structure described by the `PauliExpBox`es. Under many circumstances, it is more efficient to not synthesise these constructions individually, but simultaneously in groups. The `PauliSimp` pass finds the description of a given circuit as a sequence of `PauliExpBox`es and resynthesises them (by default, in groups of commuting terms). This can cause great change in the overall structure and shape of the circuit, enabling the identification and elimination of non-local redundancy._____no_output_____
<code>
from pytket.passes import PauliSimp_____no_output_____test_circuit = ucc(arg_values)_____no_output_____print("CX count before", test_circuit.n_gates_of_type(OpType.CX))
print("CX depth before", test_circuit.depth_by_type(OpType.CX))_____no_output_____PauliSimp().apply(test_circuit)_____no_output_____print("CX count after PS", test_circuit.n_gates_of_type(OpType.CX))
print("CX depth after PS", test_circuit.depth_by_type(OpType.CX))_____no_output_____FullPeepholeOptimise().apply(test_circuit)_____no_output_____print("CX count after PS+FPO", test_circuit.n_gates_of_type(OpType.CX))
print("CX depth after PS+FPO", test_circuit.depth_by_type(OpType.CX))_____no_output_____
</code>
To include this into our routines, we can just add the simplification passes to the objective function. The `get_operator_expectation_value` utility handles compiling to meet the requirements of the backend, so we don't have to worry about that here._____no_output_____Objective function with circuit simplification:_____no_output_____
<code>
def objective(params):
circ = ucc(params)
PauliSimp().apply(circ)
FullPeepholeOptimise().apply(circ)
return (
get_operator_expectation_value(circ, hamiltonian_op, backend, n_shots=4000)
+ nuclear_repulsion_energy
)_____no_output_____
</code>
These circuit simplification techniques have tried to preserve the exact unitary of the circuit, but there are ways to change the unitary whilst preserving the correctness of the algorithm as a whole.<br>
<br>
For example, the excitation terms are generated by trotterisation of the excitation operator, and the order of the terms does not change the unitary in the limit of many trotter steps, so in this sense we are free to sequence the terms how we like and it is sensible to do this in a way that enables efficient synthesis of the circuit. Prioritising collecting terms into commuting sets is a very beneficial heuristic for this and can be performed using the `gen_term_sequence_circuit` method to group the terms together into collections of `PauliExpBox`es and the `GuidedPauliSimp` pass to utilise these sets for synthesis._____no_output_____
<code>
from pytket.passes import GuidedPauliSimp
from pytket.utils import gen_term_sequence_circuit_____no_output_____def ucc(params):
    singles_a_params = {qps: params[0] * coeff for qps, coeff in singles_a.items()}
    singles_b_params = {qps: params[1] * coeff for qps, coeff in singles_b.items()}
    doubles_params = {qps: params[2] * coeff for qps, coeff in doubles.items()}
    excitation_op = QubitPauliOperator(
        {**singles_a_params, **singles_b_params, **doubles_params}
    )
reference_circ = Circuit(4).X(1).X(3)
ansatz = gen_term_sequence_circuit(excitation_op, reference_circ)
GuidedPauliSimp().apply(ansatz)
FullPeepholeOptimise().apply(ansatz)
return ansatz_____no_output_____
</code>
Adding these simplification routines doesn't come for free. Compiling and simplifying the circuit to achieve the best results possible can be a difficult task, which can take some time for the classical computer to perform.<br>
<br>
During a VQE run, we will call this objective function many times and run many measurement circuits within each, but the circuits that are run on the quantum computer are almost identical, having the same gate structure but with different gate parameters and measurements. We have already exploited this within the body of the objective function by simplifying the ansatz circuit before we call `get_operator_expectation_value`, so it is only done once per objective calculation rather than once per measurement circuit.<br>
<br>
We can go even further by simplifying it once outside of the objective function, and then instantiating the simplified ansatz with the parameter values needed. For this, we will construct the UCC ansatz circuit using symbolic (parametric) gates._____no_output_____
<code>
from sympy import symbols_____no_output_____
</code>
Symbolic UCC ansatz generation:_____no_output_____
<code>
syms = symbols("p0 p1 p2")
singles_a_syms = {qps: syms[0] * coeff for qps, coeff in singles_a.items()}
singles_b_syms = {qps: syms[1] * coeff for qps, coeff in singles_b.items()}
doubles_syms = {qps: syms[2] * coeff for qps, coeff in doubles.items()}
excitation_op = QubitPauliOperator({**singles_a_syms, **singles_b_syms, **doubles_syms})
ucc_ref = Circuit(4).X(1).X(3)
ucc = gen_term_sequence_circuit(excitation_op, ucc_ref)
GuidedPauliSimp().apply(ucc)
FullPeepholeOptimise().apply(ucc)_____no_output_____
</code>
Objective function using the symbolic ansatz:_____no_output_____
<code>
def objective(params):
circ = ucc.copy()
sym_map = dict(zip(syms, params))
circ.symbol_substitution(sym_map)
return (
get_operator_expectation_value(circ, hamiltonian_op, backend, n_shots=4000)
+ nuclear_repulsion_energy
)_____no_output_____
</code>
We have now made very good use of `pytket` for simplifying each individual circuit used in our experiment and for minimising the amount of time spent compiling, but there is still more we can do to reduce the amount of work the quantum computer has to do. Currently, each (non-trivial) term in our Hamiltonian is measured by a different circuit within each expectation value calculation. Measurement reduction techniques exist for identifying when these observables commute and hence can be measured simultaneously, reducing the number of circuits required for the full expectation value calculation.<br>
<br>
This is built into the `get_operator_expectation_value` method and can be applied by specifying a way to partition the measurement terms. `PauliPartitionStrat.CommutingSets` can greatly reduce the number of measurement circuits by combining any number of terms that mutually commute. However, this potentially involves adding an arbitrary Clifford circuit to change the basis of the measurements, which can be costly on NISQ devices, so `PauliPartitionStrat.NonConflictingSets` trades off some of the reduction in circuit number to guarantee that only single-qubit gates are introduced._____no_output_____
<code>
from pytket.partition import PauliPartitionStrat_____no_output_____
</code>
Objective function using measurement reduction:_____no_output_____
<code>
def objective(params):
circ = ucc.copy()
sym_map = dict(zip(syms, params))
circ.symbol_substitution(sym_map)
return (
get_operator_expectation_value(
circ,
            hamiltonian_op,
backend,
n_shots=4000,
partition_strat=PauliPartitionStrat.CommutingSets,
)
+ nuclear_repulsion_energy
)_____no_output_____
</code>
At this point, we have completely transformed how our VQE objective function works, improving its resilience to noise, cutting the number of circuits run, and maintaining fast runtimes. In doing this, we have explored a number of the features `pytket` offers that are beneficial to VQE and the UCC method:<br>
- high-level syntactic constructs for evolution operators;<br>
- utility methods for easy expectation value calculations;<br>
- both generic and domain-specific circuit simplification methods;<br>
- symbolic circuit compilation;<br>
- measurement reduction for expectation value calculations._____no_output_____For the sake of completeness, the following gives the full code for the final solution, including passing the objective function to a classical optimiser to find the ground state:_____no_output_____
<code>
from openfermion import QubitOperator
from scipy.optimize import minimize
from sympy import symbols_____no_output_____from pytket.extensions.qiskit import AerBackend
from pytket.circuit import Circuit, Qubit
from pytket.partition import PauliPartitionStrat
from pytket.passes import GuidedPauliSimp, FullPeepholeOptimise
from pytket.pauli import Pauli, QubitPauliString
from pytket.utils import get_operator_expectation_value, gen_term_sequence_circuit
from pytket.utils.operators import QubitPauliOperator_____no_output_____
</code>
Obtain electronic Hamiltonian:_____no_output_____
<code>
hamiltonian = (
-0.8153001706270075 * QubitOperator("")
+ 0.16988452027940318 * QubitOperator("Z0")
+ -0.21886306781219608 * QubitOperator("Z1")
+ 0.16988452027940323 * QubitOperator("Z2")
+ -0.2188630678121961 * QubitOperator("Z3")
+ 0.12005143072546047 * QubitOperator("Z0 Z1")
+ 0.16821198673715723 * QubitOperator("Z0 Z2")
+ 0.16549431486978672 * QubitOperator("Z0 Z3")
+ 0.16549431486978672 * QubitOperator("Z1 Z2")
+ 0.1739537877649417 * QubitOperator("Z1 Z3")
+ 0.12005143072546047 * QubitOperator("Z2 Z3")
+ 0.04544288414432624 * QubitOperator("X0 X1 X2 X3")
+ 0.04544288414432624 * QubitOperator("X0 X1 Y2 Y3")
+ 0.04544288414432624 * QubitOperator("Y0 Y1 X2 X3")
+ 0.04544288414432624 * QubitOperator("Y0 Y1 Y2 Y3")
)
nuclear_repulsion_energy = 0.70556961456_____no_output_____hamiltonian_op = QubitPauliOperator.from_OpenFermion(hamiltonian)_____no_output_____
</code>
Obtain terms for single and double excitations:_____no_output_____
<code>
q = [Qubit(i) for i in range(4)]
xyii = QubitPauliString([q[0], q[1]], [Pauli.X, Pauli.Y])
yxii = QubitPauliString([q[0], q[1]], [Pauli.Y, Pauli.X])
iixy = QubitPauliString([q[2], q[3]], [Pauli.X, Pauli.Y])
iiyx = QubitPauliString([q[2], q[3]], [Pauli.Y, Pauli.X])
xxxy = QubitPauliString(q, [Pauli.X, Pauli.X, Pauli.X, Pauli.Y])
xxyx = QubitPauliString(q, [Pauli.X, Pauli.X, Pauli.Y, Pauli.X])
xyxx = QubitPauliString(q, [Pauli.X, Pauli.Y, Pauli.X, Pauli.X])
yxxx = QubitPauliString(q, [Pauli.Y, Pauli.X, Pauli.X, Pauli.X])
yyyx = QubitPauliString(q, [Pauli.Y, Pauli.Y, Pauli.Y, Pauli.X])
yyxy = QubitPauliString(q, [Pauli.Y, Pauli.Y, Pauli.X, Pauli.Y])
yxyy = QubitPauliString(q, [Pauli.Y, Pauli.X, Pauli.Y, Pauli.Y])
xyyy = QubitPauliString(q, [Pauli.X, Pauli.Y, Pauli.Y, Pauli.Y])_____no_output_____
</code>
Symbolic UCC ansatz generation:_____no_output_____
<code>
syms = symbols("p0 p1 p2")
singles_syms = {xyii: syms[0], yxii: -syms[0], iixy: syms[1], iiyx: -syms[1]}
doubles_syms = {
xxxy: 0.25 * syms[2],
xxyx: -0.25 * syms[2],
xyxx: 0.25 * syms[2],
yxxx: -0.25 * syms[2],
yyyx: -0.25 * syms[2],
yyxy: 0.25 * syms[2],
yxyy: -0.25 * syms[2],
xyyy: 0.25 * syms[2],
}
excitation_op = QubitPauliOperator({**singles_syms, **doubles_syms})
ucc_ref = Circuit(4).X(0).X(2)
ucc = gen_term_sequence_circuit(excitation_op, ucc_ref)_____no_output_____
</code>
Circuit simplification:_____no_output_____
<code>
GuidedPauliSimp().apply(ucc)
FullPeepholeOptimise().apply(ucc)_____no_output_____
</code>
Connect to a simulator/device:_____no_output_____
<code>
backend = AerBackend()_____no_output_____
</code>
Objective function:_____no_output_____
<code>
def objective(params):
circ = ucc.copy()
sym_map = dict(zip(syms, params))
circ.symbol_substitution(sym_map)
return (
get_operator_expectation_value(
circ,
hamiltonian_op,
backend,
n_shots=4000,
partition_strat=PauliPartitionStrat.CommutingSets,
)
+ nuclear_repulsion_energy
).real_____no_output_____
</code>
Optimise against the objective function:_____no_output_____
<code>
initial_params = [1e-4, 1e-4, 4e-1]
result = minimize(objective, initial_params, method="Nelder-Mead")
print("Final parameter values", result.x)
print("Final energy value", result.fun)_____no_output_____
</code>
Exercises:<br>
- Replace the `get_operator_expectation_value` call with its implementation and use this to pull the analysis for measurement reduction outside of the objective function, so our circuits can be fully determined and compiled once. This means that the `symbol_substitution` method will need to be applied to each measurement circuit instead of just the state preparation circuit.<br>
- Use the `SpamCorrecter` class to add some mitigation of the measurement errors. Start by running the characterisation circuits first, before your main VQE loop, then apply the mitigation to each of the circuits run within the objective function.<br>
- Change the `backend` by passing in a `Qiskit` `NoiseModel` to simulate a noisy device. Compare the accuracy of the objective function both with and without the circuit simplification. Try running a classical optimiser over the objective function and compare the convergence rates with different noise models. If you have access to a QPU, try changing the `backend` to connect to that and compare the results to the simulator._____no_output_____
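As a starting point for the last exercise, here is a minimal sketch of attaching a qiskit noise model to the simulator; the import path for `NoiseModel` varies between qiskit versions, and passing the model to `AerBackend` is assumed to work as in recent `pytket-qiskit` releases:
<code>
from qiskit.providers.aer.noise import NoiseModel, depolarizing_error

# Toy noise model: 1% depolarising error on every two-qubit (cx) gate
noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.01, 2), ["cx"])

noisy_backend = AerBackend(noise_model)
# Swap `backend` for `noisy_backend` in the objective function and compare the
# converged energies with and without the circuit simplification passes
</code>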
|
{
"repository": "CQCL/pytket",
"path": "examples/ucc_vqe.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 249,
"size": 30798,
"hexsha": "cba829638052d16d8640ef93bd7c28016b443ea3",
"max_line_length": 30798,
"avg_line_length": 30798,
"alphanum_fraction": 0.6771543607
}
|
# Notebook from arpit1920/Machine-Learning-all-Algorithms
Path: NLP/natural_language_processing.ipynb
# Natural Language Processing_____no_output_____## Importing the libraries_____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd_____no_output_____
</code>
## Importing the dataset_____no_output_____
<code>
dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter = '\t', quoting = 3)_____no_output_____
</code>
## Cleaning the texts_____no_output_____
<code>
import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
corpus = []
for i in range(0, 1000):
review = re.sub('[^a-zA-Z]', ' ', dataset['Review'][i])
review = review.lower()
review = review.split()
ps = PorterStemmer()
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')
review = [ps.stem(word) for word in review if not word in set(all_stopwords)]
review = ' '.join(review)
corpus.append(review)[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Unzipping corpora/stopwords.zip.
print(corpus)['wow love place', 'crust not good', 'not tasti textur nasti', 'stop late may bank holiday rick steve recommend love', 'select menu great price', 'get angri want damn pho', 'honeslti tast fresh', 'potato like rubber could tell made ahead time kept warmer', 'fri great', 'great touch', 'servic prompt', 'would not go back', 'cashier care ever say still end wayyy overpr', 'tri cape cod ravoli chicken cranberri mmmm', 'disgust pretti sure human hair', 'shock sign indic cash', 'highli recommend', 'waitress littl slow servic', 'place not worth time let alon vega', 'not like', 'burritto blah', 'food amaz', 'servic also cute', 'could care less interior beauti', 'perform', 'right red velvet cake ohhh stuff good', 'never brought salad ask', 'hole wall great mexican street taco friendli staff', 'took hour get food tabl restaur food luke warm sever run around like total overwhelm', 'worst salmon sashimi', 'also combo like burger fri beer decent deal', 'like final blow', 'found place accid could not happier', 'seem like good quick place grab bite familiar pub food favor look elsewher', 'overal like place lot', 'redeem qualiti restaur inexpens', 'ampl portion good price', 'poor servic waiter made feel like stupid everi time came tabl', 'first visit hiro delight', 'servic suck', 'shrimp tender moist', 'not deal good enough would drag establish', 'hard judg whether side good gross melt styrofoam want eat fear get sick', 'posit note server attent provid great servic', 'frozen puck disgust worst peopl behind regist', 'thing like prime rib dessert section', 'bad food damn gener', 'burger good beef cook right', 'want sandwich go firehous', 'side greek salad greek dress tasti pita hummu refresh', 'order duck rare pink tender insid nice char outsid', 'came run us realiz husband left sunglass tabl', 'chow mein good', 'horribl attitud toward custom talk one custom enjoy food', 'portion huge', 'love friendli server great food wonder imagin menu', 'heart attack grill downtown vega absolut flat line excus restaur', 'not much seafood like string pasta bottom', 'salad right amount sauc not power scallop perfectli cook', 'rip banana not rip petrifi tasteless', 'least think refil water struggl wave minut', 'place receiv star appet', 'cocktail handmad delici', 'definit go back', 'glad found place', 'great food servic huge portion give militari discount', 'alway great time do gringo', 'updat went back second time still amaz', 'got food appar never heard salt batter fish chewi', 'great way finish great', 'deal includ tast drink jeff went beyond expect', 'realli realli good rice time', 'servic meh', 'took min get milkshak noth chocol milk', 'guess known place would suck insid excalibur use common sens', 'scallop dish quit appal valu well', 'time bad custom servic', 'sweet potato fri good season well', 'today second time lunch buffet pretti good', 'much good food vega feel cheat wast eat opportun go rice compani', 'come like experienc underwhelm relationship parti wait person ask break', 'walk place smell like old greas trap other eat', 'turkey roast beef bland', 'place', 'pan cake everyon rave tast like sugari disast tailor palat six year old', 'love pho spring roll oh yummi tri', 'poor batter meat ratio made chicken tender unsatisfi', 'say food amaz', 'omelet die', 'everyth fresh delici', 'summari larg disappoint dine experi', 'like realli sexi parti mouth outrag flirt hottest person parti', 'never hard rock casino never ever step forward', 'best breakfast buffet', 'say bye bye tip ladi', 'never go', 'back', 
'food arriv quickli', 'not good', 'side cafe serv realli good food', 'server fantast found wife love roast garlic bone marrow ad extra meal anoth marrow go', 'good thing waiter help kept bloddi mari come', 'best buffet town price cannot beat', 'love mussel cook wine reduct duck tender potato dish delici', 'one better buffet', 'went tigerlilli fantast afternoon', 'food delici bartend attent person got great deal', 'ambienc wonder music play', 'go back next trip', 'sooooo good', 'real sushi lover let honest yama not good', 'least min pass us order food arriv busi', 'realli fantast thai restaur definit worth visit', 'nice spici tender', 'good price', 'check', 'pretti gross', 'better atmospher', 'kind hard mess steak', 'although much like look sound place actual experi bit disappoint', 'know place manag serv blandest food ever eaten prepar indian cuisin', 'worst servic boot least worri', 'servic fine waitress friendli', 'guy steak steak love son steak best worst place said best steak ever eaten', 'thought ventur away get good sushi place realli hit spot night', 'host staff lack better word bitch', 'bland not like place number reason want wast time bad review leav', 'phenomen food servic ambianc', 'return', 'definit worth ventur strip pork belli return next time vega', 'place way overpr mediocr food', 'penn vodka excel', 'good select food includ massiv meatloaf sandwich crispi chicken wrap delish tuna melt tasti burger', 'manag rude', 'delici nyc bagel good select cream chees real lox caper even', 'great subway fact good come everi subway not meet expect', 'serious solid breakfast', 'one best bar food vega', 'extrem rude realli mani restaur would love dine weekend vega', 'drink never empti made realli great menu suggest', '', 'waiter help friendli rare check us', 'husband ate lunch disappoint food servic', 'red curri much bamboo shoot tasti', 'nice blanket moz top feel like done cover subpar food', 'bathroom clean place well decor', 'menu alway chang food qualiti go servic extrem slow', 'servic littl slow consid serv peopl server food come slow pace', 'give thumb', 'watch waiter pay lot attent tabl ignor us', 'fianc came middl day greet seat right away', 'great restaur mandalay bay', 'wait forti five minut vain', 'crostini came salad stale', 'highlight great qualiti nigiri', 'staff friendli joint alway clean', 'differ cut piec day still wonder tender well well flavor', 'order voodoo pasta first time realli excel pasta sinc go gluten free sever year ago', 'place good', 'unfortun must hit bakeri leftov day everyth order stale', 'came back today sinc reloc still not impress', 'seat immedi', 'menu divers reason price', 'avoid cost', 'restaur alway full never wait', 'delici', 'place hand one best place eat phoenix metro area', 'go look good food', 'never treat bad', 'bacon hella salti', 'also order spinach avocado salad ingredi sad dress liter zero tast', 'realli vega fine dine use right menu hand ladi price list', 'waitress friendli', 'lordi khao soi dish not miss curri lover', 'everyth menu terrif also thrill made amaz accommod vegetarian daughter', 'perhap caught night judg review not inspir go back', 'servic leav lot desir', 'atmospher modern hip maintain touch cozi', 'not weekli haunt definit place come back everi', 'liter sat minut one ask take order', 'burger absolut flavor meat total bland burger overcook charcoal flavor', 'also decid not send back waitress look like verg heart attack', 'dress treat rude', 'probabl dirt', 'love place hit spot want someth healthi not lack quantiti flavor', 
'order lemon raspberri ice cocktail also incred', 'food suck expect suck could imagin', 'interest decor', 'realli like crepe station', 'also serv hot bread butter home made potato chip bacon bit top origin good', 'watch prepar delici food', 'egg roll fantast', 'order arriv one gyro miss', 'salad wing ice cream dessert left feel quit satisfi', 'not realli sure joey vote best hot dog valley reader phoenix magazin', 'best place go tasti bowl pho', 'live music friday total blow', 'never insult felt disrespect', 'friendli staff', 'worth drive', 'heard good thing place exceed everi hope could dream', 'food great serivc', 'warm beer help', 'great brunch spot', 'servic friendli invit', 'good lunch spot', 'live sinc first last time step foot place', 'worst experi ever', 'must night place', 'side delish mix mushroom yukon gold pure white corn beateou', 'bug never show would given sure side wall bug climb kitchen', 'minut wait salad realiz come time soon', 'friend love salmon tartar', 'go back', 'extrem tasti', 'waitress good though', 'soggi not good', 'jamaican mojito delici', 'small not worth price', 'food rich order accordingli', 'shower area outsid rins not take full shower unless mind nude everyon see', 'servic bit lack', 'lobster bisqu bussel sprout risotto filet need salt pepper cours none tabl', 'hope bode go busi someon cook come', 'either cold not enough flavor bad', 'love bacon wrap date', 'unbeliev bargain', 'folk otto alway make us feel welcom special', 'main also uninspir', 'place first pho amaz', 'wonder experi made place must stop whenev town', 'food bad enough enjoy deal world worst annoy drunk peopl', 'fun chef', 'order doubl cheeseburg got singl patti fall apart pictur upload yeah still suck', 'great place coupl drink watch sport event wall cover tv', 'possibl give zero star', 'descript said yum yum sauc anoth said eel sauc yet anoth said spici mayo well none roll sauc', 'say would hardest decis honestli dish tast suppos tast amaz', 'not roll eye may stay not sure go back tri', 'everyon attent provid excel custom servic', 'horribl wast time money', 'dish quit flavour', 'time side restaur almost empti excus', 'busi either also build freez cold', 'like review said pay eat place', 'drink took close minut come one point', 'serious flavor delight folk', 'much better ayc sushi place went vega', 'light dark enough set mood', 'base sub par servic receiv effort show gratitud busi go back', 'owner realli great peopl', 'noth privileg work eat', 'greek dress creami flavor', 'overal think would take parent place made similar complaint silent felt', 'pizza good peanut sauc tasti', 'tabl servic pretti fast', 'fantast servic', 'well would given godfath zero star possibl', 'know make', 'tough short flavor', 'hope place stick around', 'bar vega not ever recal charg tap water', 'restaur atmospher exquisit', 'good servic clean inexpens boot', 'seafood fresh gener portion', 'plu buck', 'servic not par either', 'thu far visit twice food absolut delici time', 'good year ago', 'self proclaim coffe cafe wildli disappoint', 'veggitarian platter world', 'cant go wrong food', 'beat', 'stop place madison ironman friendli kind staff', 'chef friendli good job', 'better not dedic boba tea spot even jenni pho', 'like patio servic outstand', 'goat taco skimp meat wow flavor', 'think not', 'mac salad pretti bland not get', 'went bachi burger friend recommend not disappoint', 'servic stink', 'wait wait', 'place not qualiti sushi not qualiti restaur', 'would definit recommend wing well pizza', 'great pizza salad', 
'thing went wrong burn saganaki', 'wait hour breakfast could done time better home', 'place amaz', 'hate disagre fellow yelper husband disappoint place', 'wait hour never got either pizza mani around us came later', 'know slow', 'staff great food delish incred beer select', 'live neighborhood disappoint back conveni locat', 'know pull pork could soooo delici', 'get incred fresh fish prepar care', 'go gave star rate pleas know third time eat bachi burger write review', 'love fact everyth menu worth', 'never dine place', 'food excel servic good', 'good beer drink select good food select', 'pleas stay away shrimp stir fri noodl', 'potato chip order sad could probabl count mani chip box probabl around', 'food realli bore', 'good servic check', 'greedi corpor never see anoth dime', 'never ever go back', 'much like go back get pass atroci servic never return', 'summer dine charm outdoor patio delight', 'not expect good', 'fantast food', 'order toast english muffin came untoast', 'food good', 'never go back', 'great food price high qualiti hous made', 'bu boy hand rude', 'point friend basic figur place joke mind make publicli loudli known', 'back good bbq lighter fare reason price tell public back old way', 'consid two us left full happi go wrong', 'bread made hous', 'downsid servic', 'also fri without doubt worst fri ever', 'servic except food good review', 'coupl month later return amaz meal', 'favorit place town shawarrrrrrma', 'black eye pea sweet potato unreal', 'disappoint', 'could serv vinaigrett may make better overal dish still good', 'go far mani place never seen restaur serv egg breakfast especi', 'mom got home immedi got sick bite salad', 'server not pleasant deal alway honor pizza hut coupon', 'truli unbeliev good glad went back', 'fantast servic pleas atmospher', 'everyth gross', 'love place', 'great servic food', 'first bathroom locat dirti seat cover not replenish plain yucki', 'burger got gold standard burger kind disappoint', 'omg food delicioso', 'noth authent place', 'spaghetti noth special whatsoev', 'dish salmon best great', 'veget fresh sauc feel like authent thai', 'worth drive tucson', 'select probabl worst seen vega none', 'pretti good beer select', 'place like chipotl better', 'classi warm atmospher fun fresh appet succul steak basebal steak', 'star brick oven bread app', 'eaten multipl time time food delici', 'sat anoth ten minut final gave left', 'terribl', 'everyon treat equal special', 'take min pancak egg', 'delici', 'good side staff genuin pleasant enthusiast real treat', 'sadli gordon ramsey steak place shall sharpli avoid next trip vega', 'alway even wonder food delici', 'best fish ever life', 'bathroom next door nice', 'buffet small food offer bland', 'outstand littl restaur best food ever tast', 'pretti cool would say', 'definit turn doubt back unless someon els buy', 'server great job handl larg rowdi tabl', 'find wast food despic food', 'wife lobster bisqu soup lukewarm', 'would come back sushi crave vega', 'staff great ambianc great', 'deserv star', 'left stomach ach felt sick rest day', 'drop ball', 'dine space tini elegantli decor comfort', 'custom order way like usual eggplant green bean stir fri love', 'bean rice mediocr best', 'best taco town far', 'took back money got outta', 'interest part town place amaz', 'rude inconsider manag', 'staff not friendli wait time serv horribl one even say hi first minut', 'back', 'great dinner', 'servic outshin definit recommend halibut', 'food terribl', 'never ever go back told mani peopl happen', 'recommend unless 
car break front starv', 'come back everi time vega', 'place deserv one star food', 'disgrac', 'def come back bowl next time', 'want healthi authent ethic food tri place', 'continu come ladi night andddd date night highli recommend place anyon area', 'sever time past experi alway great', 'walk away stuf happi first vega buffet experi', 'servic excel price pretti reason consid vega locat insid crystal shop mall aria', 'summar food incred nay transcend noth bring joy quit like memori pneumat condiment dispens', 'probabl one peopl ever go ian not like', 'kid pizza alway hit lot great side dish option kiddo', 'servic perfect famili atmospher nice see', 'cook perfect servic impecc', 'one simpli disappoint', 'overal disappoint qualiti food bouchon', 'account know get screw', 'great place eat remind littl mom pop shop san francisco bay area', 'today first tast buldogi gourmet hot dog tell ever thought possibl', 'left frustrat', 'definit soon', 'food realli good got full petti fast', 'servic fantast', 'total wast time', 'know kind best ice tea', 'come hungri leav happi stuf', 'servic give star', 'assur disappoint', 'take littl bad servic food suck', 'gave tri eat crust teeth still sore', 'complet gross', 'realli enjoy eat', 'first time go think quickli becom regular', 'server nice even though look littl overwhelm need stay profession friendli end', 'dinner companion told everyth fresh nice textur tast', 'ground right next tabl larg smear step track everywher pile green bird poop', 'furthermor even find hour oper websit', 'tri like place time think done', 'mistak', 'complaint', 'serious good pizza expert connisseur topic', 'waiter jerk', 'strike want rush', 'nicest restaur owner ever come across', 'never come', 'love biscuit', 'servic quick friendli', 'order appet took minut pizza anoth minut', 'absolutley fantast', 'huge awkward lb piec cow th gristl fat', 'definit come back', 'like steiner dark feel like bar', 'wow spici delici', 'not familiar check', 'take busi dinner dollar elsewher', 'love go back', 'anyway fs restaur wonder breakfast lunch', 'noth special', 'day week differ deal delici', 'not mention combin pear almond bacon big winner', 'not back', 'sauc tasteless', 'food delici spici enough sure ask spicier prefer way', 'ribey steak cook perfectli great mesquit flavor', 'think go back anytim soon', 'food gooodd', 'far sushi connoisseur definit tell differ good food bad food certainli bad food', 'insult', 'last time lunch bad', 'chicken wing contain driest chicken meat ever eaten', 'food good enjoy everi mouth enjoy relax venu coupl small famili group etc', 'nargil think great', 'best tater tot southwest', 'love place', 'definit not worth paid', 'vanilla ice cream creami smooth profiterol choux pastri fresh enough', 'im az time new spot', 'manag worst', 'insid realli quit nice clean', 'food outstand price reason', 'think run back carli anytim soon food', 'due fact took minut acknowledg anoth minut get food kept forget thing', 'love margarita', 'first vega buffet not disappoint', 'good though', 'one note ventil could use upgrad', 'great pork sandwich', 'wast time', 'total letdown would much rather go camelback flower shop cartel coffe', 'third chees friend burger cold', 'enjoy pizza brunch', 'steak well trim also perfectli cook', 'group claim would handl us beauti', 'love', 'ask bill leav without eat bring either', 'place jewel la vega exactli hope find nearli ten year live', 'seafood limit boil shrimp crab leg crab leg definit not tast fresh', 'select food not best', 'delici absolut back', 
'small famili restaur fine dine establish', 'toro tartar cavier extraordinari like thinli slice wagyu white truffl', 'dont think back long time', 'attach ga station rare good sign', 'awesom', 'back mani time soon', 'menu much good stuff could not decid', 'wors humili worker right front bunch horribl name call', 'conclus fill meal', 'daili special alway hit group', 'tragedi struck', 'pancak also realli good pretti larg', 'first crawfish experi delici', 'monster chicken fri steak egg time favorit', 'waitress sweet funni', 'also tast mom multi grain pumpkin pancak pecan butter amaz fluffi delici', 'rather eat airlin food serious', 'cant say enough good thing place', 'ambianc incred', 'waitress manag friendli', 'would not recommend place', 'overal impress noca', 'gyro basic lettuc', 'terribl servic', 'thoroughli disappoint', 'much pasta love homemad hand made pasta thin pizza', 'give tri happi', 'far best cheesecurd ever', 'reason price also', 'everyth perfect night', 'food good typic bar food', 'drive get', 'first glanc love bakeri cafe nice ambianc clean friendli staff', 'anyway not think go back', 'point finger item menu order disappoint', 'oh thing beauti restaur', 'gone go', 'greasi unhealthi meal', 'first time might last', 'burger amaz', 'similarli deliveri man not say word apolog food minut late', 'way expens', 'sure order dessert even need pack go tiramisu cannoli die', 'first time wait next', 'bartend also nice', 'everyth good tasti', 'place two thumb way', 'best place vega breakfast check sat sun', 'love authent mexican food want whole bunch interest yet delici meat choos need tri place', 'terribl manag', 'excel new restaur experienc frenchman', 'zero star would give zero star', 'great steak great side great wine amaz dessert', 'worst martini ever', 'steak shrimp opinion best entre gc', 'opportun today sampl amaz pizza', 'wait thirti minut seat although vacant tabl folk wait', 'yellowtail carpaccio melt mouth fresh', 'tri go back even empti', 'go eat potato found stranger hair', 'spici enough perfect actual', 'last night second time dine happi decid go back', 'not even hello right', 'dessert bit strang', 'boyfriend came first time recent trip vega could not pleas qualiti food servic', 'realli recommend place go wrong donut place', 'nice ambianc', 'would recommend save room', 'guess mayb went night disgrac', 'howev recent experi particular locat not good', 'know not like restaur someth', 'avoid establish', 'think restaur suffer not tri hard enough', 'tapa dish delici', 'heart place', 'salad bland vinegrett babi green heart palm', 'two felt disgust', 'good time', 'believ place great stop huge belli hanker sushi', 'gener portion great tast', 'never go back place never ever recommend place anyon', 'server went back forth sever time not even much help', 'food delici', 'hour serious', 'consid theft', 'eew locat need complet overhaul', 'recent wit poor qualiti manag toward guest well', 'wait wait wait', 'also came back check us regularli excel servic', 'server super nice check us mani time', 'pizza tast old super chewi not good way', 'swung give tri deepli disappoint', 'servic good compani better', 'staff also friendli effici', 'servic fan quick serv nice folk', 'boy sucker dri', 'rate', 'look authent thai food go els', 'steak recommend', 'pull car wait anoth minut acknowledg', 'great food great servic clean friendli set', 'assur back', 'hate thing much cheap qualiti black oliv', 'breakfast perpar great beauti present giant slice toast lightli dust powder sugar', 'kid play area nasti', 
'great place fo take eat', 'waitress friendli happi accomod vegan veggi option', 'omg felt like never eaten thai food dish', 'extrem crumbi pretti tasteless', 'pale color instead nice char flavor', 'crouton also tast homemad extra plu', 'got home see driest damn wing ever', 'regular stop trip phoenix', 'realli enjoy crema caf expand even told friend best breakfast', 'not good money', 'miss wish one philadelphia', 'got sit fairli fast end wait minut place order anoth minut food arriv', 'also best chees crisp town', 'good valu great food great servic', 'ask satisfi meal', 'food good', 'awesom', 'want leav', 'made drive way north scottsdal not one bit disappoint', 'not eat', 'owner realli realli need quit soooooo cheap let wrap freak sandwich two paper not one', 'check place coupl year ago not impress', 'chicken got definit reheat ok wedg cold soggi', 'sorri not get food anytim soon', 'absolut must visit', 'cow tongu cheek taco amaz', 'friend not like bloodi mari', 'despit hard rate busi actual rare give star', 'realli want make experi good one', 'not return', 'chicken pho tast bland', 'disappoint', 'grill chicken tender yellow saffron season', 'drive thru mean not want wait around half hour food somehow end go make us wait wait', 'pretti awesom place', 'ambienc perfect', 'best luck rude non custom servic focus new manag', 'grandmoth make roast chicken better one', 'ask multipl time wine list time ignor went hostess got one', 'staff alway super friendli help especi cool bring two small boy babi', 'four star food guy blue shirt great vibe still let us eat', 'roast beef sandwich tast realli good', 'even drastic sick', 'high qualiti chicken chicken caesar salad', 'order burger rare came done', 'promptli greet seat', 'tri go lunch madhous', 'proven dead wrong sushi bar not qualiti great servic fast food impecc', 'wait hour seat not greatest mood', 'good joint', 'macaron insan good', 'not eat', 'waiter attent friendli inform', 'mayb cold would somewhat edibl', 'place lot promis fail deliv', 'bad experi', 'mistak', 'food averag best', 'great food', 'go back anytim soon', 'disappoint order big bay plater', 'great place relax awesom burger beer', 'perfect sit famili meal get togeth friend', 'not much flavor poorli construct', 'patio seat comfort', 'fri rice dri well', 'hand favorit italian restaur', 'scream legit book somethat also pretti rare vega', 'not fun experi', 'atmospher great love duo violinist play song request', 'person love hummu pita baklava falafel baba ganoush amaz eggplant', 'conveni sinc stay mgm', 'owner super friendli staff courteou', 'great', 'eclect select', 'sweet potato tot good onion ring perfect close', 'staff attent', 'chef gener time even came around twice take pictur', 'owner use work nobu place realli similar half price', 'googl mediocr imagin smashburg pop', 'dont go', 'promis disappoint', 'sushi lover avoid place mean', 'great doubl cheeseburg', 'awesom servic food', 'fantast neighborhood gem', 'wait go back', 'plantain worst ever tast', 'great place highli recommend', 'servic slow not attent', 'gave star give star', 'staff spend time talk', 'dessert panna cotta amaz', 'good food great atmospher', 'damn good steak', 'total brunch fail', 'price reason flavor spot sauc home made slaw not drench mayo', 'decor nice piano music soundtrack pleasant', 'steak amaz rge fillet relleno best seafood plate ever', 'good food good servic', 'absolut amaz', 'probabl back honest', 'definit back', 'sergeant pepper beef sandwich auju sauc excel sandwich well', 'hawaiian breez mango magic 
pineappl delight smoothi tri far good', 'went lunch servic slow', 'much say place walk expect amaz quickli disappoint', 'mortifi', 'needless say never back', 'anyway food definit not fill price pay expect', 'chip came drip greas mostli not edibl', 'realli impress strip steak', 'go sinc everi meal awesom', 'server nice attent serv staff', 'cashier friendli even brought food', 'work hospit industri paradis valley refrain recommend cibo longer', 'atmospher fun', 'would not recommend other', 'servic quick even go order like like', 'mean realli get famou fish chip terribl', 'said mouth belli still quit pleas', 'not thing', 'thumb', 'read pleas go', 'love grill pizza remind legit italian pizza', 'pro larg seat area nice bar area great simpl drink menu best brick oven pizza homemad dough', 'realli nice atmospher', 'tonight elk filet special suck', 'one bite hook', 'order old classic new dish go time sore disappoint everyth', 'cute quaint simpl honest', 'chicken delici season perfect fri outsid moist chicken insid', 'food great alway compliment chef', 'special thank dylan recommend order yummi tummi', 'awesom select beer', 'great food awesom servic', 'one nice thing ad gratuiti bill sinc parti larger expect tip', 'fli appl juic fli', 'han nan chicken also tasti', 'servic thought good', 'food bare lukewarm must sit wait server bring us', 'ryan bar definit one edinburgh establish revisit', 'nicest chines restaur', 'overal like food servic', 'also serv indian naan bread hummu spici pine nut sauc world', 'probabl never come back recommend', 'friend pasta also bad bare touch', 'tri airport experi tasti food speedi friendli servic', 'love decor chines calligraphi wall paper', 'never anyth complain', 'restaur clean famili restaur feel', 'way fri', 'not sure long stood long enough begin feel awkwardli place', 'open sandwich impress not good way', 'not back', 'warm feel servic felt like guest special treat', 'extens menu provid lot option breakfast', 'alway order vegetarian menu dinner wide array option choos', 'watch price inflat portion get smaller manag attitud grow rapidli', 'wonder lil tapa ambienc made feel warm fuzzi insid', 'got enjoy seafood salad fabul vinegrett', 'wonton thin not thick chewi almost melt mouth', 'level spici perfect spice whelm soup', 'sat right time server get go fantast', 'main thing enjoy crowd older crowd around mid', 'side town definit spot hit', 'wait minut get drink longer get arepa', 'great place eat', 'jalapeno bacon soooo good', 'servic poor that nice', 'food good servic good price good', 'place not clean food oh stale', 'chicken dish ok beef like shoe leather', 'servic beyond bad', 'happi', 'tast like dirt', 'one place phoenix would defin go back', 'block amaz', 'close hous low key non fanci afford price good food', 'hot sour egg flower soup absolut star', 'sashimi poor qualiti soggi tasteless', 'great time famili dinner sunday night', 'food not tasti not say real tradit hunan style', 'bother slow servic', 'flair bartend absolut amaz', 'frozen margarita way sugari tast', 'good order twice', 'nutshel restaraunt smell like combin dirti fish market sewer', 'girlfriend veal bad', 'unfortun not good', 'pretti satifi experi', 'join club get awesom offer via email', 'perfect someon like beer ice cold case even colder', 'bland flavorless good way describ bare tepid meat', 'chain fan beat place easili', 'nacho must', 'not come back', 'mani word say place everyth pretti well', 'staff super nice quick even crazi crowd downtown juri lawyer court staff', 'great atmospher friendli 
fast servic', 'receiv pita huge lot meat thumb', 'food arriv meh', 'pay hot dog fri look like came kid meal wienerschnitzel not idea good meal', 'classic main lobster roll fantast', 'brother law work mall ate day guess sick night', 'good go review place twice herea tribut place tribut event held last night', 'chip salsa realli good salsa fresh', 'place great', 'mediocr food', 'get insid impress place', 'super pissd', 'servic super friendli', 'sad littl veget overcook', 'place nice surpris', 'golden crispi delici', 'high hope place sinc burger cook charcoal grill unfortun tast fell flat way flat', 'could eat bruschetta day devin', 'not singl employe came see ok even need water refil final serv us food', 'lastli mozzarella stick best thing order', 'first time ever came amaz experi still tell peopl awesom duck', 'server neglig need made us feel unwelcom would not suggest place', 'servic terribl though', 'place overpr not consist boba realli overpr', 'pack', 'love place', 'say dessert yummi', 'food terribl', 'season fruit fresh white peach pure', 'kept get wors wors offici done', 'place honestli blown', 'definit would not eat', 'not wast money', 'love put food nice plastic contain oppos cram littl paper takeout box', 'cr pe delic thin moist', 'aw servic', 'ever go', 'food qualiti horribl', 'price think place would much rather gone', 'servic fair best', 'love sushi found kabuki price hip servic', 'favor stay away dish', 'poor servic', 'one tabl thought food averag worth wait', 'best servic food ever maria server good friendli made day', 'excel', 'paid bill not tip felt server terribl job', 'lunch great experi', 'never bland food surpris consid articl read focus much spice flavor', 'food way overpr portion fuck small', 'recent tri caballero back everi week sinc', 'buck head realli expect better food', 'food came good pace', 'ate twice last visit especi enjoy salmon salad', 'back', 'could not believ dirti oyster', 'place deserv star', 'would not recommend place', 'fact go round star awesom', 'disbelief dish qualifi worst version food ever tast', 'bad day not low toler rude custom servic peopl job nice polit wash dish otherwis', 'potato great biscuit', 'probabl would not go', 'flavor perfect amount heat', 'price reason servic great', 'wife hate meal coconut shrimp friend realli not enjoy meal either', 'fella got huevo ranchero look appeal', 'went happi hour great list wine', 'may say buffet pricey think get pay place get quit lot', 'probabl come back', 'worst food servic', 'place pretti good nice littl vibe restaur', 'talk great custom servic cours back', 'hot dish not hot cold dish close room temp watch staff prepar food bare hand glove everyth deep fri oil', 'love fri bean', 'alway pleasur deal', 'plethora salad sandwich everyth tri get seal approv', 'place awesom want someth light healthi summer', 'sushi strip place go', 'servic great even manag came help tabl', 'feel dine room colleg cook cours high class dine servic slow best', 'start review two star edit give one', 'worst sushi ever eat besid costco', 'excel restaur highlight great servic uniqu menu beauti set', 'boyfriend sat bar complet delight experi', 'weird vibe owner', 'hardli meat', 'better bagel groceri store', 'go place gyro', 'love owner chef one authent japanes cool dude', 'burger good pizza use amaz doughi flavorless', 'found six inch long piec wire salsa', 'servic terribl food mediocr', 'defin enjoy', 'order albondiga soup warm tast like tomato soup frozen meatbal', 'three differ occas ask well done medium well three time got 
bloodiest piec meat plate', 'two bite refus eat anymor', 'servic extrem slow', 'minut wait got tabl', 'serious killer hot chai latt', 'allergi warn menu waitress absolut clue meal not contain peanut', 'boyfriend tri mediterranean chicken salad fell love', 'rotat beer tap also highlight place', 'price bit concern mellow mushroom', 'worst thai ever', 'stay vega must get breakfast least', 'want first say server great perfect servic', 'pizza select good', 'strawberri tea good', 'highli unprofession rude loyal patron', 'overal great experi', 'spend money elsewher', 'regular toast bread equal satisfi occasion pat butter mmmm', 'buffet bellagio far anticip', 'drink weak peopl', 'order not correct', 'also feel like chip bought not made hous', 'disappoint dinner went elsewher dessert', 'chip sal amaz', 'return', 'new fav vega buffet spot', 'serious cannot believ owner mani unexperienc employe run around like chicken head cut', 'sad', 'felt insult disrespect could talk judg anoth human like', 'call steakhous properli cook steak understand', 'not impress concept food', 'thing crazi guacamol like pur ed', 'realli noth postino hope experi better', 'got food poison buffet', 'brought fresh batch fri think yay someth warm', 'hilari yummi christma eve dinner rememb biggest fail entir trip us', 'needless say go back anytim soon', 'place disgust', 'everi time eat see care teamwork profession degre', 'ri style calamari joke', 'howev much garlic fondu bare edibl', 'could bare stomach meal complain busi lunch', 'bad lost heart finish', 'also took forev bring us check ask', 'one make scene restaur get definit lost love one', 'disappoint experi', 'food par denni say not good', 'want wait mediocr food downright terribl servic place', 'waaaaaayyyyyyyyyi rate say', 'go back', 'place fairli clean food simpli worth', 'place lack style', 'sangria half glass wine full ridicul', 'bother come', 'meat pretti dri slice brisket pull pork', 'build seem pretti neat bathroom pretti trippi eat', 'equal aw', 'probabl not hurri go back', 'slow seat even reserv', 'not good stretch imagin', 'cashew cream sauc bland veget undercook', 'chipolt ranch dip saus tasteless seem thin water heat', 'bit sweet not realli spici enough lack flavor', 'disappoint', 'place horribl way overpr', 'mayb vegetarian fare twice thought averag best', 'busi know', 'tabl outsid also dirti lot time worker not alway friendli help menu', 'ambianc not feel like buffet set douchey indoor garden tea biscuit', 'con spotti servic', 'fri not hot neither burger', 'came back cold', 'food came disappoint ensu', 'real disappoint waiter', 'husband said rude not even apolog bad food anyth', 'reason eat would fill night bing drink get carb stomach', 'insult profound deuchebaggeri go outsid smoke break serv solidifi', 'someon order two taco think may part custom servic ask combo ala cart', 'quit disappoint although blame need place door', 'rave review wait eat disappoint', 'del taco pretti nasti avoid possibl', 'not hard make decent hamburg', 'like', 'hell go back', 'gotten much better servic pizza place next door servic receiv restaur', 'know big deal place back ya', 'immedi said want talk manag not want talk guy shot firebal behind bar', 'ambianc much better', 'unfortun set us disapppoint entre', 'food good', 'server suck wait correct server heimer suck', 'happen next pretti put', 'bad caus know famili own realli want like place', 'overpr get', 'vomit bathroom mid lunch', 'kept look time soon becom minut yet still food', 'place eat circumst would ever return top list', 
'start tuna sashimi brownish color obvious fresh', 'food averag', 'sure beat nacho movi would expect littl bit come restaur', 'ha long bay bit flop', 'problem charg sandwich bigger subway sub offer better amount veget', 'shrimp unwrap live mile brushfir liter ice cold', 'lack flavor seem undercook dri', 'realli impress place close', 'would avoid place stay mirag', 'refri bean came meal dri crusti food bland', 'spend money time place els', 'ladi tabl next us found live green caterpillar salad', 'present food aw', 'tell disappoint', 'think food flavor textur lack', 'appetit instantli gone', 'overal not impress would not go back', 'whole experi underwhelm think go ninja sushi next time', 'wast enough life pour salt wound draw time took bring check']
</code>
## Creating the Bag of Words model_____no_output_____
<code>
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features = 1500)
X = cv.fit_transform(corpus).toarray()
y = dataset.iloc[:, -1].values_____no_output_____
</code>
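To sanity-check the fitted vectorizer we can inspect the vocabulary it kept (a quick sketch; `get_feature_names_out` exists in scikit-learn 1.0+, older releases expose `get_feature_names` instead)._____no_output_____
<code>
# Inspect the vocabulary retained after capping at max_features = 1500.
vocab = cv.get_feature_names_out()
print(len(vocab))          # at most 1500 stemmed tokens
print(vocab[:20])          # a few of the tokens kept as columns of X
print(X.shape, y.shape)    # one row per review, one column per retained token
</code>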
## Splitting the dataset into the Training set and Test set_____no_output_____
<code>
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)_____no_output_____
</code>
## Training the Naive Bayes model on the Training set_____no_output_____
<code>
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)_____no_output_____
</code>
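`GaussianNB` models each count feature as a continuous Gaussian; for sparse bag-of-words counts, `MultinomialNB` is the more conventional choice and is worth comparing. A minimal sketch, not part of the original notebook:_____no_output_____
<code>
# Optional comparison: Multinomial Naive Bayes on the same count features.
from sklearn.naive_bayes import MultinomialNB
mnb_classifier = MultinomialNB()
mnb_classifier.fit(X_train, y_train)
print(mnb_classifier.score(X_test, y_test))  # accuracy to compare with GaussianNB below
</code>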
## Predicting the Test set results_____no_output_____
<code>
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))[[1 0]
[1 0]
[1 0]
[0 0]
[0 0]
[1 0]
[1 1]
[1 0]
[1 0]
[1 1]
[1 1]
[1 1]
[1 0]
[1 1]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 1]
[1 1]
[1 0]
[1 0]
[0 1]
[1 1]
[1 1]
[1 1]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[0 0]
[1 0]
[0 0]
[1 0]
[1 1]
[1 1]
[1 0]
[1 1]
[0 0]
[0 0]
[0 0]
[1 0]
[1 0]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[1 0]
[0 0]
[1 1]
[1 1]
[0 0]
[1 1]
[1 0]
[0 0]
[1 0]
[1 0]
[1 1]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[1 1]
[1 1]
[1 1]
[1 1]
[0 0]
[1 0]
[1 1]
[0 1]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[1 1]
[1 1]
[1 0]
[0 0]
[1 1]
[1 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 0]
[1 1]
[1 0]
[1 1]
[1 1]
[1 0]
[0 1]
[1 1]
[1 1]
[1 0]
[0 1]
[1 0]
[1 1]
[1 1]
[0 0]
[0 1]
[0 1]
[1 1]
[0 0]
[1 0]
[1 1]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[0 0]
[1 1]
[1 0]
[0 0]
[0 0]
[1 1]
[1 0]
[0 0]
[1 1]
[1 0]
[1 1]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[1 0]
[0 1]
[1 1]
[1 1]
[0 0]
[1 0]
[0 0]
[1 0]
[1 1]
[1 1]
[1 1]
[1 1]
[0 1]
[1 1]
[1 1]
[1 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[1 0]
[0 0]
[0 1]
[1 1]
[0 0]
[0 0]
[1 0]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 1]
[1 1]
[0 0]
[0 0]
[1 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[1 0]
[1 0]
[1 1]
[0 0]
[1 1]
[1 1]
[1 0]
[1 1]]
</code>
## Making the Confusion Matrix_____no_output_____
<code>
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)[[55 42]
[12 91]]
print(X_test)[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
</code>
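From the confusion matrix above (55 true negatives, 42 false positives, 12 false negatives, 91 true positives) the test accuracy is (55 + 91) / 200 = 0.73. Per-class precision and recall can be obtained from the same predictions; a short sketch using scikit-learn's `classification_report`:_____no_output_____
<code>
# Per-class precision, recall and F1 for the Naive Bayes predictions above.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred, target_names=['negative', 'positive']))
</code>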
## Predicting if a single review is positive or negative_____no_output_____### Positive review_____no_output_____Use our model to predict if the following review:
"I love this restaurant so much"
is positive or negative._____no_output_____**Solution:** We repeat the same text preprocessing as before, but this time on a single review._____no_output_____
<code>
new_review = 'I love this restaurant so much'
new_review = re.sub('[^a-zA-Z]', ' ', new_review)
new_review = new_review.lower()
new_review = new_review.split()
ps = PorterStemmer()
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')
new_review = [ps.stem(word) for word in new_review if not word in set(all_stopwords)]
new_review = ' '.join(new_review)
new_corpus = [new_review]
new_X_test = cv.transform(new_corpus).toarray()
new_y_pred = classifier.predict(new_X_test)
print(new_y_pred)[1]
</code>
The review was correctly predicted as positive by our model._____no_output_____### Negative review_____no_output_____Use our model to predict if the following review:
"I hate this restaurant so much"
is positive or negative._____no_output_____**Solution:** We repeat the same text preprocessing as before, but this time on a single review._____no_output_____
<code>
new_review = 'I hate this restaurant so much'
new_review = re.sub('[^a-zA-Z]', ' ', new_review)
new_review = new_review.lower()
new_review = new_review.split()
ps = PorterStemmer()
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')
new_review = [ps.stem(word) for word in new_review if not word in set(all_stopwords)]
new_review = ' '.join(new_review)
new_corpus = [new_review]
new_X_test = cv.transform(new_corpus).toarray()
new_y_pred = classifier.predict(new_X_test)
print(new_y_pred)[0]
</code>
The review was correctly predicted as negative by our model._____no_output_____
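Since both single-review cells repeat the same preprocessing, it can be folded into a small helper that reuses the fitted `cv` and `classifier` (a sketch; `predict_review` is not part of the original notebook):_____no_output_____
<code>
def predict_review(review):
    """Preprocess one raw review string and return the model's 0/1 prediction."""
    ps = PorterStemmer()
    all_stopwords = stopwords.words('english')
    all_stopwords.remove('not')
    review = re.sub('[^a-zA-Z]', ' ', review).lower().split()
    review = ' '.join(ps.stem(word) for word in review if word not in set(all_stopwords))
    return classifier.predict(cv.transform([review]).toarray())[0]

print(predict_review('I love this restaurant so much'))  # expected 1
print(predict_review('I hate this restaurant so much'))  # expected 0
</code>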
# Notebook from ShepherdCode/ShepherdML
Path: Workshop/GRU_212e.ipynb
# GRU 212
* Operate on 16000 GenCode 34 seqs.
* 5-way cross validation. Save best model per CV.
* Report mean accuracy from final re-validation with best 5.
* Use Adam with a learning-rate decay schedule._____no_output_____
<code>
NC_FILENAME='ncRNA.gc34.processed.fasta'
PC_FILENAME='pcRNA.gc34.processed.fasta'
DATAPATH=""
try:
from google.colab import drive
IN_COLAB = True
PATH='/content/drive/'
drive.mount(PATH)
DATAPATH=PATH+'My Drive/data/' # must end in "/"
NC_FILENAME = DATAPATH+NC_FILENAME
PC_FILENAME = DATAPATH+PC_FILENAME
except:
IN_COLAB = False
DATAPATH=""
EPOCHS=200
SPLITS=1
K=3
VOCABULARY_SIZE=4**K+1 # e.g. K=3 => 64 DNA K-mers + 'NNN'
EMBED_DIMEN=16
FILENAME='GRU212'
NEURONS=64
DROP=0.0
ACT="tanh"Mounted at /content/drive/
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import StratifiedKFold
import tensorflow as tf
from tensorflow import keras
from keras.wrappers.scikit_learn import KerasRegressor
from keras.models import Sequential
from keras.layers import Bidirectional
from keras.layers import GRU
from keras.layers import Dense
from keras.layers import LayerNormalization
import time
dt='float32'
tf.keras.backend.set_floatx(dt)_____no_output_____
</code>
## Build model_____no_output_____
<code>
def compile_model(model):
adam_default_learn_rate = 0.001
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate = adam_default_learn_rate*10,
#decay_steps=100000, decay_rate=0.96, staircase=True)
decay_steps=10000, decay_rate=0.99, staircase=True)
# learn rate = initial_learning_rate * decay_rate ^ (step / decay_steps)
alrd = tf.keras.optimizers.Adam(learning_rate=schedule)
bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
print("COMPILE...")
#model.compile(loss=bc, optimizer=alrd, metrics=["accuracy"])
model.compile(loss=bc, optimizer="adam", metrics=["accuracy"])
print("...COMPILED")
return model
def build_model():
embed_layer = keras.layers.Embedding(
#VOCABULARY_SIZE, EMBED_DIMEN, input_length=1000, input_length=1000, mask_zero=True)
#input_dim=[None,VOCABULARY_SIZE], output_dim=EMBED_DIMEN, mask_zero=True)
input_dim=VOCABULARY_SIZE, output_dim=EMBED_DIMEN, mask_zero=True)
#rnn1_layer = keras.layers.Bidirectional(
rnn1_layer = keras.layers.GRU(NEURONS, return_sequences=True,
input_shape=[1000,EMBED_DIMEN], activation=ACT, dropout=DROP) #)#bi
#rnn2_layer = keras.layers.Bidirectional(
rnn2_layer = keras.layers.GRU(NEURONS, return_sequences=False,
activation=ACT, dropout=DROP) #)#bi
dense1_layer = keras.layers.Dense(NEURONS, activation=ACT,dtype=dt)
#drop1_layer = keras.layers.Dropout(DROP)
dense2_layer = keras.layers.Dense(NEURONS, activation=ACT,dtype=dt)
#drop2_layer = keras.layers.Dropout(DROP)
output_layer = keras.layers.Dense(1, activation="sigmoid", dtype=dt)
mlp = keras.models.Sequential()
mlp.add(embed_layer)
mlp.add(rnn1_layer)
mlp.add(rnn2_layer)
mlp.add(dense1_layer)
#mlp.add(drop1_layer)
mlp.add(dense2_layer)
#mlp.add(drop2_layer)
mlp.add(output_layer)
mlpc = compile_model(mlp)
return mlpc_____no_output_____
</code>
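Note that `compile_model` builds an `ExponentialDecay` schedule but then compiles with the plain `"adam"` optimizer string, so the schedule is currently unused. If the commented-out compile line were restored, the learning rate would follow `initial_learning_rate * decay_rate ** floor(step / decay_steps)` because `staircase=True`. A small sketch of the resulting values (assumes the same constants as above):_____no_output_____
<code>
# Sketch: evaluate the (currently unused) schedule at a few optimizer steps.
# lr(step) = 0.01 * 0.99 ** floor(step / 10000) because staircase=True.
demo_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001*10, decay_steps=10000, decay_rate=0.99, staircase=True)
for step in [0, 10000, 50000, 100000]:
    print(step, float(demo_schedule(step)))
</code>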
## Load and partition sequences_____no_output_____
<code>
# Assume file was preprocessed to contain one line per seq.
# Prefer Pandas dataframe but df does not support append.
# For conversion to tensor, must avoid python lists.
def load_fasta(filename,label):
DEFLINE='>'
labels=[]
seqs=[]
lens=[]
nums=[]
num=0
with open (filename,'r') as infile:
for line in infile:
if line[0]!=DEFLINE:
seq=line.rstrip()
num += 1 # first seqnum is 1
seqlen=len(seq)
nums.append(num)
labels.append(label)
seqs.append(seq)
lens.append(seqlen)
df1=pd.DataFrame(nums,columns=['seqnum'])
df2=pd.DataFrame(labels,columns=['class'])
df3=pd.DataFrame(seqs,columns=['sequence'])
df4=pd.DataFrame(lens,columns=['seqlen'])
df=pd.concat((df1,df2,df3,df4),axis=1)
return df
def separate_X_and_y(data):
y= data[['class']].copy()
X= data.drop(columns=['class','seqnum','seqlen'])
return (X,y)
_____no_output_____
</code>
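A quick way to see what `load_fasta` produces (a sketch; it re-reads the non-coding file configured above, so it duplicates work done in the training section below):_____no_output_____
<code>
# Each row describes one sequence: its number, class label, raw string, and length.
demo_df = load_fasta(NC_FILENAME, 0)
print(demo_df.columns.tolist())   # ['seqnum', 'class', 'sequence', 'seqlen']
print(demo_df[['seqnum', 'class', 'seqlen']].head())
</code>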
## Make K-mers_____no_output_____
<code>
def make_kmer_table(K):
npad='N'*K
shorter_kmers=['']
for i in range(K):
longer_kmers=[]
for mer in shorter_kmers:
longer_kmers.append(mer+'A')
longer_kmers.append(mer+'C')
longer_kmers.append(mer+'G')
longer_kmers.append(mer+'T')
shorter_kmers = longer_kmers
all_kmers = shorter_kmers
kmer_dict = {}
kmer_dict[npad]=0
value=1
for mer in all_kmers:
kmer_dict[mer]=value
value += 1
return kmer_dict
KMER_TABLE=make_kmer_table(K)
def strings_to_vectors(data,uniform_len):
all_seqs=[]
for seq in data['sequence']:
i=0
seqlen=len(seq)
kmers=[]
while i < seqlen-K+1 -1: # stop at minus one for spaced seed
#kmer=seq[i:i+2]+seq[i+3:i+5] # SPACED SEED 2/1/2 for K=4
kmer=seq[i:i+K]
i += 1
value=KMER_TABLE[kmer]
kmers.append(value)
pad_val=0
while i < uniform_len:
kmers.append(pad_val)
i += 1
all_seqs.append(kmers)
pd2d=pd.DataFrame(all_seqs)
return pd2d # return 2D dataframe, uniform dimensions_____no_output_____def make_kmers(MAXLEN,train_set):
(X_train_all,y_train_all)=separate_X_and_y(train_set)
X_train_kmers=strings_to_vectors(X_train_all,MAXLEN)
# From pandas dataframe to numpy to list to numpy
num_seqs=len(X_train_kmers)
tmp_seqs=[]
for i in range(num_seqs):
kmer_sequence=X_train_kmers.iloc[i]
tmp_seqs.append(kmer_sequence)
X_train_kmers=np.array(tmp_seqs)
tmp_seqs=None
labels=y_train_all.to_numpy()
return (X_train_kmers,labels)_____no_output_____def make_frequencies(Xin):
Xout=[]
VOCABULARY_SIZE= 4**K + 1 # plus one for 'NNN'
for seq in Xin:
freqs =[0] * VOCABULARY_SIZE
total = 0
for kmerval in seq:
freqs[kmerval] += 1
total += 1
for c in range(VOCABULARY_SIZE):
freqs[c] = freqs[c]/total
Xout.append(freqs)
Xnum = np.asarray(Xout)
return (Xnum)
def make_slice(data_set,min_len,max_len):
slice = data_set.query('seqlen <= '+str(max_len)+' & seqlen>= '+str(min_len))
return slice_____no_output_____
</code>
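To make the encoding concrete: `make_kmer_table(3)` maps the pad token `'NNN'` to 0 and the 64 DNA 3-mers to 1..64, and `strings_to_vectors` slides a width-K window over each sequence and then zero-pads to the requested uniform length (the window loop stops one position early, a leftover of the spaced-seed experiment noted in the code). A toy sketch with a short made-up sequence:_____no_output_____
<code>
# Toy demonstration of the K-mer encoding defined above (K=3).
toy = pd.DataFrame({'sequence': ['ACGTACGT']})
print(KMER_TABLE['NNN'], KMER_TABLE['AAA'])       # pad token is 0, first 3-mer is 1
print(strings_to_vectors(toy, uniform_len=12))    # k-mer indices followed by zero padding
</code>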
## Cross validation_____no_output_____
<code>
def do_cross_validation(X,y,given_model):
cv_scores = []
fold=0
splitter = ShuffleSplit(n_splits=SPLITS, test_size=0.1, random_state=37863)
for train_index,valid_index in splitter.split(X):
fold += 1
X_train=X[train_index] # use iloc[] for dataframe
y_train=y[train_index]
X_valid=X[valid_index]
y_valid=y[valid_index]
# Avoid continually improving the same model.
model = compile_model(keras.models.clone_model(given_model))
bestname=DATAPATH+FILENAME+".cv."+str(fold)+".best"
mycallbacks = [keras.callbacks.ModelCheckpoint(
filepath=bestname, save_best_only=True,
monitor='val_accuracy', mode='max')]
print("FIT")
start_time=time.time()
history=model.fit(X_train, y_train, # batch_size=10, default=32 works nicely
epochs=EPOCHS, verbose=1, # verbose=1 for ascii art, verbose=0 for none
callbacks=mycallbacks,
validation_data=(X_valid,y_valid) )
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.show()
best_model=keras.models.load_model(bestname)
scores = best_model.evaluate(X_valid, y_valid, verbose=0)
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
cv_scores.append(scores[1] * 100)
print()
print("%d-way Cross Validation mean %.2f%% (+/- %.2f%%)" % (fold, np.mean(cv_scores), np.std(cv_scores)))_____no_output_____
</code>
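Note that with `SPLITS=1` (set in the configuration cell above) `ShuffleSplit` yields a single 90/10 train/validation partition rather than the 5-way cross validation mentioned in the notebook title; raising `SPLITS` restores multi-fold validation. A one-line sketch of the splitter's behaviour:_____no_output_____
<code>
# ShuffleSplit draws n_splits independent random 90/10 partitions of the data.
demo_splitter = ShuffleSplit(n_splits=SPLITS, test_size=0.1, random_state=37863)
print(demo_splitter.get_n_splits())   # 1 with the current SPLITS setting
</code>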
## Train on RNA lengths 200-1Kb_____no_output_____
<code>
MINLEN=200
MAXLEN=1000
print("Load data from files.")
nc_seq=load_fasta(NC_FILENAME,0)
pc_seq=load_fasta(PC_FILENAME,1)
train_set=pd.concat((nc_seq,pc_seq),axis=0)
nc_seq=None
pc_seq=None
print("Ready: train_set")
#train_set
subset=make_slice(train_set,MINLEN,MAXLEN)# One array to two: X and y
print ("Data reshape")
(X_train,y_train)=make_kmers(MAXLEN,subset)
#print ("Data prep")
#X_train=make_frequencies(X_train)Load data from files.
Ready: train_set
Data reshape
print ("Compile the model")
model=build_model()
print ("Summarize the model")
print(model.summary()) # Print this only once
model.save(DATAPATH+FILENAME+'.model')
Compile the model
COMPILE...
...COMPILED
Summarize the model
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, None, 16) 1040
_________________________________________________________________
gru (GRU) (None, None, 64) 15744
_________________________________________________________________
gru_1 (GRU) (None, 64) 24960
_________________________________________________________________
dense (Dense) (None, 64) 4160
_________________________________________________________________
dense_1 (Dense) (None, 64) 4160
_________________________________________________________________
dense_2 (Dense) (None, 1) 65
=================================================================
Total params: 50,129
Trainable params: 50,129
Non-trainable params: 0
_________________________________________________________________
None
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.model/assets
print ("Cross valiation")
do_cross_validation(X_train,y_train,model)
print ("Done")Cross valiation
COMPILE...
...COMPILED
FIT
Epoch 1/200
453/453 [==============================] - ETA: 0s - loss: 0.6305 - accuracy: 0.6491INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 53s 116ms/step - loss: 0.6305 - accuracy: 0.6491 - val_loss: 0.6030 - val_accuracy: 0.6710
Epoch 2/200
453/453 [==============================] - 34s 75ms/step - loss: 0.6304 - accuracy: 0.6566 - val_loss: 0.6514 - val_accuracy: 0.6530
Epoch 3/200
453/453 [==============================] - 34s 76ms/step - loss: 0.6557 - accuracy: 0.6397 - val_loss: 0.6451 - val_accuracy: 0.6530
Epoch 4/200
453/453 [==============================] - 34s 76ms/step - loss: 0.6547 - accuracy: 0.6397 - val_loss: 0.6449 - val_accuracy: 0.6530
Epoch 5/200
453/453 [==============================] - 35s 78ms/step - loss: 0.6540 - accuracy: 0.6397 - val_loss: 0.6452 - val_accuracy: 0.6530
Epoch 6/200
453/453 [==============================] - 35s 77ms/step - loss: 0.6542 - accuracy: 0.6397 - val_loss: 0.6470 - val_accuracy: 0.6530
Epoch 7/200
453/453 [==============================] - 34s 75ms/step - loss: 0.6536 - accuracy: 0.6397 - val_loss: 0.6447 - val_accuracy: 0.6530
Epoch 8/200
453/453 [==============================] - 35s 77ms/step - loss: 0.6541 - accuracy: 0.6397 - val_loss: 0.6464 - val_accuracy: 0.6530
Epoch 9/200
453/453 [==============================] - 34s 75ms/step - loss: 0.6543 - accuracy: 0.6397 - val_loss: 0.6470 - val_accuracy: 0.6530
Epoch 10/200
453/453 [==============================] - 34s 75ms/step - loss: 0.6538 - accuracy: 0.6397 - val_loss: 0.6448 - val_accuracy: 0.6530
Epoch 11/200
453/453 [==============================] - 34s 75ms/step - loss: 0.6541 - accuracy: 0.6397 - val_loss: 0.6446 - val_accuracy: 0.6530
Epoch 12/200
453/453 [==============================] - 34s 75ms/step - loss: 0.6541 - accuracy: 0.6397 - val_loss: 0.6512 - val_accuracy: 0.6530
Epoch 13/200
453/453 [==============================] - 34s 75ms/step - loss: 0.6535 - accuracy: 0.6397 - val_loss: 0.6458 - val_accuracy: 0.6530
Epoch 14/200
453/453 [==============================] - 34s 76ms/step - loss: 0.6535 - accuracy: 0.6397 - val_loss: 0.6461 - val_accuracy: 0.6530
Epoch 15/200
453/453 [==============================] - 34s 75ms/step - loss: 0.6535 - accuracy: 0.6397 - val_loss: 0.6453 - val_accuracy: 0.6530
Epoch 16/200
453/453 [==============================] - 34s 74ms/step - loss: 0.6534 - accuracy: 0.6397 - val_loss: 0.6445 - val_accuracy: 0.6530
Epoch 17/200
453/453 [==============================] - 33s 74ms/step - loss: 0.6536 - accuracy: 0.6384 - val_loss: 0.6424 - val_accuracy: 0.6530
Epoch 18/200
453/453 [==============================] - ETA: 0s - loss: 0.6275 - accuracy: 0.6565INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 108ms/step - loss: 0.6275 - accuracy: 0.6565 - val_loss: 0.5141 - val_accuracy: 0.7523
Epoch 19/200
453/453 [==============================] - ETA: 0s - loss: 0.4795 - accuracy: 0.7749INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 108ms/step - loss: 0.4795 - accuracy: 0.7749 - val_loss: 0.4184 - val_accuracy: 0.8088
Epoch 20/200
453/453 [==============================] - 34s 74ms/step - loss: 0.4271 - accuracy: 0.8062 - val_loss: 0.4316 - val_accuracy: 0.8001
Epoch 21/200
453/453 [==============================] - 34s 75ms/step - loss: 0.4119 - accuracy: 0.8127 - val_loss: 0.4248 - val_accuracy: 0.8063
Epoch 22/200
453/453 [==============================] - ETA: 0s - loss: 0.4114 - accuracy: 0.8132INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 108ms/step - loss: 0.4114 - accuracy: 0.8132 - val_loss: 0.3721 - val_accuracy: 0.8287
Epoch 23/200
453/453 [==============================] - ETA: 0s - loss: 0.3924 - accuracy: 0.8270INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 108ms/step - loss: 0.3924 - accuracy: 0.8270 - val_loss: 0.3867 - val_accuracy: 0.8305
Epoch 24/200
453/453 [==============================] - ETA: 0s - loss: 0.3887 - accuracy: 0.8266INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 107ms/step - loss: 0.3887 - accuracy: 0.8266 - val_loss: 0.3563 - val_accuracy: 0.8405
Epoch 25/200
453/453 [==============================] - ETA: 0s - loss: 0.3774 - accuracy: 0.8343INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 108ms/step - loss: 0.3774 - accuracy: 0.8343 - val_loss: 0.3595 - val_accuracy: 0.8411
Epoch 26/200
453/453 [==============================] - ETA: 0s - loss: 0.3718 - accuracy: 0.8362INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 107ms/step - loss: 0.3718 - accuracy: 0.8362 - val_loss: 0.3457 - val_accuracy: 0.8436
Epoch 27/200
453/453 [==============================] - ETA: 0s - loss: 0.3660 - accuracy: 0.8410INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 108ms/step - loss: 0.3660 - accuracy: 0.8410 - val_loss: 0.3410 - val_accuracy: 0.8529
Epoch 28/200
453/453 [==============================] - 34s 75ms/step - loss: 0.3611 - accuracy: 0.8404 - val_loss: 0.3596 - val_accuracy: 0.8380
Epoch 29/200
453/453 [==============================] - ETA: 0s - loss: 0.3505 - accuracy: 0.8467INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 108ms/step - loss: 0.3505 - accuracy: 0.8467 - val_loss: 0.3121 - val_accuracy: 0.8672
Epoch 30/200
453/453 [==============================] - ETA: 0s - loss: 0.3368 - accuracy: 0.8553INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 109ms/step - loss: 0.3368 - accuracy: 0.8553 - val_loss: 0.3052 - val_accuracy: 0.8734
Epoch 31/200
453/453 [==============================] - 34s 75ms/step - loss: 0.3421 - accuracy: 0.8494 - val_loss: 0.3112 - val_accuracy: 0.8647
Epoch 32/200
453/453 [==============================] - 34s 75ms/step - loss: 0.3339 - accuracy: 0.8542 - val_loss: 0.3210 - val_accuracy: 0.8529
Epoch 33/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3401 - accuracy: 0.8492 - val_loss: 0.3341 - val_accuracy: 0.8560
Epoch 34/200
453/453 [==============================] - ETA: 0s - loss: 0.3219 - accuracy: 0.8641INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 49s 108ms/step - loss: 0.3219 - accuracy: 0.8641 - val_loss: 0.3067 - val_accuracy: 0.8746
Epoch 35/200
453/453 [==============================] - 34s 75ms/step - loss: 0.3449 - accuracy: 0.8519 - val_loss: 0.3140 - val_accuracy: 0.8634
Epoch 36/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3309 - accuracy: 0.8594 - val_loss: 0.3475 - val_accuracy: 0.8461
Epoch 37/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3172 - accuracy: 0.8615 - val_loss: 0.3681 - val_accuracy: 0.8386
Epoch 38/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3476 - accuracy: 0.8499 - val_loss: 0.3101 - val_accuracy: 0.8634
Epoch 39/200
453/453 [==============================] - ETA: 0s - loss: 0.3095 - accuracy: 0.8737INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 50s 110ms/step - loss: 0.3095 - accuracy: 0.8737 - val_loss: 0.2870 - val_accuracy: 0.8839
Epoch 40/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3713 - accuracy: 0.8374 - val_loss: 0.3475 - val_accuracy: 0.8386
Epoch 41/200
453/453 [==============================] - 35s 76ms/step - loss: 0.3605 - accuracy: 0.8436 - val_loss: 0.3717 - val_accuracy: 0.8218
Epoch 42/200
453/453 [==============================] - 35s 78ms/step - loss: 0.3540 - accuracy: 0.8450 - val_loss: 0.3486 - val_accuracy: 0.8430
Epoch 43/200
453/453 [==============================] - 35s 78ms/step - loss: 0.3446 - accuracy: 0.8503 - val_loss: 0.3320 - val_accuracy: 0.8485
Epoch 44/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3424 - accuracy: 0.8507 - val_loss: 0.3320 - val_accuracy: 0.8492
Epoch 45/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3459 - accuracy: 0.8512 - val_loss: 0.3092 - val_accuracy: 0.8647
Epoch 46/200
453/453 [==============================] - 35s 76ms/step - loss: 0.3276 - accuracy: 0.8586 - val_loss: 0.3103 - val_accuracy: 0.8603
Epoch 47/200
453/453 [==============================] - 34s 75ms/step - loss: 0.3166 - accuracy: 0.8651 - val_loss: 0.2928 - val_accuracy: 0.8715
Epoch 48/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3025 - accuracy: 0.8718 - val_loss: 0.2806 - val_accuracy: 0.8839
Epoch 49/200
453/453 [==============================] - 34s 76ms/step - loss: 0.2887 - accuracy: 0.8815 - val_loss: 0.4335 - val_accuracy: 0.8169
Epoch 50/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3924 - accuracy: 0.8261 - val_loss: 0.3667 - val_accuracy: 0.8392
Epoch 51/200
453/453 [==============================] - 34s 75ms/step - loss: 0.3546 - accuracy: 0.8445 - val_loss: 0.3396 - val_accuracy: 0.8516
Epoch 52/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3439 - accuracy: 0.8499 - val_loss: 0.3307 - val_accuracy: 0.8504
Epoch 53/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3394 - accuracy: 0.8508 - val_loss: 0.3624 - val_accuracy: 0.8417
Epoch 54/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3337 - accuracy: 0.8575 - val_loss: 0.3265 - val_accuracy: 0.8572
Epoch 55/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3231 - accuracy: 0.8619 - val_loss: 0.3096 - val_accuracy: 0.8659
Epoch 56/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3105 - accuracy: 0.8689 - val_loss: 0.3047 - val_accuracy: 0.8579
Epoch 57/200
453/453 [==============================] - 34s 76ms/step - loss: 0.3025 - accuracy: 0.8732 - val_loss: 0.3227 - val_accuracy: 0.8579
Epoch 58/200
453/453 [==============================] - 35s 77ms/step - loss: 0.2857 - accuracy: 0.8819 - val_loss: 0.3030 - val_accuracy: 0.8721
Epoch 59/200
453/453 [==============================] - 35s 77ms/step - loss: 0.2783 - accuracy: 0.8870 - val_loss: 0.2879 - val_accuracy: 0.8771
Epoch 60/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3084 - accuracy: 0.8690 - val_loss: 0.3569 - val_accuracy: 0.8361
Epoch 61/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3175 - accuracy: 0.8633 - val_loss: 0.3282 - val_accuracy: 0.8436
Epoch 62/200
453/453 [==============================] - 35s 76ms/step - loss: 0.2931 - accuracy: 0.8770 - val_loss: 0.2875 - val_accuracy: 0.8802
Epoch 63/200
453/453 [==============================] - 35s 77ms/step - loss: 0.2902 - accuracy: 0.8766 - val_loss: 0.2899 - val_accuracy: 0.8783
Epoch 64/200
453/453 [==============================] - 35s 76ms/step - loss: 0.3087 - accuracy: 0.8695 - val_loss: 0.3627 - val_accuracy: 0.8436
Epoch 65/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3440 - accuracy: 0.8500 - val_loss: 0.3475 - val_accuracy: 0.8399
Epoch 66/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3396 - accuracy: 0.8539 - val_loss: 0.3981 - val_accuracy: 0.8231
Epoch 67/200
453/453 [==============================] - 35s 76ms/step - loss: 0.3372 - accuracy: 0.8532 - val_loss: 0.3304 - val_accuracy: 0.8523
Epoch 68/200
453/453 [==============================] - 35s 78ms/step - loss: 0.3248 - accuracy: 0.8582 - val_loss: 0.3560 - val_accuracy: 0.8355
Epoch 69/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3168 - accuracy: 0.8610 - val_loss: 0.3528 - val_accuracy: 0.8436
Epoch 70/200
453/453 [==============================] - 35s 77ms/step - loss: 0.3150 - accuracy: 0.8675 - val_loss: 0.3184 - val_accuracy: 0.8616
Epoch 71/200
453/453 [==============================] - 35s 78ms/step - loss: 0.3072 - accuracy: 0.8677 - val_loss: 0.3153 - val_accuracy: 0.8678
Epoch 72/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2988 - accuracy: 0.8706 - val_loss: 0.3103 - val_accuracy: 0.8572
Epoch 73/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2896 - accuracy: 0.8780 - val_loss: 0.3042 - val_accuracy: 0.8703
Epoch 74/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2802 - accuracy: 0.8859 - val_loss: 0.2927 - val_accuracy: 0.8833
Epoch 75/200
453/453 [==============================] - 36s 78ms/step - loss: 0.2743 - accuracy: 0.8864 - val_loss: 0.3052 - val_accuracy: 0.8628
Epoch 76/200
453/453 [==============================] - 36s 79ms/step - loss: 0.2680 - accuracy: 0.8912 - val_loss: 0.3050 - val_accuracy: 0.8752
Epoch 77/200
453/453 [==============================] - 36s 79ms/step - loss: 0.2835 - accuracy: 0.8822 - val_loss: 0.2949 - val_accuracy: 0.8796
Epoch 78/200
453/453 [==============================] - ETA: 0s - loss: 0.2718 - accuracy: 0.8874INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 52s 115ms/step - loss: 0.2718 - accuracy: 0.8874 - val_loss: 0.2880 - val_accuracy: 0.8883
Epoch 79/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2535 - accuracy: 0.8969 - val_loss: 0.2920 - val_accuracy: 0.8827
Epoch 80/200
453/453 [==============================] - 36s 79ms/step - loss: 0.2525 - accuracy: 0.8976 - val_loss: 0.2817 - val_accuracy: 0.8858
Epoch 81/200
453/453 [==============================] - 36s 78ms/step - loss: 0.2512 - accuracy: 0.8977 - val_loss: 0.3178 - val_accuracy: 0.8703
Epoch 82/200
453/453 [==============================] - 35s 77ms/step - loss: 0.2686 - accuracy: 0.8871 - val_loss: 0.2972 - val_accuracy: 0.8821
Epoch 83/200
453/453 [==============================] - 35s 77ms/step - loss: 0.2434 - accuracy: 0.9011 - val_loss: 0.3296 - val_accuracy: 0.8616
Epoch 84/200
453/453 [==============================] - 35s 77ms/step - loss: 0.2540 - accuracy: 0.8939 - val_loss: 0.3162 - val_accuracy: 0.8734
Epoch 85/200
453/453 [==============================] - 35s 77ms/step - loss: 0.2414 - accuracy: 0.9013 - val_loss: 0.3167 - val_accuracy: 0.8759
Epoch 86/200
453/453 [==============================] - ETA: 0s - loss: 0.2234 - accuracy: 0.9106INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 50s 111ms/step - loss: 0.2234 - accuracy: 0.9106 - val_loss: 0.2865 - val_accuracy: 0.8914
Epoch 87/200
453/453 [==============================] - 35s 77ms/step - loss: 0.2158 - accuracy: 0.9151 - val_loss: 0.3179 - val_accuracy: 0.8790
Epoch 88/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2099 - accuracy: 0.9189 - val_loss: 0.3026 - val_accuracy: 0.8734
Epoch 89/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1972 - accuracy: 0.9232 - val_loss: 0.3006 - val_accuracy: 0.8889
Epoch 90/200
453/453 [==============================] - ETA: 0s - loss: 0.2301 - accuracy: 0.9071INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 51s 113ms/step - loss: 0.2301 - accuracy: 0.9071 - val_loss: 0.2787 - val_accuracy: 0.8970
Epoch 91/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2134 - accuracy: 0.9159 - val_loss: 0.2908 - val_accuracy: 0.8870
Epoch 92/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2234 - accuracy: 0.9120 - val_loss: 0.2900 - val_accuracy: 0.8852
Epoch 93/200
453/453 [==============================] - ETA: 0s - loss: 0.2158 - accuracy: 0.9150INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 51s 113ms/step - loss: 0.2158 - accuracy: 0.9150 - val_loss: 0.2766 - val_accuracy: 0.8982
Epoch 94/200
453/453 [==============================] - ETA: 0s - loss: 0.2190 - accuracy: 0.9152INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 50s 111ms/step - loss: 0.2190 - accuracy: 0.9152 - val_loss: 0.2702 - val_accuracy: 0.9032
Epoch 95/200
453/453 [==============================] - ETA: 0s - loss: 0.1976 - accuracy: 0.9240INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 51s 112ms/step - loss: 0.1976 - accuracy: 0.9240 - val_loss: 0.2666 - val_accuracy: 0.9081
Epoch 96/200
453/453 [==============================] - ETA: 0s - loss: 0.1906 - accuracy: 0.9247INFO:tensorflow:Assets written to: /content/drive/My Drive/data/GRU212.cv.1.best/assets
453/453 [==============================] - 51s 113ms/step - loss: 0.1906 - accuracy: 0.9247 - val_loss: 0.2488 - val_accuracy: 0.9131
Epoch 97/200
453/453 [==============================] - 36s 80ms/step - loss: 0.1839 - accuracy: 0.9316 - val_loss: 0.2976 - val_accuracy: 0.8895
Epoch 98/200
453/453 [==============================] - 37s 82ms/step - loss: 0.2136 - accuracy: 0.9178 - val_loss: 0.2602 - val_accuracy: 0.9069
Epoch 99/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1820 - accuracy: 0.9310 - val_loss: 0.2842 - val_accuracy: 0.8876
Epoch 100/200
453/453 [==============================] - 36s 78ms/step - loss: 0.1960 - accuracy: 0.9235 - val_loss: 0.3218 - val_accuracy: 0.8678
Epoch 101/200
453/453 [==============================] - 36s 79ms/step - loss: 0.2146 - accuracy: 0.9147 - val_loss: 0.2916 - val_accuracy: 0.8945
Epoch 102/200
453/453 [==============================] - 36s 79ms/step - loss: 0.2237 - accuracy: 0.9078 - val_loss: 0.3195 - val_accuracy: 0.8771
Epoch 103/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1974 - accuracy: 0.9213 - val_loss: 0.3016 - val_accuracy: 0.8901
Epoch 104/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1770 - accuracy: 0.9325 - val_loss: 0.3146 - val_accuracy: 0.8932
Epoch 105/200
453/453 [==============================] - 36s 80ms/step - loss: 0.1754 - accuracy: 0.9334 - val_loss: 0.2833 - val_accuracy: 0.9038
Epoch 106/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1679 - accuracy: 0.9342 - val_loss: 0.2737 - val_accuracy: 0.9112
Epoch 107/200
453/453 [==============================] - 36s 80ms/step - loss: 0.1566 - accuracy: 0.9399 - val_loss: 0.2631 - val_accuracy: 0.9131
Epoch 108/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1474 - accuracy: 0.9441 - val_loss: 0.2628 - val_accuracy: 0.9094
Epoch 109/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1965 - accuracy: 0.9195 - val_loss: 0.4364 - val_accuracy: 0.8287
Epoch 110/200
453/453 [==============================] - 36s 81ms/step - loss: 0.2141 - accuracy: 0.9149 - val_loss: 0.2839 - val_accuracy: 0.9007
Epoch 111/200
453/453 [==============================] - 37s 82ms/step - loss: 0.1750 - accuracy: 0.9334 - val_loss: 0.2764 - val_accuracy: 0.9063
Epoch 112/200
453/453 [==============================] - 38s 83ms/step - loss: 0.1762 - accuracy: 0.9314 - val_loss: 0.3089 - val_accuracy: 0.8821
Epoch 113/200
453/453 [==============================] - 37s 82ms/step - loss: 0.1637 - accuracy: 0.9353 - val_loss: 0.2825 - val_accuracy: 0.8982
Epoch 114/200
453/453 [==============================] - 37s 82ms/step - loss: 0.1476 - accuracy: 0.9421 - val_loss: 0.2990 - val_accuracy: 0.9081
Epoch 115/200
453/453 [==============================] - 37s 81ms/step - loss: 0.2025 - accuracy: 0.9165 - val_loss: 0.3348 - val_accuracy: 0.8672
Epoch 116/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1929 - accuracy: 0.9227 - val_loss: 0.3170 - val_accuracy: 0.8790
Epoch 117/200
453/453 [==============================] - 37s 82ms/step - loss: 0.1680 - accuracy: 0.9345 - val_loss: 0.2959 - val_accuracy: 0.8920
Epoch 118/200
453/453 [==============================] - 37s 82ms/step - loss: 0.1579 - accuracy: 0.9387 - val_loss: 0.2751 - val_accuracy: 0.9081
Epoch 119/200
453/453 [==============================] - 37s 82ms/step - loss: 0.1576 - accuracy: 0.9409 - val_loss: 0.3208 - val_accuracy: 0.8908
Epoch 120/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1950 - accuracy: 0.9229 - val_loss: 0.3422 - val_accuracy: 0.8653
Epoch 121/200
453/453 [==============================] - 37s 83ms/step - loss: 0.2065 - accuracy: 0.9162 - val_loss: 0.3388 - val_accuracy: 0.8672
Epoch 122/200
453/453 [==============================] - 37s 82ms/step - loss: 0.1932 - accuracy: 0.9217 - val_loss: 0.3131 - val_accuracy: 0.8839
Epoch 123/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1916 - accuracy: 0.9235 - val_loss: 0.3457 - val_accuracy: 0.8796
Epoch 124/200
453/453 [==============================] - 37s 82ms/step - loss: 0.2029 - accuracy: 0.9197 - val_loss: 0.3701 - val_accuracy: 0.8566
Epoch 125/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1811 - accuracy: 0.9280 - val_loss: 0.3261 - val_accuracy: 0.8777
Epoch 126/200
453/453 [==============================] - 37s 81ms/step - loss: 0.1712 - accuracy: 0.9324 - val_loss: 0.3124 - val_accuracy: 0.8895
Epoch 127/200
453/453 [==============================] - 37s 81ms/step - loss: 0.2705 - accuracy: 0.8873 - val_loss: 0.3962 - val_accuracy: 0.8411
Epoch 128/200
453/453 [==============================] - 37s 82ms/step - loss: 0.2500 - accuracy: 0.8953 - val_loss: 0.3608 - val_accuracy: 0.8541
Epoch 129/200
453/453 [==============================] - 37s 81ms/step - loss: 0.2201 - accuracy: 0.9084 - val_loss: 0.3745 - val_accuracy: 0.8622
Epoch 130/200
453/453 [==============================] - 36s 78ms/step - loss: 0.1980 - accuracy: 0.9194 - val_loss: 0.3504 - val_accuracy: 0.8653
Epoch 131/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1870 - accuracy: 0.9263 - val_loss: 0.3177 - val_accuracy: 0.8827
Epoch 132/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1774 - accuracy: 0.9315 - val_loss: 0.3724 - val_accuracy: 0.8641
Epoch 133/200
453/453 [==============================] - 36s 80ms/step - loss: 0.1773 - accuracy: 0.9278 - val_loss: 0.3447 - val_accuracy: 0.8783
Epoch 134/200
453/453 [==============================] - 36s 79ms/step - loss: 0.2132 - accuracy: 0.9118 - val_loss: 0.3498 - val_accuracy: 0.8665
Epoch 135/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1837 - accuracy: 0.9264 - val_loss: 0.3311 - val_accuracy: 0.8883
Epoch 136/200
453/453 [==============================] - 36s 80ms/step - loss: 0.1648 - accuracy: 0.9333 - val_loss: 0.3615 - val_accuracy: 0.8821
Epoch 137/200
453/453 [==============================] - 36s 80ms/step - loss: 0.1433 - accuracy: 0.9433 - val_loss: 0.3443 - val_accuracy: 0.8889
Epoch 138/200
453/453 [==============================] - 36s 80ms/step - loss: 0.1339 - accuracy: 0.9480 - val_loss: 0.3577 - val_accuracy: 0.8914
Epoch 139/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1544 - accuracy: 0.9405 - val_loss: 0.3535 - val_accuracy: 0.8901
Epoch 140/200
453/453 [==============================] - 36s 79ms/step - loss: 0.2321 - accuracy: 0.9037 - val_loss: 0.3755 - val_accuracy: 0.8485
Epoch 141/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2213 - accuracy: 0.9077 - val_loss: 0.3730 - val_accuracy: 0.8572
Epoch 142/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1932 - accuracy: 0.9202 - val_loss: 0.3686 - val_accuracy: 0.8665
Epoch 143/200
453/453 [==============================] - 36s 78ms/step - loss: 0.1769 - accuracy: 0.9282 - val_loss: 0.3813 - val_accuracy: 0.8603
Epoch 144/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1593 - accuracy: 0.9356 - val_loss: 0.3510 - val_accuracy: 0.8796
Epoch 145/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1533 - accuracy: 0.9387 - val_loss: 0.4014 - val_accuracy: 0.8628
Epoch 146/200
453/453 [==============================] - 36s 78ms/step - loss: 0.1472 - accuracy: 0.9428 - val_loss: 0.3949 - val_accuracy: 0.8641
Epoch 147/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1364 - accuracy: 0.9456 - val_loss: 0.3981 - val_accuracy: 0.8808
Epoch 148/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1389 - accuracy: 0.9435 - val_loss: 0.4436 - val_accuracy: 0.8616
Epoch 149/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1167 - accuracy: 0.9547 - val_loss: 0.4435 - val_accuracy: 0.8678
Epoch 150/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1413 - accuracy: 0.9436 - val_loss: 0.3964 - val_accuracy: 0.8541
Epoch 151/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1747 - accuracy: 0.9287 - val_loss: 0.4349 - val_accuracy: 0.8454
Epoch 152/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1432 - accuracy: 0.9431 - val_loss: 0.4872 - val_accuracy: 0.8498
Epoch 153/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1292 - accuracy: 0.9485 - val_loss: 0.5019 - val_accuracy: 0.8547
Epoch 154/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1134 - accuracy: 0.9543 - val_loss: 0.5625 - val_accuracy: 0.8461
Epoch 155/200
453/453 [==============================] - 36s 80ms/step - loss: 0.1017 - accuracy: 0.9607 - val_loss: 0.5504 - val_accuracy: 0.8448
Epoch 156/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1036 - accuracy: 0.9610 - val_loss: 0.5165 - val_accuracy: 0.8585
Epoch 157/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0951 - accuracy: 0.9643 - val_loss: 0.5832 - val_accuracy: 0.8529
Epoch 158/200
453/453 [==============================] - 36s 78ms/step - loss: 0.0944 - accuracy: 0.9641 - val_loss: 0.5291 - val_accuracy: 0.8616
Epoch 159/200
453/453 [==============================] - 36s 79ms/step - loss: 0.0957 - accuracy: 0.9651 - val_loss: 0.4683 - val_accuracy: 0.8703
Epoch 160/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1219 - accuracy: 0.9537 - val_loss: 0.4675 - val_accuracy: 0.8659
Epoch 161/200
453/453 [==============================] - 35s 78ms/step - loss: 0.0962 - accuracy: 0.9636 - val_loss: 0.4884 - val_accuracy: 0.8752
Epoch 162/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0837 - accuracy: 0.9682 - val_loss: 0.4629 - val_accuracy: 0.8771
Epoch 163/200
453/453 [==============================] - 35s 78ms/step - loss: 0.0901 - accuracy: 0.9642 - val_loss: 0.4772 - val_accuracy: 0.8771
Epoch 164/200
453/453 [==============================] - 36s 78ms/step - loss: 0.0837 - accuracy: 0.9671 - val_loss: 0.4851 - val_accuracy: 0.8727
Epoch 165/200
453/453 [==============================] - 36s 79ms/step - loss: 0.0920 - accuracy: 0.9649 - val_loss: 0.5216 - val_accuracy: 0.8672
Epoch 166/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0977 - accuracy: 0.9629 - val_loss: 0.3943 - val_accuracy: 0.8839
Epoch 167/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1709 - accuracy: 0.9314 - val_loss: 0.4033 - val_accuracy: 0.8696
Epoch 168/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1353 - accuracy: 0.9466 - val_loss: 0.4367 - val_accuracy: 0.8783
Epoch 169/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1083 - accuracy: 0.9572 - val_loss: 0.4595 - val_accuracy: 0.8690
Epoch 170/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0985 - accuracy: 0.9614 - val_loss: 0.4684 - val_accuracy: 0.8759
Epoch 171/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0886 - accuracy: 0.9648 - val_loss: 0.5465 - val_accuracy: 0.8659
Epoch 172/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1051 - accuracy: 0.9607 - val_loss: 0.3906 - val_accuracy: 0.8852
Epoch 173/200
453/453 [==============================] - 36s 79ms/step - loss: 0.0995 - accuracy: 0.9614 - val_loss: 0.4058 - val_accuracy: 0.8883
Epoch 174/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1028 - accuracy: 0.9606 - val_loss: 0.4578 - val_accuracy: 0.8727
Epoch 175/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1571 - accuracy: 0.9374 - val_loss: 0.4033 - val_accuracy: 0.8715
Epoch 176/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1502 - accuracy: 0.9404 - val_loss: 0.3933 - val_accuracy: 0.8783
Epoch 177/200
453/453 [==============================] - 34s 76ms/step - loss: 0.1322 - accuracy: 0.9488 - val_loss: 0.3913 - val_accuracy: 0.8746
Epoch 178/200
453/453 [==============================] - 35s 76ms/step - loss: 0.1337 - accuracy: 0.9476 - val_loss: 0.3742 - val_accuracy: 0.8696
Epoch 179/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1476 - accuracy: 0.9425 - val_loss: 0.3835 - val_accuracy: 0.8703
Epoch 180/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1339 - accuracy: 0.9469 - val_loss: 0.4159 - val_accuracy: 0.8591
Epoch 181/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1274 - accuracy: 0.9474 - val_loss: 0.4365 - val_accuracy: 0.8790
Epoch 182/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1022 - accuracy: 0.9605 - val_loss: 0.4043 - val_accuracy: 0.8814
Epoch 183/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0857 - accuracy: 0.9667 - val_loss: 0.4831 - val_accuracy: 0.8740
Epoch 184/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0812 - accuracy: 0.9689 - val_loss: 0.4784 - val_accuracy: 0.8678
Epoch 185/200
453/453 [==============================] - 35s 78ms/step - loss: 0.0804 - accuracy: 0.9705 - val_loss: 0.4843 - val_accuracy: 0.8727
Epoch 186/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0965 - accuracy: 0.9624 - val_loss: 0.5097 - val_accuracy: 0.8616
Epoch 187/200
453/453 [==============================] - 36s 79ms/step - loss: 0.0793 - accuracy: 0.9694 - val_loss: 0.5210 - val_accuracy: 0.8703
Epoch 188/200
453/453 [==============================] - 35s 77ms/step - loss: 0.1031 - accuracy: 0.9600 - val_loss: 0.5172 - val_accuracy: 0.8659
Epoch 189/200
453/453 [==============================] - 35s 78ms/step - loss: 0.0866 - accuracy: 0.9678 - val_loss: 0.4990 - val_accuracy: 0.8591
Epoch 190/200
453/453 [==============================] - 35s 78ms/step - loss: 0.0883 - accuracy: 0.9677 - val_loss: 0.5327 - val_accuracy: 0.8690
Epoch 191/200
453/453 [==============================] - 35s 78ms/step - loss: 0.0631 - accuracy: 0.9774 - val_loss: 0.5244 - val_accuracy: 0.8690
Epoch 192/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0523 - accuracy: 0.9810 - val_loss: 0.5622 - val_accuracy: 0.8684
Epoch 193/200
453/453 [==============================] - 35s 78ms/step - loss: 0.0562 - accuracy: 0.9803 - val_loss: 0.5898 - val_accuracy: 0.8634
Epoch 194/200
453/453 [==============================] - 35s 78ms/step - loss: 0.0712 - accuracy: 0.9731 - val_loss: 0.5036 - val_accuracy: 0.8715
Epoch 195/200
453/453 [==============================] - 36s 79ms/step - loss: 0.0880 - accuracy: 0.9673 - val_loss: 0.4881 - val_accuracy: 0.8808
Epoch 196/200
453/453 [==============================] - 35s 77ms/step - loss: 0.0794 - accuracy: 0.9698 - val_loss: 0.5320 - val_accuracy: 0.8765
Epoch 197/200
453/453 [==============================] - 36s 78ms/step - loss: 0.1350 - accuracy: 0.9488 - val_loss: 0.3910 - val_accuracy: 0.8603
Epoch 198/200
453/453 [==============================] - 35s 78ms/step - loss: 0.2026 - accuracy: 0.9180 - val_loss: 0.3561 - val_accuracy: 0.8721
Epoch 199/200
453/453 [==============================] - 36s 79ms/step - loss: 0.1372 - accuracy: 0.9476 - val_loss: 0.3678 - val_accuracy: 0.8790
Epoch 200/200
453/453 [==============================] - 35s 78ms/step - loss: 0.1141 - accuracy: 0.9552 - val_loss: 0.4328 - val_accuracy: 0.8678
Fold 1, 200 epochs, 7380 sec
_____no_output_____
</code>
|
{
"repository": "ShepherdCode/ShepherdML",
"path": "Workshop/GRU_212e.ipynb",
"matched_keywords": [
"RNA"
],
"stars": null,
"size": 119104,
"hexsha": "cbab61f15dcae403a01035619eb44697dfc161bc",
"max_line_length": 58862,
"avg_line_length": 120.672745694,
"alphanum_fraction": 0.7141741671
}
|
# Notebook from Nikoletos-K/QA-with-SBERT-for-CORD19
Path: SBERT_CORD19_QA_CrossEncoders.ipynb
<p align="center">
<img src="http://www.di.uoa.gr/themes/corporate_lite/logo_el.png" title="Department of Informatics and Telecommunications - University of Athens"/> </p>
---
<h1 align="center">
Artificial Intelligence
</h1>
<h1 align="center" >
Deep Learning for Natural Language Processing
</h1>
---
<h2 align="center">
<b>Konstantinos Nikoletos</b>
</h2>
<h3 align="center">
<b>Winter 2020-2021</b>
</h3>
---
---_____no_output_____
### __Task__
This exercise is about developing a document retrieval system to return titles of scientific papers containing the answer to a given user question. You will use the first version of the COVID-19 Open Research Dataset (CORD-19) in your work (articles in the `comm_use_subset` folder).
For example, for the question “What are the coronaviruses?”, your system can return the
paper title “Distinct Roles for Sialoside and Protein Receptors in Coronavirus Infection”
since this paper contains the answer to the asked question.
To achieve the goal of this exercise, you will first need to read the paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks, in order to understand how you can create sentence embeddings. In the related work of this paper, you will also find other approaches for developing your model; for example, you can use GloVe embeddings. In this link, you can find the extended versions of this dataset to test your model, if you want. You are required to:
<ol type="a">
<li>Preprocess the provided dataset. You will decide which data of each paper is useful
to your model in order to create the appropriate embeddings. You need to explain
your decisions.</li>
<li>Implement at least 2 different sentence embedding approaches (see the related work
of the Sentence-BERT paper), in order for your model to retrieve the titles of the
papers related to a given question.</li>
<li>Compare your 2 models based on at least 2 different criteria of your choice. Explain
why you selected these criteria, your implementation choices, and the results. Some
questions you can pose are included here. You will need to provide the extra questions
you posed to your model and the results of all the questions as well.</li>
</ol>
### __Notebook__
Same implementation as the Sentence-BERT notebook, but with the addition of Cross-Encoders, which I have read perform even better.
---
---_____no_output_______Import__ of essential libraries
_____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sys # only needed to determine Python version number
import matplotlib # only needed to determine Matplotlib version
import nltk
from nltk.stem import WordNetLemmatizer
import pprint
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext import data
import logging
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
</code>
Selecting device (GPU - CUDA if available)_____no_output_____
<code>
# First checking if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')Training on GPU.
</code>
# Loading data
---_____no_output_____
<code>
# Opening data file
import io
from google.colab import drive
from os import listdir
from os.path import isfile, join
import json
drive.mount('/content/drive',force_remount=True)Mounted at /content/drive
</code>
Loading the dictionary if it has been created_____no_output_____
<code>
#@title Select number of papers that will be fed into the model { vertical-output: true, display-mode: "both" }
number_of_papers = "9000" #@param ["1000","3000", "6000","9000"]
import pickle
CORD19_Dataframe = r"/content/drive/My Drive/AI_4/CORD19_SentenceMap_"+number_of_papers+".pkl"
with open(CORD19_Dataframe, 'rb') as drivef:
CORD19Dictionary = pickle.load(drivef)_____no_output_____
</code>
Or, alternatively, load the summarized version of the papers_____no_output_____
<code>
#@title Select number of summarized papers that will be fed into the model { vertical-output: true, display-mode: "both" }
number_of_papers = "9000" #@param ["1000", "3000", "6000", "9000"]
import pickle
CORD19_Dataframe = r"/content/drive/My Drive/AI_4/CORD19_SentenceMap_Summarized_"+number_of_papers+".pkl"
with open(CORD19_Dataframe, 'rb') as drivef:
CORD19Dictionary = pickle.load(drivef)_____no_output_____
</code>
## Queries
---_____no_output_____
<code>
query_list = [
'What are the coronoviruses?',
'What was discovered in Wuhuan in December 2019?',
'What is Coronovirus Disease 2019?',
'What is COVID-19?',
'What is caused by SARS-COV2?', 'How is COVID-19 spread?',
'Where was COVID-19 discovered?','How does coronavirus spread?'
]
proposed_answers = [
'Coronaviruses (CoVs) are common human and animal pathogens that can transmit zoonotically and cause severe respiratory disease syndromes. ',
'In December 2019, a novel coronavirus, called COVID-19, was discovered in Wuhan, China, and has spread to different cities in China as well as to 24 other countries.',
'Coronavirus Disease 2019 (COVID-19) is an emerging disease with a rapid increase in cases and deaths since its first identification in Wuhan, China, in December 2019.',
'COVID-19 is a viral respiratory illness caused by a new coronavirus called SARS-CoV-2.',
'Coronavirus disease (COVID-19) is caused by SARS-COV2 and represents the causative agent of a potentially fatal disease that is of great global public health concern.',
'First, although COVID-19 is spread by the airborne route, air disinfection of cities and communities is not known to be effective for disease control and needs to be stopped.',
'In December 2019, a novel coronavirus, called COVID-19, was discovered in Wuhan, China, and has spread to different cities in China as well as to 24 other countries.',
'The new coronavirus was reported to spread via droplets, contact and natural aerosols from human-to-human.'
]
myquery_list = [
"How long can the coronavirus survive on surfaces?",
"What means COVID-19?",
"Is COVID19 worse than flue?",
"When the vaccine will be ready?",
"Whats the proteins that consist COVID-19?",
"Whats the symptoms of COVID-19?",
"How can I prevent COVID-19?",
"What treatments are available for COVID-19?",
"Is hand sanitizer effective against COVID-19?",
"Am I at risk for serious complications from COVID-19 if I smoke cigarettes?",
"Are there any FDA-approved drugs (medicines) for COVID-19?",
"How are people tested?",
"Why is the disease being called coronavirus disease 2019, COVID-19?",
"Am I at risk for COVID-19 from mail, packages, or products?",
"What is community spread?",
"How can I protect myself?",
"What is a novel coronavirus?",
"Was Harry Potter a good magician?"
]_____no_output_____
</code>
# Results dataframes_____no_output_____
<code>
resultsDf = pd.DataFrame(columns=['Number of papers','Embeddings creation time'])
queriesDf = pd.DataFrame(columns=['Query','Proposed_answer','Model_answer','Cosine_similarity'])
queriesDf['Query'] = query_list
queriesDf['Proposed_answer'] = proposed_answers
myQueriesDf = pd.DataFrame(columns=['Query','Model_answer','Cosine_similarity'])
myQueriesDf['Query'] = myquery_list
queriesDf_____no_output_____
</code>
# SBERT
---_____no_output_____
<code>
!pip install -U sentence-transformersRequirement already up-to-date: sentence-transformers in /usr/local/lib/python3.6/dist-packages (0.4.1.2)
Requirement already satisfied, skipping upgrade: torch>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from sentence-transformers) (1.7.0+cu101)
Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from sentence-transformers) (1.19.5)
Requirement already satisfied, skipping upgrade: tqdm in /usr/local/lib/python3.6/dist-packages (from sentence-transformers) (4.41.1)
Requirement already satisfied, skipping upgrade: sentencepiece in /usr/local/lib/python3.6/dist-packages (from sentence-transformers) (0.1.95)
Requirement already satisfied, skipping upgrade: transformers<5.0.0,>=3.1.0 in /usr/local/lib/python3.6/dist-packages (from sentence-transformers) (4.3.2)
Requirement already satisfied, skipping upgrade: scikit-learn in /usr/local/lib/python3.6/dist-packages (from sentence-transformers) (0.22.2.post1)
Requirement already satisfied, skipping upgrade: nltk in /usr/local/lib/python3.6/dist-packages (from sentence-transformers) (3.2.5)
Requirement already satisfied, skipping upgrade: scipy in /usr/local/lib/python3.6/dist-packages (from sentence-transformers) (1.4.1)
Requirement already satisfied, skipping upgrade: future in /usr/local/lib/python3.6/dist-packages (from torch>=1.6.0->sentence-transformers) (0.16.0)
Requirement already satisfied, skipping upgrade: dataclasses in /usr/local/lib/python3.6/dist-packages (from torch>=1.6.0->sentence-transformers) (0.8)
Requirement already satisfied, skipping upgrade: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch>=1.6.0->sentence-transformers) (3.7.4.3)
Requirement already satisfied, skipping upgrade: sacremoses in /usr/local/lib/python3.6/dist-packages (from transformers<5.0.0,>=3.1.0->sentence-transformers) (0.0.43)
Requirement already satisfied, skipping upgrade: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers<5.0.0,>=3.1.0->sentence-transformers) (2019.12.20)
Requirement already satisfied, skipping upgrade: requests in /usr/local/lib/python3.6/dist-packages (from transformers<5.0.0,>=3.1.0->sentence-transformers) (2.23.0)
Requirement already satisfied, skipping upgrade: tokenizers<0.11,>=0.10.1 in /usr/local/lib/python3.6/dist-packages (from transformers<5.0.0,>=3.1.0->sentence-transformers) (0.10.1)
Requirement already satisfied, skipping upgrade: packaging in /usr/local/lib/python3.6/dist-packages (from transformers<5.0.0,>=3.1.0->sentence-transformers) (20.9)
Requirement already satisfied, skipping upgrade: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from transformers<5.0.0,>=3.1.0->sentence-transformers) (3.4.0)
Requirement already satisfied, skipping upgrade: filelock in /usr/local/lib/python3.6/dist-packages (from transformers<5.0.0,>=3.1.0->sentence-transformers) (3.0.12)
Requirement already satisfied, skipping upgrade: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sentence-transformers) (1.0.0)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from nltk->sentence-transformers) (1.15.0)
Requirement already satisfied, skipping upgrade: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers<5.0.0,>=3.1.0->sentence-transformers) (7.1.2)
Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers<5.0.0,>=3.1.0->sentence-transformers) (3.0.4)
Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers<5.0.0,>=3.1.0->sentence-transformers) (1.24.3)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers<5.0.0,>=3.1.0->sentence-transformers) (2020.12.5)
Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers<5.0.0,>=3.1.0->sentence-transformers) (2.10)
Requirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers<5.0.0,>=3.1.0->sentence-transformers) (2.4.7)
Requirement already satisfied, skipping upgrade: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->transformers<5.0.0,>=3.1.0->sentence-transformers) (3.4.0)
</code>
# Selecting transformer and Cross Encoder_____no_output_____
<code>
from sentence_transformers import SentenceTransformer, util, CrossEncoder
import torch
import time
encoder = SentenceTransformer('msmarco-distilbert-base-v2')
cross_encoder = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-6')_____no_output_____
</code>
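As a rough illustration of how these two models are combined below (a hedged sketch rather than an extra step of the pipeline, reusing only the objects defined in the cell above): the bi-encoder maps every sentence to a single vector, so candidates can be retrieved cheaply by cosine similarity, while the cross-encoder reads the query and a candidate sentence together and scores the pair jointly, which is slower but usually more accurate for re-ranking._____no_output_____
<code>
# Hedged sketch of retrieve-then-re-rank; `toy_corpus` and `toy_query` are illustrative placeholders.
toy_corpus = ['Coronaviruses are a family of enveloped RNA viruses.',
              'The weather in Athens is usually sunny.']
toy_query = 'What are coronaviruses?'
toy_embeddings = encoder.encode(toy_corpus, convert_to_tensor=True) # bi-encoder: one vector per sentence
toy_query_embedding = encoder.encode(toy_query, convert_to_tensor=True) # and one vector for the query
toy_hits = util.semantic_search(toy_query_embedding, toy_embeddings, top_k=2)[0] # cosine-similarity retrieval
toy_pairs = [[toy_query, toy_corpus[hit['corpus_id']]] for hit in toy_hits]
toy_scores = cross_encoder.predict(toy_pairs) # cross-encoder re-scores each (query, sentence) pair jointly
print(list(zip(toy_scores, toy_pairs)))_____no_output_____
</code>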
# Initializing corpus_____no_output_____
<code>
corpus = list(CORD19Dictionary.keys())_____no_output_____
</code>
# Creating the embeddings_____no_output_____Encoding the papers_____no_output_____
<code>
%%time
corpus_embeddings = encoder.encode(corpus, convert_to_tensor=True, show_progress_bar=True,device='cuda')_____no_output_____
</code>
# Saving corpus as tensors to drive_____no_output_____
<code>
corpus_embeddings_path = r"/content/drive/My Drive/AI_4/corpus_embeddings_6000_CrossEncoder.pt"
torch.save(corpus_embeddings,corpus_embeddings_path)_____no_output_____
</code>
# Loading embeddings if they have already been created and saved
---_____no_output_____
<code>
corpus_embeddings_path = r"/content/drive/My Drive/AI_4/corpus_embeddings_6000_CrossEncoder.pt"
with open(corpus_embeddings_path, 'rb') as f:
corpus_embeddings = torch.load(f)_____no_output_____
</code>
# Evaluation
---
_____no_output_____
<code>
import re
from nltk import tokenize
from termcolor import colored
def paperTitle(answer,SentenceMap):
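    # Look up the (paper id, paper title) record stored for this sentence and print both fields.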
record = SentenceMap[answer]
print("Paper title:",record[1])
print("Paper id: ",record[0])
def evaluation(query_list,top_k,resultsDf):
query_answers = []
scores = []
for query in query_list:
#Encode the query using the bi-encoder and find potentially relevant corpus
start_time = time.time()
question_embedding = encoder.encode(query, convert_to_tensor=True,device='cuda')
hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)
hits = hits[0] # Get the hits for the first query
#Now, score all retrieved corpus with the cross_encoder
cross_inp = [[query, corpus[hit['corpus_id']]] for hit in hits]
cross_scores = cross_encoder.predict(cross_inp)
#Sort results by the cross-encoder scores
for idx in range(len(cross_scores)):
hits[idx]['cross-score'] = cross_scores[idx]
hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True)
end_time = time.time()
        # Output the top-k hits
print("\n\n======================\n\n")
print("Query:",colored(query,'green') )
print("Results (after {:.3f} seconds):".format(end_time - start_time))
iter=0
for hit in hits[0:top_k]:
print("\n-> ",iter+1)
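            # Strip any leading bracketed citation marker (e.g. "[12]") from each token before re-joining the sentence.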
answer = ' '.join([re.sub(r"^\[.*\]", "", x) for x in corpus[hit['corpus_id']].split()])
if len(tokenize.word_tokenize(answer)) > 1:
print("Score: {:.4f}".format(hit['cross-score']))
paperTitle(corpus[hit['corpus_id']],CORD19Dictionary)
print("Anser size: ",len(tokenize.word_tokenize(answer)))
print("Anser: ")
if iter==0:
query_answers.append(answer)
scores.append(hit['cross-score'].item())
iter+=1
print(colored(answer,'yellow'))
resultsDf['Model_answer'] = query_answers
resultsDf['Cosine_similarity'] = scores
_____no_output_____top_k = 3
evaluation(query_list,top_k,queriesDf)
======================
Query: [32mWhat are the coronoviruses?[0m
Results (after 0.839 seconds):
-> 1
Score: 0.0639
Paper title: Citation: Interactions Between Enteroviruses and the Inflammasome: New Insights Into Viral Pathogenesis
Paper id: 423e1f15afb86012057acacc26d0766aa4bc582a
Anser size: 7
Anser:
[33mEnteroviruses are the members of Picornaviridae.[0m
-> 2
Score: 0.0185
Paper title: Full Genome Virus Detection in Fecal Samples Using Sensitive Nucleic Acid Preparation, Deep Sequencing, and a Novel Iterative Sequence Classification Algorithm
Paper id: ab98d1b125aa0704e63adef426b27abd32e935f0
Anser size: 14
Anser:
[33mCosavirus is a new genus in the Picornaviridae family first described in 2008 .[0m
-> 3
Score: 0.0073
Paper title: Identification of diverse viruses in upper respiratory samples in dromedary camels from United Arab Emirates
Paper id: 04b5f15cca91a7b810216682780f8ea6e1ab3046
Anser size: 2
Anser:
[33mOrthonairoviruses.[0m
======================
Query: [32mWhat was discovered in Wuhuan in December 2019?[0m
Results (after 0.525 seconds):
-> 1
Score: 0.7336
Paper title: Transmission routes of 2019-nCoV and controls in dental practice
Paper id: 9756bb3c608ed790d2306fc8db815a694eeca45f
Anser size: 16
Anser:
[33mAn emergent pneumonia outbreak originated in Wuhan City, in the late December 2019 1 .[0m
-> 2
Score: 0.0006
Paper title: Molecular Sciences Effects of AntagomiRs on Different Lung Diseases in Human, Cellular, and Animal Models
Paper id: 3aed588044335032787a5eb91ee61afadcd4a006
Anser size: 6
Anser:
[33m2019 or Liu et al.,[0m
-> 3
Score: 0.0002
Paper title: Estimated effectiveness of symptom and risk screening to prevent the spread of COVID-19
Paper id: a70e7c4d8ee484ce956e91c8700d0c9310bbdbbc
Anser size: 6
Anser:
[33m2020; Liu et al.,[0m
======================
Query: [32mWhat is Coronovirus Disease 2019?[0m
Results (after 0.523 seconds):
-> 1
Score: 0.6516
Paper title:
Paper id: af000c5a8e181550fd16291e5d4f0f70ca9161a1
Anser size: 12
Anser:
[33mCOVID-19: coronavirus disease 2019; PPE: personal protective equipment.[0m
-> 2
Score: 0.2219
Paper title:
Paper id: 19ff77e874c0706f794908e9b6878314671d385a
Anser size: 9
Anser:
[33mNaming 2019-nCoV as SARS-CoV-2 is therefore truly misleading.[0m
-> 3
Score: 0.1593
Paper title:
Paper id: 82210c1cb5ac59acd1468cedcaf6fb8d951f4903
Anser size: 14
Anser:
[33mThe infective pathogen was later identified as a novel coronavirus, called 2019-nCoV .[0m
======================
Query: [32mWhat is COVID-19?[0m
Results (after 0.551 seconds):
-> 1
Score: 0.9631
Paper title:
Paper id: af000c5a8e181550fd16291e5d4f0f70ca9161a1
Anser size: 12
Anser:
[33mCOVID-19: coronavirus disease 2019; PPE: personal protective equipment.[0m
-> 2
Score: 0.6902
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
-> 3
Score: 0.1879
Paper title: Clinical Medicine Incubation Period and Other Epidemiological Characteristics of 2019 Novel Coronavirus Infections with Right Truncation: A Statistical Analysis of Publicly Available Case Data
Paper id: 210a892deb1c61577f6fba58505fd65356ce6636
Anser size: 16
Anser:
[33mIt remains to be seen if this will be the case for COVID-19 as well.[0m
======================
Query: [32mWhat is caused by SARS-COV2?[0m
Results (after 0.551 seconds):
-> 1
Score: 0.3216
Paper title:
Paper id: 0eb44c0cc59184754a0a2cd8ee3c8b2302a8927c
Anser size: 13
Anser:
[33mWe thus assumed that a SARS-related CoV is involved in the outbreak.[0m
-> 2
Score: 0.0962
Paper title: Middle East respiratory syndrome coronavirus infection: virus-host cell interactions and implications on pathogenesis
Paper id: 12f712c348c26e092759d804778defe2d2d4af6f
Anser size: 7
Anser:
[33mThe SARS-CoV can infect human macrophages.[0m
-> 3
Score: 0.0673
Paper title: Potential Factors Influencing Repeated SARS Outbreaks in China
Paper id: 655537fc8cc52bccf43cf7189ab060d3097caa7a
Anser size: 12
Anser:
[33mThe risk of SARS-CoV-2 infection will remain for a long time.[0m
======================
Query: [32mHow is COVID-19 spread?[0m
Results (after 0.547 seconds):
-> 1
Score: 0.9799
Paper title: The novel coronavirus outbreak in Wuhan, China
Paper id: 5ba8056230c17ec133169d79aacf61ed7d4b458b
Anser size: 14
Anser:
[33mThe COVID-19 has then rapidly spread to all over China and the world.[0m
-> 2
Score: 0.9631
Paper title: The novel coronavirus outbreak in Wuhan, China
Paper id: 5ba8056230c17ec133169d79aacf61ed7d4b458b
Anser size: 18
Anser:
[33mIt is found that the COVID-19 can be transmitted through droplets, contact, aerosol, etc.[0m
-> 3
Score: 0.2594
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
======================
Query: [32mWhere was COVID-19 discovered?[0m
Results (after 0.530 seconds):
-> 1
Score: 0.9490
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 17
Anser:
[33mThe epidemic of COVID-19 is caused by a novel virus first detected in Wuhan, China.[0m
-> 2
Score: 0.4963
Paper title: The novel coronavirus outbreak in Wuhan, China
Paper id: 5ba8056230c17ec133169d79aacf61ed7d4b458b
Anser size: 14
Anser:
[33mThe COVID-19 has then rapidly spread to all over China and the world.[0m
-> 3
Score: 0.0428
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
======================
Query: [32mHow does coronavirus spread?[0m
Results (after 0.548 seconds):
-> 1
Score: 0.1335
Paper title: Analysis of the codon usage pattern in Middle East Respiratory Syndrome Coronavirus
Paper id: 627ada5c21fb8d0e43b37999fa66bf41ca36c353
Anser size: 13
Anser:
[33mThis may hint that coronavirus does not spread so widely in humans.[0m
-> 2
Score: 0.0041
Paper title: Tropical Medicine and Infectious Disease Potential Intermediate Hosts for Coronavirus Transmission: No Evidence of Clade 2c Coronaviruses in Domestic Livestock from Ghana
Paper id: 95cc317541d97e3dbaa1662894fdbed842098910
Anser size: 2
Anser:
[33mcoronavirus.[0m
-> 3
Score: 0.0041
Paper title: Population genetics, community of parasites, and resistance to rodenticides in an urban brown rat (Rattus norvegicus) population
Paper id: c8d60caf44017989b3b9633350fc1d2efda570a5
Anser size: 2
Anser:
[33mCoronavirus.[0m
top_k = 3
evaluation(myquery_list,top_k,myQueriesDf)
======================
Query: [32mHow long can the coronavirus survive on surfaces?[0m
Results (after 0.537 seconds):
-> 1
Score: 0.9850
Paper title: Outbreak of Novel Coronavirus (SARS-Cov-2): First Evidences From International Scientific Literature and Pending Questions
Paper id: 7b7c71218f8d7ea1a1f8f702e4262b839bf7cc8a
Anser size: 15
Anser:
[33mOn inanimate surfaces, human coronaviruses can remain infectious for up to 9 days.[0m
-> 2
Score: 0.7655
Paper title: Characterisation of the canine faecal virome in healthy dogs and dogs with acute diarrhoea using shotgun metagenomics
Paper id: fcb1ba715b2516823fee057cbb0f8276c76d19d7
Anser size: 21
Anser:
[33mCanine coronavirus can be shed in faeces in high numbers for up to 156 days [44, 45] .[0m
-> 3
Score: 0.1069
Paper title: Human Coronaviruses: Insights into Environmental Resistance and Its Influence on the Development of New Antiseptic Strategies
Paper id: d171f82b892a2afafc2bc8a5458219dc04c8fd8d
Anser size: 21
Anser:
[33mHuman coronavirus infections occur mainly in winter, with a short incubation time [19, 23, 24] .[0m
======================
Query: [32mWhat means COVID-19?[0m
Results (after 0.534 seconds):
-> 1
Score: 0.9272
Paper title:
Paper id: af000c5a8e181550fd16291e5d4f0f70ca9161a1
Anser size: 12
Anser:
[33mCOVID-19: coronavirus disease 2019; PPE: personal protective equipment.[0m
-> 2
Score: 0.7040
Paper title:
Paper id: 19ff77e874c0706f794908e9b6878314671d385a
Anser size: 13
Anser:
[33mThe new name is also not consistent with the disease name COVID-19.[0m
-> 3
Score: 0.5282
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
======================
Query: [32mIs COVID19 worse than flue?[0m
Results (after 0.521 seconds):
-> 1
Score: 0.0644
Paper title: Systematic Comparison of Two Animal-to-Human Transmitted Human Coronaviruses: SARS-CoV-2 and SARS-CoV
Paper id: f294f0df7468a8ac9e27776cc15fa20297a9f040
Anser size: 11
Anser:
[33mIn comparison, COVID-19 showed similar trends with SARS patients .[0m
-> 2
Score: 0.0359
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
-> 3
Score: 0.0022
Paper title: The effect of corticosteroids on mortality of patients with influenza pneumonia: a systematic review and meta-analysis
Paper id: ff220214e91fabc8d302d1605cd9bac44fac507f
Anser size: 10
Anser:
[33mCorticosteroids could increase mortality in patients with influenza pneumonia.[0m
======================
Query: [32mWhen the vaccine will be ready?[0m
Results (after 0.536 seconds):
-> 1
Score: 0.2100
Paper title: Mast cells and influenza A virus: association with allergic responses and beyond
Paper id: 29621887690af716dac0c244eeb95bce74fa8755
Anser size: 10
Anser:
[33mCurrent vaccine strategies take approximately 6 months for production.[0m
-> 2
Score: 0.1330
Paper title: Rapid and simple colorimetric detection of multiple influenza viruses infecting humans using a reverse transcriptional loop- mediated isothermal amplification (RT-LAMP) diagnostic platform
Paper id: dd12c39ca963dca8336d7f30c8842d892ec8236c
Anser size: 15
Anser:
[33mHowever, vaccine production usually takes 6-12 months to prepare for newly emerging viruses.[0m
-> 3
Score: 0.1195
Paper title: Vaccination to Conserved Influenza Antigens in Mice Using a Novel Simian Adenovirus Vector, PanAd3, Derived from the Bonobo Pan paniscus
Paper id: 532e417a66dbe4822a3f8f9b496c105ccc7dd412
Anser size: 15
Anser:
[33mNew vaccines are often required, and take about 6 months to become available .[0m
======================
Query: [32mWhats the proteins that consist COVID-19?[0m
Results (after 0.532 seconds):
-> 1
Score: 0.0004
Paper title: Integrin b3 Is Required in Infection and Proliferation of Classical Swine Fever Virus
Paper id: e50473adb66bac4a176d80051d63f415d2dbd5a8
Anser size: 14
Anser:
[33mCSFV contains 4 structural proteins: C, Erns, E1 and E2.[0m
-> 2
Score: 0.0004
Paper title: Proteome and phosphoproteome analysis of honeybee (Apis mellifera) venom collected from electrical stimulation and manual extraction of the venom gland Proteome and phosphoproteome analysis of honeybee (Apis mellifera) venom collected from electrical stimulation and manual extraction of the venom gland
Paper id: 1c8a0fb2f60c243f71d16d5fefb5b51d7978869e
Anser size: 15
Anser:
[33mIn GV, 27 proteins were specifically expressed: 4 toxins and 23 non-toxins.[0m
-> 3
Score: 0.0002
Paper title: The Interplay between Dengue Virus and the Human Innate Immune System: A Game of Hide and Seek
Paper id: ac8b5e9b4a49a1062eddf4fc48a19778e66e9a78
Anser size: 3
Anser:
[33mThis proteins.[0m
======================
Query: [32mWhats the symptoms of COVID-19?[0m
Results (after 0.525 seconds):
-> 1
Score: 0.1345
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
-> 2
Score: 0.0006
Paper title: Characterization of Host and Bacterial Contributions to Lung Barrier Dysfunction Following Co-infection with 2009 Pandemic Influenza and Methicillin Resistant Staphylococcus aureus
Paper id: edee1fd45587a0a71d88a5db58cc81342840e2f6
Anser size: 5
Anser:
[33mInitial signs and symptoms include[0m
-> 3
Score: 0.0002
Paper title: Effect of Pullet Vaccination on Development and Longevity of Immunity
Paper id: e192e65a6546583fe49086c4d3ac29a0620d5bd5
Anser size: 2
Anser:
[33mClinical Signs[0m
======================
Query: [32mHow can I prevent COVID-19?[0m
Results (after 0.531 seconds):
-> 1
Score: 0.2469
Paper title: Clinical Medicine Incubation Period and Other Epidemiological Characteristics of 2019 Novel Coronavirus Infections with Right Truncation: A Statistical Analysis of Publicly Available Case Data
Paper id: 210a892deb1c61577f6fba58505fd65356ce6636
Anser size: 16
Anser:
[33mIt remains to be seen if this will be the case for COVID-19 as well.[0m
-> 2
Score: 0.1603
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
-> 3
Score: 0.0029
Paper title: Identify-Isolate-Inform: A Modified Tool for Initial Detection and Management of Middle East Respiratory Syndrome Patients in the Emergency Department
Paper id: e8ae9d6178f8322e2f9b2453ef13bb312427bd15
Anser size: 8
Anser:
[33mPrevention of MERS-CoV transmission involves avoiding exposure.[0m
======================
Query: [32mWhat treatments are available for COVID-19?[0m
Results (after 0.531 seconds):
-> 1
Score: 0.9569
Paper title: Systematic Comparison of Two Animal-to-Human Transmitted Human Coronaviruses: SARS-CoV-2 and SARS-CoV
Paper id: f294f0df7468a8ac9e27776cc15fa20297a9f040
Anser size: 17
Anser:
[33mAs effective drugs for SARS, hormones and interferons can also be used to treat COVID-19 .[0m
-> 2
Score: 0.0802
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
-> 3
Score: 0.0433
Paper title: Clinical Medicine Incubation Period and Other Epidemiological Characteristics of 2019 Novel Coronavirus Infections with Right Truncation: A Statistical Analysis of Publicly Available Case Data
Paper id: 210a892deb1c61577f6fba58505fd65356ce6636
Anser size: 16
Anser:
[33mIt remains to be seen if this will be the case for COVID-19 as well.[0m
======================
Query: [32mIs hand sanitizer effective against COVID-19?[0m
Results (after 0.534 seconds):
-> 1
Score: 0.2058
Paper title: Respiratory viral infections in children with asthma: do they matter and can we prevent them?
Paper id: bbc2824ce7dff3d23d060b7abbe96cba28095fb8
Anser size: 15
Anser:
[33mThe use of alcohol-based hand sanitizers is also effective [54, 55] .[0m
-> 2
Score: 0.0120
Paper title: Cell Discovery Phase-adjusted estimation of the number of Coronavirus Disease 2019 cases in Wuhan, China
Paper id: 6abb30ae61aa5e41f16a28b9437940d5d76d745b
Anser size: 19
Anser:
[33mIn response to the outbreak of COVID-19, a series of prompt public health measures have been taken.[0m
-> 3
Score: 0.0036
Paper title:
Paper id: af000c5a8e181550fd16291e5d4f0f70ca9161a1
Anser size: 12
Anser:
[33mCOVID-19: coronavirus disease 2019; PPE: personal protective equipment.[0m
======================
Query: [32mAm I at risk for serious complications from COVID-19 if I smoke cigarettes?[0m
Results (after 0.523 seconds):
-> 1
Score: 0.0151
Paper title: Systematic Comparison of Two Animal-to-Human Transmitted Human Coronaviruses: SARS-CoV-2 and SARS-CoV
Paper id: f294f0df7468a8ac9e27776cc15fa20297a9f040
Anser size: 16
Anser:
[33mreported that people who have not been exposed to SARS-CoV-2 are all susceptible to COVID-19 .[0m
-> 2
Score: 0.0042
Paper title: Clinical Medicine Incubation Period and Other Epidemiological Characteristics of 2019 Novel Coronavirus Infections with Right Truncation: A Statistical Analysis of Publicly Available Case Data
Paper id: 210a892deb1c61577f6fba58505fd65356ce6636
Anser size: 16
Anser:
[33mIt remains to be seen if this will be the case for COVID-19 as well.[0m
-> 3
Score: 0.0014
Paper title:
Paper id: af000c5a8e181550fd16291e5d4f0f70ca9161a1
Anser size: 12
Anser:
[33mCOVID-19: coronavirus disease 2019; PPE: personal protective equipment.[0m
======================
Query: [32mAre there any FDA-approved drugs (medicines) for COVID-19?[0m
Results (after 0.519 seconds):
-> 1
Score: 0.0156
Paper title:
Paper id: 19ff77e874c0706f794908e9b6878314671d385a
Anser size: 13
Anser:
[33mThe new name is also not consistent with the disease name COVID-19.[0m
-> 2
Score: 0.0143
Paper title: Clinical Medicine Incubation Period and Other Epidemiological Characteristics of 2019 Novel Coronavirus Infections with Right Truncation: A Statistical Analysis of Publicly Available Case Data
Paper id: 210a892deb1c61577f6fba58505fd65356ce6636
Anser size: 16
Anser:
[33mIt remains to be seen if this will be the case for COVID-19 as well.[0m
-> 3
Score: 0.0028
Paper title: Human Ebola virus infection in West Africa: a review of available therapeutic agents that target different steps of the life cycle of Ebola virus-mutable host cell therapeutic targets for Ebola virus, Cocktail therapeutic intervention for RNA virus Multilingual abstract
Paper id: 06190bfcbc53a5d5d17e0a60a3a0f6488d8ae1db
Anser size: 11
Anser:
[33mThese medications are FDA-approved for the treatment of other diseases.[0m
======================
Query: [32mHow are people tested?[0m
Results (after 0.520 seconds):
-> 1
Score: 0.0092
Paper title: A Human DPP4-Knockin Mouse's Susceptibility to Infection by Authentic and Pseudotyped MERS-CoV
Paper id: 874e540a730ee1060365af8d2caa03f537508e33
Anser size: 11
Anser:
[33mStudent's t-tests were used to assess differences between groups.[0m
-> 2
Score: 0.0075
Paper title: Virology Journal A focus reduction neutralization assay for hepatitis C virus neutralizing antibodies
Paper id: ee8dca216514deeed4c9415bc2ad8a78dc3d9670
Anser size: 11
Anser:
[33mStudent's t-test was used to compare data between groups.[0m
-> 3
Score: 0.0006
Paper title: Ecohealth research in Southeast Asia: past, present and the way forward
Paper id: 1495c1fa93db3b9a5d12b5ae15ff0c8639b83452
Anser size: 8
Anser:
[33mHow can these be tested in practice?[0m
======================
Query: [32mWhy is the disease being called coronavirus disease 2019, COVID-19?[0m
Results (after 0.528 seconds):
-> 1
Score: 0.9373
Paper title:
Paper id: af000c5a8e181550fd16291e5d4f0f70ca9161a1
Anser size: 12
Anser:
[33mCOVID-19: coronavirus disease 2019; PPE: personal protective equipment.[0m
-> 2
Score: 0.9238
Paper title: Cell Discovery Phase-adjusted estimation of the number of Coronavirus Disease 2019 cases in Wuhan, China
Paper id: 6abb30ae61aa5e41f16a28b9437940d5d76d745b
Anser size: 20
Anser:
[33mWorld Health Organization (WHO) now has named the disease Coronavirus Disease 2019 (COVID- 19) 3 .[0m
-> 3
Score: 0.8515
Paper title:
Paper id: 82210c1cb5ac59acd1468cedcaf6fb8d951f4903
Anser size: 14
Anser:
[33mThe infective pathogen was later identified as a novel coronavirus, called 2019-nCoV .[0m
======================
Query: [32mAm I at risk for COVID-19 from mail, packages, or products?[0m
Results (after 0.520 seconds):
-> 1
Score: 0.1850
Paper title: First two months of the 2019 Coronavirus Disease (COVID-19) epidemic in China: real- time surveillance and evaluation with a second derivative model
Paper id: 469ed0f00c09e2637351c9735c306f27acf3aace
Anser size: 8
Anser:
[33mThis is particularly true for the COVID-19.[0m
-> 2
Score: 0.0283
Paper title: Systematic Comparison of Two Animal-to-Human Transmitted Human Coronaviruses: SARS-CoV-2 and SARS-CoV
Paper id: f294f0df7468a8ac9e27776cc15fa20297a9f040
Anser size: 16
Anser:
[33mreported that people who have not been exposed to SARS-CoV-2 are all susceptible to COVID-19 .[0m
-> 3
Score: 0.0033
Paper title:
Paper id: af000c5a8e181550fd16291e5d4f0f70ca9161a1
Anser size: 12
Anser:
[33mCOVID-19: coronavirus disease 2019; PPE: personal protective equipment.[0m
======================
Query: [32mWhat is community spread?[0m
Results (after 0.526 seconds):
-> 1
Score: 0.2350
Paper title: An Opportunistic Pathogen Afforded Ample Opportunities: Middle East Respiratory Syndrome Coronavirus
Paper id: 32da24606ad160166f08cf05349eaadd580ccff0
Anser size: 9
Anser:
[33mCommunity spread and subclinical transmission need more attention.[0m
-> 2
Score: 0.0135
Paper title: People at Risk of Influenza Pandemics: The Evolution of Perception and Behavior
Paper id: 51f8792fd26cd2c094c1b2d0e5539902fb6221da
Anser size: 15
Anser:
[33mEfforts were then put into preventing spread of the disease at the community level.[0m
-> 3
Score: 0.0002
Paper title: Comparative Analysis of the Effectiveness of Three Immunization Strategies in Controlling Disease Outbreaks in Realistic Social Networks
Paper id: b16b23aad25d88c3af9ccd50b754cd4d9e8762fe
Anser size: 3
Anser:
[33mCommunity-Bridge Immunization.[0m
======================
Query: [32mHow can I protect myself?[0m
Results (after 0.529 seconds):
-> 1
Score: 0.0502
Paper title: BMC Public Health Healthcare workers' attitudes to working during pandemic influenza: a qualitative study
Paper id: c337fa83ebb25e4600c0f9333ee0cb0fa938e947
Anser size: 8
Anser:
[33mGet as much protection as you can.[0m
-> 2
Score: 0.0354
Paper title: Need of surveillance response systems to combat Ebola outbreaks and other emerging infectious diseases in African countries
Paper id: 70f3c90a651224f9292378da905af4ec635d5f43
Anser size: 18
Anser:
[33mMoreover, people who don't have the knowledge should be educated on how to protect themselves.[0m
-> 3
Score: 0.0019
Paper title: a Stakeholder Survey on live Bird market closures policy for controlling Highly pathogenic avian influenza in Vietnam
Paper id: 17fe16cf66ebbe693a2e75dda11d14513fec7519
Anser size: 13
Anser:
[33mTo mitigate these risks, the following safeguards were put in place.[0m
======================
Query: [32mWhat is a novel coronavirus?[0m
Results (after 0.520 seconds):
-> 1
Score: 0.1118
Paper title: Population genetics, community of parasites, and resistance to rodenticides in an urban brown rat (Rattus norvegicus) population
Paper id: c8d60caf44017989b3b9633350fc1d2efda570a5
Anser size: 2
Anser:
[33mCoronavirus.[0m
-> 2
Score: 0.1118
Paper title: Tropical Medicine and Infectious Disease Potential Intermediate Hosts for Coronavirus Transmission: No Evidence of Clade 2c Coronaviruses in Domestic Livestock from Ghana
Paper id: 95cc317541d97e3dbaa1662894fdbed842098910
Anser size: 2
Anser:
[33mcoronavirus.[0m
-> 3
Score: 0.0265
Paper title: Retargeting of Viruses to Generate Oncolytic Agents
Paper id: bd44d72a9c41b1c382bd180da10a1f7ef38d2d56
Anser size: 2
Anser:
[33mCoronaviruses.[0m
======================
Query: [32mWas Harry Potter a good magician?[0m
Results (after 0.515 seconds):
-> 1
Score: 0.0002
Paper title: Immunoproteomic analysis of bacterial proteins of Actinobacillus pleuropneumoniae serotype 1
Paper id: 47fb645312a069e65bb7557c58204244c3c92953
Anser size: 12
Anser:
[33mThe vaccines elicited humoral immune responses and protective efficacy in mice .[0m
-> 2
Score: 0.0002
Paper title:
Paper id: fa137f1562d599f03605b83bc68f91e5105110d9
Anser size: 5
Anser:
[33mEhrlich's magic bullet.[0m
-> 3
Score: 0.0002
Paper title: Knowledge and attitudes of university students toward pandemic influenza: a cross-sectional study from Turkey
Paper id: 545def8771357b4cb2875f5795a0760e97534cc9
Anser size: 12
Anser:
[33mSurprisingly, a higher proportion believed that herbal remedies were effective.[0m
</code>
# Overall results_____no_output_____## 6000 papers with no summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user 13min 10s
- sys: 5min 40s
- total: 18min 51s
- Wall time: 18min 26s
### Remarks
These are the best results among the notebooks so far: almost 5/7 of the provided questions are answered, and 7/17 of my own questions. I had expected even better results, since Cross-Encoders are supposed to substantially improve the performance of Sentence-BERT.
__Top-k__
The top-2 and top-3 hits contain many of the answers; I noticed they are often better than the first hit. The results are good overall, and with some tuning they would be close to what is wanted.
_____no_output_____### Results_____no_output_____
<code>
with pd.option_context('display.max_colwidth', None):
display(queriesDf)_____no_output_____with pd.option_context('display.max_colwidth', None):
display(myQueriesDf)_____no_output_____
</code>
## 9000 papers with no summarization
---
The session crashed because it ran out of RAM.
_____no_output_____## 6000 papers with paraphrase-distilroberta-base-v1 model and summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user: 1min 18s
- sys: 22.8 s
- total: 1min 37s
- Wall time: 1min 37s
### Remarks
The results are not good. From them, I think the BERT summarizer parameters were not appropriate and I should experiment with them: the summarization should not have been so strict, and I may have over-summarized the papers. A rough sketch of a looser configuration is shown below.
__Top-k__
Not good.
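As a rough, purely illustrative sketch of what a looser summarization setting could look like (this assumes the `bert-extractive-summarizer` package — `from summarizer import Summarizer` — was the "BERT summarizer" used to build the summarized pickles; the variable names and parameter values below are hypothetical, not the ones actually used):_____no_output_____
<code>
# Hypothetical sketch: keep a larger fraction of each paper when summarizing.
# The package, variable names, and ratio/min_length values are illustrative assumptions only.
from summarizer import Summarizer

paper_body_text = "..." # placeholder for the full body text of one paper
bert_summarizer = Summarizer()
looser_summary = bert_summarizer(paper_body_text, ratio=0.4, min_length=40) # keep roughly 40% of the sentences_____no_output_____
</code>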
_____no_output_____### Results_____no_output_____
<code>
with pd.option_context('display.max_colwidth', None):
display(queriesDf)_____no_output_____with pd.option_context('display.max_colwidth', None):
display(myQueriesDf)_____no_output_____
</code>
## 9000 papers with summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user: 1min 48s
- sys: 32.6 s
- total: 2min 20s
- Wall time: 2min 16s
### Remarks
Again, the results are not good, and this is due to my summarization tuning.
(As before, I did not have time to re-run and reprocess the data.)
_____no_output_____### Results_____no_output_____
<code>
with pd.option_context('display.max_colwidth', None):
display(queriesDf)_____no_output_____with pd.option_context('display.max_colwidth', None):
display(myQueriesDf)_____no_output_____
</code>
# References
[1] https://colab.research.google.com/drive/1l6stpYdRMmeDBK_vw0L5NitdiAuhdsAr?usp=sharing#scrollTo=D_hDi8KzNgMM
[2] https://www.sbert.net/docs/package_reference/cross_encoder.html_____no_output_____
|
{
"repository": "Nikoletos-K/QA-with-SBERT-for-CORD19",
"path": "SBERT_CORD19_QA_CrossEncoders.ipynb",
"matched_keywords": [
"RNA",
"metagenomics",
"virology",
"population genetics",
"evolution"
],
"stars": 3,
"size": 119754,
"hexsha": "cbad0b9b9194e07b141564fe28a0e07fd6958ed5",
"max_line_length": 334,
"avg_line_length": 40.1320375335,
"alphanum_fraction": 0.5380947609
}
|
# Notebook from nealcaren/textminingwithpython
Path: content/getting_started/setup.ipynb
# Setup
Before attending the workshop you should set up a scientific Python computing environment using the [Anaconda Python distribution by Continuum Analytics](https://www.continuum.io/downloads). This page describes how. If this doesn't work, let [me](mailto:[email protected]) know and I will set you up with a virtual environment you can use on my server.
_____no_output_____
## Why Python?
As is true of human languages, there are hundreds of computer programming languages. While each has its own merits, the major languages for scientific computing are C, C++, R, MATLAB, Python, Java, and Fortran. MATLAB and Python are similar in syntax and typically read as if they were written in plain English. This makes both languages useful tools for teaching, but they are also very powerful languages that are actively used in real-life research. MATLAB is proprietary, while Python is open source. A benefit of being open source is that anyone can write and release Python packages. For science, there are many wonderful community-driven packages such as NumPy, SciPy, scikit-image, and Pandas, just to name a few.
_____no_output_____## Installing Python 3.7 with Anaconda
There are several scientific Python distributions available for MacOS, Windows, and Linux. The most popular, [Anaconda](https://www.continuum.io/why-anaconda), is specifically designed for scientific computing and data science work. For this course, we will use the Anaconda Python 3.7 distribution. To install the correct version, follow the instructions below.
1. Navigate to the [Anaconda download page](https://www.anaconda.com/distribution/) and download the Python 3.7 graphical installer.
2. Launch the installer and follow the onscreen instructions.
3. Congratulations! You now have the beginnings of a scientific Python distribution._____no_output_____## What is a Jupyter notebook?
[Jupyter](http://jupyter.org/) is a browser-based system for writing code, math, and text in the same document so you can clearly explain the concepts and practices used in your program. Jupyter is not only for Python; it can be used with R, Julia, MATLAB, and about 35 other languages as of this writing. All files are saved as [JSON](http://www.json.org/)-formatted text files with the extension `.ipynb`.
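Because a notebook is plain JSON, you can inspect one with a few lines of Python; the filename below is just a placeholder for any `.ipynb` file on your computer.
<code>
# A notebook file is plain JSON: list the type and first line of every cell.
# 'example_notebook.ipynb' is a placeholder filename.
import json

with open('example_notebook.ipynb') as f:
    notebook = json.load(f)

for cell in notebook['cells']:
    source = cell['source']
    if isinstance(source, list):  # cell sources are usually stored as lists of lines
        source = ''.join(source)
    first_line = source.splitlines()[0] if source else ''
    print(f"{cell['cell_type']:>8}: {first_line}")
</code>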
_____no_output_____## How to launch the notebook
A Jupyter Notebook server can either be launched from the command line or from a GUI program installed along with anaconda called Navigator.
_____no_output_____### Launching from the Anaconda Navigator
Installing Python 3 from Anaconda should also install a GUI application called [Anaconda Navigator](https://docs.continuum.io/anaconda/navigator). From here, you can launch several applications such as a QTconsole, the Spyder IDE, and the data-visualization tool GlueViz. We are interested in the Jupyter Notebook application tab, which is shown boxed in red below:

By clicking on 'Launch', you will instantiate a Jupyter notebook server which should open in a new window.
_____no_output_____### Launching from the terminal
To launch a notebook server from the command line, simply open a terminal emulator (Terminal.app on macOS or Git Bash on Windows) and navigate to the directory in which you would like to start the server by typing `cd path/to/folder`.
Once you are in the correct folder, you can launch a notebook server by typing:
```
jupyter notebook
```
This will open a screen in your default internet browser with a server containing your notebooks. Its address will be [`http://localhost:8888`](http://localhost:8888/) and is only available on your computer. **Note that once you start a server, you must keep the terminal window open.** This is where the 'guts' of the Python kernel live.
_____no_output_____## Interacting with the notebook
If everything launched correctly, you should be able to see a screen which looks something like this:

To start a new Python notebook, click `New` on the right-hand side of the application window. This will give you a list of options for new notebook kernels. In the above screenshot, there are two available Python kernels and one MATLAB kernel. When starting a notebook, you should choose `Python 3` if it is available. If you only see an option that says "Python", choose that one.
Once you start a new notebook, you will be brought to the following screen.

Welcome to the Jupyter notebook! There are many available buttons for you to click. However, the three most important components of the notebook are highlighted in colored boxes. In blue is the name of the notebook. By clicking this, you can rename the notebook. In red is the cell formatting assignment. By default, it is registered as code, but it can also be set to markdown as described later.
Finally, in purple, is the code cell. In this cell, you can type and execute Python code as well as text that will be rendered in a nicely readable format._____no_output_____## Writing code
All code you write in the notebook goes in code cells. You can write anything from single lines to entire loops to complete functions. As an example, we can write and evaluate a print statement in a code cell, as shown below. To execute the code, we can simply hit `shift + enter` while our cursor is in the code cell.
_____no_output_____
<code>
# This is a comment and is not read by Python
print('Hello! This is the print function. Python will print this line below')_____no_output_____
</code>
The box with the gray background contains the Python code, while the output appears in the box with the white background.
_____no_output_____## Next Steps
Now that you have a Python environment up and running, proceed to the [Python] notebook to learn the basics of the language. _____no_output_____*Note: This is a modified version of Griffin Chure's [Setting Up Python For Scientific Computing for Bi 1 - Principles of Biology](http://bi1.caltech.edu/code/t0a_setting_up_python.html). This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/).*_____no_output_____
|
{
"repository": "nealcaren/textminingwithpython",
"path": "content/getting_started/setup.ipynb",
"matched_keywords": [
"biology"
],
"stars": 1,
"size": 8290,
"hexsha": "cbad0e43d59902adee21ba0c21447b08d1b541f9",
"max_line_length": 729,
"avg_line_length": 45.5494505495,
"alphanum_fraction": 0.6724969843
}
|