The full dataset viewer is not available; only a preview of the rows is shown below.
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'text', 'dom'}) and 2 missing columns ({'content', 'lang'}).
This happened while the json dataset builder was generating data using
/tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_1.jsonl.gz, [/tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/code/indro_v56_logic_1.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/code/indro_v56_logic_1.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/code/indro_v56_logic_2.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/code/indro_v56_logic_2.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_1.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_1.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_10.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_10.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_11.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_11.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_12.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_12.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_13.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_13.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_14.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_14.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_15.jsonl.gz 
(origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_15.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_16.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_16.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_2.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_2.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_3.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_3.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_4.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_4.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_5.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_5.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_6.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_6.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_7.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_7.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_8.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_8.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_9.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/data/indro_v52_zenith_9.jsonl.gz), 
/tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_1.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_1.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_2.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_2.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_3.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_3.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_4.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_4.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_5.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_5.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_6.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_6.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_7.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_7.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_8.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_8.jsonl.gz), /tmp/hf-datasets-cache/medium/datasets/37368469037966-config-parquet-and-info-abhinav337463-indro-web-d-5acafcec/hub/datasets--abhinav337463--indro-web-data/snapshots/af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_9.jsonl.gz (origin=hf://datasets/abhinav337463/indro-web-data@af3e965e3eebf40e80aac24973ed514955d5a6a5/math/indro_math_v61_9.jsonl.gz)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
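The message pins down the mismatch: the `data/indro_v52_zenith_*` shards expose `text`, `ex`, `lsh`, `dom`, while the schema inferred from the shards processed first (apparently the `code/` and `math/` folders) expects `content`, `lang`, `ex`, `lsh`. One remedy is the configuration route in the linked docs (one configuration per top-level folder); the other is to rewrite the odd shards so every file shares a single schema. Below is a minimal sketch of the rewrite option, assuming gzipped JSON Lines shards and assuming that renaming `text` to `content`, dropping `dom`, and filling a placeholder `lang` is an acceptable target schema; this is illustrative only, not the dataset author's own tooling.

```python
import glob
import gzip
import json

# Sketch only: rewrite the data/ shards so they expose the same columns as the
# other shards ({'content', 'lang', 'ex', 'lsh'}). Renaming text -> content,
# dropping 'dom', and filling 'lang' with a placeholder are assumptions made
# for illustration.
for path in glob.glob("data/indro_v52_zenith_*.jsonl.gz"):
    rows = []
    with gzip.open(path, "rt", encoding="utf-8") as src:
        for line in src:
            row = json.loads(line)
            row["content"] = row.pop("text")   # rename text -> content
            row.setdefault("lang", "unknown")  # these shards carry no lang column
            row.pop("dom", None)               # drop the extra dom column
            rows.append(row)
    with gzip.open(path, "wt", encoding="utf-8") as dst:
        for row in rows:
            dst.write(json.dumps(row, ensure_ascii=False) + "\n")
```

The configuration route instead leaves the files untouched and declares one configuration per folder (`data/`, `code/`, `math/`) in the dataset card, as described at the link above.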
Traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 675, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
text: string
ex: string
lsh: string
dom: string
to
{'content': Value('string'), 'lang': Value('string'), 'ex': Value('string'), 'lsh': Value('string')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
builder.download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
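Until the shards or the dataset card are adjusted, the three folders can still be consumed independently, since the cast only fails when they are merged into one split. A minimal sketch, assuming a reasonably recent `datasets` release that accepts relative `data_files` globs against a Hub repository; the variable names are illustrative:

```python
from datasets import load_dataset

# Sketch only: load each top-level folder of the repository on its own so the
# two schemas are never cast into a single table. The repo id and folder names
# are taken from the error message above.
repo = "abhinav337463/indro-web-data"
web_rows = load_dataset(repo, data_files="data/*.jsonl.gz", split="train")
code_rows = load_dataset(repo, data_files="code/*.jsonl.gz", split="train")
math_rows = load_dataset(repo, data_files="math/*.jsonl.gz", split="train")
print(web_rows.column_names)   # per the cast error: text, ex, lsh, dom
print(code_rows.column_names)  # per the cast error: content, lang, ex, lsh
```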
Columns: content (string) | lang (string) | ex (string) | lsh (string)
<reponame>MTES-MCT/sparte
from rest_framework_gis import serializers
from rest_framework import serializers as s
from .models import (
Artificialisee2015to2018,
Artificielle2018,
CommunesSybarval,
CouvertureSol,
EnveloppeUrbaine2018,
Ocsge,
Renaturee2018to2015,
Sybarval,
Voirie2018,
ZonesBaties2018,
UsageSol,
)
def get_label(code="", label=""):
if code is None:
code = "-"
if label is None:
label = "inconnu"
return f"{code} {label[:30]}"
class Artificialisee2015to2018Serializer(serializers.GeoFeatureModelSerializer):
usage_2015 = s.SerializerMethodField()
usage_2018 = s.SerializerMethodField()
couverture_2015 = s.SerializerMethodField()
couverture_2018 = s.SerializerMethodField()
def get_usage_2015(self, obj):
return get_label(code=obj.us_2015, label=obj.us_2015_label)
def get_usage_2018(self, obj):
return get_label(code=obj.us_2018, label=obj.us_2018_label)
def get_couverture_2015(self, obj):
return get_label(code=obj.cs_2015, label=obj.cs_2015_label)
def get_couverture_2018(self, obj):
return get_label(code=obj.cs_2018, label=obj.cs_2018_label)
class Meta:
fields = (
"id",
"surface",
"usage_2015",
"usage_2018",
"couverture_2015",
"couverture_2018",
)
geo_field = "mpoly"
model = Artificialisee2015to2018
class Artificielle2018Serializer(serializers.GeoFeatureModelSerializer):
couverture = s.SerializerMethodField()
def get_couverture(self, obj):
return get_label(code=obj.couverture, label=obj.couverture_label)
class Meta:
fields = (
"id",
"surface",
"couverture",
)
geo_field = "mpoly"
model = Artificielle2018
class CommunesSybarvalSerializer(serializers.GeoFeatureModelSerializer):
"""Marker GeoJSON serializer."""
class Meta:
"""Marker serializer meta class."""
fields = (
"nom",
"code_insee",
"surface",
)
geo_field = "mpoly"
model = CommunesSybarval
class EnveloppeUrbaine2018Serializer(serializers.GeoFeatureModelSerializer):
couverture = s.SerializerMethodField()
def get_couverture(self, obj):
return get_label(code=obj.couverture, label=obj.couverture_label)
class Meta:
fields = (
"id",
"couverture",
"surface",
)
geo_field = "mpoly"
model = EnveloppeUrbaine2018
class OcsgeSerializer(serializers.GeoFeatureModelSerializer):
couverture = s.SerializerMethodField()
usage = s.SerializerMethodField()
def get_couverture(self, obj):
return get_label(code=obj.couverture, label=obj.couverture_label)
def get_usage(self, obj):
return get_label(code=obj.usage, label=obj.usage_label)
class Meta:
fields = (
"id",
"couverture",
"usage",
"millesime",
"map_color",
"year",
)
geo_field = "mpoly"
model = Ocsge
class Renaturee2018to2015Serializer(serializers.GeoFeatureModelSerializer):
usage_2015 = s.SerializerMethodField()
usage_2018 = s.SerializerMethodField()
couverture_2015 = s.SerializerMethodField()
couverture_2018 = s.SerializerMethodField()
def get_usage_2015(self, obj):
return get_label(code=obj.us_2015, label=obj.us_2015_label)
def get_usage_2018(self, obj):
return get_label(code=obj.us_2018, label=obj.us_2018_label)
def get_couverture_2015(self, obj):
return get_label(code=obj.cs_2015, label=obj.cs_2015_label)
def get_couverture_2018(self, obj):
return get_label(code=obj.cs_2018, label=obj.cs_2018_label)
class Meta:
fields = (
"id",
"surface",
"usage_2015",
"usage_2018",
"couverture_2015",
"couverture_2018",
)
geo_field = "mpoly"
model = Renaturee2018to2015
class SybarvalSerializer(serializers.GeoFeatureModelSerializer):
class Meta:
fields = (
"id",
"surface",
)
geo_field = "mpoly"
model = Sybarval
class Voirie2018Serializer(serializers.GeoFeatureModelSerializer):
couverture = s.SerializerMethodField()
usage = s.SerializerMethodField()
def get_couverture(self, obj):
return get_label(code=obj.couverture, label=obj.couverture_label)
def get_usage(self, obj):
return get_label(code=obj.usage, label=obj.usage_label)
class Meta:
fields = (
"id",
"surface",
"couverture",
"usage",
)
geo_field = "mpoly"
model = Voirie2018
class ZonesBaties2018Serializer(serializers.GeoFeatureModelSerializer):
couverture = s.SerializerMethodField()
usage = s.SerializerMethodField()
def get_couverture(self, obj):
return get_label(code=obj.couverture, label=obj.couverture_label)
def get_usage(self, obj):
return get_label(code=obj.usage, label=obj.usage_label)
class Meta:
fields = (
"id",
"couverture",
"usage",
"surface",
)
geo_field = "mpoly"
model = ZonesBaties2018
class CouvertureSolSerializer(serializers.ModelSerializer):
class Meta:
fields = (
"id",
"parent",
"code",
"label",
"is_artificial",
)
model = CouvertureSol
class UsageSolSerializer(serializers.ModelSerializer):
class Meta:
fields = (
"id",
"parent",
"code",
"label",
)
model = UsageSol
lang: Python | ex: 1ce2433af85ae173a032577f5b822a563657cf0d361d83c82c0b3dec51a65a9a | lsh: 0xac9f85bdbd7a80b5
import asyncio
import os
import tempfile
from contextlib import ExitStack
from typing import Text, Optional, List, Union, Dict
from rasa.importers.importer import TrainingDataImporter
from rasa import model
from rasa.model import FingerprintComparisonResult
from rasa.core.domain import Domain
from rasa.utils.common import TempDirectoryPath
from rasa.cli.utils import (
print_success,
print_warning,
print_error,
bcolors,
print_color,
)
from rasa.constants import DEFAULT_MODELS_PATH, DEFAULT_CORE_SUBDIRECTORY_NAME
def train(
domain: Text,
config: Text,
training_files: Union[Text, List[Text]],
output: Text = DEFAULT_MODELS_PATH,
force_training: bool = False,
fixed_model_name: Optional[Text] = None,
persist_nlu_training_data: bool = False,
additional_arguments: Optional[Dict] = None,
loop: Optional[asyncio.AbstractEventLoop] = None,
) -> Optional[Text]:
if loop is None:
try:
loop = asyncio.get_event_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
return loop.run_until_complete(
train_async(
domain=domain,
config=config,
training_files=training_files,
output_path=output,
force_training=force_training,
fixed_model_name=fixed_model_name,
persist_nlu_training_data=persist_nlu_training_data,
additional_arguments=additional_arguments,
)
)
async def train_async(
domain: Union[Domain, Text],
config: Dict[Text, Text],
training_files: Optional[Union[Text, List[Text]]],
output_path: Text = DEFAULT_MODELS_PATH,
force_training: bool = False,
fixed_model_name: Optional[Text] = None,
persist_nlu_training_data: bool = False,
additional_arguments: Optional[Dict] = None,
) -> Optional[Text]:
"""Trains a Rasa model (Core and NLU).
Args:
domain: Path to the domain file.
config: Dict of paths to the config for Core and NLU. Keys are language codes
training_files: Paths to the training data for Core and NLU.
output_path: Output path.
force_training: If `True` retrain model even if data has not changed.
fixed_model_name: Name of model to be stored.
persist_nlu_training_data: `True` if the NLU training data should be persisted
with the model.
additional_arguments: Additional training parameters.
Returns:
Path of the trained model archive.
"""
# file_importer = TrainingDataImporter.load_from_config(
# config, domain, training_files
# )
with ExitStack() as stack:
train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp()))
# bf mod
from rasa_addons.importers import BotfrontFileImporter
file_importer = BotfrontFileImporter(config, domain, training_files)
# domain = await file_importer.get_domain()
# if domain.is_empty():
# return await handle_domain_if_not_exists(
# file_importer, output_path, fixed_model_name
# )
# /bf mod
return await _train_async_internal(
file_importer,
train_path,
output_path,
force_training,
fixed_model_name,
persist_nlu_training_data,
additional_arguments,
)
async def handle_domain_if_not_exists(
file_importer: TrainingDataImporter, output_path, fixed_model_name
):
nlu_model_only = await _train_nlu_with_validated_data(
file_importer, output=output_path, fixed_model_name=fixed_model_name
)
print_warning(
"Core training was skipped because no valid domain file was found. Only an nlu-model was created."
"Please specify a valid domain using '--domain' argument or check if the provided domain file exists."
)
return nlu_model_only
async def _train_async_internal(
file_importer: TrainingDataImporter,
train_path: Text,
output_path: Text,
force_training: bool,
fixed_model_name: Optional[Text],
persist_nlu_training_data: bool,
additional_arguments: Optional[Dict],
) -> Optional[Text]:
"""Trains a Rasa model (Core and NLU). Use only from `train_async`.
Args:
file_importer: `TrainingDataImporter` which supplies the training data.
train_path: Directory in which to train the model.
output_path: Output path.
force_training: If `True` retrain model even if data has not changed.
persist_nlu_training_data: `True` if the NLU training data should be persisted
with the model.
fixed_model_name: Name of model to be stored.
additional_arguments: Additional training parameters.
Returns:
Path of the trained model archive.
"""
stories, nlu_data = await asyncio.gather(
file_importer.get_stories(), file_importer.get_nlu_data()
)
# if stories.is_empty() and nlu_data.is_empty():
# print_error(
# "No training data given. Please provide stories and NLU data in "
# "order to train a Rasa model using the '--data' argument."
# )
# return
# if nlu_data.is_empty():
# print_warning("No NLU data present. Just a Rasa Core model will be trained.")
# return await _train_core_with_validated_data(
# file_importer,
# output=output_path,
# fixed_model_name=fixed_model_name,
# additional_arguments=additional_arguments,
# )
new_fingerprint = await model.model_fingerprint(file_importer)
old_model = model.get_latest_model(output_path)
fingerprint_comparison = FingerprintComparisonResult(force_training=force_training)
if not force_training:
fingerprint_comparison = model.should_retrain(
new_fingerprint, old_model, train_path
)
# bf mod >
if fingerprint_comparison.nlu == True: # replace True with list of all langs
fingerprint_comparison.nlu = list(new_fingerprint.get("nlu-config", {}).keys())
domain = await file_importer.get_domain()
core_untrainable = domain.is_empty() or stories.is_empty()
nlu_untrainable = [l for l, d in nlu_data.items() if d.is_empty()]
fingerprint_comparison.core = fingerprint_comparison.core and not core_untrainable
fingerprint_comparison.nlu = [l for l in fingerprint_comparison.nlu if l not in nlu_untrainable]
if core_untrainable:
print_color("Skipping Core training since domain or stories are empty.", color=bcolors.OKBLUE)
for lang in nlu_untrainable:
print_color("No NLU data found for language <{}>, skipping training...".format(lang), color=bcolors.OKBLUE)
# </ bf mod
if fingerprint_comparison.is_training_required():
await _do_training(
file_importer,
output_path=output_path,
train_path=train_path,
fingerprint_comparison_result=fingerprint_comparison,
fixed_model_name=fixed_model_name,
persist_nlu_training_data=persist_nlu_training_data,
additional_arguments=additional_arguments,
)
return model.package_model(
fingerprint=new_fingerprint,
output_directory=output_path,
train_path=train_path,
fixed_model_name=fixed_model_name,
)
print_success(
"Nothing changed. You can use the old model stored at '{}'."
"".format(os.path.abspath(old_model))
)
return old_model
async def _do_training(
file_importer: TrainingDataImporter,
output_path: Text,
train_path: Text,
fingerprint_comparison_result: Optional[FingerprintComparisonResult] = None,
fixed_model_name: Optional[Text] = None,
persist_nlu_training_data: bool = False,
additional_arguments: Optional[Dict] = None,
):
if not fingerprint_comparison_result:
fingerprint_comparison_result = FingerprintComparisonResult()
if fingerprint_comparison_result.should_retrain_core():
await _train_core_with_validated_data(
file_importer,
output=output_path,
train_path=train_path,
fixed_model_name=fixed_model_name,
additional_arguments=additional_arguments,
)
elif fingerprint_comparison_result.should_retrain_nlg():
print_color(
"Core stories/configuration did not change. "
"Only the templates section has been changed. A new model with "
"the updated templates will be created.",
color=bcolors.OKBLUE,
)
await model.update_model_with_new_domain(file_importer, train_path)
else:
print_color(
"Core stories/configuration did not change. No need to retrain Core model.",
color=bcolors.OKBLUE,
)
if fingerprint_comparison_result.should_retrain_nlu():
await _train_nlu_with_validated_data(
file_importer,
output=output_path,
train_path=train_path,
fixed_model_name=fixed_model_name,
retrain_nlu=fingerprint_comparison_result.nlu,
persist_nlu_training_data=persist_nlu_training_data,
)
else:
print_color(
"NLU data/configuration did not change. No need to retrain NLU model.",
color=bcolors.OKBLUE,
)
def train_core(
domain: Union[Domain, Text],
config: Text,
stories: Text,
output: Text,
train_path: Optional[Text] = None,
fixed_model_name: Optional[Text] = None,
additional_arguments: Optional[Dict] = None,
) -> Optional[Text]:
loop = asyncio.get_event_loop()
return loop.run_until_complete(
train_core_async(
domain=domain,
config=config,
stories=stories,
output=output,
train_path=train_path,
fixed_model_name=fixed_model_name,
additional_arguments=additional_arguments,
)
)
async def train_core_async(
domain: Union[Domain, Text],
config: Text,
stories: Text,
output: Text,
train_path: Optional[Text] = None,
fixed_model_name: Optional[Text] = None,
additional_arguments: Optional[Dict] = None,
) -> Optional[Text]:
"""Trains a Core model.
Args:
domain: Path to the domain file.
config: Path to the config file for Core.
stories: Path to the Core training data.
output: Output path.
train_path: If `None` the model will be trained in a temporary
directory, otherwise in the provided directory.
fixed_model_name: Name of model to be stored.
uncompress: If `True` the model will not be compressed.
additional_arguments: Additional training parameters.
Returns:
If `train_path` is given it returns the path to the model archive,
otherwise the path to the directory with the trained model files.
"""
file_importer = TrainingDataImporter.load_core_importer_from_config(
config, domain, [stories]
)
domain = await file_importer.get_domain()
if domain.is_empty():
print_error(
"Core training was skipped because no valid domain file was found. "
"Please specify a valid domain using '--domain' argument or check if the provided domain file exists."
)
return None
if not await file_importer.get_stories():
print_error(
"No stories given. Please provide stories in order to "
"train a Rasa Core model using the '--stories' argument."
)
return
return await _train_core_with_validated_data(
file_importer,
output=output,
train_path=train_path,
fixed_model_name=fixed_model_name,
additional_arguments=additional_arguments,
)
async def _train_core_with_validated_data(
file_importer: TrainingDataImporter,
output: Text,
train_path: Optional[Text] = None,
fixed_model_name: Optional[Text] = None,
additional_arguments: Optional[Dict] = None,
) -> Optional[Text]:
"""Train Core with validated training and config data."""
import rasa.core.train
with ExitStack() as stack:
if train_path:
# If the train path was provided, do nothing on exit.
_train_path = train_path
else:
# Otherwise, create a temp train path and clean it up on exit.
_train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp()))
# normal (not compare) training
print_color("Training Core model...", color=bcolors.OKBLUE)
domain, config = await asyncio.gather(
file_importer.get_domain(), file_importer.get_config()
)
await rasa.core.train(
domain_file=domain,
training_resource=file_importer,
output_path=os.path.join(_train_path, DEFAULT_CORE_SUBDIRECTORY_NAME),
policy_config=config,
additional_arguments=additional_arguments,
)
print_color("Core model training completed.", color=bcolors.OKBLUE)
if train_path is None:
# Only Core was trained.
new_fingerprint = await model.model_fingerprint(file_importer)
return model.package_model(
fingerprint=new_fingerprint,
output_directory=output,
train_path=_train_path,
fixed_model_name=fixed_model_name,
model_prefix="core-",
)
return _train_path
def train_nlu(
config: Text,
nlu_data: Text,
output: Text,
train_path: Optional[Text] = None,
fixed_model_name: Optional[Text] = None,
persist_nlu_training_data: bool = False,
) -> Optional[Text]:
"""Trains an NLU model.
Args:
config: Path to the config file for NLU.
nlu_data: Path to the NLU training data.
output: Output path.
train_path: If `None` the model will be trained in a temporary
directory, otherwise in the provided directory.
fixed_model_name: Name of the model to be stored.
persist_nlu_training_data: `True` if the NLU training data should be persisted
with the model.
Returns:
If `train_path` is given it returns the path to the model archive,
otherwise the path to the directory with the trained model files.
"""
loop = asyncio.get_event_loop()
return loop.run_until_complete(
_train_nlu_async(
config,
nlu_data,
output,
train_path,
fixed_model_name,
persist_nlu_training_data,
)
)
async def _train_nlu_async(
config: Text,
nlu_data: Text,
output: Text,
train_path: Optional[Text] = None,
fixed_model_name: Optional[Text] = None,
persist_nlu_training_data: bool = False,
):
if not nlu_data:
print_error(
"No NLU data given. Please provide NLU data in order to train "
"a Rasa NLU model using the '--nlu' argument."
)
return
# training NLU only hence the training files still have to be selected
file_importer = TrainingDataImporter.load_nlu_importer_from_config(
config, training_data_paths=[nlu_data]
)
training_datas = await file_importer.get_nlu_data()
if training_datas.is_empty():
print_error(
f"Path '{nlu_data}' doesn't contain valid NLU data in it. "
"Please verify the data format. "
"The NLU model training will be skipped now."
)
return
return await _train_nlu_with_validated_data(
file_importer,
output=output,
train_path=train_path,
fixed_model_name=fixed_model_name,
persist_nlu_training_data=persist_nlu_training_data,
)
async def _train_nlu_with_validated_data(
file_importer: TrainingDataImporter,
output: Text,
train_path: Optional[Text] = None,
fixed_model_name: Optional[Text] = None,
persist_nlu_training_data: bool = False,
retrain_nlu: Union[bool, List[Text]] = True
) -> Optional[Text]:
"""Train NLU with validated training and config data."""
import rasa.nlu.train
with ExitStack() as stack:
models = {}
from rasa.nlu import config as cfg_loader
if train_path:
# If the train path was provided, do nothing on exit.
_train_path = train_path
else:
# Otherwise, create a temp train path and clean it up on exit.
_train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp()))
# bf mod
config = await file_importer.get_nlu_config(retrain_nlu)
for lang in config:
if config[lang]:
print_color("Start training {} NLU model ...".format(lang), color=bcolors.OKBLUE)
_, models[lang], _ = await rasa.nlu.train(
config[lang],
file_importer,
_train_path,
fixed_model_name="nlu-{}".format(lang),
persist_nlu_training_data=persist_nlu_training_data,
)
else:
print_color("NLU data for language <{}> didn't change, skipping training...".format(lang), color=bcolors.OKBLUE)
# /bf mod
print_color("NLU model training completed.", color=bcolors.OKBLUE)
if train_path is None:
# Only NLU was trained
new_fingerprint = await model.model_fingerprint(file_importer)
return model.package_model(
fingerprint=new_fingerprint,
output_directory=output,
train_path=_train_path,
fixed_model_name=fixed_model_name,
model_prefix="nlu-",
)
return _train_path
lang: Python | ex: 19d1564042f06afd9ff6d3daf82c6f3356a7d9d8dfdf558a7cd1fa13be766c3c | lsh: 0xef44ca738ed102d5
<gh_stars>1-10
class Solution:
def finalPrices(self, prices: List[int]) -> List[int]:
res = []
for i in range(len(prices)):
for j in range(i+1,len(prices)):
if prices[j]<=prices[i]:
res.append(prices[i]-prices[j])
break
if j==len(prices)-1:
res.append(prices[i])
res.append(prices[-1])
return res
lang: Python | ex: a7d86c06e5f0bd5932f94342ef1cd1419c14922f3e3bfeae0ce44b4dcda06eae | lsh: 0xb8e2b66344669647
<filename>PyDSTool/core/context_managers.py
# -*- coding: utf-8 -*-
"""Context managers implemented for (mostly) internal use"""
import contextlib
import functools
from io import UnsupportedOperation
import os
import sys
__all__ = ["RedirectStdout", "RedirectStderr"]
@contextlib.contextmanager
def _stdchannel_redirected(stdchannel, dest_filename, mode="w"):
"""
A context manager to temporarily redirect stdout or stderr
Originally by <NAME>, 2013
(http://marc-abramowitz.com/archives/2013/07/19/python-context-manager-for-redirected-stdout-and-stderr/)
"""
oldstdchannel = None
dest_file = None
try:
if stdchannel is None:
yield iter([None])
else:
oldstdchannel = os.dup(stdchannel.fileno())
dest_file = open(dest_filename, mode)
os.dup2(dest_file.fileno(), stdchannel.fileno())
yield
except (UnsupportedOperation, AttributeError):
yield iter([None])
finally:
if oldstdchannel is not None:
os.dup2(oldstdchannel, stdchannel.fileno())
if dest_file is not None:
dest_file.close()
RedirectStdout = functools.partial(_stdchannel_redirected, sys.stdout)
RedirectStderr = functools.partial(_stdchannel_redirected, sys.stderr)
RedirectNoOp = functools.partial(_stdchannel_redirected, None, "")
lang: Python | ex: e9ce9f6797f684f4125af3c8f2f3ca8451fe24e473a3c87a799d7deb467d057e | lsh: 0x5e686108d3d53801
<gh_stars>1-10
from keras import Model, Input
from keras.layers import Dense, concatenate, LSTM, Reshape, Permute, Embedding, Dropout, Convolution1D, Flatten
from keras.optimizers import Adam
from pypagai.models.base import KerasModel
class SimpleLSTM(KerasModel):
"""
Use a simple lstm neural network
"""
@staticmethod
def default_config():
config = KerasModel.default_config()
config['hidden'] = 32
return config
def __init__(self, cfg):
super().__init__(cfg)
self._cfg_ = cfg
def _create_network_(self):
hidden = self._cfg_['hidden']
story = Input((self._story_maxlen, ), name='story')
question = Input((self._query_maxlen, ), name='question')
conc = concatenate([story, question],)
conc = Reshape((1, int(conc.shape[1])))(conc)
conc = Permute((2, 1))(conc)
response = LSTM(hidden, dropout=0.2, recurrent_dropout=0.2)(conc)
response = Dense(self._vocab_size, activation='softmax')(response)
self._model = Model(inputs=[story, question], outputs=response)
self._model.compile(optimizer=Adam(lr=2e-4), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
class EmbedLSTM(KerasModel):
"""
Use a simple lstm neural network
"""
@staticmethod
def default_config():
config = KerasModel.default_config()
config['hidden'] = 32
return config
def __init__(self, cfg):
super().__init__(cfg)
self._cfg_ = cfg
def _create_network_(self):
hidden = self._cfg_['hidden']
story = Input((self._story_maxlen, ), name='story')
question = Input((self._query_maxlen, ), name='question')
eb_story = Embedding(self._vocab_size, 64)(story)
eb_story = Dropout(0.3)(eb_story)
eb_question = Embedding(self._vocab_size, 64)(question)
eb_question = Dropout(0.3)(eb_question)
conc = concatenate([eb_story, eb_question], axis=1)
response = LSTM(hidden, dropout=0.2, recurrent_dropout=0.2)(conc)
response = Dense(self._vocab_size, activation='softmax')(response)
self._model = Model(inputs=[story, question], outputs=response)
self._model.compile(optimizer=Adam(lr=2e-4), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
class ConvLSTM(KerasModel):
"""
Use a simple lstm neural network
"""
@staticmethod
def default_config():
config = KerasModel.default_config()
config['hidden'] = 32
return config
def __init__(self, model_cfg):
super().__init__(model_cfg)
self._cfg = model_cfg
def _create_network_(self):
hidden = self._cfg['hidden']
story = Input((self._story_maxlen, ), name='story')
question = Input((self._query_maxlen, ), name='question')
eb_story = Embedding(self._vocab_size, 64)(story)
eb_story = Convolution1D(64, 3, padding='same')(eb_story)
eb_story = Convolution1D(32, 3, padding='same')(eb_story)
eb_story = Convolution1D(16, 3, padding='same')(eb_story)
# eb_story = Flatten()(eb_story)
eb_question = Embedding(self._vocab_size, 64)(question)
eb_question = Convolution1D(64, 3, padding='same')(eb_question)
eb_question = Convolution1D(32, 3, padding='same')(eb_question)
eb_question = Convolution1D(16, 3, padding='same')(eb_question)
# eb_question = Flatten()(eb_question)
conc = concatenate([eb_story, eb_question], axis=1)
response = LSTM(hidden, dropout=0.2, recurrent_dropout=0.2)(conc)
response = Dense(self._vocab_size, activation='softmax')(response)
self._model = Model(inputs=[story, question], outputs=response)
self._model.compile(optimizer=Adam(lr=2e-4), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
lang: Python | ex: 6ff5eb1643faabfe79e7cf12b3b77bdafbf223256b942b1d5db7a26437ee9a32 | lsh: 0x2b079c821b48eb17
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# Author:
''' PNASNet in PyTorch.
Paper: Progressive Neural Architecture Search
'''
from easyai.base_name.block_name import NormalizationType, ActivationType
from easyai.base_name.backbone_name import BackboneName
from easyai.model.backbone.utility.base_backbone import *
from easyai.model.base_block.utility.utility_block import ConvBNActivationBlock
from easyai.model.base_block.cls.pnasnet_block import CellA, CellB
__all__ = ['pnasnet_A', 'pnasnet_B']
class PNASNet(BaseBackbone):
def __init__(self, data_channel=3, num_cells=6,
num_planes=44, block=CellA,
bnName=NormalizationType.BatchNormalize2d,
activationName=ActivationType.ReLU):
super().__init__()
self.set_name(BackboneName.PNASNetA)
self.data_channel = data_channel
self.num_cells = num_cells
self.block = block
self.activation_name = activationName
self.bn_name = bnName
self.first_output = num_planes
self.in_planes = self.first_output
self.create_block_list()
def create_block_list(self):
self.block_out_channels = []
self.index = 0
layer1 = ConvBNActivationBlock(in_channels=self.data_channel,
out_channels=self.first_output,
kernel_size=3,
stride=1,
padding=1,
bias=False,
bnName=self.bn_name,
activationName=self.activation_name)
self.add_block_list(layer1.get_name(), layer1, self.first_output)
self.make_layer(self.first_output, self.num_cells)
self.downsample(self.first_output * 2)
self.make_layer(self.first_output * 2, self.num_cells)
self.downsample(self.first_output * 4)
self.make_layer(self.first_output * 4, self.num_cells)
def make_layer(self, planes, num_cells):
for _ in range(num_cells):
temp_block = self.block(self.in_planes, planes, stride=1,
bn_name=self.bn_name, activation_name=self.activation_name)
self.add_block_list(temp_block.get_name(), temp_block, planes)
self.in_planes = planes
def downsample(self, planes):
down_block = self.block(self.in_planes, planes, stride=2,
bn_name=self.bn_name, activation_name=self.activation_name)
self.add_block_list(down_block.get_name(), down_block, planes)
self.in_planes = planes
def forward(self, x):
output_list = []
for block in self._modules.values():
x = block(x)
output_list.append(x)
return output_list
def pnasnet_A(data_channel):
model = PNASNet(data_channel=data_channel,
num_cells=6,
num_planes=44,
block=CellA)
model.set_name(BackboneName.PNASNetA)
return model
def pnasnet_B(data_channel):
model = PNASNet(data_channel=data_channel,
num_cells=6, num_planes=32,
block=CellB)
model.set_name(BackboneName.PNASNetB)
return model
lang: Python | ex: 1e7ae79e5a84953e5230479e541e934f228111b32411a607f7b5903ae33da37f | lsh: 0x20bb96e146c5b286
# -*- coding: utf-8 -*-
# coding=utf-8
import json
import os
import math
import logging
import requests
import time
from map_download.cmd.BaseDownloader import DownloadEngine, BaseDownloaderThread, latlng2tile_terrain, BoundBox
def get_access_token(token):
resp = None
request_count = 0
url = "https://api.cesium.com/v1/assets/1/endpoint"
while True:
if request_count > 4:
break
try:
request_count += 1
param = {'access_token': token}
resp = requests.get(url, params=param, timeout=2)
if resp.status_code != 200:
continue
break
except Exception as e:
resp = None
time.sleep(3)
if resp is None:
return None
resp_json = resp.json()
return resp_json.get('accessToken')
class TerrainDownloaderThread(BaseDownloaderThread):
URL = "https://assets.cesium.com/1/{z}/{x}/{y}.terrain?extensions=octvertexnormals-watermask&v=1.1.0"
def __init__(self, root_dir, bbox, token, task_q, logger=None, write_db=False):
super(TerrainDownloaderThread, self).__init__(
root_dir, bbox, task_q, logger, write_db=write_db, db_file_name='Terrain.db')
self.token = token
self._init_metadata(
format='terrain',
bounds='%f,%f,%f,%f' % (self.bbox.min_lng, self.bbox.min_lat, self.bbox.max_lng, self.bbox.max_lat))
def get_url(self, x, y, z):
return self.URL.format(x=x, y=y, z=z)
def _download(self, x, y, z):
file_path = '%s/%s/%i/%i/%i.%s' % (self.root_dir, 'Terrain', z, x, y, 'terrain')
if os.path.exists(file_path):
self._data2DB(x, y, z, file_path)
return 0
os.makedirs(os.path.dirname(file_path), exist_ok=True)
resp = None
requre_count = 0
_url = ''
access_token = get_access_token(self.token)
if access_token is None:
return -1
param = {'extensions': 'octvertexnormals-watermask', 'v': '1.1.0', 'access_token': access_token}
while True:
if requre_count > 4: break
try:
_url = self.get_url(x, y, z)
resp = requests.get(_url, params=param, stream=True, timeout=2)
break
except Exception as e:
resp = None
time.sleep(3)
requre_count += 1
if resp is None:
return -1
if resp.status_code != 200:
return -1
try:
with open(file_path, 'wb') as f:
for chunk in resp.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
except Exception as e:
return -1
self._data2DB(x, y, z, file_path)
return 1
class TerrainDownloadEngine(DownloadEngine):
root_dir = ''
def __init__(self, root_dir, bbox, token, thread_num, logger=None, write_db=False):
super(TerrainDownloadEngine, self).__init__(bbox, thread_num, logger, write_db=write_db)
self.root_dir = root_dir
self.token = token
def bbox2xyz(self, bbox, z):
min_x, min_y = latlng2tile_terrain(bbox.min_lat, bbox.min_lng, z)
max_x, max_y = latlng2tile_terrain(bbox.max_lat, bbox.max_lng, z)
return math.floor(min_x), math.floor(min_y), math.ceil(max_x) + 1, math.ceil(max_y) + 1
def generate_metadata(self):
try:
metadatas = {
"attribution": "© Analytical Graphics Inc., © CGIAR-CSI, Produced using Copernicus data and "
"information funded by the European Union - EU-DEM layers",
"available": [
[
{
"endX": 1,
"endY": 0,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 3,
"endY": 1,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 7,
"endY": 3,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 15,
"endY": 7,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 31,
"endY": 15,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 63,
"endY": 31,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 127,
"endY": 63,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 255,
"endY": 127,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 511,
"endY": 255,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 1023,
"endY": 511,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 2047,
"endY": 1023,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 4095,
"endY": 2047,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 8191,
"endY": 4095,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 16383,
"endY": 8191,
"startX": 0,
"startY": 0
}
],
[
{
"endX": 32767,
"endY": 16383,
"startX": 0,
"startY": 0
}
]
],
"bounds": [-180, -90, 180, 90, ],
"description": "STK World Terrain Premium Tileset, v1.3. 10m - 30m resolution CONUS, 30m resolution "
"SRTM between 60N and 60S, 30m Europe. Minimum global coverage of 1000m.",
"extensions": ["watermask", "vertexnormals", "octvertexnormals", ],
"format": "quantized-mesh-1.0",
"maxzoom": 13,
"minzoom": 0,
"name": "world",
"projection": "EPSG:4326",
"scheme": "tms",
"tilejson": "2.1.0",
"tiles": ["{z}/{x}/{y}.terrain?v={version}", ],
"version": "1.31376.0"
}
_dir = os.path.join(self.root_dir, 'Terrain')
os.makedirs(_dir, exist_ok=True)
metadatas_path = os.path.join(_dir, 'layer.json')
with open(metadatas_path, 'w') as f:
json.dump(metadatas, f)
except Exception as e:
if self.logger is not None:
self.logger.exception(e)
def run(self):
try:
self.generate_metadata()
count = 0
bboxs = self.cut_bbox()
for bbox in bboxs:
_count = self.get_task_count(bbox)
count += _count
self.division_done_signal.emit(count)
for bbox in bboxs:
while True:
if not self.running:
time.sleep(0.01)
else:
break
task_q = self.get_task_queue(bbox)
self.threads = []
for i in range(self.thread_num):
thread = TerrainDownloaderThread(self.root_dir, self.bbox, self.token, task_q, self.logger,
write_db=self.write_db)
thread.sub_progressBar_updated_signal.connect(self.sub_update_progressBar)
self.threads.append(thread)
for thread in self.threads:
thread.start()
for thread in self.threads:
thread.wait()
for t in self.threads:
t.stop()
t.quit()
self.threads = []
self.download_done_signal.emit()
except Exception as e:
if self.logger is not None:
self.logger.error(e)
if __name__ == '__main__':
if 1:
logger = logging.getLogger('down')
try:
root = r'/Users/cugxy/Documents/data/downloader'
formatter = logging.Formatter('%(levelname)s-%(message)s')
hdlr = logging.StreamHandler()
log_file = os.path.join(root, 'down.log')
file_hdlr = logging.FileHandler(log_file)
file_hdlr.setFormatter(formatter)
logger.addHandler(file_hdlr)
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)
min_lng = -180.0
max_lng = 180.0
min_lat = -90.0
max_lat = 90.0
start_zoom = 0
end_zoom = 5
bbox = BoundBox(max_lat, max_lng, min_lat, min_lng, start_zoom, end_zoom)
d = TerrainDownloadEngine(root, bbox, 8, logger)
d.start()
time.sleep(10000)
logger.error('main thread out')
except Exception as e:
logger.error(e)
if 0:
accessToken = get_access_token()
pass
|
Python
|
8ba343fa3627630456d4d8758a7ce2f4bc8e55f923f2ee88df259080c5cad8d4
|
0x427f14a482e52c56
|
<reponame>vahini01/electoral_rolls
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Nov 10 23:28:58 2017
@author: dhingratul
"""
import urllib.request
import os
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
import ssl
import requests
import wget
from PyPDF2 import PdfFileReader
def download_file(pdf_url, mdir, filename, flag=False):
if flag is True:
context = ssl._create_unverified_context()
response = urllib.request.urlopen(pdf_url, context=context)
else:
response = urllib.request.urlopen(pdf_url)
filename = mdir + filename
file = open(filename, 'wb')
file.write(response.read())
if os.stat(filename).st_size == 0:
flag = 0
else:
flag = 1
file.close()
return flag
def download_file_R(pdf_url, mdir, filename, file_out):
requests.packages.urllib3.disable_warnings()
while True: # Keep trying until the webpage successfully downloads
try:
r = requests.get(pdf_url, verify=False, timeout=10)
break # If it downloads, get out and get on with life
# If it doesn't download within the timeout period, an exception is thrown, and we try again
except requests.exceptions.RequestException as e:
with open(file_out, "a") as myfile:
myfile.write(pdf_url + '\n')
filename = mdir + filename
with open(filename, 'wb') as f:
f.write(r.content)
if os.stat(filename).st_size == 0:
flag = 0
else:
flag = 1
return flag
def download_file_W(pdf_url, mdir, filename, flag=False):
filename = mdir + filename
ssl._create_default_https_context = ssl._create_unverified_context
wget.download(pdf_url, filename)
if os.stat(filename).st_size == 0:
flag = 0
else:
flag = 1
return flag
def getDriver(url):
driver = webdriver.Chrome()
driver.get(url)
return driver
def is_valid_pdf(fn):
"""Check is the PDF valid """
try:
with open(fn, 'rb') as f:
pdf = PdfFileReader(f)
numpages = pdf.numPages
return (numpages > 0)
except Exception as e:
return False
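# Minimal usage sketch (added; not part of the original script — the URL and paths below are
# placeholders): fetch one PDF with the retrying downloader and discard it if it does not parse.
if __name__ == '__main__':
    ok = download_file_R('https://example.org/sample.pdf', '/tmp/', 'sample.pdf', '/tmp/failed_urls.txt')
    if ok and not is_valid_pdf('/tmp/sample.pdf'):
        os.remove('/tmp/sample.pdf')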
|
Python
|
3ba95fbf0592a79440bda446efca68243b40937d2869cd5a0a0441841b9abdfd
|
0xabfdaeb793e02b8c
|
<gh_stars>0
"""
Experiment summary
------------------
Treat each province/state's cases over time within a country
as a vector, then do a simple K-Nearest-Neighbor comparison between
countries. Which country has the most similar trajectory
to a given country?
Plots the most similar countries.
"""
import sys
sys.path.insert(0, '..')
from utils import data
import os
import sklearn
import numpy as np
import json
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# ------------ HYPERPARAMETERS -------------
BASE_PATH = '../COVID-19/csse_covid_19_data/'
# ------------------------------------------
confirmed = os.path.join(
BASE_PATH,
'csse_covid_19_time_series',
'time_series_covid19_confirmed_global.csv')
confirmed = data.load_csv_data(confirmed)
features = []
targets = []
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111)
cm = plt.get_cmap('jet')
NUM_COLORS = 0
LINE_STYLES = ['solid', 'dashed', 'dotted']
NUM_STYLES = len(LINE_STYLES)
dist_diff = os.path.join('../exp/results/', 'knn_raw.json')
f = open(dist_diff,)
dist_diff = json.load(f)
for region, dist in dist_diff.items():
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111)
cm = plt.get_cmap('jet')
other_region = dist['manhattan'][0]
regions = [region, other_region]
for val in regions:
df = data.filter_by_attribute(
confirmed, "Country/Region", val)
cases, labels = data.get_cases_chronologically(df)
cases = cases.sum(axis=0)
lines = ax.plot(cases, label=val)
ax.set_ylabel('# of confirmed cases')
ax.set_xlabel("Time (days since Jan 22, 2020)")
ax.set_yscale('log')
ax.legend()
plt.tight_layout()
region = region.replace('*', '')
other_region = other_region.replace('*', '')
plt.title(f'Comparing confirmed cases in {region} and {other_region}')
plt.savefig(f'results/raw_manhattan/{region}.png')
plt.close()
print(region)
|
Python
|
3892bbaa446c6859124ea678a66a873eb57e72ef3d4e82ef6011a9599473cb90
|
0x3e09e178e315608c
|
<reponame>steven-lang/rational_activations
"""
Rational Activation Functions for MXNET
=======================================
This module allows you to create Rational Neural Networks using Learnable
Rational activation functions with MXNET networks.
"""
import mxnet as mx
from mxnet import initializer
from mxnet.gluon import HybridBlock
from rational.utils.get_weights import get_parameters
from rational.mxnet.versions import _version_a, _version_b, _version_c, _version_d
from rational._base.rational_base import Rational_base
class Rational(Rational_base, HybridBlock):
"""
Rational Activation Function, inheriting from ``mxnet.gluon.HybridBlock``.
Arguments:
approx_func (str):
The name of the approximated function for initialisation.
The different functions are available in `rational.rationals_config.json`.
Default: ``leaky_relu``
degrees (tuple of int):
The degrees of the numerator (P) and denominator (Q).
Default ``(5, 4)``
cuda (bool):
whether to execute on cuda device.
NOTE: THIS PARAMETER IS CURRENTLY NOT CONSIDERED.
CUDA GPUS ARE USED WHEN IT IS POSSIBLE
version (str):
Version of Rational to use. Rational(x) = P(x)/Q(x),
where
P(x) = (a_0 + a_1 * x + a_2 * x^2 + ... + a_n * x^n) and
`A`: Q(x) = (1 + |b_0 * x| + | b_1 * x^2| + ... + | b_m * x^{m+1}|)
`B`: Q(x) = (1 + |b_0 * x + b_1 * x^2 + ... + b_m * x^{m + 1}|)
`C`: Q(x) = (0.1 + |b_0 + b_1 * x + b_2 * x^2 + ... + b_m * x^m|)
`D`: like `B` with noised coefficients b_i
Default ``A``
trainable (bool):
Whether the weights are trainable, i.e, if they are updated during
backward pass.
Default ``True``
Returns:
HybridBlock:
Rational hybrid block
"""
def __init__(self, approx_func='leaky_relu', degrees=(5, 4), cuda=False,
version='A', trainable=True, **kwargs):
super(Rational, self).__init__(**kwargs)
# read initial parameter configuration from external files
w_numerator, w_denominator = get_parameters(
version, degrees, approx_func)
# convert w_numerator and w_denominator to mxnet arrays
w_numerator = mx.nd.array(w_numerator)
w_denominator = mx.nd.array(w_denominator)
# register the amount of weights in numerator and denominator, since we need them during
# symbolic execution, but are unable to retrieve them at later stages
self.numerator_length = len(w_numerator)
self.denominator_length = len(w_denominator)
self.training = trainable
self.degrees = degrees
self.version = version
self.init_approximation = approx_func
# set specified context (currently not happening, since unclear, how and why helpful)
# self.device = gpu() if cuda else cpu()
# register and configure weights (numerator and denominator coefficients)
with self.name_scope():
self.numerator = self.params.get(name='w_numerator', shape=(len(w_numerator),),
init=initializer.Constant(
w_numerator),
grad_req='write' if trainable
else 'null',
differentiable=trainable)
self.denominator = self.params.get(name='w_denominator', shape=(len(w_denominator),),
init=initializer.Constant(
w_denominator),
grad_req='write' if trainable
else 'null',
differentiable=trainable)
# register whether function is trainable, since this information needs to be passed to
# version D
self.training = trainable
self.init_approximation = approx_func
# set rational activation function version
self.rational_func = {'A': _version_a, 'B': _version_b, 'C': _version_c, 'D': _version_d} \
.get(version)
if self.rational_func is None:
raise ValueError(
"rational activation function version %s not implemented" % version)
def hybrid_forward(self, F, x, numerator, denominator):
return self.rational_func(F, x, numerator, denominator, self.training,
self.numerator_length, self.denominator_length)
def numpy(self):
"""
Returns a numpy version of this activation function.
"""
from rational.numpy import Rational as Rational_numpy
rational_n = Rational_numpy(self.init_approximation, self.degrees,
self.version)
rational_n.numerator = self.numerator.data().asnumpy().tolist()
rational_n.denominator = self.denominator.data().asnumpy().tolist()
return rational_n
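# Minimal usage sketch (added; assumes mxnet and the rational package are installed and
# configured — illustrative only, not part of the module): the Rational block drops into a
# Gluon network like any other activation layer.
if __name__ == '__main__':
    from mxnet.gluon import nn
    net = nn.HybridSequential()
    net.add(nn.Dense(16), Rational(approx_func='leaky_relu', version='A'), nn.Dense(1))
    net.initialize()
    out = net(mx.nd.random.uniform(shape=(4, 8)))
    print(out.shape)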
|
Python
|
5ff164365206465f3f2318f2b8152fc1224060dc3005ded9b59fca7a0df5cb33
|
0x69ddd8e0a0dc8a9c
|
<filename>torchflare/criterion/utils.py<gh_stars>1-10
"""Utils for criterion."""
import torch
import torch.nn.functional as F
def normalize(x, axis=-1):
"""Performs L2-Norm."""
num = x
denom = torch.norm(x, 2, axis, keepdim=True).expand_as(x) + 1e-12
return num / denom
# Source : https://github.com/earhian/Humpback-Whale-Identification-1st-/blob/master/models/triplet_loss.py
def euclidean_dist(x, y):
"""Computes Euclidean distance."""
m, n = x.size(0), y.size(0)
xx = torch.pow(x, 2).sum(1, keepdim=True).expand(m, n)
yy = torch.pow(y, 2).sum(1, keepdim=True).expand(n, m).t()
dist = xx + yy - 2 * torch.matmul(x, y.t())
dist = dist.clamp(min=1e-12).sqrt()
return dist
def cosine_dist(x, y):
"""Computes Cosine Distance."""
x = F.normalize(x, dim=1)
y = F.normalize(y, dim=1)
dist = 2 - 2 * torch.mm(x, y.t())
return dist
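# Quick shape check (added; illustrative only, not part of the library): pairwise distances
# between 4 and 6 embeddings of dimension 8 should come out as a 4 x 6 matrix.
if __name__ == '__main__':
    a, b = torch.randn(4, 8), torch.randn(6, 8)
    print(euclidean_dist(a, b).shape)  # torch.Size([4, 6])
    print(cosine_dist(a, b).shape)     # torch.Size([4, 6])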
|
Python
|
78ce00ca6c55feba8354953695d1668c7ac148d1556ceb77b93e26fb811a80c0
|
0x9a5d82ebc45e698c
|
<reponame>geudrik/hautomation
#! /usr/bin/env python2.7
# -*- coding: latin-1 -*-
from flask import Blueprint
from flask import current_app
from flask import render_template
from flask_login import login_required
homestack = Blueprint("homestack", __name__, url_prefix="/homestack")
@homestack.route("/", methods=["GET"])
@login_required
def home():
return render_template("homestack/home.html")
|
Python
|
a0838d3a04f088d52fb7f7ff2895f2afb5516d976530b1b023edd8a5b5c1e563
|
0x221d82b8d9de228c
|
"""Forms for RTD donations"""
import logging
from django import forms
from django.conf import settings
from django.utils.translation import ugettext_lazy as _
from readthedocs.payments.forms import StripeModelForm, StripeResourceMixin
from readthedocs.payments.utils import stripe
from .models import Supporter
log = logging.getLogger(__name__)
class SupporterForm(StripeResourceMixin, StripeModelForm):
"""Donation support sign up form
This extends the basic payment form, giving fields for credit card number,
expiry, and CVV. The proper Knockout data bindings are established on
:py:class:`StripeModelForm`
"""
class Meta:
model = Supporter
fields = (
'last_4_digits',
'name',
'email',
'dollars',
'logo_url',
'site_url',
'public',
)
labels = {
'public': _('Make this donation public'),
}
help_texts = {
'public': _('Your name and image will be displayed on the donation page'),
'email': _('Your email is used for Gravatar and so we can send you a receipt'),
'logo_url': _("URL of your company's logo, images should be 300x300 pixels or less"),
'dollars': _('Companies donating over $400 can specify a logo URL and site link'),
}
widgets = {
'dollars': forms.HiddenInput(attrs={
'data-bind': 'value: dollars'
}),
'logo_url': forms.TextInput(attrs={
'data-bind': 'value: logo_url, enable: urls_enabled'
}),
'site_url': forms.TextInput(attrs={
'data-bind': 'value: site_url, enable: urls_enabled'
}),
'last_4_digits': forms.TextInput(attrs={
'data-bind': 'valueInit: card_digits, value: card_digits'
}),
}
last_4_digits = forms.CharField(widget=forms.HiddenInput(), required=True)
name = forms.CharField(required=True)
email = forms.CharField(required=True)
def __init__(self, *args, **kwargs):
self.user = kwargs.pop('user')
super(SupporterForm, self).__init__(*args, **kwargs)
def validate_stripe(self):
"""Call stripe for payment (not ideal here) and clean up logo < $200"""
dollars = self.cleaned_data['dollars']
if dollars < 200:
self.cleaned_data['logo_url'] = None
self.cleaned_data['site_url'] = None
stripe.Charge.create(
amount=int(self.cleaned_data['dollars']) * 100,
currency='usd',
source=self.cleaned_data['stripe_token'],
description='Read the Docs Sustained Engineering',
receipt_email=self.cleaned_data['email']
)
def save(self, commit=True):
supporter = super(SupporterForm, self).save(commit)
if commit and self.user is not None and self.user.is_authenticated():
supporter.user = self.user
supporter.save()
return supporter
class EthicalAdForm(StripeResourceMixin, StripeModelForm):
"""Payment form for ethical ads
This extends the basic payment form, giving fields for credit card number,
expiry, and CVV. The proper Knockout data bindings are established on
:py:class:`StripeModelForm`
"""
class Meta:
model = Supporter
fields = (
'last_4_digits',
'name',
'email',
'dollars',
)
help_texts = {
'email': _('Your email is used so we can send you a receipt'),
}
widgets = {
'dollars': forms.HiddenInput(attrs={
'data-bind': 'value: dollars'
}),
'last_4_digits': forms.TextInput(attrs={
'data-bind': 'valueInit: card_digits, value: card_digits'
}),
}
last_4_digits = forms.CharField(widget=forms.HiddenInput(), required=True)
name = forms.CharField(required=True)
email = forms.CharField(required=True)
def validate_stripe(self):
stripe.Charge.create(
amount=int(self.cleaned_data['dollars']) * 100,
currency='usd',
source=self.cleaned_data['stripe_token'],
description='Read the Docs Sponsorship Payment',
receipt_email=self.cleaned_data['email']
)
|
Python
|
4614dcc698cbb82e6ce9b48f236a1a67155190b773eba2d7ca26337974526099
|
0xd40c027b5addd4f4
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from .base import DataReaderBase
from ..tools import COL, _get_dates, to_float, to_int
import pandas as pd
#from pandas.tseries.frequencies import to_offset
from six.moves import cStringIO as StringIO
import logging
import traceback
import datetime
import json
import token, tokenize
def ymd_to_date(y, m, d):
"""
Returns date
>>> expiration = {u'd': 1, u'm': 12, u'y': 2014}
>>> ymd_to_date(**expiration)
datetime.date(2014, 12, 1)
>>> ymd_to_date(2014, 3, 1)
datetime.date(2014, 3, 1)
"""
return(datetime.date(year=y, month=m, day=d))
def date_to_ymd(date):
"""
Returns dict like {'y': ..., 'm': ..., 'd': ...}
>>> date_to_ymd(datetime.date(year=2010, month=1, day=3))
{'y': 2010, 'm': 1, 'd': 3}
"""
d = {
'y': date.year,
'm': date.month,
'd': date.day
}
return(d)
def fix_lazy_json(in_text):
"""
Handle lazy JSON - to fix expecting property name
this function fixes the json output from google
http://stackoverflow.com/questions/4033633/handling-lazy-json-in-python-expecting-property-name
"""
tokengen = tokenize.generate_tokens(StringIO(in_text).readline)
result = []
for tokid, tokval, _, _, _ in tokengen:
# fix unquoted strings
if (tokid == token.NAME):
if tokval not in ['true', 'false', 'null', '-Infinity', 'Infinity', 'NaN']:
tokid = token.STRING
tokval = u'"%s"' % tokval
# fix single-quoted strings
elif (tokid == token.STRING):
if tokval.startswith ("'"):
tokval = u'"%s"' % tokval[1:-1].replace ('"', '\\"')
# remove invalid commas
elif (tokid == token.OP) and ((tokval == '}') or (tokval == ']')):
if (len(result) > 0) and (result[-1][1] == ','):
result.pop()
result.append((tokid, tokval))
return tokenize.untokenize(result)
def json_decode(json_string):
try:
ret = json.loads(json_string)
except:
json_string = fix_lazy_json(json_string)
ret = json.loads(json_string)
return ret
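# Illustrative example (added; not part of the original module): json_decode copes with the
# "lazy" JSON that Google Finance returns, e.g. unquoted keys and trailing commas:
#   json_decode("{expiry: {y: 2014, m: 12, d: 1},}")  ->  {'expiry': {'y': 2014, 'm': 12, 'd': 1}}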
class DataReaderGoogleFinanceOptions(DataReaderBase):
"""
DataReader to fetch data from Google Finance Options
see https://www.google.com/finance/option_chain
https://github.com/makmac213/python-google-option-chain
http://www.drtomstarke.com/index.php/option-chains-from-google-finance-api
"""
def init(self, *args, **kwargs):
self._get_multi = self._get_multi_todict
def _get_one(self, name, *args, **kwargs):
return(self._get_one_raw(name, 'All', 'json'))
def _get_one_raw(self, symbol, typ='All', output='json', y='2014', m='12', d='1'):
url = "https://www.google.com/finance/option_chain"
params = {
'q': symbol,
'type': typ,
'output': output,
}
data = self._get_content(url, params)
d = {}
lst = []
for typ in [u'puts', u'calls']:
df_typ = pd.DataFrame(data[typ])
df_typ['Type'] = typ
lst.append(df_typ)
del data[typ]
for i, expiration in enumerate(data['expirations']):
params = {
'q': symbol,
'output': output,
'expy': expiration['y'],
'expm': expiration['m'],
'expd': expiration['d'],
}
data = self._get_content(url, params)
for typ in [u'puts', u'calls']:
df_typ = pd.DataFrame(data[typ])
df_typ['Type'] = typ
lst.append(df_typ)
del data[typ]
df = pd.concat(lst, axis=0, ignore_index=True)
d_cols = {
"a": "Ask",
"b": "Bid",
"p": "Last",
"strike": "Strike",
"expiry": "Expiry",
"vol": "Volume",
"name": "Name"
}
df = df.rename(columns=d_cols)
"""
d_cols = {
"a": "ask",
"b": "bid",
"c": "change",
"cid": "identity code",
"cp": "cp"
"cs": change direction. "chg" = up, "chr" = down, "chg"?
"e": # I think this tells us something about what country where the stock is traded. "OPRA" means USA.
"expiry": expiration date for this option
"name": I don't know. I have never seen a value for this
"oi": open interest. How many of these are currently being held by others.
See, http://www.investopedia.com/terms/o/openinterest.asp
"p": price, last
"s": option code.
Basically, Stock Symbol + 7 if mini option + date + "C" or "P" + price
"strike": "strike price for this option"
"vol": "the volume of options traded."
}
"""
for col in ['Ask', 'Bid', 'c', 'cp', 'Last', 'Strike']:
df[col] = df[col].map(to_float)
for col in ['Volume', 'oi', 'cid']:
df[col] = df[col].map(to_int)
df['Expiry'] = pd.to_datetime(df['Expiry'])
data['options'] = df
data['underlying_id'] = int(data['underlying_id'])
data['expiry'] = ymd_to_date(**data['expiry'])
for i, expiration in enumerate(data['expirations']):
data['expirations'][i] = ymd_to_date(**expiration)
#for col in ['Volume']:
# df[col] = df[col].fillna(0)
#d = {}
#d["options"] = df
#return(d)
return(data)
def _get_content(self, url, params):
#response = requests.get(url, params=params)
response = self.session.get(url, params=params)
if response.status_code == 200:
content_json = response.text
data = json_decode(content_json)
return(data)
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Python
|
0f9d5d5311a69262771a50a6f7993d96b0a1573a9bcc4dd6c1d48e1bb3fe5938
|
0xa1d964f3a37826f5
|
<reponame>Vail-qin/Keras-TextClassification
# !/usr/bin/python
# -*- coding: utf-8 -*-
# @time : 2019/11/2 21:08
# @author : Mo
# @function:
from keras_textclassification.data_preprocess.text_preprocess import load_json, save_json
from keras_textclassification.conf.path_config import path_model_dir
path_fast_text_model_vocab2index = path_model_dir + 'vocab2index.json'
path_fast_text_model_l2i_i2l = path_model_dir + 'l2i_i2l.json'
import numpy as np
import os
class PreprocessGenerator:
"""
Data preprocessing; the input is in csv format: [label,ques]
"""
def __init__(self):
self.l2i_i2l = None
if os.path.exists(path_fast_text_model_l2i_i2l):
self.l2i_i2l = load_json(path_fast_text_model_l2i_i2l)
def prereocess_idx(self, pred):
if os.path.exists(path_fast_text_model_l2i_i2l):
pred_i2l = {}
i2l = self.l2i_i2l['i2l']
for i in range(len(pred)):
pred_i2l[i2l[str(i)]] = pred[i]
pred_i2l_rank = [sorted(pred_i2l.items(), key=lambda k: k[1], reverse=True)]
return pred_i2l_rank
else:
raise RuntimeError("path_fast_text_model_label2index is None")
def prereocess_pred_xid(self, pred):
if os.path.exists(path_fast_text_model_l2i_i2l):
pred_l2i = {}
l2i = self.l2i_i2l['l2i']
for i in range(len(pred)):
pred_l2i[pred[i]] = l2i[pred[i]]
pred_l2i_rank = [sorted(pred_l2i.items(), key=lambda k: k[1], reverse=True)]
return pred_l2i_rank
else:
raise RuntimeError("path_fast_text_model_label2index is None")
def preprocess_get_label_set(self, path):
# First collect the label set, i.e. the concrete classes that are present
label_set = set()
len_all = 0
file_csv = open(path, "r", encoding="utf-8")
for line in file_csv:
len_all += 1
if len_all > 1: # the first line is the header 'label,ques'; skip it
line_sp = line.split(",")
label_org = str(line_sp[0]).strip().upper()
label_real = "NAN" if label_org=="" else label_org
label_set.add(label_real)
file_csv.close()
return label_set, len_all
def preprocess_label_ques_to_idx(self, embedding_type, batch_size, path, embed, rate=1):
label_set, len_all = self.preprocess_get_label_set(path)
# Build the label-to-index dicts; if label2index already exists, skip the conversion (used for the dev/validation set)
if not os.path.exists(path_fast_text_model_l2i_i2l):
count = 0
label2index = {}
index2label = {}
for label_one in label_set:
label2index[label_one] = count
index2label[count] = label_one
count = count + 1
l2i_i2l = {}
l2i_i2l['l2i'] = label2index
l2i_i2l['i2l'] = index2label
save_json(l2i_i2l, path_fast_text_model_l2i_i2l)
else:
l2i_i2l = load_json(path_fast_text_model_l2i_i2l)
# Fraction of the data to read
len_ql = int(rate * len_all)
if len_ql <= 500: # ignored when sampling, so the corpus stays large enough to train
len_ql = len_all
def process_line(line):
# Process a single line: extract the label and the question indices
line_sp = line.split(",")
ques = str(line_sp[1]).strip().upper()
label = str(line_sp[0]).strip().upper()
label = "NAN" if label == "" else label
que_embed = embed.sentence2idx(ques)
label_zeros = [0] * len(l2i_i2l['l2i'])
label_zeros[l2i_i2l['l2i'][label]] = 1
return que_embed, label_zeros
while True:
file_csv = open(path, "r", encoding="utf-8")
cout_all_line = 0
cnt = 0
x, y = [], []
# break out of the loop
if len_ql < cout_all_line:
break
for line in file_csv:
cout_all_line += 1
if cout_all_line > 1: # the first line is the header 'label,ques'; skip it
x_line, y_line = process_line(line)
x.append(x_line)
y.append(y_line)
cnt += 1
if cnt == batch_size:
if embedding_type in ['bert', 'albert']:
x_, y_ = np.array(x), np.array(y)
x_1 = np.array([x[0] for x in x_])
x_2 = np.array([x[1] for x in x_])
x_all = [x_1, x_2]
elif embedding_type == 'xlnet':
x_, y_ = x, np.array(y)
x_1 = np.array([x[0][0] for x in x_])
x_2 = np.array([x[1][0] for x in x_])
x_3 = np.array([x[2][0] for x in x_])
x_all = [x_1, x_2, x_3]
else:
x_all, y_ = np.array(x), np.array(y)
cnt = 0
yield (x_all, y_)
x, y =[], []
file_csv.close()
print("preprocess_label_ques_to_idx ok")
|
Python
|
e197c9c464126d62d78c30aec5ad91317d62797304bd554419f3a9f7bf56e9f2
|
0x30ba1e2c126485e4
|
<gh_stars>1-10
######## Image Object Detection Using Tensorflow-trained Classifier #########
#
# Author: <NAME>
# Date: 1/15/18
# Description:
# This program uses a TensorFlow-trained classifier to perform object detection.
# It loads the classifier uses it to perform object detection on an image.
# It draws boxes and scores around the objects of interest in the image.
## Some of the code is copied from Google's example at
## https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
## and some is copied from Dat Tran's example at
## https://github.com/datitran/object_detector_app/blob/master/object_detection_app.py
## but I changed it to make it more understandable to me.
# Import packages
import os
import cv2
import numpy as np
import tensorflow as tf
import sys
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
# Import utilites
from utils import label_map_util
from utils import visualization_utils as vis_util
# Name of the directory containing the object detection module we're using
MODEL_NAME = 'inference_graph'
IMAGE_NAME = 'test1.jpg'
# Grab path to current working directory
CWD_PATH = os.getcwd()
# Path to frozen detection graph .pb file, which contains the model that is used
# for object detection.
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,'frozen_inference_graph.pb')
# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,'training','labelmap.pbtxt')
# Path to image
PATH_TO_IMAGE = os.path.join(CWD_PATH,IMAGE_NAME)
# Number of classes the object detector can identify
NUM_CLASSES = 6
# Load the label map.
# Label maps map indices to category names, so that when our convolution
# network predicts `5`, we know that this corresponds to `king`.
# Here we use internal utility functions, but anything that returns a
# dictionary mapping integers to appropriate string labels would be fine
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
# Load the Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
sess = tf.Session(graph=detection_graph)
# Define input and output tensors (i.e. data) for the object detection classifier
# Input tensor is the image
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Output tensors are the detection boxes, scores, and classes
# Each box represents a part of the image where a particular object was detected
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
# Number of objects detected
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
# Load image using OpenCV and
# expand image dimensions to have shape: [1, None, None, 3]
# i.e. a single-column array, where each item in the column has the pixel RGB value
image = cv2.imread(PATH_TO_IMAGE)
image_expanded = np.expand_dims(image, axis=0)
# Perform the actual detection by running the model with the image as input
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_expanded})
# Draw the results of the detection (aka 'visualize the results')
vis_util.visualize_boxes_and_labels_on_image_array(
image,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8,
min_score_thresh=0.60)
# All the results have been drawn on image. Now display the image.
cv2.imshow('Object detector', image)
# Press any key to close the image
cv2.waitKey(0)
# Clean up
cv2.destroyAllWindows()
|
Python
|
6f10fe3937e35951eeb5bb4ced40684942e59c93a0a0e010b52991db4e9b39fb
|
0x8b7df1d7623d6607
|
from django.db.models import Q
from django.shortcuts import render
from django.http import Http404
# Create your views here.
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.decorators import api_view
from .models import Product, Category
from .serializers import ProductSerializer, CategorySerializer
class LatestProductsList(APIView):
def get(self, request, format=None):
products = Product.objects.all()[0:4]
serializer = ProductSerializer(products,many=True)
return Response(serializer.data)
class ProductDetail(APIView):
def get_object(self, category_slug, product_slug):
try:
return Product.objects.filter(category__slug=category_slug).get(slug=product_slug)
except Product.DoesNotExist:
raise Http404
def get(self, request, category_slug, product_slug, format= None):
product = self.get_object(category_slug, product_slug)
serializer = ProductSerializer(product)
return Response(serializer.data)
class CategoryDetail(APIView):
def get_object(self, category_slug):
try:
return Category.objects.get(slug=category_slug)
except Category.DoesNotExist:
raise Http404
def get(self, request, category_slug, format= None):
category = self.get_object(category_slug)
serializer = CategorySerializer(category)
return Response(serializer.data)
@api_view(['POST'])
def search(request):
query = request.data.get('query', '')
if query:
products = Product.objects.filter(Q(name__icontains=query) | Q(description__icontains=query))
serializer = ProductSerializer(products, many=True)
return Response(serializer.data)
else:
return Response({"products": []})
|
Python
|
371e23c5cb59408b66ff53795e32d56b67cca34d4bb8eb6abf142a8a695ed799
|
0xe94ba4ff187e3cf9
|
from sys import maxsize
class Contact:
def __init__(self, fname=None, mname=None, lname=None, nick=None, title=None, comp=None, addr=None,
home=None, mobile=None, work=None, fax=None, email1=None, email2=None, email3=None,
homepage=None, bday=None, bmonth=None, byear=None, aday=None, amonth=None, ayear=None,
secaddr=None, secphone=None, note=None, id =None):
self.fname = fname
self.mname = mname
self.lname = lname
self.nick = nick
self.title = title
self.comp = comp
self.addr = addr
self.home = home
self.mobile = mobile
self.work = work
self.fax = fax
self.email1 = email1
self.email2 = email2
self.email3 = email3
self.homepage = homepage
self.bday = bday
self.bmonth = bmonth
self.byear = byear
self.aday = aday
self.amonth = amonth
self.ayear = ayear
self.secaddr = secaddr
self.secphone = secphone
self.note = note
self.id = id
def __repr__(self):
return "%s:%s:%s" % (self.id, self.fname, self.lname)
def __eq__(self, other):
return (self.id is None or other.id is None or self.id == other.id) and self.fname == other.fname and self.lname == other.lname
def id_or_max(self):
if self.id:
return int(self.id)
else:
return maxsize
|
Python
|
0cde2967b1feb08a16836293cbedac48942ca7b246b3bc4e40a0237b8ddc0a83
|
0x9495bce932493775
|
from zeit.cms.i18n import MessageFactory as _
import zope.interface
import zope.schema
class IGlobalSettings(zope.interface.Interface):
"""Global CMS settings."""
default_year = zope.schema.Int(
title=_("Default year"),
min=1900,
max=2100)
default_volume = zope.schema.Int(
title=_("Default volume"),
min=1,
max=54)
def get_working_directory(template):
"""Return the collection which is the main working directory.
template:
Template which will be filled with year and volume. In
``template`` the placeholders $year and $volume will be replaced.
Example: 'online/$year/$volume/foo'
If the respective collection does not exist, it will be created before
returning it.
"""
|
Python
|
6051aec4e9e9b8946edf7c3233ae0856116d17e93db7c71a33f42a9ab9671e71
|
0xc325e4f7991f87a5
|
<filename>abc/abc165/abc165e.py
N, M = map(int, input().split())
for i in range(1, M + 1):
if i % 2 == 1:
j = (i - 1) // 2
print(1 + j, M + 1 - j)
else:
j = (i - 2) // 2
print(M + 2 + j, 2 * M + 1 - j)
|
Python
|
4a0a273e581e1fb27e62539d38fbfbebca765a24f45dcd10741115fb4ca10e67
|
0x462abe2e02c1619b
|
<filename>eth2/beacon/chains/base.py
from abc import (
ABC,
abstractmethod,
)
import logging
from typing import (
TYPE_CHECKING,
Tuple,
Type,
)
from eth._utils.datatypes import (
Configurable,
)
from eth.db.backends.base import (
BaseAtomicDB,
)
from eth.exceptions import (
BlockNotFound,
)
from eth.validation import (
validate_word,
)
from eth_typing import (
Hash32,
)
from eth_utils import (
ValidationError,
encode_hex,
)
from eth2._utils.ssz import (
validate_imported_block_unchanged,
)
from eth2.beacon.db.chain import (
BaseBeaconChainDB,
BeaconChainDB,
)
from eth2.beacon.exceptions import (
BlockClassError,
StateMachineNotFound,
)
from eth2.beacon.types.blocks import (
BaseBeaconBlock,
)
from eth2.beacon.types.states import (
BeaconState,
)
from eth2.beacon.typing import (
FromBlockParams,
Slot,
)
from eth2.beacon.validation import (
validate_slot,
)
if TYPE_CHECKING:
from eth2.beacon.state_machines.base import ( # noqa: F401
BaseBeaconStateMachine,
)
class BaseBeaconChain(Configurable, ABC):
"""
The base class for all BeaconChain objects
"""
chaindb = None # type: BaseBeaconChainDB
chaindb_class = None # type: Type[BaseBeaconChainDB]
sm_configuration = None # type: Tuple[Tuple[Slot, Type[BaseBeaconStateMachine]], ...]
chain_id = None # type: int
#
# Helpers
#
@classmethod
@abstractmethod
def get_chaindb_class(cls) -> Type[BaseBeaconChainDB]:
pass
#
# Chain API
#
@classmethod
@abstractmethod
def from_genesis(cls,
base_db: BaseAtomicDB,
genesis_state: BeaconState,
genesis_block: BaseBeaconBlock) -> 'BaseBeaconChain':
pass
#
# State Machine API
#
@classmethod
@abstractmethod
def get_state_machine_class(
cls,
block: BaseBeaconBlock) -> Type['BaseBeaconStateMachine']:
pass
@abstractmethod
def get_state_machine(self, at_block: BaseBeaconBlock=None) -> 'BaseBeaconStateMachine':
pass
@classmethod
@abstractmethod
def get_state_machine_class_for_block_slot(
cls,
slot: Slot) -> Type['BaseBeaconStateMachine']:
pass
#
# Block API
#
@abstractmethod
def get_block_class(self, block_root: Hash32) -> Type[BaseBeaconBlock]:
pass
@abstractmethod
def create_block_from_parent(self,
parent_block: BaseBeaconBlock,
block_params: FromBlockParams) -> BaseBeaconBlock:
pass
@abstractmethod
def get_block_by_root(self, block_root: Hash32) -> BaseBeaconBlock:
pass
@abstractmethod
def get_canonical_head(self) -> BaseBeaconBlock:
pass
@abstractmethod
def get_score(self, block_root: Hash32) -> int:
pass
@abstractmethod
def ensure_block(self, block: BaseBeaconBlock=None) -> BaseBeaconBlock:
pass
@abstractmethod
def get_block(self) -> BaseBeaconBlock:
pass
@abstractmethod
def get_canonical_block_by_slot(self, slot: Slot) -> BaseBeaconBlock:
pass
@abstractmethod
def get_canonical_block_root(self, slot: Slot) -> Hash32:
pass
@abstractmethod
def import_block(
self,
block: BaseBeaconBlock,
perform_validation: bool=True
) -> Tuple[BaseBeaconBlock, Tuple[BaseBeaconBlock, ...], Tuple[BaseBeaconBlock, ...]]:
pass
class BeaconChain(BaseBeaconChain):
"""
A Chain is a combination of one or more ``StateMachine`` classes. Each ``StateMachine``
is associated with a range of slots. The Chain class acts as a wrapper around these other
StateMachine classes, delegating operations to the appropriate StateMachine depending on the
current block slot number.
"""
logger = logging.getLogger("eth2.beacon.chains.BeaconChain")
chaindb_class = BeaconChainDB # type: Type[BaseBeaconChainDB]
def __init__(self, base_db: BaseAtomicDB) -> None:
if not self.sm_configuration:
raise ValueError(
"The Chain class cannot be instantiated with an empty `sm_configuration`"
)
else:
# TODO: implement validate_sm_configuration(self.sm_configuration)
# validate_sm_configuration(self.sm_configuration)
pass
self.chaindb = self.get_chaindb_class()(base_db)
#
# Helpers
#
@classmethod
def get_chaindb_class(cls) -> Type['BaseBeaconChainDB']:
if cls.chaindb_class is None:
raise AttributeError("`chaindb_class` not set")
return cls.chaindb_class
#
# Chain API
#
@classmethod
def from_genesis(cls,
base_db: BaseAtomicDB,
genesis_state: BeaconState,
genesis_block: BaseBeaconBlock) -> 'BaseBeaconChain':
"""
Initialize the ``BeaconChain`` from a genesis state.
"""
sm_class = cls.get_state_machine_class_for_block_slot(genesis_block.slot)
if type(genesis_block) != sm_class.block_class:
raise BlockClassError(
"Given genesis block class: {}, StateMachine.block_class: {}".format(
type(genesis_block),
sm_class.block_class
)
)
chaindb = cls.get_chaindb_class()(db=base_db)
chaindb.persist_state(genesis_state)
return cls._from_genesis_block(base_db, genesis_block)
@classmethod
def _from_genesis_block(cls,
base_db: BaseAtomicDB,
genesis_block: BaseBeaconBlock) -> 'BaseBeaconChain':
"""
Initialize the ``BeaconChain`` from the genesis block.
"""
chaindb = cls.get_chaindb_class()(db=base_db)
chaindb.persist_block(genesis_block, genesis_block.__class__)
return cls(base_db)
#
# StateMachine API
#
@classmethod
def get_state_machine_class(cls, block: BaseBeaconBlock) -> Type['BaseBeaconStateMachine']:
"""
Returns the ``StateMachine`` instance for the given block slot number.
"""
return cls.get_state_machine_class_for_block_slot(block.slot)
@classmethod
def get_state_machine_class_for_block_slot(
cls,
slot: Slot) -> Type['BaseBeaconStateMachine']:
"""
Return the ``StateMachine`` class for the given block slot number.
"""
if cls.sm_configuration is None:
raise AttributeError("Chain classes must define the StateMachines in sm_configuration")
validate_slot(slot)
for start_slot, sm_class in reversed(cls.sm_configuration):
if slot >= start_slot:
return sm_class
raise StateMachineNotFound("No StateMachine available for block slot: #{0}".format(slot))
def get_state_machine(self, at_block: BaseBeaconBlock=None) -> 'BaseBeaconStateMachine':
"""
Return the ``StateMachine`` instance for the given block number.
"""
block = self.ensure_block(at_block)
sm_class = self.get_state_machine_class_for_block_slot(block.slot)
return sm_class(
chaindb=self.chaindb,
block=block,
)
#
# Block API
#
def get_block_class(self, block_root: Hash32) -> Type[BaseBeaconBlock]:
slot = self.chaindb.get_slot_by_root(block_root)
sm_class = self.get_state_machine_class_for_block_slot(slot)
block_class = sm_class.block_class
return block_class
def create_block_from_parent(self,
parent_block: BaseBeaconBlock,
block_params: FromBlockParams) -> BaseBeaconBlock:
"""
Passthrough helper to the ``StateMachine`` class of the block descending from the
given block.
"""
return self.get_state_machine_class_for_block_slot(
slot=parent_block.slot + 1 if block_params.slot is None else block_params.slot,
).create_block_from_parent(parent_block, block_params)
def get_block_by_root(self, block_root: Hash32) -> BaseBeaconBlock:
"""
Return the requested block as specified by block hash.
Raise ``BlockNotFound`` if there's no block with the given hash in the db.
"""
validate_word(block_root, title="Block Hash")
block_class = self.get_block_class(block_root)
return self.chaindb.get_block_by_root(block_root, block_class)
def get_canonical_head(self) -> BaseBeaconBlock:
"""
Return the block at the canonical chain head.
Raise ``CanonicalHeadNotFound`` if there's no head defined for the canonical chain.
"""
block_root = self.chaindb.get_canonical_head_root()
block_class = self.get_block_class(block_root)
return self.chaindb.get_block_by_root(block_root, block_class)
def get_score(self, block_root: Hash32) -> int:
"""
Return the score of the block with the given hash.
Raise ``BlockNotFound`` if there is no matching block hash.
"""
return self.chaindb.get_score(block_root)
def ensure_block(self, block: BaseBeaconBlock=None) -> BaseBeaconBlock:
"""
Return ``block`` if it is not ``None``, otherwise return the block
of the canonical head.
"""
if block is None:
head = self.get_canonical_head()
return self.create_block_from_parent(head, FromBlockParams())
else:
return block
def get_block(self) -> BaseBeaconBlock:
"""
Return the current TIP block.
"""
return self.get_state_machine().block
def get_canonical_block_by_slot(self, slot: Slot) -> BaseBeaconBlock:
"""
Return the block with the given number in the canonical chain.
Raise ``BlockNotFound`` if there's no block with the given number in the
canonical chain.
"""
validate_slot(slot)
return self.get_block_by_root(self.chaindb.get_canonical_block_root(slot))
def get_canonical_block_root(self, slot: Slot) -> Hash32:
"""
Return the block hash with the given number in the canonical chain.
Raise ``BlockNotFound`` if there's no block with the given number in the
canonical chain.
"""
return self.chaindb.get_canonical_block_root(slot)
def import_block(
self,
block: BaseBeaconBlock,
perform_validation: bool=True
) -> Tuple[BaseBeaconBlock, Tuple[BaseBeaconBlock, ...], Tuple[BaseBeaconBlock, ...]]:
"""
Import a complete block and returns a 3-tuple
- the imported block
- a tuple of blocks which are now part of the canonical chain.
- a tuple of blocks which were canonical and now are no longer canonical.
"""
try:
parent_block = self.get_block_by_root(block.previous_block_root)
except BlockNotFound:
raise ValidationError(
"Attempt to import block #{}. Cannot import block {} before importing "
"its parent block at {}".format(
block.slot,
block.signed_root,
block.previous_block_root,
)
)
base_block_for_import = self.create_block_from_parent(
parent_block,
FromBlockParams(),
)
state, imported_block = self.get_state_machine(base_block_for_import).import_block(block)
# Validate the imported block.
if perform_validation:
validate_imported_block_unchanged(imported_block, block)
# TODO: Now it just persists all state. Should design how to clean up the old state.
self.chaindb.persist_state(state)
(
new_canonical_blocks,
old_canonical_blocks,
) = self.chaindb.persist_block(imported_block, imported_block.__class__)
self.logger.debug(
'IMPORTED_BLOCK: slot %s | signed root %s',
imported_block.slot,
encode_hex(imported_block.signed_root),
)
return imported_block, new_canonical_blocks, old_canonical_blocks
|
Python
|
0fdf55898a047dcb568336091976a73f11ffb352a11657711ba48b761be983ad
|
0x42b14a08e446a55
|
#!/usr/local/bin/python3
import paramiko,time
#using as SSH Client
client = paramiko.SSHClient()
# check dir(client) to find available options.
# auto adjust host key verification with yes or no
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# time for connecting to remote Cisco IOS
"""
Manually taking input
addr = input('Provide IP address to connect to: ')
user = input('Username: ')
pwd = <PASSWORD>('Password: ')"""
# Taking input from files
f1 = open("devices.txt","r")
f2 = open("commands.txt","r")
for line in f1:
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
data = line.split(" ")
# print(data)
addr = data[0]
user = data[1]
pwd = data[2]
f3 = open(addr+".txt","w+")
# print(addr +" "+ user +" " +pwd)
client.connect(addr,username=user,password=<PASSWORD>,allow_agent=False,look_for_keys=False)
# we have to ask for Shell
device_access = client.invoke_shell()
for line in f2:
device_access.send(line)
time.sleep(1)
output = device_access.recv(55000).decode('ascii')
f3.write(output)
"""
THIS CODE IS FOR SINGLE COMMAND, FOR MULTIPLE COMMANDS CODE BELOW
# send command to the device
device_access.send("ter len 0\nshow run \n")
time.sleep(2)
# receive output from the device, convert it to byte-like format and print it
print(device_access.recv(550000).decode('ascii'))
# We can print the same to a file too
with open("csr1000v.txt","w") as f:
f.write(device_access.recv(550000).decode('ascii'))"""
|
Python
|
70dc9698b60a1bb5772ed9a128cb4f35dff6805fef87c5ec63f87750d44cfc4e
|
0x4752a98e476560c9
|
# for n in range(400,500):
# i = n // 100
# j = n // 10 % 10
# k = n % 10
# if n == i ** 3 + j ** 3 + k ** 3:
# print(n)
# Problem 1 (16)
# input("Please enter (first time): ")
# s1 = input("Please enter (second time): ")
# l1 = s1.split(' ')
# l2 = []
# for i in l1:
# if i.isdigit():
# l2.append(int(i))
# for i in l2:
# if not (i % 6):
# print(i, end=" ")
# Problem 2 (17)
out_l1 = []
def bian_int_list(l1):
re_l1 = []  # the list to return
for i in l1:
re_l1.append(int(i))
return re_l1
def jisuan(l1, str_num):
he1 = 0
global out_l1
for i in l1:
he1 += int(i)**2
if he1 > int(str_num):
out_l1.append(str_num)
return None
while 1:
in_1 = input("Please enter values: ")
nums_l1 = in_1.split(' ')
|
Python
|
40daad66139ecb5c081448dec2e82a1402aaf650ef4327c07595648dbd37c0c3
|
0x647e97df45a0485c
|
<filename>graphdb/transformer.py<gh_stars>1-10
"""
A query transformer is a function that accepts a program and returns a program, plus a priority level.
Higher priority transformers are placed closer to the front of the list. We’re ensuring fun is a function,
because we’re going to evaluate it later.
We’ll assume there won’t be an enormous number of transformer additions,
and walk the list linearly to add a new one.
We’ll leave a note in case this assumption turns out to be false —
a binary search is much more time-optimal for long lists,
but adds a little complexity and doesn’t really speed up short lists.
"""
class Transformer:
def __init__(self):
self.T = []
def transform(self, program):
return program
"""
Dagoba.T = [] # transformers (more than meets the eye)
"""
"""
Dagoba.addTransformer = function(fun, priority) {
if(typeof fun != 'function')
return Dagoba.error('Invalid transformer function')
for(var i = 0; i < Dagoba.T.length; i++) # OPT: binary search
if(priority > Dagoba.T[i].priority) break
Dagoba.T.splice(i, 0, {priority: priority, fun: fun})
}
"""
"""
Dagoba.transform = function(program) {
return Dagoba.T.reduce(function(acc, transformer) {
return transformer.fun(acc)
}, program)
}
"""
"""
Dagoba.addAlias = function(newname, oldname, defaults) {
defaults = defaults || [] # default arguments for the alias
Dagoba.addPipetype(newname, function() {}) # because there's no method catchall in js
Dagoba.addTransformer(function(program) {
return program.map(function(step) {
if(step[0] != newname) return step
return [oldname, Dagoba.extend(step[1], defaults)]
})
}, 100) # these need to run early, so they get a high priority
}
"""
"""
Dagoba.extend = function(list, defaults) {
return Object.keys(defaults).reduce(function(acc, key) {
if(typeof list[key] != 'undefined') return acc
acc[key] = defaults[key]
return acc
}, list)
}
"""
|
Python
|
b3c876767fc289cce0b62ec9582025d24fb101ffce0942a3476775cffa8b4a59
|
0xcb07d697e2b916d2
|
<filename>yzcore/templates/project_template/src/const/_job.py
#!/usr/bin/python3.6.8+
# -*- coding:utf-8 -*-
"""
@auth: cml
@date: 2020-12-2
@desc: ...
"""
class JobStatus(object):
PENDING = 0 # task waiting to run
STARTED = 100 # task execution started
PROCESS = 110
POLLING = 120
CALLBACK = 130
SUCCESS = 200 # task succeeded
RETRY = 300 # task retrying
FAILURE = 400 # task failed
REVOKED = 500 # task revoked
|
Python
|
e557a00feaaa84c1dac22980c90d6a62a9f62eafdaae9e91599c07cfa7f6b1fb
|
0xcd5f639fd4e70d0d
|
<reponame>kithsirij/NLP-based-Syllabus-Coverage-Exam-paper-checker-Tool
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'add_subject.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_Dialog_add_subject(object):
def setupUi(self, Dialog_add_subject):
Dialog_add_subject.setObjectName(_fromUtf8("Dialog_add_subject"))
Dialog_add_subject.resize(568, 374)
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(10)
Dialog_add_subject.setFont(font)
Dialog_add_subject.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap(_fromUtf8("Qt_interface/SE_syllabus/4zIr6y.jpg")), QtGui.QIcon.Normal, QtGui.QIcon.Off)
Dialog_add_subject.setWindowIcon(icon)
self.lbl_subject_name = QtGui.QLabel(Dialog_add_subject)
self.lbl_subject_name.setGeometry(QtCore.QRect(50, 235, 131, 21))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(12)
self.lbl_subject_name.setFont(font)
self.lbl_subject_name.setObjectName(_fromUtf8("lbl_subject_name"))
self.label_add_subject = QtGui.QLabel(Dialog_add_subject)
self.label_add_subject.setGeometry(QtCore.QRect(220, 30, 151, 31))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(14)
font.setBold(True)
font.setWeight(75)
self.label_add_subject.setFont(font)
self.label_add_subject.setObjectName(_fromUtf8("label_add_subject"))
self.lineEdit_subject_name = QtGui.QLineEdit(Dialog_add_subject)
self.lineEdit_subject_name.setGeometry(QtCore.QRect(190, 230, 321, 31))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(12)
self.lineEdit_subject_name.setFont(font)
self.lineEdit_subject_name.setObjectName(_fromUtf8("lineEdit_subject_name"))
self.label_year = QtGui.QLabel(Dialog_add_subject)
self.label_year.setGeometry(QtCore.QRect(50, 95, 81, 21))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(12)
self.label_year.setFont(font)
self.label_year.setObjectName(_fromUtf8("label_year"))
self.label_semester = QtGui.QLabel(Dialog_add_subject)
self.label_semester.setGeometry(QtCore.QRect(50, 165, 91, 21))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(12)
self.label_semester.setFont(font)
self.label_semester.setObjectName(_fromUtf8("label_semester"))
self.pushButton_save = QtGui.QPushButton(Dialog_add_subject)
self.pushButton_save.setGeometry(QtCore.QRect(190, 290, 111, 31))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(10)
self.pushButton_save.setFont(font)
icon1 = QtGui.QIcon()
icon1.addPixmap(QtGui.QPixmap(_fromUtf8("Qt_interface/SE_syllabus/Save-as.png")), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.pushButton_save.setIcon(icon1)
self.pushButton_save.setIconSize(QtCore.QSize(20, 20))
self.pushButton_save.setObjectName(_fromUtf8("pushButton_save"))
self.pushButton_cancel = QtGui.QPushButton(Dialog_add_subject)
self.pushButton_cancel.setGeometry(QtCore.QRect(340, 290, 111, 31))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
self.pushButton_cancel.setFont(font)
icon2 = QtGui.QIcon()
icon2.addPixmap(QtGui.QPixmap(_fromUtf8("Qt_interface/SE_syllabus/if_draw-08_725558.png")), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.pushButton_cancel.setIcon(icon2)
self.pushButton_cancel.setIconSize(QtCore.QSize(20, 20))
self.pushButton_cancel.setObjectName(_fromUtf8("pushButton_cancel"))
self.comboBox_year = QtGui.QComboBox(Dialog_add_subject)
self.comboBox_year.setGeometry(QtCore.QRect(190, 91, 111, 31))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(12)
self.comboBox_year.setFont(font)
self.comboBox_year.setObjectName(_fromUtf8("comboBox_year"))
self.comboBox_semester = QtGui.QComboBox(Dialog_add_subject)
self.comboBox_semester.setGeometry(QtCore.QRect(190, 160, 111, 31))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Times New Roman"))
font.setPointSize(12)
self.comboBox_semester.setFont(font)
self.comboBox_semester.setObjectName(_fromUtf8("comboBox_semester"))
self.retranslateUi(Dialog_add_subject)
QtCore.QObject.connect(self.pushButton_cancel, QtCore.SIGNAL(_fromUtf8("clicked()")), self.lineEdit_subject_name.clear)
QtCore.QMetaObject.connectSlotsByName(Dialog_add_subject)
def retranslateUi(self, Dialog_add_subject):
Dialog_add_subject.setWindowTitle(_translate("Dialog_add_subject", "Dialog", None))
self.lbl_subject_name.setText(_translate("Dialog_add_subject", "SUBJECT NAME", None))
self.label_add_subject.setText(_translate("Dialog_add_subject", "ADD SUBJECT", None))
self.label_year.setText(_translate("Dialog_add_subject", "YEAR", None))
self.label_semester.setText(_translate("Dialog_add_subject", "SEMESTER", None))
self.pushButton_save.setText(_translate("Dialog_add_subject", "SAVE", None))
self.pushButton_cancel.setText(_translate("Dialog_add_subject", "CANCEL", None))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
Dialog_add_subject = QtGui.QDialog()
ui = Ui_Dialog_add_subject()
ui.setupUi(Dialog_add_subject)
Dialog_add_subject.show()
sys.exit(app.exec_())
|
Python
|
e40326f4ed06a3aa8a0bd13034069a017cf72dee70e21e6954978b307075f201
|
0xb27b12fd167cb4b4
|
<gh_stars>1-10
from django.db.models import signals
from django.test import TestCase
from django.core import management
from django.utils import six
from shared_models import models
PRE_SYNCDB_ARGS = ['app', 'create_models', 'verbosity', 'interactive', 'db']
SYNCDB_DATABASE = 'default'
SYNCDB_VERBOSITY = 1
SYNCDB_INTERACTIVE = False
class PreSyncdbReceiver(object):
def __init__(self):
self.call_counter = 0
self.call_args = None
def __call__(self, signal, sender, **kwargs):
self.call_counter = self.call_counter + 1
self.call_args = kwargs
class OneTimeReceiver(object):
"""
Special receiver to handle the fact that the test runner calls syncdb for
several databases, and several times for some of them.
"""
def __init__(self):
self.call_counter = 0
self.call_args = None
def __call__(self, signal, sender, **kwargs):
# Although test runner calls syncdb for several databases,
# testing for only one of them is quite sufficient.
if kwargs['db'] == SYNCDB_DATABASE:
self.call_counter = self.call_counter + 1
self.call_args = kwargs
# we need to test only one call of syncdb
signals.pre_syncdb.disconnect(pre_syncdb_receiver, sender=models)
# We connect receiver here and not in unit test code because we need to
# connect receiver before test runner creates database. That is, sequence of
# actions would be:
#
# 1. Test runner imports this module.
# 2. We connect receiver.
# 3. Test runner calls syncdb for create default database.
# 4. Test runner execute our unit test code.
pre_syncdb_receiver = OneTimeReceiver()
signals.pre_syncdb.connect(pre_syncdb_receiver, sender=models)
class SyncdbSignalTests(TestCase):
def test_pre_syncdb_call_time(self):
self.assertEqual(pre_syncdb_receiver.call_counter, 1)
def test_pre_syncdb_args(self):
r = PreSyncdbReceiver()
signals.pre_syncdb.connect(r, sender=models)
management.call_command('syncdb', database=SYNCDB_DATABASE,
verbosity=SYNCDB_VERBOSITY, interactive=SYNCDB_INTERACTIVE,
load_initial_data=False, stdout=six.StringIO())
args = r.call_args
self.assertEqual(r.call_counter, 1)
self.assertEqual(set(args), set(PRE_SYNCDB_ARGS))
self.assertEqual(args['app'], models)
self.assertEqual(args['verbosity'], SYNCDB_VERBOSITY)
self.assertEqual(args['interactive'], SYNCDB_INTERACTIVE)
self.assertEqual(args['db'], 'default')
|
Python
|
69b2e70b7523ce5e3faf1bd0000c7b61be1ed4c7e22a0c167f0763ce86a22f22
|
0x8397ea8332b6defd
|
<filename>examples/mouse.py
#!/usr/bin/env python
import time
import os
import math
from trackball import TrackBall
print("""Trackball: Mouse
Use the trackball as a mouse in Raspbian, with right-click
when the switch is pressed.
Press Ctrl+C to exit!
""")
trackball = TrackBall(interrupt_pin=4)
trackball.set_rgbw(0, 0, 0, 0)
# Check for xte (used to control mouse)
use_xte = os.system('which xte') == 0
if not use_xte:
raise RuntimeError("xte not found. Did you sudo apt install xautomation?")
while True:
up, down, left, right, switch, state = trackball.read()
# Send movements and clicks to xte
if switch:
cmd = 'xte "mouseclick 1"'
os.system(cmd)
elif right or up or left or down:
x = right - left
x = math.copysign(x**2, x)
y = down - up
y = math.copysign(y**2, y)
cmd = 'xte "mousermove {} {}"'.format(int(x), int(y))
os.system(cmd)
time.sleep(0.0001)
|
Python
|
feedd1e6406a530eb6f617eb2411a511f2dab30b8813ab18052103ed2e10786a
|
0x9b1c85abc4d6cc18
|
<filename>garaged/src/garage/tf/regressors/gaussian_mlp_regressor_model.py
"""GaussianMLPRegressorModel."""
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
from garage.experiment import deterministic
from garage.tf.models import GaussianMLPModel
class GaussianMLPRegressorModel(GaussianMLPModel):
"""GaussianMLPRegressor based on garage.tf.models.Model class.
This class can be used to perform regression by fitting a Gaussian
distribution to the outputs.
Args:
input_shape (tuple[int]): Input shape of the training data.
output_dim (int): Output dimension of the model.
name (str): Model name, also the variable scope.
hidden_sizes (list[int]): Output dimension of dense layer(s) for
the MLP for mean. For example, (32, 32) means the MLP consists
of two hidden layers, each with 32 hidden units.
hidden_nonlinearity (callable): Activation function for intermediate
dense layer(s). It should return a tf.Tensor. Set it to
None to maintain a linear activation.
hidden_w_init (callable): Initializer function for the weight
of intermediate dense layer(s). The function should return a
tf.Tensor.
hidden_b_init (callable): Initializer function for the bias
of intermediate dense layer(s). The function should return a
tf.Tensor.
output_nonlinearity (callable): Activation function for output dense
layer. It should return a tf.Tensor. Set it to None to
maintain a linear activation.
output_w_init (callable): Initializer function for the weight
of output dense layer(s). The function should return a
tf.Tensor.
output_b_init (callable): Initializer function for the bias
of output dense layer(s). The function should return a
tf.Tensor.
learn_std (bool): Is std trainable.
init_std (float): Initial value for std.
adaptive_std (bool): Is std a neural network. If False, it will be a
parameter.
std_share_network (bool): Boolean for whether mean and std share
the same network.
std_hidden_sizes (list[int]): Output dimension of dense layer(s) for
the MLP for std. For example, (32, 32) means the MLP consists
of two hidden layers, each with 32 hidden units.
min_std (float): If not None, the std is at least the value of min_std,
to avoid numerical issues.
max_std (float): If not None, the std is at most the value of max_std,
to avoid numerical issues.
std_hidden_nonlinearity (callable): Nonlinearity for each hidden layer
in the std network.
std_hidden_w_init (callable): Initializer function for the weight
of intermediate dense layer(s) in the std network.
std_hidden_b_init (callable): Initializer function for the bias
of intermediate dense layer(s) in the std network.
std_output_nonlinearity (callable): Activation function for output
dense layer in the std network. It should return a tf.Tensor. Set
it to None to maintain a linear activation.
std_output_w_init (callable): Initializer function for the weight
of output dense layer(s) in the std network.
std_parameterization (str): How the std should be parametrized. There
are two options:
- exp: the logarithm of the std will be stored, and applied a
exponential transformation
- softplus: the std will be computed as log(1+exp(x))
layer_normalization (bool): Bool for using layer normalization or not.
"""
def __init__(self,
input_shape,
output_dim,
name='GaussianMLPRegressorModel',
hidden_sizes=(32, 32),
hidden_nonlinearity=tf.nn.tanh,
hidden_w_init=tf.initializers.glorot_uniform(
seed=deterministic.get_tf_seed_stream()),
hidden_b_init=tf.zeros_initializer(),
output_nonlinearity=None,
output_w_init=tf.initializers.glorot_uniform(
seed=deterministic.get_tf_seed_stream()),
output_b_init=tf.zeros_initializer(),
learn_std=True,
adaptive_std=False,
std_share_network=False,
init_std=1.0,
min_std=1e-6,
max_std=None,
std_hidden_sizes=(32, 32),
std_hidden_nonlinearity=tf.nn.tanh,
std_hidden_w_init=tf.initializers.glorot_uniform(
seed=deterministic.get_tf_seed_stream()),
std_hidden_b_init=tf.zeros_initializer(),
std_output_nonlinearity=None,
std_output_w_init=tf.initializers.glorot_uniform(
seed=deterministic.get_tf_seed_stream()),
std_parameterization='exp',
layer_normalization=False):
super().__init__(output_dim=output_dim,
name=name,
hidden_sizes=hidden_sizes,
hidden_nonlinearity=hidden_nonlinearity,
hidden_w_init=hidden_w_init,
hidden_b_init=hidden_b_init,
output_nonlinearity=output_nonlinearity,
output_w_init=output_w_init,
output_b_init=output_b_init,
learn_std=learn_std,
adaptive_std=adaptive_std,
std_share_network=std_share_network,
init_std=init_std,
min_std=min_std,
max_std=max_std,
std_hidden_sizes=std_hidden_sizes,
std_hidden_nonlinearity=std_hidden_nonlinearity,
std_output_nonlinearity=std_output_nonlinearity,
std_parameterization=std_parameterization,
layer_normalization=layer_normalization)
self._input_shape = input_shape
def network_output_spec(self):
"""Network output spec.
Return:
list[str]: List of key(str) for the network outputs.
"""
return [
'normalized_dist', 'normalized_mean', 'normalized_log_std', 'dist',
'mean', 'log_std', 'x_mean', 'x_std', 'y_mean', 'y_std'
]
def _build(self, state_input, name=None):
"""Build model given input placeholder(s).
Args:
state_input (tf.Tensor): Place holder for state input.
name (str): Inner model name, also the variable scope of the
inner model, if it exists. One example is
garage.tf.models.Sequential.
Return:
tfp.distributions.MultivariateNormalDiag: Normalized distribution.
tf.Tensor: Normalized mean.
tf.Tensor: Normalized log_std.
tfp.distributions.MultivariateNormalDiag: Vanilla distribution.
tf.Tensor: Vanilla mean.
tf.Tensor: Vanilla log_std.
tf.Tensor: Mean for data.
tf.Tensor: log_std for data.
tf.Tensor: Mean for label.
tf.Tensor: log_std for label.
"""
with tf.compat.v1.variable_scope('normalized_vars'):
x_mean_var = tf.compat.v1.get_variable(
name='x_mean',
shape=(1, ) + self._input_shape,
dtype=np.float32,
initializer=tf.zeros_initializer(),
trainable=False)
x_std_var = tf.compat.v1.get_variable(
name='x_std_var',
shape=(1, ) + self._input_shape,
dtype=np.float32,
initializer=tf.ones_initializer(),
trainable=False)
y_mean_var = tf.compat.v1.get_variable(
name='y_mean_var',
shape=(1, self._output_dim),
dtype=np.float32,
initializer=tf.zeros_initializer(),
trainable=False)
y_std_var = tf.compat.v1.get_variable(
name='y_std_var',
shape=(1, self._output_dim),
dtype=np.float32,
initializer=tf.ones_initializer(),
trainable=False)
normalized_xs_var = (state_input - x_mean_var) / x_std_var
_, normalized_dist_mean, normalized_dist_log_std = super()._build(
normalized_xs_var)
# Since regressor expects [N, *dims], we need to squeeze the extra
# dimension
normalized_dist_log_std = tf.squeeze(normalized_dist_log_std, 1)
with tf.name_scope('mean_network'):
means_var = normalized_dist_mean * y_std_var + y_mean_var
with tf.name_scope('std_network'):
log_stds_var = normalized_dist_log_std + tf.math.log(y_std_var)
normalized_dist = tfp.distributions.MultivariateNormalDiag(
loc=normalized_dist_mean,
scale_diag=tf.exp(normalized_dist_log_std))
vanilla_dist = tfp.distributions.MultivariateNormalDiag(
loc=means_var, scale_diag=tf.exp(log_stds_var))
return (normalized_dist, normalized_dist_mean, normalized_dist_log_std,
vanilla_dist, means_var, log_stds_var, x_mean_var, x_std_var,
y_mean_var, y_std_var)
def clone(self, name):
"""Return a clone of the model.
It copies the configuration and parameters of the primitive.
Args:
name (str): Name of the newly created model. It has to be
different from source model if cloned under the same
computational graph.
Returns:
garage.tf.policies.GaussianMLPModel: Newly cloned model.
"""
new_regressor = self.__class__(
name=name,
input_shape=self._input_shape,
output_dim=self._output_dim,
hidden_sizes=self._hidden_sizes,
hidden_nonlinearity=self._hidden_nonlinearity,
hidden_w_init=self._hidden_w_init,
hidden_b_init=self._hidden_b_init,
output_nonlinearity=self._output_nonlinearity,
output_w_init=self._output_w_init,
output_b_init=self._output_b_init,
learn_std=self._learn_std,
adaptive_std=self._adaptive_std,
std_share_network=self._std_share_network,
init_std=self._init_std,
min_std=self._min_std,
max_std=self._max_std,
std_hidden_sizes=self._std_hidden_sizes,
std_hidden_nonlinearity=self._std_hidden_nonlinearity,
std_hidden_w_init=self._std_hidden_w_init,
std_hidden_b_init=self._std_hidden_b_init,
std_output_nonlinearity=self._std_output_nonlinearity,
std_output_w_init=self._std_output_w_init,
std_parameterization=self._std_parameterization,
layer_normalization=self._layer_normalization)
new_regressor.parameters = self.parameters
return new_regressor
|
Python
|
64511465269da4605633ea0ba65e148b04fcd22e0eb7e96d30c80fc2a4da88e3
|
0x4db310b0a56b8b74
|
<filename>test.py
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import os
import argparse
from torch.autograd import Variable
from extensions.utils import progress_bar
from extensions.model_refinery_wrapper import ModelRefineryWrapper
from extensions.refinery_loss import RefineryLoss
from models import ShuffleNetv2_wrapper
from models import DiracDeltaNet_wrapper
parser = argparse.ArgumentParser(description='PyTorch imagenet inference')
parser.add_argument('--datadir', help='path to dataset')
parser.add_argument('--inputdir', help='path to input model')
args = parser.parse_args()
# Data
print('==> Preparing data..')
# Data loading code
valdir = os.path.join(args.datadir, 'val')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transform_test = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])
#imagenet
testset = datasets.ImageFolder(valdir, transform_test)
num_classes=1000
testloader = torch.utils.data.DataLoader(testset, batch_size=1000, shuffle=False, pin_memory=True, num_workers=30)
use_cuda = torch.cuda.is_available()
print('Using input path: %s' % args.inputdir)
checkpoint = torch.load(args.inputdir)
init_net = checkpoint['net']
net=init_net.to('cpu')
label_refinery=torch.load('./resnet50.t7')
net = ModelRefineryWrapper(net, label_refinery)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
net = nn.DataParallel(net)
net=net.to(device)
criterion = RefineryLoss()
def accuracy(output, target, topk=(1,)):
"""Computes the precision@k for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
res.append(correct_k)
return res
def test():
net.eval()
criterion.eval()
test_loss = 0
correct_1 = 0
correct_5 = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(testloader):
if use_cuda:
inputs, targets = inputs.cuda(device), targets.cuda(device)
with torch.no_grad():
outputs = net(inputs)
loss = criterion(outputs, targets)
if isinstance(loss, tuple):
loss_value, outputs = loss
else:
loss_value = loss
test_loss += loss_value.item()
prec1, prec5 = accuracy(outputs, targets, topk=(1, 5))
total += targets.size(0)
correct_1 += prec1
correct_5 += prec5
progress_bar(batch_idx, len(testloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
% (test_loss/(batch_idx+1), 100.*float(correct_1)/float(total), correct_1, total))
return 100.*float(correct_1)/float(total),100.*float(correct_5)/float(total),test_loss
acc1,acc5,loss=test()
print('top-1 accuracy: {0:.3f}%, top-5 accuracy: {1:.3f}%'.format(acc1,acc5))
|
Python
|
87942be73c8fc38c801a786f9688d8e3eb9c3b1992a351468129815e1fa488e5
|
0xaf68c8ee48448514
|
<reponame>actingweb/box-actingweb<filename>aw-actor-trust.py
#!/usr/bin/env python
#
from actingweb import actor
from actingweb import config
from actingweb import trust
from actingweb import auth
import webapp2
import os
from google.appengine.ext.webapp import template
import json
import logging
import datetime
import time
# /trust handlers
#
# GET /trust with query parameters (relationship, type, and peerid) to retrieve trust relationships (auth: only creator and admins allowed)
# POST /trust with json body to initiate a trust relationship between this
# actor and another (reciprocal relationship) (auth: only creator and admins allowed)
# POST /trust/{relationship} with json body to create new trust
# relationship (see config.py for default relationship and auto-accept, no
# auth required)
# GET /trust/{relationship}/{actorid} to get details on a specific relationship (auth: creator, admin, or peer secret)
# POST /trust/{relationship}/{actorid} to send information to a peer about changes in the relationship
# PUT /trust/{relationship}/{actorid} with a json body to change details on a relationship (baseuri, secret, desc) (auth: creator,
# admin, or peer secret)
# DELETE /trust/{relationship}/{actorid} to delete a relationship (with
# ?peer=true if the delete is from the peer) (auth: creator, admin, or
# peer secret)
# Handling requests to trust/
class rootHandler(webapp2.RequestHandler):
def get(self, id):
if self.request.get('_method') == 'POST':
self.post(id)
return
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust')
if not myself or check.response["code"] != 200:
return
if not check.checkAuthorisation(path='trust', method='GET'):
self.response.set_status(403)
return
relationship = ''
type = ''
peerid = ''
relationship = self.request.get('relationship')
type = self.request.get('type')
peerid = self.request.get('peerid')
relationships = myself.getTrustRelationships(
relationship=relationship, peerid=peerid, type=type)
if not relationships:
self.response.set_status(404, 'Not found')
return
pairs = []
for rel in relationships:
pairs.append({
'baseuri': rel.baseuri,
'id': myself.id,
'peerid': rel.peerid,
'relationship': rel.relationship,
'approved': rel.approved,
'peer_approved': rel.peer_approved,
'verified': rel.verified,
'type': rel.type,
'desc': rel.desc,
'secret': rel.secret,
})
out = json.dumps(pairs)
self.response.write(out)
self.response.headers["Content-Type"] = "application/json"
self.response.set_status(200, 'Ok')
def post(self, id):
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust')
if not myself or check.response["code"] != 200:
return
if not check.checkAuthorisation(path='trust', method='POST'):
self.response.set_status(403)
return
secret = ''
desc = ''
relationship = Config.default_relationship
type = ''
try:
params = json.loads(self.request.body.decode('utf-8', 'ignore'))
if 'url' in params:
url = params['url']
else:
url = ''
if 'relationship' in params:
relationship = params['relationship']
if 'type' in params:
type = params['type']
if 'desc' in params:
desc = params['desc']
except ValueError:
url = self.request.get('url')
relationship = self.request.get('relationship')
type = self.request.get('type')
if len(url) == 0:
self.response.set_status(400, 'Missing peer URL')
return
secret = Config.newToken()
new_trust = myself.createReciprocalTrust(
url=url, secret=secret, desc=desc, relationship=relationship, type=type)
if not new_trust:
self.response.set_status(408, 'Unable to create trust relationship')
return
self.response.headers.add_header(
"Location", str(Config.root + myself.id + '/trust/' + new_trust.relationship + '/' + new_trust.peerid))
pair = {
'baseuri': new_trust.baseuri,
'id': myself.id,
'peerid': new_trust.peerid,
'relationship': new_trust.relationship,
'approved': new_trust.approved,
'peer_approved': new_trust.peer_approved,
'verified': new_trust.verified,
'type': new_trust.type,
'desc': new_trust.desc,
'secret': new_trust.secret,
}
out = json.dumps(pair)
self.response.write(out)
self.response.headers["Content-Type"] = "application/json"
self.response.set_status(201, 'Created')
# Handling requests to /trust/*, e.g. /trust/friend
class relationshipHandler(webapp2.RequestHandler):
def get(self, id, relationship):
if self.request.get('_method') == 'POST':
self.post(id, relationship)
return
self.response.set_status(404, "Not found")
def put(self, id, relationship):
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust', subpath=relationship, add_response=False)
if not myself:
return
if relationship != 'trustee':
self.response.set_status(404, "Not found")
return
# Access is the same as /trust
if not check.checkAuthorisation(path='trust', method='POST'):
self.response.set_status(403)
return
try:
params = json.loads(self.request.body.decode('utf-8', 'ignore'))
if 'trustee_root' in params:
trustee_root = params['trustee_root']
else:
trustee_root = ''
if 'creator' in params:
creator = params['creator']
else:
creator = None
except ValueError:
self.response.set_status(400, 'No json content')
return
if len(trustee_root) > 0:
myself.setProperty('trustee_root', trustee_root)
if creator:
myself.modify(creator=creator)
self.response.set_status(204, 'No content')
def delete(self, id, relationship):
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust',
subpath=relationship,
add_response=False)
if not myself:
return
if relationship != 'trustee':
self.response.set_status(404, "Not found")
return
# Access is the same as /trust
if not check.checkAuthorisation(path='trust', method='DELETE'):
self.response.set_status(403)
return
myself.deleteProperty('trustee_root')
self.response.set_status(204, 'No content')
def post(self, id, relationship):
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust',
subpath=relationship,
add_response=False)
if not myself:
return
if not check.checkAuthorisation(path='trust', subpath='<type>', method='POST'):
self.response.set_status(403)
return
try:
params = json.loads(self.request.body.decode('utf-8', 'ignore'))
if 'baseuri' in params:
baseuri = params['baseuri']
else:
baseuri = ''
if 'id' in params:
peerid = params['id']
else:
peerid = ''
if 'type' in params:
type = params['type']
else:
type = ''
if 'secret' in params:
secret = params['secret']
else:
secret = ''
if 'desc' in params:
desc = params['desc']
else:
desc = ''
if 'verify' in params:
verificationToken = params['verify']
else:
verificationToken = None
except ValueError:
self.response.set_status(400, 'No json content')
return
if len(baseuri) == 0 or len(peerid) == 0 or len(type) == 0:
self.response.set_status(400, 'Missing mandatory attributes')
return
if Config.auto_accept_default_relationship and Config.default_relationship == relationship:
approved = True
else:
approved = False
# Since we received a request for a relationship, assume that peer has approved
new_trust = myself.createVerifiedTrust(baseuri=baseuri, peerid=peerid, approved=approved, secret=secret,
verificationToken=verificationToken, type=type, peer_approved=True, relationship=relationship, desc=desc)
if not new_trust:
self.response.set_status(403, 'Forbidden')
return
self.response.headers.add_header(
"Location", str(Config.root + myself.id + '/trust/' + new_trust.relationship + "/" + new_trust.peerid))
pair = {
'baseuri': new_trust.baseuri,
'id': myself.id,
'peerid': new_trust.peerid,
'relationship': new_trust.relationship,
'approved': new_trust.approved,
'peer_approved': new_trust.peer_approved,
'verified': new_trust.verified,
'type': new_trust.type,
'desc': new_trust.desc,
'secret': new_trust.secret,
}
out = json.dumps(pair)
self.response.write(out)
self.response.headers["Content-Type"] = "application/json"
if approved:
self.response.set_status(201, 'Created')
else:
self.response.set_status(202, 'Accepted')
# Handling requests to specific relationships, e.g. /trust/friend/12f2ae53bd
class trustHandler(webapp2.RequestHandler):
def get(self, id, relationship, peerid):
if self.request.get('_method') == 'PUT':
self.put(id, relationship, peerid)
return
if self.request.get('_method') == 'DELETE':
self.delete(id, relationship, peerid)
return
logging.debug('GET trust headers: ' + str(self.request.headers))
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust', subpath=relationship)
if not myself or check.response["code"] != 200:
return
if not check.checkAuthorisation(path='trust', subpath='<type>/<id>', method='GET', peerid=peerid):
self.response.set_status(403)
return
relationships = myself.getTrustRelationships(
relationship=relationship, peerid=peerid)
if not relationships:
self.response.set_status(404, 'Not found')
return
my_trust = relationships[0]
# If the peer did a GET to verify
if check.trust and check.trust.peerid == peerid and not my_trust.verified:
my_trust.modify(verified=True)
verificationToken = my_trust.verificationToken
else:
verificationToken = ''
pair = {
'baseuri': my_trust.baseuri,
'id': myself.id,
'peerid': my_trust.peerid,
'relationship': my_trust.relationship,
'approved': my_trust.approved,
'peer_approved': my_trust.peer_approved,
'verified': my_trust.verified,
'verificationToken': verificationToken,
'type': my_trust.type,
'desc': my_trust.desc,
'secret': my_trust.secret,
}
out = json.dumps(pair)
self.response.write(out)
self.response.headers["Content-Type"] = "application/json"
if my_trust.approved:
self.response.set_status(200, 'Ok')
else:
self.response.set_status(202, 'Accepted')
def post(self, id, relationship, peerid):
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust', subpath=relationship)
if not myself or check.response["code"] != 200:
return
if not check.checkAuthorisation(path='trust', subpath='<type>/<id>', method='POST', peerid=peerid):
self.response.set_status(403)
return
try:
params = json.loads(self.request.body.decode('utf-8', 'ignore'))
peer_approved = None
if 'approved' in params:
if params['approved'] and params['approved'] == True:
peer_approved = True
except ValueError:
self.response.set_status(400, 'No json content')
return
if myself.modifyTrustAndNotify(relationship=relationship, peerid=peerid, peer_approved=peer_approved):
self.response.set_status(204, 'Ok')
else:
self.response.set_status(405, 'Not modified')
def put(self, id, relationship, peerid):
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust', subpath=relationship)
if not myself or check.response["code"] != 200:
return
if not check.checkAuthorisation(path='trust', subpath='<type>/<id>', method='PUT', peerid=peerid):
self.response.set_status(403)
return
try:
params = json.loads(self.request.body.decode('utf-8', 'ignore'))
if 'baseuri' in params:
baseuri = params['baseuri']
else:
baseuri = ''
if 'desc' in params:
desc = params['desc']
else:
desc = ''
if 'approved' in params:
if params['approved'] == True or params['approved'].lower() == "true":
approved = True
else:
approved = None
except ValueError:
if not self.request.get('_method') or self.request.get('_method') != "PUT":
self.response.set_status(400, 'No json content')
return
if self.request.get('approved') and len(self.request.get('approved')) > 0:
if self.request.get('approved').lower() == "true":
approved = True
else:
approved = None
if self.request.get('baseuri') and len(self.request.get('baseuri')) > 0:
baseuri = self.request.get('baseuri')
else:
baseuri = ''
if self.request.get('desc') and len(self.request.get('desc')) > 0:
desc = self.request.get('desc')
else:
desc = ''
if myself.modifyTrustAndNotify(relationship=relationship, peerid=peerid, baseuri=baseuri, approved=approved, desc=desc):
self.response.set_status(204, 'Ok')
else:
self.response.set_status(405, 'Not modified')
def delete(self, id, relationship, peerid):
(Config, myself, check) = auth.init_actingweb(appreq=self,
id=id, path='trust', subpath=relationship, add_response=False)
if not myself or (check.response["code"] != 200 and check.response["code"] != 401):
auth.add_auth_response(appreq=self, auth_obj=check)
return
# We allow non-approved peers to delete even if we haven't approved the relationship yet
if not check.checkAuthorisation(path='trust', subpath='<type>/<id>', method='DELETE', peerid=peerid, approved=False):
self.response.set_status(403)
return
isPeer = False
if check.trust and check.trust.peerid == peerid:
isPeer = True
else:
# Use of GET param peer=true is a way of forcing no deletion of a peer
# relationship even when requestor is not a peer (primarily for testing purposes)
peerGet = self.request.get('peer').lower()
if peerGet.lower() == "true":
isPeer = True
Config = config.config()
relationships = myself.getTrustRelationships(
relationship=relationship, peerid=peerid)
if not relationships:
self.response.set_status(404, 'Not found')
return
my_trust = relationships[0]
if isPeer:
deleted = myself.deleteReciprocalTrust(peerid=peerid, deletePeer=False)
else:
deleted = myself.deleteReciprocalTrust(peerid=peerid, deletePeer=True)
if not deleted:
self.response.set_status(502, 'Not able to delete relationship with peer.')
return
self.response.set_status(204, 'Ok')
application = webapp2.WSGIApplication([
webapp2.Route(r'/<id>/trust<:/?>', rootHandler, name='rootHandler'),
webapp2.Route(r'/<id>/trust/<relationship><:/?>',
relationshipHandler, name='relationshipHandler'),
webapp2.Route(r'/<id>/trust/<relationship>/<peerid><:/?>', trustHandler, name='trustHandler'),
], debug=True)
|
Python
|
4c930112d86b2b047feaa1e768b211a4fbe3406bda9735402c196b78d21eff7e
|
0x8dcfccbdc86e1edd
|
# Store the version here so:
# 1) we don't load dependencies by storing it in __init__.py
# 2) we can import it in setup.py for the same reason
# 3) we can import it into your module
# https://stackoverflow.com/questions/458550/standard-way-to-embed-version-into-python-package
__version__ = '1.6.1'
release_notes = {
'1.6.1': """
Fixed GUI crash when loading certain RLBot config files with relative paths for agents.
Fixed agent preset loading to allow multiple agents to be saved/loaded correctly if they have the same name. - ima9rd
""",
'1.6.0':"""
Add support for auto starting .NET executables.
""",
'1.5.1': """
Fixed crash with GUI when no default RLBot.cfg file was found.
Updated GUI to launch Rocket League when clicking run if no Rocket League process is found. - ima9rd
""",
'1.5.0': """
Adding a have_internet helper function to help streamline upgrade checks. - ima9rd
""",
'1.4.2': """
Adding support for auto-running java bots during tournaments. To take advantage of this
in your bot, see https://github.com/RLBot/RLBotJavaExample/wiki/Auto-Launching-Java
Plus bug fixes:
- Fixed a bug where auto-run executables would crash when trying to write to stderr.
- Dragging bots to another team in the GUI no longer breaks the config.
""",
'1.3.0': """
Accurate ball prediction for Hoops and Dropshot modes!
- Kipje13, Marvin, NeverCast, et. al.
""",
'1.2.6': """
Fixed a bug where field info was not extracted properly during dropshot mode.
It was reporting 2 goals rather than the expected 140.
""",
'1.2.5': """
***************************************************
* Fix for dodge cancels / half flips! - ccman32 *
***************************************************
Plus:
- Changing the rendering strategy for 3D lines that go past the camera. Formerly it was
"draw it, even though it's crazy sometimes", now it will be "don't draw it".
- Showing the rate that inputs are received for each player index when you press the
[home] key. Toggle back off with the [end] key.
- Fixed a bug where party_member_bot could get influenced by real controller input.
- Creating new presets in the GUI works better now.
- Got rid of the libpng warning seen when using the GUI.
- Giving specific error messages when cfg files are messed up.
""",
'1.2.2': """
- Rearranged the GUI a bit, and made it load and track appearance configs more effectively.
- Fixed bug where RUN button behavior in the GUI would not work after killing bots.
""",
'1.2.0': """
- We now offer a 'RigidBodyTick' thanks to whatisaphone! It's a lower-level representation of
physics data which updates at 120Hz and is not subject to interpolation. You can still make a
great bot without it, but this feature is quite nice for the scientists among us.
See https://github.com/RLBot/RLBotPythonExample/wiki/Rigid-Body-Tick for more details!
- Faster way to access ball prediction data in python. - Skyborg
""",
'1.1.3': """
- Faster way to access ball prediction data in python. - Skyborg
- Java bots will now shut down when the python framework quits. This has been necessary recently
to avoid buggy situations.
- Shutting down the python framework will no longer attempt to kill bots twice in a row.
- Clicking on the "Run" button twice in a row in the GUI will no longer spawn duplicate processes.
""",
'1.1.2': """
Faster way to access ball prediction data in python. - Skyborg
""",
'1.1.1': """
You can now get information about the ball's status in Dropshot mode thanks to hallo_doei!
Read all about it at https://github.com/RLBot/RLBot/wiki/Dropshot
Other changes:
- The loadout config for orange team is now respected again. - ccman32
- Fixed a bug where the GUI would crash with a "KeyError". - hallo_doei
- Avoiding and suppressing some game crashes, and also restoring the
ability to get game tick data during replays and the postgame. - tarehart
- Fixed a bug where bots would dodge when they intended to double jump. -tarehart
""",
'1.0.6': """
The latest Rocket League patch broke dodges for our bots; this update fixes it.
""",
'1.0.5': """
Maximum size for a render message has been decreased again because many people experienced
errors related to memory access. The limit is now only double the original.
""",
'1.0.4': """
- Maximum size for a render message has been increased by a factor of 100. This means you can
draw a lot of lines at once without getting errors.
- Boost amount for cars will now round up to the nearest integer, so 0.3% boost will now appear
as 1 instead of 0.
- Fixed a crash that would commonly happen after a match ends. As a side effect, you can no longer
see up-to-date player data during instant replays.
""",
'1.0.3': """
Time for the big 1.0 release! We actually left "beta" a long time ago so this isn't as big
a milestone as the number implies, but we DO have two great new features!
1. Setting game state. You can manipulate the position, velocity, etc of the ball and the cars!
This can be a great help during bot development, and you can also get creative with it. Visit
the wiki for details and documentation - https://github.com/RLBot/RLBot/wiki/Manipulating-Game-State
Code written by hallo_doei, ccman32, and tarehart
2. Ball prediction. We now provide a list of future ball positions based on chip's excellent
physics modeling. Take advantage of this to do next-level wall reads, catches, and dribbles! You can
read about the math involved here: https://samuelpmish.github.io/notes/RocketLeague/ball_bouncing/
Note: currently the wall bounces are only accurate on the standard arena, not hoops or dropshot.
Documentation and examples can be found here: https://github.com/RLBot/RLBot/wiki/Ball-Path-Prediction
Code written by chip and tarehart
Bonus:
- You can now play on Salty Shores thanks to hallo_doei
- Bug fix for people with spaces in their file path by Zaptive
- Subprocess agent for future Rust support by whatisaphone
""",
'0.0.32': """
More comprehensive fix for Rocket League patch 1.50. Compared to previous version:
- Dropshot tile data is fixed
- Boost pad data is fixed
- Loadout configuration is fixed
Thanks to ccman32 and dtracers for delivering this fix quickly!
""",
'0.0.31': """
Rapid response to Rocket League patch 1.50 with the following known issues:
- Dropshot tile data is missing
- Boost pad data is missing
- Loadout configuration is broken
Thanks to ccman32 and dtracers for delivering this short-term fix quickly.
We will follow this up with a proper fix as soon as possible. You may also choose to stay on
Rocket League 1.49 and RLBot 0.0.30, ask for instructions on discord.
""",
'0.0.30': """
- New core dll that is less likely to break when Rocket League is patched - ccman32 and hallo-doei
- Fixed bug resulting in incorrect quickchat - dtracers
- Added more built-in colors to the python rendering manager - Eastvillage
- Fix for items with a ':' not showing up in the GUI - hallo-doei
- Fix for GUI not saving correct path - hallo-doei
- Fix for GUI crash when saving preset then canceling - hallo-doei
- Adding file checking before injection (Resolves #167) - Redox
- Fixed typo in rlbot.cfg - Redox
- Fancy release notes - tarehart and Skyborg
"""
}
release_banner = """
______ _ ______ _
10100 | ___ \ | | ___ \ | | 00101
110011 | |_/ / | | |_/ / ___ | |_ 110011
00110110 | /| | | ___ \/ _ \| __| 01101100
010010 | |\ \| |____| |_/ / (_) | |_ 010010
10010 \_| \_\_____/\____/ \___/ \__| 01001
"""
def get_current_release_notes():
if __version__ in release_notes:
return release_notes[__version__]
return ''
def get_help_text():
return "Trouble? Ask on Discord at https://discord.gg/5cNbXgG " \
"or report an issue at https://github.com/RLBot/RLBot/issues"
def print_current_release_notes():
print(release_banner)
print("Version {}".format(__version__))
print(get_current_release_notes())
print(get_help_text())
print("")
|
Python
|
bf713112ef7ccdaf6306ef069d66d5e58c72df235c5d43c8aab7ae37dbb344ea
|
0xabbc659bf83f571d
|
from floodsystem.stationdata import build_station_list
from floodsystem.flood import stations_highest_rel_level
def run():
stations = build_station_list()
warning_stations = stations_highest_rel_level(stations,10)
for entry in warning_stations:
print(entry[0].name,entry[1])
if __name__ == "__main__":
print("*** Task 2C: CUED Part IA Flood Warning System ***")
run()
|
Python
|
2e151f4405b5a9ead5c65dd37f5c915a6d59c5eedb232b578805c6b0482f99d3
|
0x449cdcdd4b1b9a95
|
<filename>src/biotite/copyable.py
# This source code is part of the Biotite package and is distributed
# under the 3-Clause BSD License. Please see 'LICENSE.rst' for further
# information.
__name__ = "biotite"
__author__ = "<NAME>"
__all__ = ["Copyable"]
import abc
class Copyable(metaclass=abc.ABCMeta):
"""
Base class for all objects that should be copyable.
The public method `copy()` first creates a fresh instance of the
class of the instance to be copied, via the `__copy_create__()`
method. All variables that could not be set via the constructor
are then copied via `__copy_fill__()`, starting with the method in
the uppermost base class and ending with the class of the instance
to be copied.
This approach solves the problem of encapsulated variables in
superclasses.
"""
def copy(self):
"""
Create a deep copy of this object.
Returns
-------
copy
A copy of this object.
"""
clone = self.__copy_create__()
self.__copy_fill__(clone)
return clone
def __copy_create__(self):
"""
Instantiate a new object of this class.
Only the constructor should be called in this method.
All further attributes that need to be copied are handled
in `__copy_fill__()`.
Do not call the `super()` method here.
This method must be overridden if the constructor takes
parameters.
Returns
-------
copy
A freshly instantiated copy of *self*.
"""
return type(self)()
def __copy_fill__(self, clone):
"""
Copy all necessary attributes to the new object.
Always call the `super()` method as first statement.
Parameters
----------
clone
The freshly instantiated copy of *self*.
"""
pass
|
Python
|
341a51b18b82c90d7b3e30fd3336eb1d4b9c8a53b3b82ea111b029db942015e1
|
0x1373599914f8ea9
|
import logging
import time
from datetime import timedelta
from typing import List
from homeassistant.components.binary_sensor import (
BinarySensorEntity,
DEVICE_CLASS_MOTION
)
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import ATTR_ATTRIBUTION
from homeassistant.core import HomeAssistant
from wyzeapy.base_client import Device, AccessTokenError
from wyzeapy.client import Client
from wyzeapy.types import PropertyIDs
from .const import DOMAIN
_LOGGER = logging.getLogger(__name__)
ATTRIBUTION = "Data provided by Wyze"
SCAN_INTERVAL = timedelta(seconds=10)
async def async_setup_entry(hass: HomeAssistant, config_entry: ConfigEntry, async_add_entities):
_LOGGER.debug("""Creating new WyzeApi binary sensor component""")
client: Client = hass.data[DOMAIN][config_entry.entry_id]
def get_cameras() -> List[Device]:
try:
return client.get_cameras()
except AccessTokenError as e:
_LOGGER.warning(e)
client.reauthenticate()
return client.get_cameras()
cameras = [WyzeCameraMotion(client, camera) for camera in await hass.async_add_executor_job(get_cameras)]
async_add_entities(cameras, True)
class WyzeCameraMotion(BinarySensorEntity):
_on: bool
_available: bool
def __init__(self, wyzeapi_client: Client, device: Device):
self._client = wyzeapi_client
self._device = device
self._last_event = int(str(int(time.time())) + "000")
@property
def device_info(self):
return {
"identifiers": {
(DOMAIN, self._device.mac)
},
"name": self.name,
"manufacturer": "WyzeLabs",
"model": self._device.product_model
}
@property
def available(self) -> bool:
return self._available
@property
def name(self):
"""Return the display name of this switch."""
return self._device.nickname
@property
def is_on(self):
"""Return true if switch is on."""
return self._on
@property
def unique_id(self):
return "{}-motion".format(self._device.mac)
@property
def device_state_attributes(self):
"""Return device attributes of the entity."""
return {
ATTR_ATTRIBUTION: ATTRIBUTION,
"state": self.is_on,
"available": self.available,
"device model": self._device.product_model,
"mac": self.unique_id
}
@property
def device_class(self):
return DEVICE_CLASS_MOTION
def update(self):
try:
device_info = self._client.get_info(self._device)
except AccessTokenError:
self._client.reauthenticate()
device_info = self._client.get_info(self._device)
for property_id, value in device_info:
if property_id == PropertyIDs.AVAILABLE:
self._available = True if value == "1" else False
latest_event = self._client.get_latest_event(self._device)
if latest_event is not None:
if latest_event.event_ts > self._last_event:
self._on = True
self._last_event = latest_event.event_ts
else:
self._on = False
self._last_event = latest_event.event_ts
else:
self._on = False
|
Python
|
d0013ca39b15e9bfe2441a29d2eee564633127a70aabe41c3fa49857f13af58c
|
0x81620e9a22563077
|
<filename>src/Components/missions/GEMS/mcd43c.py
"""
Reads climate modeling grid 0.05 degree MCD43 BRDF files.
"""
import os
import sys
import numpy as np
from numpy import loadtxt, array, tile, where, concatenate, flipud
from numpy import ones
from datetime import date, datetime, timedelta
from glob import glob
from pyhdf.SD import SD, HDF4Error
MISSING = 32.767
SDS = dict (
LAND = ('BRDF_Albedo_Parameter1_Band1','BRDF_Albedo_Parameter1_Band2',
'BRDF_Albedo_Parameter1_Band3','BRDF_Albedo_Parameter1_Band4',
'BRDF_Albedo_Parameter1_Band5','BRDF_Albedo_Parameter1_Band6',
'BRDF_Albedo_Parameter1_Band7',
'BRDF_Albedo_Parameter2_Band1','BRDF_Albedo_Parameter2_Band2',
'BRDF_Albedo_Parameter2_Band3','BRDF_Albedo_Parameter2_Band4',
'BRDF_Albedo_Parameter2_Band5','BRDF_Albedo_Parameter2_Band6',
'BRDF_Albedo_Parameter2_Band7',
'BRDF_Albedo_Parameter3_Band1','BRDF_Albedo_Parameter3_Band2',
'BRDF_Albedo_Parameter3_Band3','BRDF_Albedo_Parameter3_Band4',
'BRDF_Albedo_Parameter3_Band5','BRDF_Albedo_Parameter3_Band6',
'BRDF_Albedo_Parameter3_Band7'),
QUAL = ('BRDF_Albedo_Quality',
'Snow_BRDF_Albedo',
'BRDF_Albedo_Ancillary', )
)
ALIAS = dict ( BRDF_Albedo_Parameter1_Band1 = 'KISO_b1_645',
BRDF_Albedo_Parameter1_Band2 = 'KISO_b2_856',
BRDF_Albedo_Parameter1_Band3 = 'KISO_b3_465',
BRDF_Albedo_Parameter1_Band4 = 'KISO_b4_553',
BRDF_Albedo_Parameter1_Band5 = 'KISO_b5_1241',
BRDF_Albedo_Parameter1_Band6 = 'KISO_b6_1629',
BRDF_Albedo_Parameter1_Band7 = 'KISO_b7_2114',
BRDF_Albedo_Parameter2_Band1 = 'KVOL_b1_645',
BRDF_Albedo_Parameter2_Band2 = 'KVOL_b2_856',
BRDF_Albedo_Parameter2_Band3 = 'KVOL_b3_465',
BRDF_Albedo_Parameter2_Band4 = 'KVOL_b4_553',
BRDF_Albedo_Parameter2_Band5 = 'KVOL_b5_1241',
BRDF_Albedo_Parameter2_Band6 = 'KVOL_b6_1629',
BRDF_Albedo_Parameter2_Band7 = 'KVOL_b7_2114',
BRDF_Albedo_Parameter3_Band1 = 'KGEO_b1_645',
BRDF_Albedo_Parameter3_Band2 = 'KGEO_b2_856',
BRDF_Albedo_Parameter3_Band3 = 'KGEO_b3_465',
BRDF_Albedo_Parameter3_Band4 = 'KGEO_b4_553',
BRDF_Albedo_Parameter3_Band5 = 'KGEO_b5_1241',
BRDF_Albedo_Parameter3_Band6 = 'KGEO_b6_1629',
BRDF_Albedo_Parameter3_Band7 = 'KGEO_b7_2114',
)
#...........................................................................
class McD43C(object):
"""
This class implements the MODIS LAND BRDF 16-day Level 3 products, MCD43C1 (0.05 degree horz res),
"""
def __init__ (self,Path,lon,lat,Verb=1):
"""
Reads files for one day of Level 3 MCD43C1
present on a given *Path* and returns an object with
all 3 kernels coeff. On input,
Required parameters:
Path -- for now a single file. Eventually implement a single directory, or a list
of files and directories.
"""
if type(lon) is list:
lon = array(lon)
lat = array(lat)
# List of HDF files for a given date
#-----------------------------------
self.verb = Verb
self.SDS = SDS['LAND']
#self.Tfiles = glob(Path + '*.hdf')
if type(Path) is str:
self.Files = [Path]
else:
self.Files = Path
# From a list of lat and lon, return the
# dx, dy on the grid
# -------------------------------------
self.nobs = len(lon)
self._findNearest(Path,lon,lat)
# Read BRDF kernel in a MODIS tile
# ---------------------------------
self.read_BRDF()
# Result
#---
def _findNearest(self,path,lon,lat):
"""Given a list of lat, lon, return numbers to find the
position of the nearest neighbor on the grid (dx,dy)
"""
dLon = 0.05
dLat = 0.05
Lon0 = -180 - dLon
Lat0 = -90 + dLat
self.dx = (0.5+(lon-Lon0)/dLon).astype(int)
self.dy = (0.5+(lat-Lat0)/dLat).astype(int)
if self.verb:
print('dx', 'dy', self.dx, self.dy)
#---
def read_BRDF(self):
"""Reads MCD43C1 file with Level 3 BRDF kernels for each MODIS band."""
# Create empty lists for SDS to be read from file
# -----------------------------------------------
for name in self.SDS:
self.__dict__[name] = []
BRDF = MISSING * ones((len(self.SDS),self.nobs))
for fn in self.Files:
try:
if self.verb:
print "[] Working on "+fn
hfile = SD(fn)
except HDF4Error:
if self.verb > 2:
print "- %s: not recognized as an HDF file"%filename
return
# Read select variables (reshape to allow concatenation later)
# ------------------------------------------------------------
for sds in self.SDS:
if self.verb:
print('sds', self.SDS.index(sds))
v = hfile.select(sds).get()
a = hfile.select(sds).attributes()
if a['scale_factor']!=1.0 or a['add_offset']!=0.0:
v = a['scale_factor'] * v + a['add_offset']
if self.verb:
print(array(self.dx), BRDF.shape, BRDF[self.SDS.index(sds), :], v.shape)
v = flipud(v)
BRDF[self.SDS.index(sds),:] = v[array(self.dy), array(self.dx)]
for sds in self.SDS:
self.__dict__[sds] = BRDF[self.SDS.index(sds),:]
if sds in ALIAS.keys():
self.__dict__[ALIAS[sds]] = self.__dict__[sds]
#---
#............................................................................
if __name__ == "__main__":
path = '/nobackup/3/pcastell/MODIS/MCD43C1/MCD43C1.A2005361.005.2008094071946.hdf'
lon = [-2.,-120.,15.2,17.2,170.1]
lat = [88.,40.,-20.,-20.,-55.5]
lon = np.arange(-180,180,1)
lat = np.arange(-90,90,1)
lon,lat = np.meshgrid(lon,lat)
ex = McD43C(path, lon.flatten(), lat.flatten())
|
Python
|
1eb7a1d7a8de2441577087996188a41b92e2a247f1d869409f741cb2dd6eba5d
|
0x6e69baeca0679c58
|
<gh_stars>100-1000
"""Lowest-common-denominator implementations of platform functionality."""
from __future__ import absolute_import, division, print_function, with_statement
import errno
import socket
from tornado.platform import interface
class Waker(interface.Waker):
"""Create an OS independent asynchronous pipe.
For use on platforms that don't have os.pipe() (or where pipes cannot
be passed to select()), but do have sockets. This includes Windows
and Jython.
"""
def __init__(self):
# Based on Zope async.py: http://svn.zope.org/zc.ngi/trunk/src/zc/ngi/async.py
self.writer = socket.socket()
# Disable buffering -- pulling the trigger sends 1 byte,
# and we want that sent immediately, to wake up ASAP.
self.writer.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
count = 0
while 1:
count += 1
# Bind to a local port; for efficiency, let the OS pick
# a free port for us.
# Unfortunately, stress tests showed that we may not
# be able to connect to that port ("Address already in
# use") despite that the OS picked it. This appears
# to be a race bug in the Windows socket implementation.
# So we loop until a connect() succeeds (almost always
# on the first try). See the long thread at
# http://mail.zope.org/pipermail/zope/2005-July/160433.html
# for hideous details.
a = socket.socket()
a.bind(("127.0.0.1", 0))
a.listen(1)
connect_address = a.getsockname() # assigned (host, port) pair
try:
self.writer.connect(connect_address)
break # success
except socket.error as detail:
if (not hasattr(errno, 'WSAEADDRINUSE') or
detail[0] != errno.WSAEADDRINUSE):
# "Address already in use" is the only error
# I've seen on two WinXP Pro SP2 boxes, under
# Pythons 2.3.5 and 2.4.1.
raise
# (10048, 'Address already in use')
# assert count <= 2 # never triggered in Tim's tests
if count >= 10: # I've never seen it go above 2
a.close()
self.writer.close()
raise socket.error("Cannot bind trigger!")
# Close `a` and try again. Note: I originally put a short
# sleep() here, but it didn't appear to help or hurt.
a.close()
self.reader, addr = a.accept()
self.reader.setblocking(0)
self.writer.setblocking(0)
a.close()
self.reader_fd = self.reader.fileno()
def fileno(self):
return self.reader.fileno()
def write_fileno(self):
return self.writer.fileno()
def wake(self):
try:
self.writer.send(b"x")
except (IOError, socket.error):
pass
def consume(self):
try:
while True:
result = self.reader.recv(1024)
if not result:
break
except (IOError, socket.error):
pass
def close(self):
self.reader.close()
self.writer.close()
|
Python
|
6dad4fdb6d5bf0e726de49f158f0165fa3bc39470fba97ee991cd8eb04eaf9fd
|
0xc32fedb84ba444ac
|
"""
This script will modulate the blinky lights using the following algorithm:
1) uses user-provided location to obtain row of pixel data from bathy image
2) samples a 'number of LEDs' number of pixels from that row
3) shifts the sampled row data to center it at the location specified by user
4) displays resulting pixels on Blinky Tape
5) shifts next row by a given latitude, also specified by user
6) sleeps for user-specified period of time
Uses the following arguments:
-l/--location: tuple
Location of the user in tuple(lat, lon). This represents the center of the LED strip. Defaults to (0, 0)
-u/--update-interval: int
Update interval of the script, in minutes. Defaults to 10.
-p/--port: str
Serial port of the BlinkyLight (e.g., 'ttyAMA0', 'COM3'). Defaults to 'COM5'.
-d/--delta_latitude: int
Vertical change in latitude every update interval. May be 0, but this will result in never-changing LEDs.
-i/--image: str
Name of the PNG image that contains the color-coded bathymetric data.
The file current named mapserv.png was obtained using the following API:
https://www.gebco.net/data_and_products/gebco_web_services/web_map_service/mapserv?request=getmap&service=wms&BBOX=-90,-180,90,180&format=image/png&height=600&width=1200&crs=EPSG:4326&layers=GEBCO_LATEST_SUB_ICE_TOPO&version=1.3.0
In lieu of providing command line arguments, you may alternatively edit the defaults in bath_config.json.
NOTE: runs via:
runfile('/BlinkyTape_Python/bathymetry_blink/bathymetry_blink.py', wdir='/BlinkyTape_Python/')
(C) 2021 <NAME> (https://joeycodes.dev)
MIT Licensed
"""
import optparse
import json
from blinkytape import BlinkyTape
from time import sleep
from PIL import Image
import numpy as np
import sys
MAX_ERRORS = 3
num_errors = 0
# Obtain default parameters
with open("./bathymetry_blink/bathy_config.json") as f:
config = json.load(f)
# Default Blinky Tape port on Raspberry Pi is /dev/ttyACM0
parser = optparse.OptionParser()
parser.add_option("-p", "--port", dest="portname",
help="serial port (ex: /dev/ttyACM0)", default=config["port"])
parser.add_option("-l", "--location", dest="location",
help="Location of the center of the LED strip (ex: 70,-110)", default=config["location"])
parser.add_option("-u", "--update-rate", dest="update_rate",
help="How often to update elevation profile (mins) (ex: 5)", default=config["update_rate"])
parser.add_option("-d", "--delta-latitude", dest="delta_latitude",
help="Change in latitude during update (ex: 5)", default=config["delta_latitude"])
parser.add_option("-n", "--num-leds", dest="num_leds",
help="Number of LEDs in strip (ex: 60)", default=config["num_leds"])
parser.add_option("-i", "--image", dest="image_name",
help="Name of the map/bathymetry image (ex: ./mapserv.png)", default=config["image"])
(options, args) = parser.parse_args()
if args:
print("Unknown parameters: " + args)
# grab the values provided by user (or defaults)
port = options.portname
loc = options.location
rate = options.update_rate
delta = options.delta_latitude
n_leds = options.num_leds
i_name = options.image_name
# Some visual indication that it works, for headless setups (green tape)
bt = BlinkyTape(port, n_leds)
bt.displayColor(0, 100, 0)
bt.show()
sleep(2)
while True:
try:
# first, load image
im = Image.open(i_name) # Can be many different formats.
cols, rows = im.size
a = np.asarray(im) # of shape (rows, cols, channels)
# map loc latitude to 0-based index
latitude_index = min(rows - 1, max(0, (int)(((loc[0] - -90) / (90 - -90)) * (rows - 0) + 0)))
longitude_index = min(cols - 1, max(0, (int)(((loc[1] - -180) / (180 - -180)) * (cols - 0) + 0)))
# update the location of the next row of elevation data to take
loc[0] += delta
loc[0] = ((loc[0] + 90) % 180) - 90 # wraps to next pole if overflow
print("Lat index: " + str(latitude_index))
print("Lon index: " + str(longitude_index))
print("Next latitude: " + str(loc[0]))
# grab the applicable pixel indices
indices = [(int)(x*(cols/n_leds)) for x in range(n_leds)]
# sample that row of pixel data
output_pixels = np.take(a[latitude_index], indices, axis=0)
# rotate the row to center around the specified longitude
output_pixels = np.roll(output_pixels, longitude_index, axis=0)
# send all pixel data to bt
for pixel in output_pixels:
print("Sending r: {}, g: {}, b: {}".format(*pixel))
bt.sendPixel(*pixel)
# finally, show the image
bt.show()
# delete variables for memory management
del a
del im
# Tape resets to stored pattern after a few seconds of inactivity
sleep(rate * 60) # Wait specified number of minutes
# sleep(10) # Wait specified number of minutes
except KeyboardInterrupt:
print("Keyboard interrupt, ending program.")
sys.exit()
except RuntimeError as e:
print("Encountered runtime error: " + e.args[0])
# flush any incomplete data
bt.show()
num_errors += 1
if num_errors > MAX_ERRORS:
sys.exit("Error count exceeds that allowed.")
|
Python
|
88d282f085d39d538dad12154d3654aa89d69ee2338bbdb712a04635a156a87a
|
0xa32df8fb991778be
|
<filename>service/transforms/export_submissions.py
""" Export Submissions Transform module """
#pylint: disable=too-few-public-methods
import pandas as pd
from .transform import TransformBase
from ..resources.field_configs import FieldConfigs
from ..resources.field_maps import FieldMaps
class ExportSubmissionsTransform(TransformBase):
""" Transform for Export Submissions """
def transform(self, data, sep):
"""
transform submissions from export
"""
output = list(map(self.get_data, data))
output = list(map(self.pretty_format, output))
output = [i for i in output if i is not None]
output = self.normalize(output)
output = self.to_csv(output, sep)
return output
# pylint: disable=R0201
def get_data(self, submission):
"""
Get data from submission object
"""
# skip permit type = existingPermitApplication submissions
#pylint: disable=too-many-nested-blocks
if submission['data']['permitType'] and submission['data']['permitType'] != 'existingPermitApplication':
output = {}
data = submission['data']
output['id'] = submission['_id']
output['created'] = submission['created']
#pylint: disable=too-many-nested-blocks
for key in data:
# flatten list values
if isinstance(data[key], list):
if len(data[key]) > 0:
if isinstance(data[key][0], (int, str)):
output[key] = ', '.join(map(str, data[key]))
else:
file_names = []
for index, val in enumerate(data[key]):
# if storage, concat filename
if 'storage' in val and 'originalName' in val:
file_names.append(val['originalName'])
else:
output[key+str(index+1)] = val
if len(file_names) > 0:
output[key] = ', '.join(file_names)
# flatten multi select values
elif isinstance(data[key], dict):
# building use code needs manual process
if FieldConfigs.is_building_use(key):
output[key] = self.convert_building_use(key, data[key], data)
# flatten nested address fields
elif FieldConfigs.is_nested_address_field(key):
output = self.convert_address_fields(key, data[key], output)
else:
multi_selects = []
for multi_key, multi_value in data[key].items():
if multi_value:
multi_selects.append(multi_key)
output[key] = ', '.join(multi_selects)
else:
output[key] = data[key]
return output
def normalize(self, data):
"""
Normalize data into a flat structure into DataFrame
"""
dataframe = pd.json_normalize(data)
# update column names
dataframe.rename(columns=self.pretty_string, inplace=True)
return dataframe
def to_csv(self, dataframe, sep=','):
"""
Return CSV from DataFrame
"""
return dataframe.to_csv(index=False, sep=sep, line_terminator='\r\n')
def pretty_format(self, data):
""" Pretty format data fields """
output = {}
if data:
data = self.set_pts_fields(data)
for key in data:
if self.datetime_valid(data[key]):
output[key] = self.pretty_time(data[key])
else:
field_key = FieldConfigs.get_field_key(key, 'map')
phone_appnum_key = FieldConfigs.get_field_key(key, 'pretty')
if field_key is not None:
output[key] = FieldMaps.map_key_value(field_key, data[key])
# manually add Fire Rating and proposed Fire Rating
if field_key == 'construction_type' and data[key] != '':
output = self.add_fire_rating(key, data[key], output)
# format phone numbers and building application number
elif phone_appnum_key is not None:
if phone_appnum_key == 'phone_fields':
output[key] = self.pretty_phonenumber(data[key])
# cleanse characters that break the csv
elif isinstance(data[key], (str, bytes)):
output[key] = data[key].replace('\n', '\t').replace('|', '')
# relabel field, if necessary
relabel_field = FieldConfigs.get_relabel_fields(key)
if relabel_field:
output[relabel_field] = output.pop(key)
output = self.reorder_fields(output)
return output
|
Python
|
91c43d6a87fb88abf29db5c27b412966c574541e630ad3ae310a4f237c61e654
|
0xd66990b90b686675
|
from rest_framework import serializers
from core import models
class AssetSerializer(serializers.ModelSerializer):
class Meta:
model = models.Asset
fields = '__all__'
|
Python
|
5a439d05e6474ea64663a4cf4a763a625b532342287ce325df0fa21207df6583
|
0x63919591fb7f1935
|
<filename>Pzzzzz/plugins/wm.py
from nonebot import CommandSession, on_command
from langdetect import detect, detect_langs
from aiohttp import ClientSession
from nonebot import get_bot
from nonebot.argparse import ArgumentParser
import time
import hmac
import random, sys
import hashlib
import binascii
import urllib
bot = get_bot()
# Baidu general translation API; does not include dictionary, TTS speech synthesis, or other resources. For such needs, please contact <EMAIL>
# coding=utf-8
import hashlib
import urllib
import random
@on_command("wm", aliases={"翻译", "translate"}, only_to_me=False)
async def wm(session: CommandSession):
session.get("token", prompt="请输入你想翻译的句子!")
myurl = "/api/trans/vip/translate"
q = session.state["token"]
fromLang = session.state["fr"] # source language
toLang = session.state["to"] # target language
salt = random.randint(32768, 65536)
sign = bot.config.BAIDUAPI + q + str(salt) + bot.config.BAIDUKey
sign = hashlib.md5(sign.encode()).hexdigest()
myurl = (
myurl
+ "?appid="
+ bot.config.BAIDUAPI
+ "&q="
+ urllib.parse.quote(q)
+ "&from="
+ fromLang
+ "&to="
+ toLang
+ "&salt="
+ str(salt)
+ "&sign="
+ sign
)
async with ClientSession() as sess:
async with sess.get("https://fanyi-api.baidu.com" + myurl) as resp:
if resp.status != 200:
pass
ShitAns = await resp.json()
try:
ans = [i["dst"] for i in ShitAns["trans_result"]]
ans = "\n".join(ans)
except:
session.finish("翻译错误,原因是:" + ShitAns["error_code"])
session.finish("翻译结果为:\n" + ans)
@wm.args_parser
async def _(session: CommandSession):
arg = session.current_arg_text.strip()
if session.is_first_run:
parser = ArgumentParser(session=session)
parser.add_argument("--fr", "-f", type=str, default="no")
parser.add_argument("--to", "-t", type=str, default="no")
parser.add_argument("token", type=str, default="", nargs="+")
argv = parser.parse_args(session.current_arg.split(" "))
arg = " ".join(argv.token)
if arg == "":
session.pause("输入不能为空哦!")
session.state["fr"] = detect(arg) if argv.fr == "no" else argv.fr
if session.state["fr"][:2] == "zh":
session.state["fr"] = "zh"
if argv.to == "no":
if session.state["fr"] == "zh":
session.state["to"] = "en"
else:
session.state["to"] = "zh"
else:
session.state["to"] = argv.to
if argv.fr == "no":
session.state["fr"] = "auto"
session.state["token"] = arg
|
Python
|
6e37e4bbd35bb24bf0254a76ed7c85a364f34a1f685a0ffcac3b380b919b24c6
|
0x4f2dc2fd077c2085
|
import matplotlib.pyplot as plt
import pandas as pd
def group_by_category(df):
grouped = df.groupby(['CATEGORY']).size().to_frame('Crimes')
labels = ['Trespassing', 'Vehicle theft', 'General Theft',
'Damage to Property', 'Robbery', 'Homicide']
p = grouped.plot.pie(y='Crimes', labels=labels, autopct='%1.1f%%')
p.set_title('Crimes Percentage Grouped By Category')
p.get_legend().remove()
plt.savefig('../charts/category.png')
def group_by_time_of_day(df):
grouped = df.groupby(['TIME_OF_DAY']).size().to_frame('Crimes')
p = grouped.plot.pie(y='Crimes', labels=['Day', 'Evening', 'Night'], autopct='%1.1f%%')
p.set_title('Crimes Percentage Grouped By Time of Day')
p.get_legend().remove()
plt.savefig('../charts/time_of_day.png')
def group_by_day_of_the_week(df):
grouped = df.groupby(['DAY_OF_THE_WEEK']).size().to_frame('Crimes')
labels = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
p = grouped.plot.pie(y='Crimes', labels=labels, autopct='%1.1f%%')
p.set_title('Crimes Percentage Grouped By Day of The Week')
p.get_legend().remove()
plt.savefig('../charts/day_of_the_week.png')
def group_by_month(df):
grouped = df.groupby(['MONTH']).size().to_frame('Size')
grouped['Percentage'] = 100 * grouped['Size'] / len(df)
grouped = grouped.drop(columns='Size')
p = grouped.plot.bar()
p.set_title('Crimes Percentage Grouped By Month')
p.set_ylabel('Percentage of Crimes')
p.set_xlabel('Month')
p.get_legend().remove()
plt.savefig('../charts/month.png')
def group_by_year(df):
grouped = df.groupby(['YEAR']).size().to_frame('Crimes')
p = grouped.plot.pie(y='Crimes', autopct='%1.1f%%')
p.set_title('Crimes Percentage Grouped By Year')
p.get_legend().remove()
plt.savefig('../charts/year.png')
def group_by_territory(df):
grouped = df.groupby(['PDQ']).size().to_frame('Size')
grouped['Percentage'] = 100 * grouped['Size'] / len(df)
grouped = grouped.drop(columns='Size')
grouped.index = grouped.index.astype(int)
p = grouped.plot.bar()
p.set_title('Crimes Percentage Grouped By Territory')
p.set_ylabel('Percentage of Crimes')
p.set_xlabel('Territory Number')
p.get_legend().remove()
plt.savefig('../charts/territory.png')
if __name__ == '__main__':
df = pd.read_csv('../data/crimes_dataset_processed_incomplete.csv')
group_by_territory(df)
group_by_year(df)
group_by_month(df)
group_by_time_of_day(df)
group_by_day_of_the_week(df)
group_by_category(df)
|
Python
|
caea9dee308587e9d9cc634fa5d85d3dc860b40ed56d2df7c38496f8640aa0d7
|
0xf3c3f9ed5f23e829
|
Dataset Card for Indro-Sovereign Gold Dataset (V39)
This dataset is a highly refined, industrially filtered collection of web-crawled text from the Common Crawl (CC-MAIN-2025-05), specifically curated for the development of the Indro AI Base Model. It is the result of the "Zero Money Startup" challenge, focusing on high-quality Hindi and English data.
Dataset Details
Dataset Description
The Indro-Sovereign Gold Dataset is designed to solve the "repetition loop" and "junk data" problems in large language model (LLM) training. Using the V39 Iron Guard Refinery, every document is subjected to rigorous heuristic and algorithmic checks to ensure "Gold" status.
- Curated by: Abhinav Anand (Indro Studio)
- Funded by: Community Driven (Zero Money Startup Challenge)
- Shared by: Abhinav Anand
- Language(s) (NLP): Hindi (Primary), English (Secondary)
- License: MIT
Dataset Sources
- Repository: abhinav337463/indro-web-data
- Project Goal: Base Model Training from scratch.
Uses
Direct Use
- Pre-training of Large Language Models (LLMs).
- Fine-tuning for Hindi-English (Hinglish) understanding.
- Research in web-scale data cleaning and deduplication.
Out-of-Scope Use
- Not intended for use in production without secondary safety alignment (RLHF).
- Not suitable for tasks requiring real-time updated information beyond the crawl date.
Dataset Structure
The data is delivered in compressed .jsonl.gz shards. Each entry contains:
- text: The cleaned, high-quality extracted text.
- meta: Metadata including language (lang), token count (tokens), and Shannon entropy (ent).
- ex: 128-bit unique exact hash for deduplication.
- lsh: 128-bit SimHash for near-duplicate detection.
- host: The source domain of the document.
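A minimal sketch for streaming one shard with only the Python standard library is shown below; the local filename shard.jsonl.gz is a hypothetical placeholder for any downloaded shard, and the field accesses follow the list above (using .get() so a missing field does not raise).
import gzip
import json

# Hypothetical local path to a single downloaded shard of this dataset.
SHARD_PATH = "shard.jsonl.gz"

with gzip.open(SHARD_PATH, "rt", encoding="utf-8") as fh:
    for line in fh:
        row = json.loads(line)
        meta = row.get("meta", {})
        # Fields as described above: text, meta (lang/tokens/ent), ex, lsh, host.
        print(row.get("host"), meta.get("lang"), meta.get("tokens"), meta.get("ent"))
        print(row.get("text", "")[:200])
        break  # peek at the first record only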
Dataset Creation
Curation Rationale
To build a truly Sovereign AI, we need data that reflects Indian linguistic nuances without the "noise" of global spam. This dataset was created to provide a cleaner alternative to raw web-scrapes.
Source Data
- Source: Common Crawl (WET files).
- Collection: Distributed mining via Indro-Titan V39 Workers.
Data Collection and Processing
We utilize a multi-stage Iron Guard pipeline:
- Language Filtering: FastText LID (Score > 0.97).
- Anti-Loop: Word frequency analysis to prevent "the-the-the" repetition loops.
- Entropy Guard: Documents must fall within $6.5 < H < 9.5$ to ensure information density.
- Deduplication: Bloom Filters and 128-bit SimHash (Hamming Distance $\le 5$).
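To make the Entropy Guard and near-duplicate steps concrete, here is a minimal sketch, assuming token-level Shannon entropy and a toy 64-bit SimHash over whitespace tokens; the real Iron Guard pipeline uses 128-bit signatures plus Bloom filters for exact deduplication, and its tokenization is not documented here, so every function name and detail below is an illustrative assumption rather than the production logic.
import hashlib
import math
from collections import Counter

def token_entropy(text: str) -> float:
    # Shannon entropy of the whitespace-token distribution, in bits.
    # (Assumption: the card does not state whether H is over tokens or characters.)
    tokens = text.split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def simhash64(text: str) -> int:
    # Toy 64-bit SimHash (the dataset itself stores 128-bit signatures).
    weights = [0] * 64
    for token in text.split():
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for bit in range(64):
            weights[bit] += 1 if (h >> bit) & 1 else -1
    return sum(1 << bit for bit in range(64) if weights[bit] > 0)

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def iron_guard_like(text: str, seen: list, h_min=6.5, h_max=9.5, max_dist=5) -> bool:
    # Entropy guard: keep only documents inside the stated band.
    if not (h_min < token_entropy(text) < h_max):
        return False
    # Near-duplicate guard: reject anything within the stated Hamming radius.
    sig = simhash64(text)
    if any(hamming_distance(sig, other) <= max_dist for other in seen):
        return False
    seen.append(sig)
    return True
In a production pipeline the linear scan over seen would be replaced with a banding or LSH index so near-duplicate lookups stay sub-linear.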
Bias, Risks, and Limitations
While the V39 refinery is strict, users should note that web data inherently reflects the biases of its creators.
Recommendations
It is recommended to apply secondary toxicity filters before using this data for consumer-facing AI applications.
Glossary
- Entropy (H): A measure of the randomness or information density in a text document.
- SimHash: A locality-sensitive hashing algorithm used to find similar documents.
- Iron Guard: The proprietary multi-stage filtering logic of Indro Studio.
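For reference, the Shannon entropy mentioned in the glossary above follows the standard definition $H = -\sum_i p_i \log_2 p_i$, where $p_i$ is the relative frequency of symbol $i$ within a document; whether the symbols are characters or tokens is not specified in this card, so the quoted band $6.5 < H < 9.5$ should be read against whichever unit the refinery uses.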
Dataset Card Contact
Abhinav Anand - Indro Studio