zephyr/.github/workflows/devicetree_checks.yml

# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2020 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
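#
# Runs the test suites for the Python devicetree scripts under
# scripts/dts/ whenever those scripts or this workflow change.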
name: Devicetree script tests

on:
  push:
    paths:
      - 'scripts/dts/**'
      - '.github/workflows/devicetree_checks.yml'
  pull_request:
    paths:
      - 'scripts/dts/**'
      - '.github/workflows/devicetree_checks.yml'

jobs:
  devicetree-checks:
    name: Devicetree script tests
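    # Exercise the scripts on Linux, macOS, and Windows against each
    # Python version listed in the matrix below.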
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        python-version: [3.6, 3.7, 3.8]
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - name: checkout
        uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}
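      # pip keeps its cache in a different place on each OS, so there is
      # one cache step per platform, selected with a runner.os check.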
      - name: cache-pip-linux
        if: startsWith(runner.os, 'Linux')
        uses: actions/cache@v1
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ matrix.python-version }}
          restore-keys: |
            ${{ runner.os }}-pip-${{ matrix.python-version }}
      - name: cache-pip-mac
        if: startsWith(runner.os, 'macOS')
        uses: actions/cache@v1
        with:
          path: ~/Library/Caches/pip
          # The trailing '-' is just to get a cache name distinct from
          # the other platforms.
          key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
          restore-keys: |
            ${{ runner.os }}-pip-${{ matrix.python-version }}-
      - name: cache-pip-win
        if: startsWith(runner.os, 'Windows')
        uses: actions/cache@v1
        with:
          path: ~\AppData\Local\pip\Cache
          key: ${{ runner.os }}-pip-${{ matrix.python-version }}
          restore-keys: |
            ${{ runner.os }}-pip-${{ matrix.python-version }}
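      # Install the Python tools needed to run the test suites.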
      - name: install python dependencies
        run: |
          pip3 install wheel
          pip3 install pytest pyyaml tox
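      # tox runs the dtlib/edtlib test suites (now pytest-based); the run
      # can be reproduced locally by invoking 'tox' from
      # scripts/dts/python-devicetree.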
      - name: run tox
        working-directory: scripts/dts/python-devicetree
        run: |
          tox