
embedded-test-stand's Introduction

Embedded Test Stand

Introduction

A test stand for firmware applications. Allows you to control an application under test while it is running on the hardware, and verify that it behaves correctly.

Status

As of this writing, this repository contains two test stands, each testing various peripheral APIs of the HAL library of its respective target:

  • LPC845 Test Stand, covering the LPC8xx HAL
  • STM32L4 Test Stand, covering the STM32L4 HAL

In addition, this repository contains common infrastructure to support these test stands. This existing infrastructure should be able to support test stands for other firmware applications too, but it is still a work in progress and is not as useful as it could be.

Concepts

This section explains some concepts, which should make the structure of this repository easier to understand.

Test target: The subject of the test. The firmware it runs might be part of the system being tested, or it might be purpose-built to support the test. In both cases it communicates with the host system, so that the test suite running there can trigger behavior and check results.

Test assistant: A development board that assists the test suite running on the host in performing the testing. It provides the test suite with additional capabilities that the host system might not have otherwise (e.g. GPIO, or protocols like I2C/SPI).

Test node: The umbrella term that can refer to either the test target or the test assistant.

Test suite: A collection of test cases, which run on the host computer. It communicates with the test nodes, to orchestrate the test and gather information about the test target's behavior.

Test case: A single test that is part of a test suite.
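
To make the relationship between these pieces concrete, here is a minimal host-side test case, loosely modeled on the GPIO example shown later in this document. The exact TestStand API differs between the two test stands, so treat the names as illustrative rather than as the actual interface.

#[test]
fn target_should_see_assistant_pin_change() -> Result {
    // Connect to both test nodes (target and assistant) from the host.
    let mut test_stand = TestStand::new()?;

    // Use the assistant to drive a signal into the target...
    test_stand.assistant.set_pin_low()?;

    // ...then ask the target firmware what it observed.
    assert!(test_stand.target.pin_is_low()?);

    Ok(())
}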

Structure

The crates in this repository are split into two groups: Infrastructure that can be used to build target-specific test stands, and the target-specific test stands.

Test Stand Infrastructure

These crates are independent of any specific test suite. If you want to use this test stand infrastructure for your own project, these are the crates to start with:

  • test-stand-infra/protocol: Building blocks that can be used to build a protocol for communication between the host and the test nodes (an illustrative sketch follows this list).
  • test-stand-infra/firmware-lib: Library for firmware running on the target or assistant. This might be deprecated in the future. See issue #85.
  • host-lib: Library that provides functionality for test suites running on the host.
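
To illustrate what the protocol building blocks are for, here is a hedged sketch of a request/response pair between the host and a test node. It does not reproduce the actual protocol crate's types; the message variants and the use of serde with postcard are assumptions made purely for the example.

use serde::{Deserialize, Serialize};

// Hypothetical messages; the real protocol crate defines its own types.
#[derive(Serialize, Deserialize)]
enum HostToAssistant {
    SetPin { pin: u8, level: bool },
    ReadPin { pin: u8 },
}

#[derive(Serialize, Deserialize)]
enum AssistantToHost {
    PinLevel { pin: u8, level: bool },
}

// Serialize into a caller-provided buffer, as is common in no_std firmware.
fn encode(request: &HostToAssistant, buf: &mut [u8]) -> Result<usize, postcard::Error> {
    Ok(postcard::to_slice(request, buf)?.len())
}

fn decode(buf: &[u8]) -> Result<AssistantToHost, postcard::Error> {
    postcard::from_bytes(buf)
}

A compact, no_std-friendly wire format matters here because the same message types have to be usable from the firmware side as well as from the host.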

LPC845 Test Stand

Supports a test suite that covers some of the peripheral APIs in the LPC8xx HAL library. See its README file for more information.

STM32L4 Test Stand

Supports a test suite that covers some of the peripheral APIs in the STM32L4 HAL library. See its README file for more information.

License

Code in this repository, unless specifically noted otherwise, is available under the terms of the 0BSD License. This essentially means you can do what you want with it, without any restrictions.

See LICENSE.md for the full license text.

Created by Braun Embedded
Initial development sponsored by Georg Fischer Signet


embedded-test-stand's Issues

Deprecate firmware-lib

It was useful to share code between the target and the assistant, but now that I'm adding more test stands that are not related to the LPC845, it doesn't really make sense to have it as part of the generic test stand infrastructure.

Parts of it should be merged into the test assistant directly, as that is cleaned up and becomes a generic tool that is useful for many test stands. Anything that is generally useful could maybe be merged into LPC8xx HAL.

First test run usually fails

After connecting the target/assistant and flashing the firmware, the first run of the test suite usually fails.

Output from an example run:

running 2 tests
test it_should_read_input_level ... FAILED
test it_should_set_pin_level ... FAILED

failures:

---- it_should_read_input_level stdout ----
Error: TargetPinRead(Timeout)
thread 'it_should_read_input_level' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', /home/hanno/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/macros.rs:16:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- it_should_set_pin_level stdout ----
Error: AssistantPinRead(Timeout)
thread 'it_should_set_pin_level' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', /home/hanno/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/macros.rs:16:9


failures:
    it_should_read_input_level
    it_should_set_pin_level

test result: FAILED. 0 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out

Log output of the assistant:

11:57:54.184 Starting assistant.
11:58:06.688 panicked at 'Error receiving from USART0: Usart(Framing)', src/main.rs:564:14

The target shows no sign of problems.

Doing a reset of the target and assistant using the reset button on the boards fixes the problem for me. Subsequent test runs are 100% reliable, as far as I can tell.

I've been aware of this problem for a while, but initially assumed it was a problem with my machine (at the same time, I started seeing weird USB issues, which have since gone away). I've received confirmation that others are having the same problem, so it's definitely not something specific to me. If I recall correctly, I was doing work on the USART API in LPC8xx HAL when I first saw this, so this might be an issue I introduced there.

STM32L4: First USART test after connecting boards often fails

Output from cargo test on the host:

   Compiling stm32l4-test-suite v0.1.0 (/home/hanno/Projects/braun-embedded/embedded-test-stand/stm32l4-test-stand/test-suite)
    Finished test [unoptimized + debuginfo] target(s) in 0.95s
     Running ../../target/debug/deps/stm32l4_test_suite-0e23d79b18f2299e

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

     Running ../../target/debug/deps/usart-9c79a52b3c2777aa

running 2 tests
test it_should_send_messages ... FAILED
test it_should_receive_messages ... FAILED

failures:

---- it_should_send_messages stdout ----
Error: Assistant(UsartWait(Receive(ConnReceiveError(Io(Custom { kind: TimedOut, error: "Operation timed out" })))))
thread 'it_should_send_messages' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', /home/hanno/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:191:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- it_should_receive_messages stdout ----
Error: TargetUsartWait(Receive(ConnReceiveError(Io(Custom { kind: TimedOut, error: "Operation timed out" }))))
thread 'it_should_receive_messages' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', /home/hanno/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/test/src/lib.rs:191:5


failures:
    it_should_receive_messages
    it_should_send_messages

test result: FAILED. 0 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out

error: test failed, to rerun pass '--test usart'

Output from the target:

14:04:34.612 Starting target...done.
14:04:55.130 Error reading from USART2: Framing
14:04:55.130 panicked at 'Error decoding message: DeserializeBadEncoding', src/main.rs:157:22

Sometimes the assistant also panics. It didn't in the run from which I captured this output.

After running cargo embed again for both target and assistant, it tends to work reliably. This kind of reminds me of #76, but I've yet to look into it more deeply.

Show firmware panics in test suite output

If the firmware on the target or the assistant panics, this usually results in a test failure, but the error messages are unhelpful. The test suite only knows that some operation timed out, so that's all it can show.

Both target and assistant are configured to output panic messages to the host via panic-semihosting, but unfortunately those panic messages don't show up on the host. They do show up if you start OpenOCD as a standalone process, but currently OpenOCD is configured to be started by GDB. I don't know why the panic messages don't show up in that configuration, but they don't.

(If the firmware ever panics and you need to see the panic message, do this: In openocd.cfg, comment out the gdb_port and log_output lines, then start OpenOCD without arguments, in that same directory.)

Ideally, the test suite output would show that the firmware panicked, and display the panic message. This should be possible by using probe-rs to attach to the firmware (and possibly upload it before that, see #6).

Querying pin from the test suite times out, if no change has been received

Querying a pin from the test suite waits for a message from the assistant. Since such a message is only sent when a pin changes, querying a second time without a change having been registered in between results in a timeout.

I can think of two straight-forward ways to fix this:

  • Cache the latest known state and return it if no update is received within the timeout period (sketched after this list).
  • Change the protocol to follow a query-response model, so each time the test suite needs to know the pin state, it will send a message and receive a reply.
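
A minimal sketch of the first option follows. The types and the receive callback are hypothetical, not the actual host-lib API; the point is only that the cached state replaces the timeout error.

use std::time::{Duration, Instant};

/// Hypothetical host-side cache of the last pin level reported by the
/// assistant. `receive_change` stands in for whatever receives pin-change
/// messages from the assistant until the given deadline.
struct PinState {
    last_level: Option<bool>, // true = high, false = low
}

impl PinState {
    fn is_low(
        &mut self,
        mut receive_change: impl FnMut(Instant) -> Option<bool>,
        timeout: Duration,
    ) -> Option<bool> {
        let deadline = Instant::now() + timeout;

        // Drain any change messages that arrived since the last query.
        while let Some(level) = receive_change(deadline) {
            self.last_level = Some(level);
        }

        // Instead of timing out, fall back to the cached state (if any).
        self.last_level.map(|level| !level)
    }
}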

Advanced USART testing

The USART tests that exist so far are pretty basic. They verify that the USART sends and receives data. I think writing some more tests would be worthwhile, and have added some ideas here.

Some of those might not be possible with the current infrastructure, and actually require additional development boards that assist in the test.

Setting the baud rate

I think some of this might already be tested implicitly, through the host-side baud rate configuration. I honestly don't know, though, whether that setting has much meaning, considering that the serial data is transported over USB.

In any case, it might be worth verifying that various baud rate configurations are set correctly.
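
For what it's worth, the host side of such a check could look roughly like this, assuming the serialport crate (4.x) is used to open the connection; whether host-lib exposes the baud rate in exactly this form is an assumption.

use std::time::Duration;

fn open_assistant_port(
    path: &str,
    baud_rate: u32,
) -> Result<Box<dyn serialport::SerialPort>, serialport::Error> {
    // Over a USB CDC link the baud rate is largely a formality, but the
    // firmware under test still has to configure its USART to match
    // whatever rate the test suite requests here.
    serialport::new(path, baud_rate)
        .timeout(Duration::from_secs(5))
        .open()
}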

Interrupt handling

The test firmware currently uses interrupts to receive USART data, so obviously they work. It could be worth verifying that all flags are reset correctly, and the interrupt doesn't actually fire too often.

Error handling

We could deliberately inject errors into USART communication, and verify that those are handled correctly.

make GPIO handling more generic & explicit

Opening this PR mainly for documentation and to discuss objections/changes early on (ish).

IMO, the current API for writing GPIO tests is a bit opaque: which pin is set by calling test_stand.assistant.set_pin_low()? How would I even refer to these pins if I were to add more? Could we reconfigure this pin from our test code at runtime, now that we have dynamic pins?

I'd like to propose a new API (documented at the end of this issue).
I already have running code that implements this for both the host API and the T-A, but it's a lot of different changes:

  • host API rearrangement
  • introduction of dynamic pins on host and T-A
  • introduction of arduino-style consecutive pin numbering
  • introduction of non-interrupt driven dynamic pins on T-A as well while we're at it
  • decoupling of Test-Assistant and Test-Target (so you can test any other random device as well)

All of which would probably be a pain to review in one go.

So, I'd start by splitting off a PR that only introduces the Test API below, without adding new functionality to the T-A. This may include adding some temporary shims, like rejecting all pin numbers in create_gpio_output_pin() that aren't 29 (i.e. the red LED), which can be taken out when implementing the T-A side changes.

Does that work for you? What are your thoughts?


We've designed the interface so that

  • it is visible which pin is being changed/read by looking at the function call
  • visual noise is kept down
  • you're stopped at compile time if you try to do something you shouldn't (read an output pin for example)
  • pins are named by simple consecutive numbering, i.e. the gray numbers 1-40 in this diagram:

LPC845 Pinout Diagram

To illustrate this, let's look at the test in test-suite/tests/gpio.rs checking whether voltage changes at the test-assistant's PIO1_2 pin are registered by the target.
Before, it looked like this:

#[test]
fn it_should_read_input_level() -> Result {
    let mut test_stand = TestStand::new()?;

    test_stand.assistant.set_pin_low()?;
    //                   ^^^^^^^^^^^^^
    //                   unclear: which pin?
    assert!(test_stand.target.pin_is_low()?);

    // [...]

    Ok(())
}

Now, it looks like this:

const RED_LED_PIN: PinNumber = 29;
//                             ^^
//                             simpler pin numbering

#[test]
fn target_should_read_input_level() -> Result {
    // SETUP
    let mut test_stand = TestStand::new()?;
    let mut out_pin = test_stand
    //      ^^^^^^
    //      is of type `OutputPin`
        .assistant
        .create_gpio_output_pin(RED_LED_PIN, Level::High)?;
    //                          ^^^^^^^^^^^  ^^^^^^^^^^^
    //                    pin used for test  set initial pin
    //                                       voltage level so
    //                                       that pin is off


    // RUN TEST
    out_pin.set_low()?;
    //      ^^^^^^^
    //      can only be called on `OutputPin`s
    //      sets pin voltage level to 🚨

    assert!(test_stand.target.pin_is_low()?); // note: since `test-target` hasn't
                                              // been modified, this hasn't changed 

    // [...]
    // for demonstration purposes, let's add some more code:

    let in_pin = out_pin.into_input_pin()?;
    //  ^^^^^
    //  is now of type `InputPin`
    
    // this mistake will be prevented at compile time!
    in_pin.set_low()?;
    //     ^^^^^^^
    //     trying to set the voltage of an input pin 🛑 

    Ok(())
}

Automatically upload the firmware

The test suite requires the test firmware to be running on the device. As of now, this requires manual setup, which is far from optimal, of course. Not only is it tedious, it is also error-prone, as it would be pretty easy to download the wrong firmware, or at least a wrong version.

Ideally, the test suite should make sure the correct firmware is running on the device, and, if necessary, flash it. We could use external tooling for this, like OpenOCD, but I think it might be much nicer to use probe-rs. I've verified that uploading firmware to the LPC845-BRK works great (tested via cargo flash), but there are two problems right now:

  • The download is pretty slow.
  • There's only limited debugging support. I think hooking into the debugging support would be great, to monitor the firmware and become aware of any panics.

cc @Yatekii (just to let you know about the use case I'm thinking about, and in case you have any comments)
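
For reference, a rough sketch of what an upload through the probe-rs crate could look like, assuming the flashing::download_file API from the 0.x releases (exact signatures have shifted between versions, and the chip name is just an example):

use std::path::Path;

use probe_rs::{
    flashing::{download_file, Format},
    Probe,
};

fn flash_firmware(chip: &str, elf: &Path) -> Result<(), Box<dyn std::error::Error>> {
    // Use the first debug probe probe-rs can find, e.g. the on-board
    // programmer of the LPC845-BRK.
    let probe = Probe::list_all()
        .first()
        .ok_or("no debug probe found")?
        .open()?;

    // Attach to the chip and flash the ELF produced by the firmware build.
    let mut session = probe.attach(chip)?;
    download_file(&mut session, elf, Format::Elf)?;

    Ok(())
}

Hooking into probe-rs' debugging support to monitor the firmware for panics (the second point above) would then be a natural extension of this.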

USART receive test fails after test target reset

After the test target is reset, the USART receive test fails with a timeout error:

running 1 test
test it_should_receive_messages ... FAILED

failures:

---- it_should_receive_messages stdout ----
Error: TargetUsartWait(TargetUsartWaitError(TestLib(Io(Custom { kind: TimedOut, error: "Operation timed out" }))))
thread 'it_should_receive_messages' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', src/libtest/lib.rs:197:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

This still happens, even if I set the timeout value very high.

I've also seen it fail once again after that, with an error message complaining that only about half the expected data was received. But that happens rarely, and I've only ever seen it as a direct follow-up to this failure.

In subsequent test runs, everything works fine, until the target is reset again.

Consider porting test assistant to different hardware

For ease of implementation and ease of use, I've decided to base the test assistant on the same hardware that the test target is already using, the LPC845-BRK. This is not ideal, as the same HAL APIs that are supposed to be under test are also used to implement the test.

It might be better to port the test assistant to a different hardware platform. The STM32F303 Discovery Kit might be a good choice, as that's the hardware used in the Discovery book, and thus might already be commonly available among Rust users.

`lpc845-test-assistant` doesn't build with latest `lpc8xx-hal` commit

The "lpc8xx-hal" dependency in lpc845-test-stand/test-assistant/Cargo.lock is still at 401d73d6a3f2c643d4d496d856c727b59bada6af, but the latest HAL commit is bec1cf72a00d86beaeaa43bea78e39542db12888.

When I update this dep to bec1cf72a00d86beaeaa43bea78e39542db12888 by changing

-source = "git+https://github.com/lpc-rs/lpc8xx-hal.git#401d73d6a3f2c643d4d496d856c727b59bada6af"
+source = "git+https://github.com/lpc-rs/lpc8xx-hal.git#bec1cf72a00d86beaeaa43bea78e39542db12888"

I get the following error while building:

test-assistant git:(master) ✗ cargo build
   Compiling lpc8xx-hal v0.8.2 (https://github.com/lpc-rs/lpc8xx-hal.git#bec1cf72)
   Compiling firmware-lib v0.1.0 (/<redacted>/embedded-test-stand/test-stand-infra/firmware-lib)
error[E0369]: cannot subtract `u32` from `lpc8xx_hal::mrt::Ticks`
   --> /<redacted>/embedded-test-stand/test-stand-infra/firmware-lib/src/pin_interrupt.rs:100:46
    |
100 |                 period = Some(mrt::MAX_VALUE - self.timer.value());
    |                               -------------- ^ ------------------ u32
    |                               |
    |                               lpc8xx_hal::mrt::Ticks

error: aborting due to previous error

For more information about this error, try `rustc --explain E0369`.
error: could not compile `firmware-lib`.

(maybe due to src/mrt.rs:69 in lpc-rs/lpc8xx-hal#298 ?)

Is this on purpose?

Improve error handling strategy in firmware

Currently, it's a bit of a mess. Some errors are ignored (anything that's relevant to the test will cause a test failure on the host, after all), some are logged, and some cause a panic. It would be better to have a coherent strategy that helps make the tests more robust.

I think it would probably be best to panic on all errors. This should result in a test failure, and as long as semihosting is set up correctly, the panic message should show up on the host (albeit not as part of the test suite output).
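
A minimal sketch of that strategy on the firmware side, assuming the panic-semihosting crate the firmware already uses; the helper and its error type are made up for illustration.

#![no_std]

// Pull in the panic handler: panic messages are forwarded to the host via
// semihosting, as both target and assistant are already configured to do.
use panic_semihosting as _;

/// Hypothetical helper illustrating the "panic on all errors" strategy:
/// instead of ignoring an error or just logging it, unwrap it with a
/// descriptive message. The resulting panic makes the test fail on the
/// host (via a timeout) and the message shows up through semihosting.
pub fn expect_received<E: core::fmt::Debug>(result: Result<u8, E>) -> u8 {
    result.expect("Error receiving from USART")
}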

Once the test suite controls the firmware on the targets (see #6), we can improve this and show panic messages as part of test output.

Consider other means of communication between firmware and test suite

Currently, the test suite and firmware communicate via USART, simply because that was easy to implement and is readily available. I think this is good enough for now, but it has one disadvantage: If any changes are made to the HAL's USART API, the test suite may not be able to deliver detailed feedback about what broke, since any bugs might break the whole testing infrastructure.

It might be possible to use the on-board programmer to directly manipulate the RAM on the device, and use that as an alternate means of communication. I haven't looked into that yet, and don't know how feasible it would be.
