pytest plugin

We ship a special pytest plugin to improve the testing experience for this project.

For example: it is a popular request to ensure that your container has its error track handled, because otherwise developers might forget to do it properly. This is impossible to enforce with types, but really simple to check with tests.

Installation

You will need to install pytest separately.

Usage

There’s no need to register anything special: pytest will automatically find and use this plugin.

To use it in your tests, request the returns fixture like so:

def test_my_container(returns):
    ...

assert_equal

We have a special helper to compare containers for equality.

This is an easy task for two Result or Maybe containers, but it is not so easy for two ReaderResult or FutureResult instances.

Take a look:

>>> from returns.result import Result
>>> from returns.context import Reader

>>> assert Result.from_value(1) == Result.from_value(1)
>>> Reader.from_value(1) == Reader.from_value(1)
False

So, we can use the assert_equal() method, available as returns.assert_equal on our pytest fixture:

>>> from returns.result import Success
>>> from returns.context import Reader
>>> from returns.contrib.pytest import ReturnsAsserts

>>> def test_container_equality(returns: ReturnsAsserts):
...     returns.assert_equal(Success(1), Success(1))
...     returns.assert_equal(Reader.from_value(1), Reader.from_value(1))

>>> # We only run these tests manually, because it is a doc example:
>>> returns_fixture = getfixture('returns')
>>> test_container_equality(returns_fixture)

is_error_handled

Another helper we define is the is_error_handled function. It tests that containers do handle the error track.

>>> from returns.result import Failure, Success
>>> from returns.contrib.pytest import ReturnsAsserts

>>> def test_error_handled(returns: ReturnsAsserts):
...     assert not returns.is_error_handled(Failure(1))
...     assert returns.is_error_handled(
...         Failure(1).lash(lambda _: Success('default value')),
...     )

>>> # We only run these tests manually, because it is a doc example:
>>> returns_fixture = getfixture('returns')
>>> test_error_handled(returns_fixture)

We recommend unit testing big chunks of code this way. This is helpful for big pipelines, where you need at least one error-handling step at the very end.

This is how it works internally:

  • Methods like fix and lash mark errors inside the container as handled

  • Methods like map and alt just copy the error-handling state from the old container to the new one, so there’s no need to re-handle the error after these methods

  • Methods like bind create new containers with unhandled errors
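The bookkeeping above can be sketched with a toy container. This is a simplified illustration for intuition only, not the plugin’s actual implementation; the class and attribute names are hypothetical:

```python
class TrackedFailure:
    """Toy failed container that records whether its error was handled."""

    def __init__(self, error, handled=False):
        self.error = error
        self.handled = handled

    def map(self, function):
        # ``map`` and ``alt`` only copy the handling state over.
        return TrackedFailure(self.error, self.handled)

    def bind(self, function):
        # ``bind`` produces a fresh container with an *unhandled* error.
        return TrackedFailure(self.error, handled=False)

    def lash(self, function):
        # ``lash`` (like ``fix``) marks the error as handled.
        return TrackedFailure(self.error, handled=True)


failure = TrackedFailure('boom')
assert not failure.map(str).handled               # still unhandled after map
assert failure.lash(lambda e: e).handled          # handled after lash
assert not failure.lash(lambda e: e).bind(str).handled  # bind resets the state
```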

Note

We use monkeypatching of containers inside tests to make this check possible. Containers are still purely functional inside. It does not affect production code.

assert_trace

Sometimes we have to know whether a container is created correctly at a specific point of our flow.

assert_trace helps us to check exactly this by identifying when a container is created and looking for the desired function.

>>> from returns.result import Result, Success, Failure
>>> from returns.contrib.pytest import ReturnsAsserts

>>> def desired_function(arg: str) -> Result[int, str]:
...     if arg.isnumeric():
...         return Success(int(arg))
...     return Failure('"{0}" is not a number'.format(arg))

>>> def test_if_failure_is_created_at_convert_function(
...     returns: ReturnsAsserts,
... ):
...     with returns.assert_trace(Failure, desired_function):
...         Success('not a number').bind(desired_function)

>>> def test_if_success_is_created_at_convert_function(
...     returns: ReturnsAsserts,
... ):
...     with returns.assert_trace(Success, desired_function):
...         Success('42').bind(desired_function)

>>> # We only run these tests manually, because it is a doc example:
>>> returns_fixture = getfixture('returns')
>>> test_if_failure_is_created_at_convert_function(returns_fixture)
>>> test_if_success_is_created_at_convert_function(returns_fixture)

markers

We also ship a bunch of pre-defined markers with returns:

Further reading

API Reference

final class ReturnsAsserts(errors_handled)[source]

Bases: object

Class with helper assertions to check containers.

Parameters:

errors_handled (Dict[int, Any]) –

static assert_equal(first, second, *, deps=None, backend='asyncio')[source]

Can compare two containers even with extra calling and awaiting.

Parameters:

backend (str) –

Return type:

None

is_error_handled(container)[source]

Ensures that a container has its error handled in the end.

Return type:

bool

static assert_trace(trace_type, function_to_search)[source]

Ensures that a given function was called during execution.

Use it to determine where the failure happened.

Parameters:
  • trace_type (TypeVar(_ReturnsResultType, bound= Union[ResultLikeN, Callable[..., ResultLikeN]])) –

  • function_to_search (TypeVar(_FunctionType, bound= Callable)) –

Return type:

Iterator[None]

pytest_configure(config)[source]

Hook to be executed on import.

We use it to define custom markers.

Return type:

None

returns()[source]

Returns the class with helper assertions to check containers.

Return type:

Iterator[ReturnsAsserts]

assert_equal(first, second, *, deps=None, backend='asyncio')[source]

Custom assert function to compare any two containers.

The important note here is that this assert should only be used in tests, not in real application code.

It will call all Reader based containers and await all Future based ones.

It also works recursively. For example, ReaderFutureResult will be called and then awaited.

You can specify different dependencies to call your containers with, and different backends to await them with using anyio.

By the way, anyio should be installed separately.

Parameters:

backend (str) –

Return type:

None