The Code Testing Paradox
December 29, 2022
It is no secret that there's a paradox in the dev industry: we want our code to be tested, and we don't want to test our code, at the same time. On one hand, tested code is reliable and accessible. When the need arises to reaffirm what tested code does or fails to do, the tests are particularly useful. On the other hand, testing is a sunk cost into a project that possibly (or very likely, depending on your level of optimism) works just fine. And finally, no project has an infinite budget or timeframe to test all of its code from every angle in every situation. The world simply isn't that utopian of a place. To resolve these tensions and escape the paradox, we need a plan to effectively and practically test our project's code. We need a working definition of tested code.
Not All Code is Equal
My typical frame of reference is deeply layered, algorithmically complex, high-throughput projects for the clients I consult with. That sits relatively high on the scale of what exists throughout the software development world. On the other side of the same industry we have thinly layered, algorithmically simple, low-throughput projects. These are just as valuable as the large, complex ones because they do real work in the real world. Both matter, but the simple projects don't require the same amount of attention to code verification as the high-performing, complex projects I typically have in mind when I write. To say that both types of projects require the same level of code testing is absurd.
Thinly layered projects have easily accessible logic. They are not onions whose layers need to be peeled away; they are more like a sandwich in which every portion is apparent to the consumer. Testing them by operating them live isn't much different from testing them with a code testing framework. Their deeply layered counterparts are exactly the opposite. They are the onion. The layers of this onion produce a complexity in which unintended behavior and side effects can occur far from view. It is these inner layers that benefit most from code testing frameworks to validate their intended and unintended behavior.
Algorithms also vary in depth and complexity. Transferred data packages and straightforward calculations don’t yield many surprises, but heavy mutations, state management, and diversified value analysis certainly do. Piecing out the more complex algorithms to verify their behavior patterns and dangers improves overall confidence in the most sensitive and critical portions of a project.
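To make the idea concrete, here is a minimal sketch of piecing out a mutation-heavy routine so its behavior can be verified in isolation. The `apply_discounts` function, the shape of its `order` and `rules` arguments, and the test values are all hypothetical, invented for illustration:

```python
def apply_discounts(order, rules):
    """Mutate an order dict by applying each matching discount rule.

    `order` and `rules` are illustrative shapes, not a real API.
    """
    for rule in rules:
        if order["total"] >= rule["min_total"]:
            order["total"] -= rule["amount"]
            order.setdefault("applied", []).append(rule["name"])
    return order

# A focused test pins down both the intended result and a side
# effect (mutation of the input) that could surprise callers.
order = {"total": 100}
result = apply_discounts(order, [{"name": "BULK", "min_total": 50, "amount": 10}])
assert result["total"] == 90
assert result["applied"] == ["BULK"]
assert order is result  # the input is mutated in place
```

Pulling logic like this out of a larger pipeline is what makes its "behavior patterns and dangers" directly verifiable.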
Throughput is our third concern. Code that is infrequently activated at low velocity doesn’t create a lot of problems, but a single data point manipulated unexpectedly a million times over will get any project manager’s attention. The higher the throughput, the higher the stakes. Being able to access and verify a high throughput portion of an application to determine what went wrong can save a ton of rework and data repair.
Testing Accordingly
Any working definition of tested code must be practical to the project being tested.
One part of your testing plan should consider testing according to the number of layers. A simple project may have as few as two layers: an input layer and an output layer. Most of its behavior is direct and requires little code testing. A complex project will have any number of transfer and processing layers. Each one deserves, at the very least, verification tests to ensure the app performs end-to-end as intended and on command. Simple means less testing; each additional layer adds complexity worth testing directly and distinctly.
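A rough sketch of what "testing each layer distinctly, plus end-to-end" can look like. The three-layer pipeline here (parse, process, format) and all its function names are hypothetical stand-ins for whatever layers a real project has:

```python
def parse_layer(raw):
    """Input layer: turn a comma-separated string into integers."""
    return [int(x) for x in raw.split(",")]

def process_layer(values):
    """Processing layer: an illustrative transformation (doubling)."""
    return [v * 2 for v in values]

def format_layer(values):
    """Output layer: render the result back to a string."""
    return ",".join(str(v) for v in values)

def pipeline(raw):
    """End-to-end composition of all three layers."""
    return format_layer(process_layer(parse_layer(raw)))

# Each layer verified distinctly...
assert parse_layer("1,2,3") == [1, 2, 3]
assert process_layer([1, 2, 3]) == [2, 4, 6]
assert format_layer([2, 4, 6]) == "2,4,6"
# ...plus one end-to-end verification test.
assert pipeline("1,2,3") == "2,4,6"
```

With only two layers, the end-to-end test alone might suffice; every added layer makes the distinct per-layer checks more valuable.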
Another part of the testing plan should assess the algorithms within the layers. Anything whose logic is close to WYSIWYG (What You See Is What You Get) is a candidate for exemption. Everything else is certainly on the docket to get verified by code testing.
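To illustrate the dividing line, here is a hypothetical contrast: a WYSIWYG-style pass-through that is a candidate for exemption, next to branching value analysis that earns a test. Both functions and their field names are invented for this example:

```python
def to_display(user):
    # Direct field mapping: what you see is what you get.
    # Little here can go wrong that a live run wouldn't reveal.
    return {"name": user["name"], "email": user["email"]}

def risk_score(user):
    # Branching value analysis: worth pinning down with tests,
    # because the output depends on combinations of conditions.
    score = 0
    if user.get("failed_logins", 0) > 3:
        score += 50
    if not user.get("verified", False):
        score += 25
    return min(score, 100)

assert risk_score({"failed_logins": 5, "verified": False}) == 75
assert risk_score({"verified": True}) == 0
```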
Lastly, consider the portions of your project that have varying throughput. Any core process of the application is a high-throughput candidate; anything on the fringes of application processes that is rarely utilized is low. The higher a section's throughput, the more reasonable it is to consider more extensive testing around it: test it against a wider variety of use cases in its test suite, or specifically simulate high usage of that area. Throughput comes last because it addresses the wider context of the project, but it is definitely worth considering.
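One way to "simulate high usage" is simply to hammer a core code path with a wide sweep of inputs and assert its invariant holds at every call. The `normalize` routine below is a hypothetical core function, and the workload size is arbitrary:

```python
def normalize(value, lo=0, hi=100):
    """Clamp a reading into [lo, hi] -- an illustrative core routine."""
    return max(lo, min(hi, value))

# Simulate a high-volume workload: sweep inputs below, inside,
# and above the valid range, checking the invariant every time.
for i in range(100_000):
    result = normalize(i % 300 - 100)
    assert 0 <= result <= 100
```

A failure at call 80,000 of a sweep like this is exactly the kind of "single data point manipulated unexpectedly a million times over" that this factor is meant to catch before production does.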
Make A Plan
What makes these considerations work is coming up with a plan for testing the project. A working definition of tested code is a plan that covers the bits, pieces, and whole of the project.
Base your plan for the amount of testing on the factors we have considered:
Factor | Test Less | Test More
Layered | Thin | Deep
Algorithm | Simple | Complex
Throughput | Low | High
By analyzing these aspects in the relevant areas of your project, you can fashion a reasonable plan for testing any project. That plan is a Working Definition of Tested Code.