AMTSO Publishes Guidelines for Testing of IoT Security Products
AMTSO, the cybersecurity industry's testing standards community, announced it has published its first Guidelines for Testing of IoT Security Products. Developed with input from testers and vendors, the guidelines cover principles for testing IoT security products and provide recommendations for testers on test environment, sample selection, testing of specific security functionality, and performance benchmarking.
The guidelines include the following sections:
- General principles: All tests and benchmarks should focus on validating the end result and performance of the protection delivered, rather than on how the product functions on the back end.
- Sample selection: The guidelines address the challenges of choosing appropriate samples for benchmarking IoT security solutions. For a relevant test, testers need to select samples that are still active and that actually target the operating systems running on smart devices.
- Determination of “detection”: IoT security solutions handle detections and the actions they take very differently from traditional cybersecurity products; for example, some solutions simply detect and prevent a threat without notifying the user. The guidelines therefore suggest using threats whose admin consoles can be controlled by the tester, or using devices on which the attack will be visible if it is carried out.
- Test environment: Ideally, all tests and benchmarks would be executed in a controllable environment using real devices. However, such a setup can be complex, so if a tester decides against using real devices, they are advised to validate their approach by running the desired scenario with the security functionality of the security device disabled and confirming that the attack executes and succeeds (see the validation sketch after this list).
- Testing of specific security functionality: The guidelines include advice on different attack stages, including reconnaissance, initial access, and execution. They outline the option of testing each stage individually versus running the whole attack end to end; whichever choice is made should be documented in the testing methodology.
- Performance benchmarking: The guidelines also provide considerations for performance benchmarking, for example suggesting that testers differentiate between use cases such as consumers versus businesses, and account for how critical added latency or reduced throughput is for each protocol, which depends on the protocol's purpose (a measurement sketch follows this list).
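The test-environment recommendation above can be illustrated with a minimal sketch: run the attack scenario once with the security product's protection disabled, to prove that the attack actually executes and succeeds in the chosen environment, and once with protection enabled. The function names and simulated outcomes below are hypothetical placeholders, not part of the AMTSO guidelines.

```python
def run_attack_scenario(target_ip: str, protection_enabled: bool) -> bool:
    """Launch the chosen attack against the target and report whether it succeeded.

    Placeholder: a real harness would drive the exploit or malicious traffic here;
    the simulated return value only keeps this sketch runnable.
    """
    return not protection_enabled


def validate_environment(target_ip: str) -> None:
    # Step 1: protection off. If the attack does not succeed here, the test
    # environment (not the product under test) is at fault and must be fixed.
    if not run_attack_scenario(target_ip, protection_enabled=False):
        raise RuntimeError("Attack failed with protection disabled; fix the environment first")

    # Step 2: protection on. Any blocked attack can now be attributed to the product.
    blocked = not run_attack_scenario(target_ip, protection_enabled=True)
    print("Attack blocked by product" if blocked else "Attack succeeded despite protection")


if __name__ == "__main__":
    validate_environment("192.0.2.10")  # documentation-range IP, not a real device
```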
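For the performance-benchmarking point, one way a tester might quantify per-protocol latency impact is to time simple connections for each protocol of interest, first without and then with the security product inline, and compare the medians. The protocol list, endpoints, and repetition count below are illustrative assumptions only.

```python
import socket
import statistics
import time


def tcp_connect_latency_ms(host: str, port: int, repeats: int = 20) -> float:
    """Median time, in milliseconds, to complete a TCP handshake with host:port."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)


# Hypothetical endpoints reached through the security gateway; how critical added
# latency is differs per protocol (e.g. streaming video vs. an occasional check-in).
ENDPOINTS = {
    "HTTPS": ("example.com", 443),
    "MQTT": ("test.mosquitto.org", 1883),
}

for protocol, (host, port) in ENDPOINTS.items():
    print(f"{protocol}: median connect latency {tcp_connect_latency_ms(host, port):.1f} ms")
```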