diff --git a/_posts/2016-11-27-testing-the-untestable.markdown b/_posts/2016-11-27-testing-the-untestable.markdown
index 21f7c9b..1622d4c 100644
--- a/_posts/2016-11-27-testing-the-untestable.markdown
+++ b/_posts/2016-11-27-testing-the-untestable.markdown
@@ -37,6 +37,6 @@ Is it enough to test the full experience of KDE software? No, but this is a good
 
 ## Is this enough for everything?
 
-OF course not. Automated testing only gets so much, so this is not an excuse for being lazy and [not filing those reports](https://bugs.kde.org). Also, since the tests run in a VM, they won't be able to catch some issues that only occur on real hardware (multiscreen, compositing). But is surely a good start to ensure that at least obvious regressions are found before the code is actually shipped to distributions and then to end users.
+Of course not. Automated testing only gets you so far, so this is not an excuse for being lazy and [not filing those reports](https://bugs.kde.org). Also, since the tests run in a VM, they won't be able to catch some issues that only occur on real hardware (multiscreen, compositing). But it is surely a good start to ensure that at least obvious regressions are found before the code is actually shipped to distributions and then to end users. What needs to be done? More tests, of course. In particular, Plasma regression tests (handling applets, etc.) would likely be needed. But as they say, *every journey starts with the first step*.