Glossary of Software Testing Terms Provided by Testing Realms


Glossary of Software Testing Terms: M

This glossary of software testing terms is a compilation of knowledge, gathered over time from many different sources. It is provided “as-is” in good faith, without any warranty as to the accuracy or currency of any definition or other information contained herein. If you have any questions or queries about the contents of this glossary, please contact Project Realms directly.


A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Maintainability | Maintenance Requirements | Manual Testing | Metric | Modified Condition/Decision Coverage | Modified Condition/Decision Testing | Monitors | Monkey Testing | Multiple Condition Coverage | Mutation Analysis | Mutation Testing

Maintainability
The ease with which the system/software can be modified to correct faults, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.



Maintenance Requirements
A specification of the maintenance required for the system/software. Released software often needs to be revised and/or upgraded throughout its lifecycle, so it is essential that the software can be easily maintained and that any errors found during rework and upgrading are corrected.

With traditional test automation tools, script maintenance is often a problem: it can be very complicated and time consuming, because the scripts these tools use need updating every time the application under test changes.

Manual Testing
Manual testing is the oldest type of software testing. It requires a tester to perform test operations on the software under test by hand, without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, unopinionated, and skillful.

As a tester, it is advisable to apply both white-box and black-box testing techniques manually to the software under test. Manual testing helps discover and record any software bugs or discrepancies related to the functionality of the product.

Manual testing can be augmented by test automation. It is possible to record and play back manual steps and to write automated test scripts using test automation tools. However, such tools only help execute scripts written to exercise a particular specification or piece of functionality; they cannot make decisions or record unscripted discrepancies during program execution. It is therefore recommended to test the entire product manually at least a couple of times before deciding to automate its more mundane, repetitive activities.

Manual testing also helps discover defects in the usability and GUI areas. While performing manual tests, the application can be checked against the various standards defined for effective and efficient usage and accessibility. For example, if the standard location for the OK button on a screen is on the left and the CANCEL button on the right, manual testing may reveal a screen where this is not the case; that is a usability defect. Similarly, there are many cases where the GUI is not displayed correctly even though the basic functionality of the program is correct. Such bugs are not readily detectable using test automation tools.

Repetitive manual testing can be difficult to perform on large applications or on applications with very large input datasets. This drawback is compensated for by black-box test design techniques such as equivalence partitioning and boundary value analysis, which divide the vast input space into a more manageable and achievable set of test cases.
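To illustrate the two techniques, here is a minimal Python sketch assuming a hypothetical validate_age() function that should accept ages 18 to 65; the function, the partitions, and the boundary values are illustrative only, not taken from any particular application.

# Minimal sketch of equivalence partitioning and boundary value analysis
# for a hypothetical validate_age() function that accepts ages 18-65.

def validate_age(age):
    """Hypothetical function under test: accepts applicants aged 18 to 65."""
    return 18 <= age <= 65

# Equivalence partitions: one representative value per partition is enough.
partitions = {
    "below valid range": (10, False),
    "within valid range": (40, True),
    "above valid range": (70, False),
}

# Boundary values: test just below, on, and just above each boundary.
boundaries = {
    "just below lower bound": (17, False),
    "on lower bound": (18, True),
    "on upper bound": (65, True),
    "just above upper bound": (66, False),
}

for name, (value, expected) in {**partitions, **boundaries}.items():
    actual = validate_age(value)
    print(f"{'PASS' if actual == expected else 'FAIL'}: {name} (age={value})")

Seven hand-picked values stand in for the whole integer input space, which is exactly the reduction described above.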

There is no complete substitute for manual testing; it remains crucial for testing software applications thoroughly.

Metric
A standard of measurement. Software metrics are statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.
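As a worked illustration, the sketch below computes one common metric, defect density, using hypothetical figures for the defect count and code size; defect density is usually normalised per thousand lines of code (KLOC).

# Minimal sketch of a simple software metric: defect density.
# The defect count and code size below are hypothetical figures.

defects_found = 42        # e.g. defects logged against a release
lines_of_code = 12_500    # size of the code base

# Defect density is commonly expressed per thousand lines of code (KLOC).
defect_density = defects_found / (lines_of_code / 1000)

print(f"Defect density: {defect_density:.2f} defects per KLOC")   # 3.36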

Monitors
Tools used to measure the performance characteristics of a specific component. In load testing tools, for example, the controller component typically provides monitors that can measure system resources, network delay, and application and web server performance.
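As a rough sketch (not the API of any particular tool), the following Python snippet plays the role of a simple monitor by sampling the response time of a web server; the URL and sample count are hypothetical.

# Minimal sketch of a monitor that samples web server response time.
# The URL and the number of samples are hypothetical.
import time
import urllib.request

URL = "http://example.com/"   # hypothetical server under test
SAMPLES = 5

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    timings.append(time.perf_counter() - start)

print(f"min {min(timings):.3f}s  max {max(timings):.3f}s  "
      f"avg {sum(timings) / len(timings):.3f}s over {SAMPLES} samples")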

Modified Condition/Decision Coverage
The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.
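For illustration, consider a hypothetical decision D = A and (B or C). The four test cases in the sketch below achieve MC/DC: for each condition there is a pair of test cases that differ only in that condition and produce different decision outcomes.

# Minimal sketch of Modified Condition/Decision Coverage for the
# hypothetical decision D = A and (B or C).

def decision(a, b, c):
    return a and (b or c)

# Four test cases (N + 1 for N = 3 conditions) that achieve MC/DC.
test_cases = {
    "T1": (True,  True,  False),   # D = True
    "T2": (False, True,  False),   # D = False; with T1 shows A's independent effect
    "T3": (True,  False, False),   # D = False; with T1 shows B's independent effect
    "T4": (True,  False, True),    # D = True;  with T3 shows C's independent effect
}

for name, (a, b, c) in test_cases.items():
    print(f"{name}: A={a}, B={b}, C={c} -> D={decision(a, b, c)}")

Exhaustive multiple condition coverage would need all eight combinations of the three conditions; MC/DC obtains per-condition evidence with just four test cases.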

Modified Condition/Decision Testing
A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.

Monkey Testing
Testing a system or an application on the fly, i.e. a test with no specific end result in mind, typically performed by feeding random or arbitrary input.
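For illustration, here is a minimal Python sketch of a crude monkey test: it feeds random, unscripted input to a hypothetical parse_quantity() function and checks only that the function never crashes.

# Minimal sketch of a monkey test: throw random input at a hypothetical
# parse_quantity() function and report any unhandled exception.
import random
import string

def parse_quantity(text):
    """Hypothetical function under test: parse a quantity string like '12'."""
    text = text.strip()
    return int(text) if text.isdigit() else 0

random.seed(0)   # reproducible randomness for this sketch
crashes = 0
for _ in range(1000):
    junk = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, 20)))
    try:
        parse_quantity(junk)            # no expected result; it just must not blow up
    except Exception as exc:            # any unhandled exception is a finding
        crashes += 1
        print(f"Crash on input {junk!r}: {exc}")

print(f"Monkey test finished: {crashes} crash(es) in 1000 random inputs")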

Multiple Condition Coverage
See Branch Condition Combination Coverage.

Mutation Analysis
A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also Error Seeding.
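For illustration, the sketch below pairs a hypothetical original function with a single mutant (one relational operator changed) and a small test suite; the suite is thorough enough to kill the mutant because it includes the boundary value 18.

# Minimal sketch of mutation analysis: an original function, one mutant,
# and a test suite that kills the mutant. All names are hypothetical.

def is_adult(age):
    return age >= 18          # original program

def is_adult_mutant(age):
    return age > 18           # mutant: '>=' changed to '>'

tests = [(17, False), (18, True), (19, True)]

def suite_passes(func):
    return all(func(age) == expected for age, expected in tests)

print("Original passes suite:", suite_passes(is_adult))              # True
print("Mutant killed by suite:", not suite_passes(is_adult_mutant))  # True

A suite that omitted the age-18 case would let this mutant survive, signalling that the tests do not fully exercise the boundary.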

Mutation Testing
Testing in which bugs are purposely added to the application in order to check whether the existing tests detect them. See Bebugging.
