Consider a TAS implemented to perform automated testing of native mobile apps at the UI level, where the TAF implements a client-server architecture. The client runs on-premise and allows automated test scripts to be created using TAF libraries that recognize and interact with the app's UI objects. The server runs in the cloud as part of a PaaS offering: it receives commands from the client, translates them into actions on the mobile device, and sends the results back to the client. The cloud platform hosts several mobile devices dedicated to this TAS. The device on which to run test scripts/test suites is specified at run time. You are currently verifying whether the test automation environment and all other TAS/TAF components work correctly. Which of the following activities would you perform to achieve your goal?
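For illustration only (the question names no specific tool), a client-server TAS of this kind can be sketched with the Appium Python client: the on-premise script connects to a remote server URL and selects the cloud-hosted device at run time through capabilities. The endpoint, device name, and app location below are placeholders.

    import os

    from appium import webdriver  # assumes Appium Python client < 3.0, which accepts a capabilities dict

    # Hypothetical device name and cloud endpoint; the target device is chosen at run time
    capabilities = {
        "platformName": "Android",
        "automationName": "UiAutomator2",
        "deviceName": os.environ.get("TARGET_DEVICE", "cloud-device-01"),
        "app": "https://cloud-paas.example.com/builds/app-release.apk",
    }

    # The on-premise client sends commands to the cloud server, which translates them
    # into actions on the selected device and returns the results
    driver = webdriver.Remote("https://cloud-paas.example.com/wd/hub", capabilities)
    print(driver.page_source)  # result of one command round-trip through the server
    driver.quit()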
In a first possible implementation, the automated test scripts within a suite locate and interact with elements of a web UI indirectly, through the browsers, using browser-specific drivers and APIs provided by an automated test tool that is part of the TAS. In an alternative implementation, these test scripts locate and interact with elements of the same web UI directly at the HTML level, by accessing the DOM (Document Object Model) and internal JavaScript code. The first possible implementation:
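As an illustration of the two approaches, the sketch below contrasts them using Selenium WebDriver's Python bindings as a stand-in for the automated test tool; the URL and element ID are hypothetical.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://app.example.com/login")  # hypothetical SUT page

    # First implementation: locate the element indirectly, through the browser-specific
    # driver and its API, and let the driver perform the interaction.
    driver.find_element(By.ID, "login").click()

    # Alternative implementation: reach the same element directly at the HTML level,
    # by executing JavaScript against the DOM inside the page.
    driver.execute_script("document.getElementById('login').click();")

    driver.quit()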
A TAS that performs automated testing in a single test environment has been successfully installed and configured manually from a central repository, with all of its components at the correct versions. It was also verified that all TAS components in this environment are capable of providing reliable and repeatable performance. The TAS will be used to run several suites of automated regression test scripts against various SUTs in the test environment. Your current goal is to complete all preliminary verifications needed to ensure that the TAS works correctly. Which of the following activities would you perform FIRST?
Automated tests at the UI level for a web app adopt an asynchronous waiting mechanism that synchronizes test steps with the app, so that each step is executed correctly and at the right time: only when the app is ready and has processed the previous step, i.e., when there are no timeouts or pending asynchronous requests. In this way, the tests automatically synchronize with the app's web pages. The same initialization tasks that set the test preconditions are implemented as test steps in every test. Regarding the pre-processing (Setup) features defined at the test suite level, the TAS provides both a Suite Setup (which runs exactly once when the suite starts) and a Test Setup (which runs at the start of each test case in the suite). Which of the following recommendations would you provide for improving the TAS (assuming it is possible to perform all of them)?
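As an illustration of the mechanisms named above, the sketch below uses pytest fixtures as rough analogues of a Suite Setup and a Test Setup, and a Selenium explicit wait as the asynchronous waiting mechanism; the tool choice, URL, and locator are assumptions.

    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    @pytest.fixture(scope="session")
    def browser():
        # Analogue of a Suite Setup: runs exactly once when the suite starts
        driver = webdriver.Chrome()
        yield driver
        driver.quit()

    @pytest.fixture(autouse=True)
    def set_preconditions(browser):
        # Analogue of a Test Setup: runs at the start of each test case,
        # a natural place for initialization tasks shared by all tests
        browser.get("https://app.example.com/")  # hypothetical SUT

    def test_search(browser):
        # Asynchronous wait: the step runs only when the app is ready,
        # instead of relying on fixed sleeps
        WebDriverWait(browser, timeout=10).until(
            EC.element_to_be_clickable((By.ID, "search"))
        ).click()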
Which of the following information in API documentation is LEAST relevant for implementing automated tests on that API?
As a TA-E, you have successfully verified that the test automation environment and all other components of the TAS are working as expected. Now your goal is to verify the correct behavior of a given automated test suite that will be run by the TAS. Which of the following should NOT be part of the verifications aimed at achieving your goal?
Which of the following is the BEST example of how static analysis tools can help improve the test automation code quality in terms of security?
Consider a TAS aimed at implementing and running automated test scripts at the UI level on web apps. The TAS must support cross-browser compatibility across a variety of supported browsers, ensuring that the same test script runs on all of them in the same way, without any changes to it. This is achieved by introducing appropriate abstractions into the TAA for connecting to and interacting with the different browsers. Because of this, the TAS is able to make direct calls to the supported browsers using each browser's native support for automation. Which of the following SOLID principles was adopted?
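As an illustration of the kind of abstraction described (without naming the principle), the sketch below shows test scripts written against a browser-neutral interface, with browser-specific adapters behind it; all names are hypothetical.

    from abc import ABC, abstractmethod

    class BrowserDriver(ABC):
        # Abstraction introduced into the TAA: test scripts depend only on this interface
        @abstractmethod
        def open(self, url: str) -> None: ...

        @abstractmethod
        def click(self, locator: str) -> None: ...

    class ChromeAdapter(BrowserDriver):
        def open(self, url: str) -> None:
            ...  # delegate to Chrome's native automation support

        def click(self, locator: str) -> None:
            ...

    class FirefoxAdapter(BrowserDriver):
        def open(self, url: str) -> None:
            ...  # delegate to Firefox's native automation support

        def click(self, locator: str) -> None:
            ...

    def login_test(browser: BrowserDriver) -> None:
        # The same unchanged test script runs against any supported browser
        browser.open("https://app.example.com/login")
        browser.click("#login")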