Prosecution Insights
Last updated: April 19, 2026
Application No. 18/447,850

DETERMINING INTRODUCTION POINT OF APPLICATION REGRESSIONS THROUGH SCREENSHOT ANALYSIS

Final Rejection: §101, §103
Filed: Aug 10, 2023
Examiner: TRAN, TRAVIS VIET
Art Unit: 2191
Tech Center: 2100 (Computer Architecture & Software)
Assignee: LENOVO (SINGAPORE) PTE. LTD.
OA Round: 3 (Final)

Grant Probability: 93% (Favorable)
OA Rounds: 4-5
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 93% (above average; 13 granted / 14 resolved; +37.9% vs TC avg)
Interview Lift: +100.0% in resolved cases with interview
Avg Prosecution: 2y 6m (typical timeline; 25 currently pending)
Total Applications: 39 (career history, across all art units)

Statute-Specific Performance

§101: 25.7% (-14.3% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 20.6% (-19.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 14 resolved cases
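The headline figures above are simple ratios over the examiner's 14 resolved cases. A minimal sketch of the arithmetic, assuming the counts shown above (the Tech Center average is back-computed from the stated +37.9% delta, so treat it as an estimate):

```python
# Recompute the dashboard's headline examiner statistics from raw counts.
# Counts are taken from the figures above; the TC average is back-computed
# from the stated +37.9% delta rather than sourced independently.
granted = 13
resolved = 14

career_allow_rate = granted / resolved          # fraction of resolved cases granted
tc_avg_estimate = career_allow_rate - 0.379     # implied Tech Center average

print(f"Career allow rate: {career_allow_rate:.0%}")   # 93%
print(f"Implied TC average: {tc_avg_estimate:.1%}")    # ~55.0%
```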

Office Action

§101 §103
DETAILED ACTION

This Office Action is in response to the claims filed 10/09/2025. Claims 5-6 are cancelled. Claims 1-3, 7, 9-11, and 15-17 are currently amended. Claims 1-4 and 7-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 2, 10, and 16 are objected to because of the following informalities: they contain a minor informality when reciting the limitation “the respective user application”. In order to properly reference said “user application” in claims 1, 9, and 15 respectively, applicant is recommended to amend as follows: “the ...”. Claim 10 contains a minor informality when reciting the limitation “accessing plurality of screenshots”. In order to properly reference said “plurality of screenshots” in claim 9, applicant is recommended to amend as follows: “accessing the plurality of screenshots”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 7, 9-11, and 15-17 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea without significantly more. Claims 1, 9, and 15 as drafted recite a process, under its broadest reasonable interpretation, that can be practically performed by the human mind with pen and paper but for the recitation of generic computer/computing components.
The claims disclose the limitation of “scan the plurality of screenshots to detect presence of the regression assertion…to predict conditions under which subsequent user application errors will occur, and wherein the system is configured to generate additional automated tests based on the predicted conditions” which is a process that can be practically performed by the human mind through observation, evaluation, judgement, and/or opinion with the aid of pen and paper. Thus, the claims fall under the “Mental Process” group of abstract ideas. The claims recite additional elements that are not integrated into a practical application. The claims disclose “A system for software testing, the system comprising: a storage device configured to store a plurality of screenshots of a user application captured during execution of the user application; a computer vision device capable of a pixel-by-pixel examination of the screenshots and coupled to the storage device, the computer vision device configured” which are recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). The claims further disclose the additional elements to “receive a description of a regression assertion to be detected within the plurality of screenshots, ... provide output indicating the presence of the regression assertion.” which are processes, under its broadest reasonable interpretation, that are directed to the insignificant extra solution activity of mere data gathering or outputting (See MPEP 2106.05(g)). Claim 1 recites the additional element “wherein the computer vision device includes machine learning circuitry” which is recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). 
Claim 15 recites the additional elements “A non-transitory machine readable medium including instructions that, when executed on a processor, cause the processor to perform operations” which are recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). Accordingly, the additional elements are not integrated into a practical application because they do not impose any meaningful limits upon practicing the abstract idea.

The claims recite additional elements that do not amount to significantly more than the abstract idea. The claims recite “A system for software testing, the system comprising: a storage device configured to store a plurality of screenshots of a user application captured during execution of the user application; a computer vision device capable of a pixel-by-pixel examination of the screenshots and coupled to the storage device, the computer vision device configured” which are generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). The claims also disclose the additional elements to “receive a description of a regression assertion to be detected within the plurality of screenshots, ... provide output indicating the presence of the regression assertion” which has been determined as a well-known, routine, and/or conventional activity of receiving or transmitting data over a network (See MPEP 2106.05(d)(II)). Claim 1 recites the additional element “wherein the computer vision device includes machine learning circuitry” which is recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)).
Claim 15 recites the additional elements “A non-transitory machine-readable medium including instructions that, when executed on a processor, cause the processor to perform operations” which are recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). Thus, the additional elements recited in the claims cannot provide an inventive concept nor amount to significantly more. The claims are not patent eligible. Claims 2, 10, and 16 recite “associating the plurality of screenshots with a software version information of the respective user application” which is a process, under its broadest reasonable interpretation, that can be practically performed by the human mind through observation, evaluation, judgement, and/or opinion with the aid of pen and paper. Thus, the claim falls under the “Mental Process” group of abstract ideas. The claims recite the additional element of “a relational database” which is recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). Claim 2 recites “wherein the plurality of screenshots are stored in a relational database, ... and wherein the output indicates a first software version at which the regression assertion was present.” Claims 10 and 16 also recite “accessing plurality of screenshots from a relational database, ... and wherein the output indicates a first software version at which the assertion was present.” These additional elements, under its broadest reasonable interpretation, are directed to the insignificant extra solution activity of mere data gathering or outputting (See MPEP 2106.05(g)). Accordingly, the additional elements are not integrated into a practical application because they do not impose any meaningful limits upon practicing the abstract idea. 
The additional elements do not amount to significantly more. The claims recite “a relational database” which is recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). Claim 2 recites “wherein the plurality of screenshots are stored in a relational database, ... and wherein the output indicates a first software version at which the regression assertion was present.” Claims 10 and 16 also recite “accessing plurality of screenshots from a relational database, ... and wherein the output indicates a first software version at which the assertion was present.” These additional elements have been determined as a well-known, routine, and/or conventional activity of receiving or transmitting data over a network (See MPEP 2106.05(d)(II)). Thus, the additional elements recited in the claims cannot provide an inventive concept nor amount to significantly more. The claims are not patent eligible.

Claim 3 recites additional elements “further comprising a user interface component coupled to the computer vision device,” which are recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). The claim further recites the additional element “wherein the user interface component is configured to display a first indication of the presence of the regression assertion and a second indication of a software build number.” which is a process, under its broadest reasonable interpretation, that is directed to the insignificant extra solution activity of mere data outputting (See MPEP 2106.05(g)). Accordingly, the additional elements are not integrated into a practical application because they do not impose any meaningful limits upon practicing the abstract idea.
Claim 3 recites the additional elements “further comprising a user interface component coupled to the computer vision device,” which are generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). The claim further recites the additional element “wherein the user interface component is configured to display an indication of presence of the regression assertion and an indication of a software build number.” which has been determined to be a well-known, routine, and/or conventional activity of receiving or transmitting data over a network (See MPEP 2106.05(d)(II)). Thus, the additional elements recited in the claim cannot provide an inventive concept nor amount to significantly more. The claim is not patent eligible. Claims 4, 12, and 18 recite the additional element “wherein the operations further comprise receiving one or more conditional statements limiting the scan to at least one of a date range, a software build number range, and a user application state” which is a process, under its broadest reasonable interpretation, that is directed to the insignificant extra solution activity of mere data gathering (See MPEP 2106.05(g)). Accordingly, the additional elements are not integrated into a practical application because they do not impose any meaningful limits upon practicing the abstract idea. The insignificant extra solution activity does not amount to significantly more. The claims recite “wherein the operations further comprise receiving one or more conditional statements limiting the scan to at least one of a date range, a software build number range, and a user application state” which has been determined to be a well-known, routine, and/or conventional activity of receiving or transmitting data over a network (See MPEP 2106.05(d)(II)). Thus, the additional elements recited in the claims cannot provide an inventive concept nor amount to significantly more. The claims are not patent eligible. 
Claims 13 and 19 recite “predict conditions…under which subsequent user application errors occur,” or equivalents thereof, which is a process, under its broadest reasonable interpretation, that can be practically performed by the human mind through observation, evaluation, judgement, and/or opinion with the aid of pen and paper. Thus, the claim falls under the “Mental Process” group of abstract ideas. Claims 13 and 19 further include the additional element “based on a machine learning model” or equivalents thereof, which is recited at a high level of generality such that it amounts to no more than mere generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). Accordingly, the additional elements are not integrated into a practical application because they do not impose any meaningful limits upon practicing the abstract idea.

The additional elements do not amount to significantly more. Claims 13 and 19 recite “based on a machine learning model” or equivalents thereof, which are generic computer/computing components to apply the abstract idea (See MPEP 2106.05(f)). Thus, the additional elements cannot provide an inventive concept nor amount to significantly more. The claims are not patent eligible.

Claims 14 and 20 recite the additional element of “generating additional automated tests based on the predicted conditions” which is a process, under its broadest reasonable interpretation, that can be practically performed by the human mind through observation, evaluation, judgement, and/or opinion with the aid of pen and paper. Thus, the claims fall under the “Mental Processes” group of abstract ideas and are not patent eligible.
Claim 7 recites the additional limitation “wherein the regression assertion is defined using computer vision device natural-language based definitions” which is a process, under its broadest reasonable interpretation, that can be practically performed by the human mind through observation, evaluation, judgement, and/or opinion with the aid of pen and paper. Thus, the claim falls under the “Mental Processes” group of abstract ideas and is not patent eligible. Claim 8 recites the additional element “wherein the plurality of screenshots are generated by automated tests of the user application” which is a process, under its broadest reasonable interpretation, that is directed to the insignificant extra solution activity of mere data gathering. Accordingly, the additional element is not integrated into a practical application because it does not amount to significantly more than the abstract idea. Claims 11 and 17 recite the additional element “further comprising displaying a first indication of the presence of the regression assertion and a second indication of a software build number at which a corresponding error or software bug was introduced into the associated user application.” which is a process, under its broadest reasonable interpretation, that is directed to the insignificant extra solution activity of mere data outputting (See MPEP 2106.05(g)). Accordingly, the additional element is not integrated into a practical application because it does not impose any meaningful limits upon practicing the abstract idea. The additional element does not amount to significantly more. 
Claims 11 and 17 recite the additional element “further comprising displaying a first indication of the presence of the regression assertion and a second indication of a software build number at which a corresponding error or software bug was introduced into the associated user application.” which has been determined to be a well-known, routine, and/or conventional activity of receiving or transmitting data over a network (See MPEP 2106.05(d)(II)). Thus, the claims cannot provide an inventive concept nor amount to significantly more. The claims are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3.
Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210397546 A1 hereinafter “Cser” in view of US 20120243745 A1 hereinafter “Amintafreshi” and further in view of US 20220100647 A1 hereinafter “Hamid”.

With regards to claim 1, Cser teaches A system for software testing, the system comprising: a storage device configured to store a plurality of screenshots of a user application captured during execution of the user application; (Cser [0107], “In some embodiments, the monitoring component directly monitors the application that is being tested or the environment that is executing the application (e.g., a web browser). In some embodiments, the monitoring component stores a picture of a web page as a screenshot and the monitoring component stores source code associated with the picture of the web page that was captured as a screenshot.”)

and a computer vision device capable of a pixel-by-pixel examination of the screenshots and coupled to the storage device, the computer vision device configured to: (Cser [0112], “In some embodiments, the technology comprises using computer vision to acquire, process, analyze, and understand digital images (e.g., to understand the visual render of a UI, elements of a UI, and/or to assign attributes and/or attribute values to elements of a UI) [a computer vision device capable of a pixel-by-pixel examination of the screenshots]. As used herein, the term “understanding” refers to transforming visual images into a description of the image that is stored in a database, data structure (e.g., a statistical model), and/or used by subsequent analysis [and coupled to the storage device].
In some embodiments, element attributes and/or attribute values are determined and stored in a database, data structure (e.g., a statistical model), and/or used for subsequent analysis, e.g., to produce an element definition and/or an element match score.”) Cser does not teach: receive a description of a regression assertion to be detected within the plurality of screenshots; scan the plurality of screenshots to detect presence of the regression assertion; provide output indicating presence of the regression assertion; However, in an analogous art Amintafreshi teaches receive a description of a regression assertion to be detected within the plurality of screenshots; (Amintafreshi [0047], “In the following examples, an automated GUI test tool is used to initiate most of the automatic tests in a node or in a system. The GUI test tool can initiate a regression test based on each computer program GUI by analyzing the GUI showing on a screen. The most fundamental functions of the GUI test tool can be collected in two categories: regression testing a system; and automatically controlling a program to achieve one or more tasks that can be hard or inefficient to control manually. In the regression testing case, the GUI test tool reads one or several program script instructions and according to the program script instructions the tool performs tasks on a GUI that, in many cases, is a predefined GUI (test object). The GUI test tool can further be seen as an interworking interface reading instructions from the script database and making sure that they are performed.”) scan the plurality of screenshots to detect presence of the regression assertion; provide output indicating presence of the regression assertion; (Amintafreshi [0041], “Referring back to FIG. 
3, the method further comprises loading (2) one or more program script instructions from the script database 104 and reading (3) the program script instructions; retrieving (4) data and at least one image object corresponding to the predefined GUI 103 from the data and image object database 105; taking (5) a screenshot of the uploaded predefined GUI and analyzing (6) whether there is a matching image object in the taken screenshot, followed by, if Yes, calculating (7) a target position on the screen 101 of the matched image object using retrieved data from the data and image object database [scan the plurality of screenshots to detect presence of the regression assertion]; activating (8) a control function for controlling the predefined GUI based on the program script instructions and the calculated target position; starting (9) a timer after further reading of the program script instructions, followed by taking (10) a further screenshot of the predefined GUI 103; analyzing (11) whether there is a matching image object equal to an expected result, which can also be retrieved from the data and image object database 105; if Yes, determining (12) whether the regression test is finished by further reading of the program script instructions; if Yes, determining (13) if there are more regression tests to be performed; and if No, completing (14) the regression testing by displaying at least one result on the screen 101 [provide output indicating presence of the regression assertion].”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Amintafreshi into the teachings of Cser. This combination of teachings would have resulted in a system for software testing configured to store screenshots with a device capable of computer vision analysis, as in Cser, and receiving a regression description for scanning the screenshots, as in Amintafreshi. 
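Amintafreshi's loop of taking a screenshot and analyzing whether a matching image object appears in it is, at its core, template matching. A minimal sketch of that single step, assuming screenshots and image objects are small grayscale arrays; the function name and toy data are illustrative, not taken from the cited references:

```python
import numpy as np

def find_template(screenshot: np.ndarray, template: np.ndarray):
    """Return (row, col) of the first exact occurrence of `template`
    inside `screenshot`, or None if absent; None corresponds to the
    regression assertion not being detected in this screenshot."""
    sh, sw = screenshot.shape
    th, tw = template.shape
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if np.array_equal(screenshot[r:r + th, c:c + tw], template):
                return (r, c)
    return None

# Toy example: a 2x2 "image object" embedded in a 5x5 screenshot.
screen = np.zeros((5, 5), dtype=np.uint8)
screen[3:5, 1:3] = [[1, 2], [3, 4]]
obj = np.array([[1, 2], [3, 4]], dtype=np.uint8)
print(find_template(screen, obj))  # (3, 1)
```

A production tool would use fuzzy matching (e.g. normalized cross-correlation) rather than exact pixel equality, since rendering differences between builds make exact matches brittle.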
One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of initiating one or more regression tests on a predefined GUI in accordance with programming script instructions (Amintafreshi [0035-36]).

The combination of Cser and Amintafreshi does not teach: wherein the computer vision device includes machine learning circuitry to predict conditions under which subsequent user application errors will occur, wherein the system is configured to generate additional automated tests based on the predicted conditions

However, in an analogous art Hamid teaches wherein the computer vision device includes machine learning circuitry to predict conditions under which subsequent user application errors will occur, (Hamid [0114], “Similarly, a reinforcement learning algorithm can be trained by operating objects in a large number of applications to determine a probability of certain actions occurring from operation of a given type of object. From those probabilities, an application can be explored without code or testing scripts 1333. For example, the reinforcement learning model may learn through training data that for certain types of applications pushing a “help” button is most likely to result in transfer to a static help page, but for other types of applications, pushing a “help” button is most likely to result in opening a chat window. This information may be used to predict the operational flow of an application from its application mapping 1331 and object identification information by predicting an action for each object on each screen of an application and assigning a probability as to the correctness of the prediction. If the actual result of operating the object results in an action different from the predicted action, a possible application error can be noted.
As one example, the reinforcement learning model may be trained in such a way that it is rewarded for actions that result in moving to a new screen that is consistent with both the function of the identified object and the application mapping. Thus, during training, if an object is identified as a “help” button and clicking on the object consistently results in transfer to a screen mapped as a “help” screen, the reinforcement learning algorithm will learn to associate “help” buttons with “help” screens, and will be able to accurately predict the function of a “help” button in a software application being analyzed. If, during execution of a testing scenario, the “help” button does not result in a transfer to a “help” screen, this counter-predictive result can be flagged as a possible application error.”) wherein the system is configured to generate additional automated tests based on the predicted conditions (Hamid [0120], “After the software is analyzed using models 1520, the validation tests 1524 may be automatically executed on the software 1530 by receiving a list of mobile devices on which the software application should be tested 1531, automatically installing the application on devices in a testing lab 1532, and performing testing in accordance with the testing scenario(s) 1533 generated at the analysis stage 1520 based on the analysis of the software 1521-1523 and validation tests selected 1524.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Hamid into the teachings of Cser in view of Amintafreshi. 
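Hamid's counter-predictive check reduces to comparing a model's predicted outcome of operating a UI object against the observed outcome and flagging mismatches. A minimal sketch, with a static lookup table standing in for the trained reinforcement-learning model (the table entries and names are illustrative assumptions, not from Hamid):

```python
# Illustrative stand-in for a trained model: maps an object type to the
# screen that operating it is predicted to reach.
PREDICTED_TARGET = {
    "help_button": "help_screen",
    "login_button": "account_screen",
}

def flag_possible_errors(observations):
    """observations: list of (object_id, observed_screen) pairs.
    Returns (object, expected, observed) triples where the observed
    result contradicts the prediction, i.e. possible application errors."""
    flags = []
    for obj, observed in observations:
        expected = PREDICTED_TARGET.get(obj)
        if expected is not None and observed != expected:
            flags.append((obj, expected, observed))
    return flags

obs = [("help_button", "chat_window"), ("login_button", "account_screen")]
print(flag_possible_errors(obs))  # [('help_button', 'help_screen', 'chat_window')]
```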
This combination of teachings would have resulted in a system for software testing configured to store screenshots with a device capable of computer vision analysis, as in Cser, and receiving a regression description for scanning the screenshots, as in Amintafreshi, and using machine learning to predict areas of potential failure for generating additional tests, as in Hamid. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of processing multiple screens through a trained machine learning algorithm to identify screens and objects, understand the operational flow of the application, define priorities and dependencies within the application, define validation tests, and automatically generate one or more testing scenarios for the application (Hamid [0042]).

With regards to claim 7, the rejection of claim 1 is incorporated. Cser further teaches: wherein the regression assertion is defined using computer vision device natural-language based definitions (Cser [0111], “In some embodiments, the technology comprises use of computer vision to analyze a software application [using computer vision device] (e.g., a web application (e.g., a web application UI (e.g., a screen shot of a web application UI))). In some embodiments, computer vision provides data describing the visual render of a UI and/or one or more elements on a software application (e.g., web application) UI. In some embodiments, computer vision provides data describing the text (e.g., a text string) of an element (element text data), e.g., by the use of computer vision and optical character recognition (OCR).”) (Cser [0154], “During the development of embodiments of the technology provided herein, the technology was used to identify a target element [wherein the assertion is defined] by identifying the intent of a test case. See, e.g., FIG. 15.
As described herein, a software test case consists of a sequence of actions and elements upon which those actions are performed. A written description of the test case step may also be included in the test case definition. An example of a test step description can be seen in FIG. 15 such as "click on the first element in the list." If a natural language test case description is included, this language can be used to build a step intention model [natural-language based definitions]).”) With regards to claim 8, the rejection of claim 1 is incorporated. Cser further teaches: wherein the plurality of screenshots are generated by automated tests of the user application. (Cser [0107], “In some embodiments, the monitoring component executes with the automated testing utility to capture screenshots during testing of a web application and to capture metadata that describes the context in which the test is being performed. In some embodiments, the metadata is specific for a particular automated test (e.g., the metadata is applicable to every screenshot taken while executing the particular automated test (e.g., name of the test script, name of the web application, etc.)) In some embodiments, the metadata is specific for a step or portion of an automated test ( e.g., the metadata is applicable to one or more particular screenshots).”) Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Cser in view of Amintafreshi in view of Hamid and further in view of US 20240220083 A1 hereinafter “Wright”. With regards to claim 2, the rejection of claim 1 is incorporated. 
Cser further teaches: wherein the plurality of screenshots are stored in a relational database, the relational database [associating the plurality of screenshots with software version information of the respective user application,] (Cser [0109], “In some embodiments, a user and/or test case (e.g., script) action is a click received as input by a UI and screenshots and associated metadata are recorded during the click. In some embodiments, screenshots and associated metadata are recorded during a web application event, e.g., a validation event, setting flags, variables, or threshold conditions as met, the creation or deletion of a particular element such as a DOM element, communications between the server side web application and the client side web application in either direction or both directions, internal validation functions such as user input being a certain primitive type or a certain length before forwarding the data to a database, successfully executing a sub-routine or function such as scanning for malware, or any combination thereof. 
In some embodiments, screenshots and associated metadata are recorded during a browser event, e.g., a page loading, completion of page loading, loading a previous page in the browser history, loading a next page in the browser history, opening a new page in a new browser, opening a new page in a new tab, saving a page, printing a page, opening a print preview screen, changing the size of text in a web browser, or any combination thereof.”)

and wherein the output indicates a first software version at which the assertion was present (Cser [0113], “In some embodiments, computer vision comprises identifying changes in the attributes and/or attribute values of elements [at which the assertion was present] for multiple versions (e.g., 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100, or more versions) of a web application UI [wherein the output indicates a first software version]”)

[Examiner’s Note: The process of recording screenshots can include incorporating the images into a database along with the associated metadata]

The combination of Cser, Amintafreshi, and Hamid does not teach: [wherein the plurality of screenshots are stored in a relational database, the relational database] associating the plurality of screenshots with software version information of the respective user application,

However, in an analogous art Wright teaches [wherein the plurality of screenshots are stored in a relational database, the relational database] associating the plurality of screenshots with software version information of the respective user application, (Wright [0060], “In one embodiment, a strategy may be one or more rules, parameters, criteria, conditions, etc., for generating labels/identifiers for user interfaces that are associated with an event. An event may be a collection, group, snapshot, etc., of data that was captured [wherein the plurality of screenshots are stored] (e.g., recorded, gathered, etc.)
by a recorder (e.g., recorder 141 illustrated in FIG. 1) as a user interacts with the user interfaces of one or more application. For example, an event may include views of a user interfaces (e.g., screenshots), user interactions with the user interfaces (e.g., mouse clicks, mouse movements, keyboard inputs, etc.) and related input data, and metadata for the application (e.g., the name/title of the window for the application, the size of the window, the name of a document/file used by the application, a version number of the application [associating the plurality of screenshots with software version information of the respective user application], etc.).”)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Wright into the teachings of Cser in view of Amintafreshi and further in view of Hamid. This combination of teachings would have resulted in a system for software testing configured to store screenshots with a device capable of computer vision analysis, as in Cser, and receiving a regression description for scanning the screenshots, as in Amintafreshi, using machine learning to predict areas of potential failure for generating additional tests, as in Hamid, and using the scanned screenshots to determine software version of the released user application, as in Wright. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of identifying user interfaces using one or more strategies (Wright [0002]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cser in view of Amintafreshi in view of Hamid in view of Wright in view of US 20190227917 A1 hereinafter “Henry” and further in view of US 20240354102 A1 hereinafter “Hemadri”. With regards to claim 3, the rejection of claim 2 is incorporated.
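For illustration, the claim 2 arrangement discussed above — screenshots kept in a relational database that associates each image with software version information of the user application — can be sketched in a few lines. This is a minimal, hypothetical sketch: the schema, table, and column names are invented for the example and are not drawn from Cser or Wright.

```python
import sqlite3

# In-memory relational store: each screenshot row carries the software
# version of the user application it was captured from (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE screenshots (
        id          INTEGER PRIMARY KEY,
        app_name    TEXT NOT NULL,
        app_version TEXT NOT NULL,   -- software version of the user application
        captured_at TEXT NOT NULL,   -- ISO-8601 capture timestamp
        image       BLOB NOT NULL
    )
""")

def record_screenshot(app_name, app_version, captured_at, image_bytes):
    """Persist one screenshot together with its version metadata."""
    conn.execute(
        "INSERT INTO screenshots (app_name, app_version, captured_at, image) "
        "VALUES (?, ?, ?, ?)",
        (app_name, app_version, captured_at, image_bytes),
    )

def first_version_with(app_name, predicate):
    """Scan screenshots in capture order; return the earliest version
    whose screenshot satisfies the detector predicate, else None."""
    rows = conn.execute(
        "SELECT app_version, image FROM screenshots "
        "WHERE app_name = ? ORDER BY captured_at",
        (app_name,),
    ).fetchall()
    for version, image in rows:
        if predicate(image):
            return version
    return None

record_screenshot("demo-app", "1.0.0", "2023-08-01T00:00:00", b"pixels-ok")
record_screenshot("demo-app", "1.1.0", "2023-08-02T00:00:00", b"pixels-glitch")

# Stand-in for a computer-vision assertion detector.
assertion_present = lambda img: b"glitch" in img
print(first_version_with("demo-app", assertion_present))  # -> 1.1.0
```

The ordered scan is what lets the output indicate the first software version at which the asserted regression appears, as the claim language describes.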
Cser further teaches: further comprising a user interface component coupled to the computer vision device, [wherein the user interface component is configured to display a first indication of presence of the assertion and a second indication of a software build number.] (Cser [0142], “In some embodiments, the computer system is coupled by the bus to a display, such as a cathode ray tube (CRT), liquid crystal display (LCD), or other display technology known in the art, for displaying information to a computer user.”)

The combination of Cser, Amintafreshi, Hamid, and Wright does not explicitly teach: [further comprising a user interface component coupled to the computer vision device,] wherein the user interface component is configured to display a first indication of presence of the assertion [and a second indication of a software build number.]

However, in an analogous art Henry teaches [further comprising a user interface component coupled to the computer vision device,] wherein the user interface component is configured to display an indication of presence of the assertion [and an indication of a software build number.] (Henry [0020-22], “Assets may be recognized by the test performance monitoring and reporting system using different types of image recognition techniques [of presence of the assertion]. Using the asset library makes it easier to adapt a script using these logical blocks. When a new device is added, assets that are executed by the logical blocks are changed to assets within the library specific to the device that was added... These logic blocks represent different actions that are executed by the mobile device. The blocks can be displayed in a graphical user interface as nodes, in some embodiments [wherein the user interface component is configured to display an indication]. Different blocks can be added or removed based on the varying command interfaces of the devices used.”)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Henry into the teachings of Cser in view of Amintafreshi in view of Hamid and further in view of Wright. This combination of teachings would have resulted in a system for software testing configured to store screenshots with a device capable of computer vision analysis, as in Cser, and receiving a regression description for scanning the screenshots, as in Amintafreshi, using machine learning to predict areas of potential failure for generating additional tests, as in Hamid, and using the scanned screenshots to determine software version of the released user application, as in Wright, wherein the processing comprises scanning for described input elements and providing output accordingly, as in Henry. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of providing a test protocol that can dynamically accommodate for different presentations of a live website or user interface (Henry [0005]).

The combination of Cser, Amintafreshi, Hamid, Wright, and Henry does not teach: [further comprising a user interface component coupled to the computer vision device, wherein the user interface component is configured to display a first indication of presence of the assertion] and a second indication of a software build number.

However, in an analogous art Hemadri teaches [further comprising a user interface component coupled to the computer vision device, wherein the user interface component is configured to display a first indication of presence of the assertion] and a second indication of a software build number.
(Hemadri [0053], “For example, in some cases, the identifier of the snapshot may have the format <identifier of the software application+identifier of the continuous integration pipeline>-<identifier of the target version>-<build number>.”)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Hemadri into the teachings of Cser in view of Amintafreshi in view of Hamid in view of Wright and further in view of Henry. This combination of teachings would have resulted in a system for software testing configured to store screenshots with a device capable of computer vision analysis, as in Cser, and receiving a regression description for scanning the screenshots, as in Amintafreshi, using machine learning to predict areas of potential failure for generating additional tests, as in Hamid, and using the scanned screenshots to determine software version of the released user application, as in Wright, wherein the processing comprises scanning for described input elements and providing output accordingly, as in Henry, and the output is configured to display or indicate the build number of the user space under test, as in Hemadri. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of streamlining the build process to include new features or bug fixes in large applications or complex build processes (Hemadri [0002]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Cser in view of Amintafreshi in view of Hamid in view of Wright in view of Henry in view of Hemadri and further in view of US 20220413845 A1 hereinafter “Matthew”. With regards to claim 4, the rejection of claim 3 is incorporated.
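The snapshot-identifier format quoted from Hemadri [0053] can be made concrete with a short sketch. The application, pipeline, and version values below are invented for illustration, and the use of “+” as a literal separator inside the first field is an assumption about how the quoted template would be rendered.

```python
def make_snapshot_id(app_id: str, pipeline_id: str,
                     target_version: str, build_number: int) -> str:
    """Compose an identifier following the quoted template:
    <app id + CI pipeline id>-<target version>-<build number>."""
    return f"{app_id}+{pipeline_id}-{target_version}-{build_number}"

def parse_snapshot_id(snapshot_id: str):
    """Recover the three dash-separated fields; splitting from the right
    keeps any dashes inside the leading app+pipeline field intact."""
    app_and_pipeline, target_version, build_number = snapshot_id.rsplit("-", 2)
    return app_and_pipeline, target_version, int(build_number)

sid = make_snapshot_id("billing", "nightly", "2.4.0", 117)
print(sid)                     # -> billing+nightly-2.4.0-117
print(parse_snapshot_id(sid))  # -> ('billing+nightly', '2.4.0', 117)
```

An identifier of this shape is what would let a user interface display a build number alongside an assertion indication, as the claim 3 mapping argues.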
The combination of Cser, Amintafreshi, Hamid, Wright, and Hemadri does not teach: wherein the computer vision device is configured to receive one or more conditional statements from the user interface component, the one or more conditional statements [limiting the scan to at least one of a date range, a software build number range, and a user application state.]

However, in an analogous art Henry teaches wherein the computer vision device is configured to receive one or more conditional statements from the user interface component, the one or more conditional statements [limiting the scan to at least one of a date range, a software build number range, and a user application state.] (Henry [0067-68], “In step 506, the test performance monitoring and reporting system executes at least one block that uses an asset from the selected set of device specific assets. The asset is an image that represents an onscreen element that can be manipulated by a user, such as a play button or a call button. The logic block using the asset directs the script to search for the asset on the mobile device screen... In step 508, at least one block that uses conditional branching is executed. Conditional branching is employed when a logical block triggers one of a plurality of outcomes in response to different input parameters. In some embodiments, Boolean logic may be used to implement branching.”)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Henry into the teachings of Cser in view of Amintafreshi in view of Hamid in view of Wright and further in view of Hemadri. This combination of teachings would have resulted in a system for software testing configured to store screenshots with a device capable of computer vision analysis, as in Cser, and receiving a regression description for scanning the screenshots, as in Amintafreshi, using machine learning to predict areas of potential failure for generating additional tests, as in Hamid, and using the scanned screenshots to determine software version of the released user application, as in Wright, wherein the processing comprises scanning for described input elements and providing output accordingly, as in Henry, and the output is configured to display or indicate the build number of the user space under test, as in Hemadri. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of providing a test protocol that can dynamically accommodate for different presentations of a live website or user interface (Henry [0005]).

The combination of Cser, Amintafreshi, Hamid, Wright, Henry, and Hemadri does not explicitly teach: [wherein the computer vision device is configured to receive one or more conditional statements from the user interface component, the one or more conditional statements] limiting the scan to at least one of a date range, a software build number range, and a user application state.

However, in an analogous art Matthew teaches [wherein the computer vision device is configured to receive one or more conditional statements from the user interface component, the one or more conditional statements] limiting the scan to at least one of a date range, (Matthew [0120], “In some implementations, the first time period may be determined, e.g., by a processor in the cloud management system, based on statistical properties of one or more metrics in the list of metrics”) a software build number range (Matthew [0051], “In some implementations, an analysis of a release may be performed automatically, e.g., by an automatic process based on auto detection of a release, e.g., based on detection of an updated build time, an updated build version identifier, a build number, detection of a change in size of a code base or a size of a code image, or may be inferred based on an alert from a performance management system, e.g., based on a change in a performance metric that meets a predetermined threshold”), and a user application state (Matthew [0145], “Post-release input metrics are utilized to determine a state of the software application, assuming the new release exhibits a similar behavior to a previous release of the software application, as characterized by the trained ML model.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Matthew into the teachings of Cser in view of Amintafreshi in view of Hamid in view of Wright in view of Henry and further in view of Hemadri.
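The claim 4 limitation at issue here — conditional statements that limit the scan to a date range, a software build number range, or a user application state — could be modeled along the following lines. The record layout and field names are hypothetical, invented only to illustrate the three kinds of conditions the claim recites.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Screenshot:
    captured: date
    build: int
    app_state: str   # e.g., "logged_in", "checkout" (hypothetical states)

def limit_scan(shots, date_range=None, build_range=None, app_state=None):
    """Apply optional conditional statements that narrow which screenshots
    are scanned; None means 'no restriction' for that condition."""
    selected = []
    for s in shots:
        if date_range and not (date_range[0] <= s.captured <= date_range[1]):
            continue
        if build_range and not (build_range[0] <= s.build <= build_range[1]):
            continue
        if app_state and s.app_state != app_state:
            continue
        selected.append(s)
    return selected

shots = [
    Screenshot(date(2025, 6, 1), 101, "logged_in"),
    Screenshot(date(2025, 6, 5), 104, "checkout"),
    Screenshot(date(2025, 7, 1), 110, "logged_in"),
]

# Scan only June captures taken in the logged-in state.
hits = limit_scan(
    shots,
    date_range=(date(2025, 6, 1), date(2025, 6, 30)),
    app_state="logged_in",
)
print([s.build for s in hits])  # -> [101]
```

Each keyword argument plays the role of one conditional statement; combining them narrows the set of screenshots the computer vision device would examine.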
This combination of teachings would have resulted in a system for software testing configured to store screenshots with a device capable of computer vision analysis, as in Cser, and receiving a regression description for scanning the screenshots, as in Amintafreshi, using machine learning to predict areas of potential failure for generating additional tests, as in Hamid, and using the scanned screenshots to determine software version of the released user application, as in Wright, wherein the processing comprises scanning for described input elements and providing output accordingly, as in Henry, the output is configured to display or indicate the build number of the user space under test, as in Hemadri, and further using the scan to determine metadata, as in Matthew. One of ordinary skill in the art would have been motivated to combine these teachings for the purpose of checking abnormal behavior of applications with a ML model to analyze and provide recommendations, based on origin of errors (Matthew [0041]).

Claims 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Cser in view of Amintafreshi. With regards to claim 9, Cser teaches [receiving an indication of a regression assertion to be detected within the plurality of screenshots of a user application during execution of the user application,] the regression assertion being defined using a computer vision device capable of a pixel-by-pixel examination of the screenshots (Cser [0111], “In some embodiments, the technology comprises use of computer vision to analyze a software application [using computer vision device] (e.g., a web application (e.g., a web application UI (e.g., a screen shot of a web application UI))). In some embodiments, computer vision provides data describing the visual render of a UI and/or one or more elements on a software application (e.g., web application) UI. In some embodiments, computer vision provides data describing the text (e.g., a text string) of an element (element text data), e.g., by the use of computer vision and optical character recognition (OCR).”) and natural language-based definitions (Cser [0154], “During the development of embodiments of the technology provided herein, the technology was used to identify a target element [wherein the regression assertion is defined] by identifying the intent of a test case. See, e.g., FIG. 15. As described herein, a software test case consists of a sequence of actions and elements upon which those actions are performed. A written description of the test case step may also be included in the test case definition. An example of a test step description can be seen in FIG. 15 such as "click on the first element in the list." If a natural language test case description is included, this language can be used to build a step intention model [natural-language based definitions].”) and the plurality of screenshots being generated by previously-executed automated tests of the user application; (Cser [0108], “Accordingly, embodiments provide a test case (e.g., script) that provides a series of instructions to an automated testing utility and the automated testing utility interacts with a software application (e.g., a web application (e.g., a web page) UI. The monitoring component captures screenshots of the UI of the software application while the test case (e.g., script) is being executed by the automated testing utility. In some embodiments, screen captures are recorded when defined criteria are satisfied, e.g., when certain types of steps are identified in the test case (e.g., script), when certain types of actions or events occur in the software application, or a combination thereof.
In some embodiments, screen captures are recorded before they are processed by the automated testing utility or after they are displayed in the UI of the software application. Thus, in some embodiments, the monitoring component tracks an interaction before it occurs by analyzing the test case (e.g., script) and a screen capture occurs before the UI changes. In some embodiments, the monitoring component records a screen capture after a web application responds to actions taken by a simulated user. Screen captures are recorded relative to the monitoring component identifying an action or event occurring in the test case (e.g., script) and waiting for the action event to happen in the web application (e.g., as a change in the UI).”)

[Examiner’s Note: The script can run to monitor and capture screenshots. These captured screenshots can be generated over previous executions of the script as well]

Cser does not teach: receiving an indication of a regression assertion to be detected within the plurality of screenshots of a user application during execution of the user application, [the regression assertion being defined using a computer vision device capable of a pixel-by-pixel examination of the screenshots and natural language-based definitions and the plurality of screenshots being generated by previously-executed automated tests of the user application;]

However, in an analogous art Amintafreshi teaches receiving an indication of a regression assertion to be detected within the plurality of screenshots of a user application during execution of the user application, [the regression assertion being defined using a computer vision device capable of a pixel-by-pixel examination of the screenshots and natural language-based definitions and the plurality of screenshots being generated by previously-executed automated tests of the user application;] (Amintafreshi [0047], “In the following examples, an automated GUI test tool is used to initiate most of the automatic tests in
a node or in a system. The GUI test tool can initiate a regression test based on each computer program GUI by analyzing the GUI showing on a screen. The most fundamental functions of the GUI test tool can be collected in two

Prosecution Timeline

Aug 10, 2023: Application Filed
Jun 06, 2025: Non-Final Rejection — §101, §103
Jun 19, 2025: Response Filed
Sep 08, 2025: Non-Final Rejection — §101, §103
Oct 09, 2025: Response Filed
Nov 17, 2025: Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572351: INTEGRATION OF MACHINE LEARNING MODELS INTO SOFTWARE SYSTEMS USING SOFTWARE LIBRARY (granted Mar 10, 2026; 2y 5m to grant)
Patent 12541353: DEPLOYING AND UPDATING APPLICATIONS EXECUTED ON CONTROL SYSTEMS CONNECTED TO EDGE COMPUTE MODULES VIA A BACKPLANE (granted Feb 03, 2026; 2y 5m to grant)
Patent 12528429: ELECTRONIC CONTROL UNIT, VEHICLE CONTROL SYSTEM, AND VEHICLE CONTROL METHOD (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524329: WEB APPLICATION OBSERVABILITY WITH DISTRIBUTED TRACKING AND CUSTOM HEADER (granted Jan 13, 2026; 2y 5m to grant)
Patent 12505026: OBJECT HISTORY TRACKING (granted Dec 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 93%
With Interview: 99% (+100.0% lift)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
