Prosecution Insights
Last updated: April 19, 2026
Application No. 18/633,589

METHOD AND SYSTEM FOR MANAGING APPLICATIONS USING ARTIFICIAL INTELLIGENCE (AI)

Status: Non-Final OA (§101, §102, §103)
Filed: Apr 12, 2024
Examiner: SHIMELES, BEZAWIT NOLAWI
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: HCL Technologies Limited
OA Round: 1 (Non-Final)

Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% (1 granted / 1 resolved), +38.0% vs TC avg (above average)
Interview Lift: -100.0% (minimal lift; resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 13 currently pending
Career History: 14 total applications across all art units
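For readers who want to sanity-check the figures above, the sketch below shows how metrics of this kind are commonly derived. The function names, the lift definition, and the example inputs are assumptions for illustration only, not this report's actual methodology; the Tech Center average used in the comment is simply the value implied by the +38.0% delta shown.

    # Illustrative sketch (assumed formulas and inputs, not the report's pipeline).

    def allow_rate(granted: int, resolved: int) -> float:
        """Career allow rate as a percentage of resolved applications."""
        return 100.0 * granted / resolved if resolved else 0.0

    def interview_lift(rate_with_interview: float, rate_without_interview: float) -> float:
        """Assumed definition: allow rate with an interview minus allow rate without."""
        return rate_with_interview - rate_without_interview

    career = allow_rate(granted=1, resolved=1)   # 100.0%, matching the figure above
    tc_delta = career - 62.0                     # a TC average near 62% would imply the +38.0% shown
    # One hypothetical with/without split consistent with the -100.0% lift figure:
    lift = interview_lift(rate_with_interview=0.0, rate_without_interview=100.0)
    print(f"career allow rate: {career:.1f}%, delta vs TC avg: {tc_delta:+.1f}%, lift: {lift:+.1f}%")
    # With only one resolved case in the sample, a lift figure such as -100.0%
    # is dominated by small-sample effects and should be read cautiously.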

Statute-Specific Performance

§101: 17.4% (-22.6% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 1 resolved case.
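As a quick consistency check on the table above, subtracting each delta from the examiner's rate recovers the Tech Center reference used for that statute. A minimal sketch follows; the variable names and layout are illustrative only, and the figures are copied from the table.

    statute_stats = {  # (examiner rate %, delta vs TC avg %) taken from the table above
        "§101": (17.4, -22.6),
        "§103": (47.8, +7.8),
        "§102": (13.0, -27.0),
        "§112": (19.6, -20.4),
    }
    for statute, (examiner_rate, delta_vs_tc) in statute_stats.items():
        implied_tc_avg = examiner_rate - delta_vs_tc
        print(f"{statute}: implied TC average = {implied_tc_avg:.1f}%")
    # Every row implies a reference near 40.0%, consistent with a single
    # Tech Center average estimate behind all four comparisons.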

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/12/2024 and 06/17/2025 are being considered by the examiner.

Drawing Objections

The drawings are objected to because of the following informalities:
- In Fig. 2, reference number 200 is not found in the specification.
- In Fig. 2, “text case 202” should read “testcase 202” to align correctly with what is described in the specification.
- In Fig. 3, reference number 312 is not found in the specification.
- In Fig. 5, reference number “700” should instead read “500” since reference number 700 is not found in the specification.
Appropriate correction is required.

Specification Objections

The specification is objected to because of the following informalities:
- In Paragraph [0003], line 12, “the exiting techniques” should read “the existing techniques” in order to correct the spelling error.
- In Paragraph [0021], line 10, “in some another embodiment” should read “in ” in order to be grammatically correct.
- In Paragraph [0028], line 5, “a text extraction model 206” should read “a text extraction module 206” in order to align with the correct label in Figure 2 of the drawings.
- In Paragraph [0032], line 6, “from the text case 202” should read “from the testcase 202.”
- In Paragraph [0046], line 2, “the at step 310” should read “then at step 310” in order to correct the spelling error.
- In Paragraph [0048], line 5, “extracted from the text case” should read “extracted from the testcase.”
Appropriate correction is required.

Claim Objections

Claims 1 and 4 are objected to because of the following informalities:
- In claim 1, line 9, “text label is mapped the web element” should read “text label is mapped to the web element.”
- In claim 4, line 1, “the method of claim 2, wherein identifying the text label comprises” should read “the method of claim 1, wherein identifying the text label comprises” in order to map to the correct independent claim recalled by the disclosed limitation.
Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Claims 1, 8, and 15 recite limitations that use words like “means” (or “step”) or similar terms with functional language and do invoke 35 U.S.C. 112(f): Claims 1, 8, 15; recite the limitation “…by a trained Artificial Intelligence (AI) model...” [Lines 2, 5, 4]. Claim 1; recites the limitation: “…by the trained AI model…” [Lines 5, 7, 10, 12] Claim 8; recites the limitation: “…by the trained AI model…” [Lines 8, 10, 14, 16] Claim 15; recites the limitation: “…by the trained AI model…” [Lines 7, 9, 12, 14] Claims 1, 8, 15; recite the limitation “…a testing unit for performing an action...” [Lines 12, 16, 14]. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. After a careful analysis, as disclosed above, and a careful review of the specification the following limitations in claims 1, 8, and 15: “trained Artificial Intelligence (AI) model” (Figs 1-2, Paragraph [0028] – Figs. 1-2, Paragraph [0028] – “The trained AI model may include a text extraction model 206, an element detection module 208, and a mapping module 210. 
As will be appreciated, the trained AI model may correspond to a deep learning model (e.g., a region-based convolution neural network (R-CNN)).” See also Paragraphs [0018, 0041]. Thus, “trained AI model” has sufficient structure associated with it wherein it is a deep learning model or convolutional neural network.) “testing unit” (Fig. 2, #216, Paragraphs [0021-0022] – “The testing unit may correspond to a test automation framework. As will be appreciated, the test automation framework may correspond to any framework that supports automated testing of the application (i.e., the mobile application or the web application). Further, in some embodiment, the testing unit may be part of the computing device 102. Furthermore, in some another embodiment, the testing unit may be part of any external computing device, such as external devices 118. Examples of the computing device 102 may include, but is not limited to, a mobile phone, a laptop, a desktop, or a PDA, an application server, and so forth. The computing device 102 may further include the memory 104, a processor 106, and the Input/Output unit 108.” Thus, “testing unit” has sufficient structure associated with it wherein it is or is part of a computer.) If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-20 are rejected under 35 U.S.C. 101 because: Regarding independent claim 1 and its dependent claims 2-7, claim 1 is directed to a process (method), which falls within the four statutory categories. Claim 1 recites, in part: “identifying… a text label from a test case and a real-time image associated with an application…; determining… a positioning of each of a set of web elements within the real-time image; mapping… the text label to a web element from the set of web elements based on the determined positioning… generating… a segmented image comprising the text label and the web element, upon mapping; and transmitting… the segmented image to a testing unit for performing an action associated with the text label and the web element.” The limitations as drafted above, are processes that, under broadest reasonable interpretation (BRI) cover the performance of the limitation in the mind which falls within the “mental processes” grouping of abstract ideas. 
The limitations of “identifying… a text label from a test case and a real-time image associated with an application…; determining… a positioning of each of a set of web elements within the real-time image; mapping… the text label to a web element from the set of web elements based on the determined positioning…; generating… a segmented image comprising the text label and the web element, upon mapping; and transmitting… the segmented image to a testing unit for performing an action associated with the text label and the web element” are steps, under BRI, that a human can also perform through mental processes such as observation and evaluation, as it is merely reciting steps of collecting and analyzing information; a human could visually inspect a screen, identify a text label, determine nearby web elements, match them based on observed attributes, and provide the relevant region to a tester. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. In particular, the claim recites the following additional element(s)- “by a trained Artificial Intelligence (AI) model…” “using a mapping algorithm…” “wherein the application is one of the mobile application or a web application…” The additional element “by a trained Artificial Intelligence (AI) model…” is a generic well-known neural network model recited at a high level of generality for performing a series of data manipulation and data gathering steps, it is in the claim as a mere attempt to implement the abstract ideas/judicial exceptions using a generic neural network model without further limiting how, in detail, the model works to arrive at such an outcome; “using a mapping algorithm…” is an insignificant clause reciting a generic mapping algorithm at a high level of generality to perform a known function; “wherein the application is one of the mobile application or a web application…” is an insignificant clause of merely further specification of the element that it depends on, and not an indication of an integration of the abstract ideas into a practical application nor considered significantly more. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea. Please see MPEP 2106.04.(d).III.C. There are no additional elements, such as for these additional elements as indicated above, that amount to significantly more than the judicial exception. Please see MPEP §2106.05. The claim is directed to an abstract idea. For all of the foregoing reasons, claim 1 does not comply with the requirements of 35 U.S.C. 101. Accordingly, the dependent claims 2-7 do not provide elements that overcome the deficiencies of the independent claim 1. Moreover, claims 2-4, 6, and 7 recite, in part, wherein clauses of merely further specification of the element which each of them depend on, therefore not an indication of an integration of the abstract ideas into a practical application nor considered significantly more. 
Claim 5 recites, in part, “capturing a set of real-time images associated with a plurality of web pages of the application, wherein each of the set of real-time images may be captured based on test steps within the testcase; analysing each of the set of real-time images based on the text label; and selecting from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing.” These are limitations that are further specifying what the additional element “the method of claim 1” comprises, reciting merely generic steps of data filtering and manipulation. Accordingly, the dependent claims 2-7 are not patent eligible under 35 U.S.C. 101. Regarding independent claim 8 and its dependent claims 9-14, claim 8 is directed to a system (machine), which falls within one of the four statutory categories. The independent claim 8 recites analogous limitations to the independent claim 1. Hence, these analogous limitations are not 35 U.S.C. 101 eligible for the reasons above in the claim 1 analysis. Furthermore, claim 8 recites some additional features such as “a system for managing applications, the system comprising: a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor instructions, which when executed by the processor, cause the processor to:…” The recited features are ones of generic computers and computer components recited at a high level of generality to perform generic well-known functions such as a processor processing instruction stored in a memory, etc. The dependent claims 9-14, each recite analogous limitations to the dependent claims 2-7, hence, these analogous limitations are not 35 U.S.C. 101 eligible for the reasons provided in the analysis above. The independent claim 15 recites analogous limitations to the independent claim 1. Hence, these analogous limitations are not 35 U.S.C. 101 eligible for the reasons above in the claim 1 analysis. Furthermore, claim 15 recites some additional features such as “a non-transitory computer-readable medium storing computer-executable instructions for managing applications, the stored instructions, when executed by a processor, causes the processor to perform operations comprising:…” The recited features are ones of generic computers and computer components recited at a high level of generality to perform generic well-known functions such as a processor processing instruction stored in a memory, etc. The dependent claims 16-20, each recite analogous limitations to the dependent claims 2-6, hence, these analogous limitations are not 35 U.S.C. 101 eligible for the reasons provided in the analysis above. Claim Rejections - 35 USC § 102. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. 
Claims 1-4, 6, 7-11, 13, 14-18, and 20 are rejected under 35 U.S.C.102(a)(1)/(a)(2) as being anticipated by KUMAR et al. (US 20190250891 A1), herein after referenced as KUMAR. Regarding claim 1, KUMAR teaches a method for managing applications (Fig. 6, Paragraph [0048] – KUMAR discloses the present disclosure generally relates to application development, and more particularly, to techniques for automating the development of a graphic user interface (GUI) for an application from design information for the GUI), the method comprising: identifying, by a trained Artificial Intelligence (AI) model, a text label from a testcase (Fig. 6, step 604, Paragraph [0135] – KUMAR discloses at 604, text regions that include text content may be detected and extracted from the GUI screen image. KUMAR further discloses a fully convolutional network model may be used to detect text regions in the GUI screen image.) and a real-time image associated with an application (Fig. 1, Paragraph [0061] – KUMAR discloses GUI screen images 104 may include an image that is a photograph captured using an image capture device such as a camera, a scanner, and the like.), wherein the application is one of the mobile application or a web application (Fig. 1, Paragraph [0062] – KUMAR discloses the application that is to be developed using GUI screen images 104 may be one of various types of applications including but not restricted to a mobile application (e.g., an application executable by a mobile device), a desktop application, a web application, an enterprise application, and the like.); determining, by the trained AI model, a positioning of each of a set of web elements (Fig. 6, step 610, Paragraph [0140] – KUMAR discloses at 610, UI components may be detected and the corresponding locations in the GUI screen image may be determined. UI components may include, for example, buttons, check boxes, lists, text entry boxes, icons, containers, radio buttons, switch buttons, and the like. KUMAR further discloses a neural network may extract features from the GUI screen images as described above, and may implement an object detection technique (e.g., SSD or YOLO technique described above) to localize one or more UI components at one or more different locations of the GUI screen image uses the extracted features) within the real-time image (Fig. 1, Paragraph [0061]); mapping, by the trained AI model, the text label to a web element from the set of web elements (Fig. 1, Paragraph [0070] – KUMAR discloses model generation system 102 may extract text information 132 from GUI screen images 104. KUMAR further discloses based on the size and location of each UI component and the location information of the text content items, some text content items (e.g., text on a clickable button) may be associated with certain UI components (e.g., the clickable button)) based on the determined positioning (Fig. 1, Paragraph [0068] – KUMAR discloses this GUI model generation processing may include, for example, for a GUI screen, determining a set of user interface components (e.g., buttons, drop down lists, segments, and the like) and their attributes (e.g., labels, sizes, locations)) using a mapping algorithm (Fig. 2, Paragraph [0088] – KUMAR discloses various pieces of information may be extracted from the GUI screen image and may be used to identify the UI components and text content items for a screen. Certain attributes of the identified components may be determined, for example, using machine learning-based models. 
The attributes of the identified components may then be used to generate the GUI model and or the code for implementing the GUI.), wherein the text label is mapped the web element based on a corresponding set of attributes (Fig. 2, Paragraph [0084] – KUMAR discloses buttons 214 may include a text content item 218, such as “Next,” “Cancel,” or “OK” on the button. The UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.); generating, by the trained AI model, a segmented image comprising the text label and the web element, upon mapping (Fig. 6, step 610, Paragraph [0140] – KUMAR discloses based on the location or the boundaries of each UI component, a sub-image [wherein sub-image is a segmented image] within the boundaries for each UI component may be extracted from the GUI screen image.); and transmitting, by the trained AI model (Fig. 4, Paragraph [0115] – KUMAR discloses each of the input GUI screen images may be processed to identify and extract GUI components including text content items and individual UI components, determine parameters of the text content items and the UI components (e.g., sizes, locations, colors, and the like), and classify the UI components to determine the types of the UI components, using the machine learning-based model(s)), the segmented image to a testing unit for performing an action associated with the text label and the web element (Fig. 6, Paragraph [0144] – KUMAR discloses a GUI model may be generated for the GUI based upon the text content items and corresponding locations, the classified UI components and corresponding locations, and the layout for the GUI screen. The GUI model may store information related to the processing performed at 604, 606, 610, 612, 614, and 616. The information stored in the GUI model can be used by a downstream consumer to generate an implementation of the GUI.). Regarding claim 2, KUMAR teaches the method of claim 1, KUMAR further teaches, wherein the corresponding set of attributes comprises a set of text attributes (Fig. 2, Paragraph [0087] – KUMAR discloses text information may have associated attributes, such as fonts, sizes, colors, locations, and the like.) and a set of web elements attributes (Fig. 2, Paragraph [0084] – KUMAR discloses UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.). Regarding claim 3, KUMAR teaches the method of claim 2, KUMAR further teaches, wherein the set of text attributes associated with the text label comprises an orientation of the text label, an alignment of the text label, font size, font color, and font style (Fig. 2, Paragraph [0087] – KUMAR discloses text information may have associated attributes, such as fonts, sizes, colors, locations, and the like. Paragraph [0139] – KUMAR further discloses the alignment of the text content item within the UI component can be reserved by the placeholder.). Regarding claim 4, KUMAR teaches the method of claim 2, KUMAR further teaches, wherein identifying the text label (Fig. 6, Paragraph [0138] – KUMAR discloses at 606, text content items and corresponding locations in the text regions may be extracted from the sub-images) comprises: extracting the text label using a text recognition technique (Fig. 
6, Paragraph [0138] – KUMAR discloses an optical character recognition (OCR) process may be performed on each of the extracted sub-images to extract the text information associated with each text content item.); and validating the text label based on the set of text attributes using a text correction algorithm (Fig. 3, Paragraph [0103] – KUMAR discloses text analysis module 358 may analyze the text content items to identify clickable text content items in the GUI screen image. A clickable text content item may indicate some actions or functions and may usually include at least one verb (e.g., cancel, save, clear, etc.), and may not be associated with any UI component.). Regarding claim 6, KUMAR teaches the method of claim 2, KUMAR further teaches, wherein the set of web elements attributes (Fig. 2, Paragraph [0084] – KUMAR discloses GUI screen 200 may include one or more UI components [wherein UI components are web elements]) comprises a type of the web element, an alignment of the web element, an orientation of the web element, a size of the web element, a shape of the web element, number of blocks within the web element, background of the real-time image comprising the web element (Fig. 2, Paragraph [0084] – KUMAR discloses UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.). Regarding claim 7, KUMAR teaches the method of claim 1, KUMAR further teaches, wherein mapping the text label to the web element (Fig. 1, Paragraph [0070] – KUMAR discloses model generation system 102 may extract text information 132 from GUI screen images 104. KUMAR further discloses based on the size and location of each UI component and the location information of the text content items, some text content items (e.g., text on a clickable button) may be associated with certain UI components (e.g., the clickable button)) comprises: selecting the web element from the set of web elements based on the determined positioning (Fig. 6, Paragraph [0140] – KUMAR discloses at 610, UI components may be detected and the corresponding locations in the GUI screen image may be determined in a manner similar to the processing described above with respect to 510. The UI components may include, for example, buttons, check boxes, lists, text entry boxes, icons, containers, radio buttons, switch buttons, and the like.) and the set of web elements attributes (Fig. 6, Paragraph [0140] – KUMAR discloses various contour detection techniques may be used to detect the boundaries of each of the UI components. For example, a neural network may extract features from the GUI screen images as described above, and may implement an object detection technique (e.g., SSD or YOLO technique described above) to localize one or more UI components at one or more different locations of the GUI screen image uses the extracted features. Based on the location or the boundaries of each UI component, a sub-image within the boundaries for each UI component may be extracted from the GUI screen image.), wherein a positioning of the web element is within a Region of Interest (ROI) associated with the text label (Fig. 6, Paragraph [0139] – KUMAR discloses a placeholder may indicate that the original UI component may include certain text information and thus is likely to be one of certain types of UI components, such as a clickable button or a text entry box. In addition, the alignment of the text content item within the UI component can be reserved by the placeholder. 
In some implementations, only texts that may overlap with UI components or adjacent to UI components may be replaced with placeholders.). Regarding claim 8, KUMAR teaches a system for managing applications (Fig. 1, Paragraph [0058] – KUMAR discloses FIG. 1 depicts a simplified high level diagram of an example of a system 100 for generating a graphic user interface (GUI) model for a GUI based upon design information for the GUI. See also Paragraph [0048].), the system comprising: a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor instructions (Fig. 1, Paragraph [0067] – KUMAR discloses model generation system 102 may include one or more subsystems that are configured to work together to generate GUI model 124. These subsystems may be implemented in hardware, in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors or cores) of a computer system, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., a memory device) such as memory 122.), which when executed by the processor, cause the processor to: identify, by a trained Artificial Intelligence (AI) model, a text label from a testcase (Fig. 6, step 604, Paragraph [0135] – KUMAR discloses at 604, text regions that include text content may be detected and extracted from the GUI screen image. KUMAR further discloses a fully convolutional network model may be used to detect text regions in the GUI screen image.) and a real-time image (Fig. 1, Paragraph [0061] – KUMAR discloses GUI screen images 104 may include an image that is a photograph captured using an image capture device such as a camera, a scanner, and the like) associated with an application (Fig. 1, Paragraph [0061] – KUMAR discloses GUI screen images 104 may include an image that is a screenshot, for example, a screenshot of a screen of an existing application, where the to-be-developed application is to have a similar GUI screen as the existing application) and wherein the application is one of the mobile application or a web application (Fig. 1, Paragraph [0062] - KUMAR discloses the application that is to be developed using GUI screen images 104 may be one of various types of applications including but not restricted to a mobile application (e.g., an application executable by a mobile device), a desktop application, a web application, an enterprise application, and the like.). determine, by the trained AI model, a positioning of each of a set of web elements (Fig. 6, step 610, Paragraph [0140] – KUMAR discloses at 610, UI components may be detected and the corresponding locations in the GUI screen image may be determined. UI components may include, for example, buttons, check boxes, lists, text entry boxes, icons, containers, radio buttons, switch buttons, and the like. KUMAR further discloses a neural network may extract features from the GUI screen images as described above, and may implement an object detection technique (e.g., SSD or YOLO technique described above) to localize one or more UI components at one or more different locations of the GUI screen image uses the extracted features) within the real-time image (Fig. 1, Paragraph [0061] – KUMAR discloses GUI screen images 104 may include an image that is a photograph captured using an image capture device such as a camera, a scanner, and the like.); map, by the trained AI model, the text label to a web element from the set of web elements (Fig. 
1, Paragraph [0070] – KUMAR discloses model generation system 102 may extract text information 132 from GUI screen images 104. KUMAR further discloses based on the size and location of each UI component and the location information of the text content items, some text content items (e.g., text on a clickable button) may be associated with certain UI components (e.g., the clickable button)) based on the determined positioning (Fig. 1, Paragraph [0068] – KUMAR discloses this GUI model generation processing may include, for example, for a GUI screen, determining a set of user interface components (e.g., buttons, drop down lists, segments, and the like) and their attributes (e.g., labels, sizes, locations)) using a mapping algorithm (Fig. 2, Paragraph [0088] – KUMAR discloses various pieces of information may be extracted from the GUI screen image and may be used to identify the UI components and text content items for a screen. Certain attributes of the identified components may be determined, for example, using machine learning-based models. The attributes of the identified components may then be used to generate the GUI model and or the code for implementing the GUI.), wherein the text label is mapped to the web element based on a corresponding set of attributes (Fig. 2, Paragraph [0084] – KUMAR discloses buttons 214 may include a text content item 218, such as “Next,” “Cancel,” or “OK” on the button. The UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.); generate, by the trained AI model, a segmented image comprising the text label and the web element, upon mapping (Fig. 6, step 610, Paragraph [0140] – KUMAR discloses based on the location or the boundaries of each UI component, a sub-image [wherein sub-image is a segmented image] within the boundaries for each UI component may be extracted from the GUI screen image.); and transmit, by the trained AI model (Fig. 4, Paragraph [0115] – KUMAR discloses each of the input GUI screen images may be processed to identify and extract GUI components including text content items and individual UI components, determine parameters of the text content items and the UI components (e.g., sizes, locations, colors, and the like), and classify the UI components to determine the types of the UI components, using the machine learning-based model(s)), the segmented image to a testing unit for performing an action associated with the text label and the web element (Fig. 6, Paragraph [0144] – KUMAR discloses a GUI model may be generated for the GUI based upon the text content items and corresponding locations, the classified UI components and corresponding locations, and the layout for the GUI screen. The GUI model may store information related to the processing performed at 604, 606, 610, 612, 614, and 616. The information stored in the GUI model can be used by a downstream consumer to generate an implementation of the GUI.). Regarding claim 9, KUMAR teaches the system of claim 8, KUMAR further teaches wherein the corresponding set of attributes comprises a set of text attributes (Fig. 2, Paragraph [0087] – KUMAR discloses text information may have associated attributes, such as fonts, sizes, colors, locations, and the like) and a set of web elements attributes (Fig. 2, Paragraph [0084] – KUMAR discloses UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.). 
Regarding claim 10, KUMAR teaches the system of claim 9, KUMAR further teaches wherein the set of text attributes associated with the text label comprises an orientation of the text label, an alignment of the text label, font size, font color, and font style (Fig. 2, Paragraph [0087] – KUMAR discloses text information may have associated attributes, such as fonts, sizes, colors, locations, and the like. Paragraph [0139] – KUMAR further discloses the alignment of the text content item within the UI component can be reserved by the placeholder.). Regarding claim 11, KUMAR teaches the system of claim 9, KUMAR further teaches wherein, to identify the text label (Fig. 6, Paragraph [0138] – KUMAR discloses at 606, text content items and corresponding locations in the text regions may be extracted from the sub-images), the processor executable instructions (Fig. 1, Paragraph [0067]) further cause the processor to: extract the text label using a text recognition technique (Fig. 6, Paragraph [0138] – KUMAR discloses an optical character recognition (OCR) process may be performed on each of the extracted sub-images to extract the text information associated with each text content item.); and validate the text label based on the set of text attributes using a text correction algorithm (Fig. 3, Paragraph [0103] – KUMAR discloses text analysis module 358 may analyze the text content items to identify clickable text content items in the GUI screen image. A clickable text content item may indicate some actions or functions and may usually include at least one verb (e.g., cancel, save, clear, etc.), and may not be associated with any UI component.). Regarding claim 13, KUMAR teaches the system of claim 9, KUMAR further teaches wherein the set of web elements attributes (Fig. 2, Paragraph [0084] – KUMAR discloses GUI screen 200 may include one or more UI components [wherein UI components are web elements]) comprises a type of the web element, an alignment of the web element, an orientation of the web element, a size of the web element, a shape of the web element, number of blocks within the web element, background of the real-time image comprising the web element (Fig. 2, Paragraph [0084] – KUMAR discloses UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.). Regarding claim 14, KUMAR teaches the system of claim 8, KUMAR further teaches wherein, to map the text label to the web element (Fig. 1, Paragraph [0070] – KUMAR discloses model generation system 102 may extract text information 132 from GUI screen images 104. KUMAR further discloses based on the size and location of each UI component and the location information of the text content items, some text content items (e.g., text on a clickable button) may be associated with certain UI components (e.g., the clickable button)), the processor executable instructions (Fig. 1, Paragraph [0067]) further cause the processor to: select the web element from the set of web elements based on the determined positioning (Fig. 6, Paragraph [0140] – KUMAR discloses at 610, UI components may be detected and the corresponding locations in the GUI screen image may be determined in a manner similar to the processing described above with respect to 510. The UI components may include, for example, buttons, check boxes, lists, text entry boxes, icons, containers, radio buttons, switch buttons, and the like) and the set of web elements attributes (Fig. 
6, Paragraph [0140] – KUMAR discloses various contour detection techniques may be used to detect the boundaries of each of the UI components. For example, a neural network may extract features from the GUI screen images as described above, and may implement an object detection technique (e.g., SSD or YOLO technique described above) to localize one or more UI components at one or more different locations of the GUI screen image uses the extracted features. Based on the location or the boundaries of each UI component, a sub-image within the boundaries for each UI component may be extracted from the GUI screen image), wherein a positioning of the web element is within a Region of Interest (ROI) associated with the text label (Fig. 6, Paragraph [0139] – KUMAR discloses a placeholder may indicate that the original UI component may include certain text information and thus is likely to be one of certain types of UI components, such as a clickable button or a text entry box. In addition, the alignment of the text content item within the UI component can be reserved by the placeholder. In some implementations, only texts that may overlap with UI components or adjacent to UI components may be replaced with placeholders.). Regarding claim 15, KUMAR teaches a non-transitory computer-readable medium storing computer-executable instructions for managing applications (Fig. 1, Paragraph [0067] – KUMAR discloses the software may be stored on a non-transitory storage medium (e.g., a memory device) such as memory 122.), the stored instructions, when executed by a processor, causes the processor to perform operations (Fig. 1, Paragraph [0067] – KUMAR discloses these subsystems may be implemented in hardware, in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors or cores) of a computer system, or combinations thereof) comprising: identifying, by a trained Artificial Intelligence (AI) model, a text label from a testcase (Fig. 6, step 604, Paragraph [0135] – KUMAR discloses at 604, text regions that include text content may be detected and extracted from the GUI screen image. KUMAR further discloses a fully convolutional network model may be used to detect text regions in the GUI screen image.) and a real-time image associated with an application (Fig. 1, Paragraph [0061] – KUMAR discloses GUI screen images 104 may include an image that is a photograph captured using an image capture device such as a camera, a scanner, and the like.), wherein the application is one of the mobile application or a web application (Fig. 1, Paragraph [0062] – KUMAR discloses the application that is to be developed using GUI screen images 104 may be one of various types of applications including but not restricted to a mobile application (e.g., an application executable by a mobile device), a desktop application, a web application, an enterprise application, and the like.); determining, by the trained AI model, a positioning of each of a set of web elements (Fig. 6, step 610, Paragraph [0140] – KUMAR discloses at 610, UI components may be detected and the corresponding locations in the GUI screen image may be determined. UI components may include, for example, buttons, check boxes, lists, text entry boxes, icons, containers, radio buttons, switch buttons, and the like. 
KUMAR further discloses a neural network may extract features from the GUI screen images as described above, and may implement an object detection technique (e.g., SSD or YOLO technique described above) to localize one or more UI components at one or more different locations of the GUI screen image uses the extracted features) within the real-time image (Fig. 1, Paragraph [0061]); mapping, by the trained AI model, the text label to a web element from the set of web elements (Fig. 1, Paragraph [0070] – KUMAR discloses model generation system 102 may extract text information 132 from GUI screen images 104. KUMAR further discloses based on the size and location of each UI component and the location information of the text content items, some text content items (e.g., text on a clickable button) may be associated with certain UI components (e.g., the clickable button)) based on the determined positioning (Fig. 1, Paragraph [0068] – KUMAR discloses this GUI model generation processing may include, for example, for a GUI screen, determining a set of user interface components (e.g., buttons, drop down lists, segments, and the like) and their attributes (e.g., labels, sizes, locations)) using a mapping algorithm (Fig. 2, Paragraph [0088] – KUMAR discloses various pieces of information may be extracted from the GUI screen image and may be used to identify the UI components and text content items for a screen. Certain attributes of the identified components may be determined, for example, using machine learning-based models. The attributes of the identified components may then be used to generate the GUI model and or the code for implementing the GUI.), wherein the text label is mapped to the web element based on a corresponding set of attributes (Fig. 2, Paragraph [0084] – KUMAR discloses buttons 214 may include a text content item 218, such as “Next,” “Cancel,” or “OK” on the button. The UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.); generating, by the trained AI model, a segmented image comprising the text label and the web element, upon mapping (Fig. 6, step 610, Paragraph [0140] – KUMAR discloses based on the location or the boundaries of each UI component, a sub-image [wherein sub-image is a segmented image] within the boundaries for each UI component may be extracted from the GUI screen image.); and transmitting, by the trained AI model (Fig. 4, Paragraph [0115] – KUMAR discloses each of the input GUI screen images may be processed to identify and extract GUI components including text content items and individual UI components, determine parameters of the text content items and the UI components (e.g., sizes, locations, colors, and the like), and classify the UI components to determine the types of the UI components, using the machine learning-based model(s)), the segmented image to a testing unit for performing an action associated with the text label and the web element (Fig. 6, Paragraph [0144] – KUMAR discloses a GUI model may be generated for the GUI based upon the text content items and corresponding locations, the classified UI components and corresponding locations, and the layout for the GUI screen. The GUI model may store information related to the processing performed at 604, 606, 610, 612, 614, and 616. The information stored in the GUI model can be used by a downstream consumer to generate an implementation of the GUI.). 
Regarding claim 16, KUMAR teaches the non-transitory computer-readable medium of claim 15, KUMAR further teaches wherein the corresponding set of attributes comprises a set of text attributes (Fig. 2, Paragraph [0087] – KUMAR discloses text information may have associated attributes, such as fonts, sizes, colors, locations, and the like) and a set of web elements attributes (Fig. 2, Paragraph [0084] – KUMAR discloses UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.). Regarding claim 17, KUMAR teaches the non-transitory computer-readable medium of claim 16, KUMAR further teaches wherein the set of text attributes associated with the text label comprises an orientation of the text label, an alignment of the text label, font size, font color, and font style (Fig. 2, Paragraph [0087] – KUMAR discloses text information may have associated attributes, such as fonts, sizes, colors, locations, and the like. Paragraph [0139] – KUMAR further discloses the alignment of the text content item within the UI component can be reserved by the placeholder.). Regarding claim 18, KUMAR teaches the non-transitory computer-readable medium of claim 16, KUMAR further teaches wherein identifying the text label (Fig. 6, Paragraph [0138] – KUMAR discloses at 606, text content items and corresponding locations in the text regions may be extracted from the sub-images) comprises: extracting the text label using a text recognition technique (Fig. 6, Paragraph [0138] – KUMAR discloses an optical character recognition (OCR) process may be performed on each of the extracted sub-images to extract the text information associated with each text content item.); and validating the text label based on the set of text attributes using a text correction algorithm (Fig. 3, Paragraph [0103] – KUMAR discloses text analysis module 358 may analyze the text content items to identify clickable text content items in the GUI screen image. A clickable text content item may indicate some actions or functions and may usually include at least one verb (e.g., cancel, save, clear, etc.), and may not be associated with any UI component.). Regarding claim 20, KUMAR teaches the non-transitory computer-readable medium of claim 16, KUMAR further teaches wherein the set of web elements attributes (Fig. 2, Paragraph [0084] – KUMAR discloses GUI screen 200 may include one or more UI components [wherein UI components are web elements]) comprises a type of the web element, an alignment of the web element, an orientation of the web element, a size of the web element, a shape of the web element, number of blocks within the web element, background of the real-time image comprising the web element (Fig. 2, Paragraph [0084] – KUMAR discloses UI components may have associated attributes, such as sizes, colors, locations, or associated actions or functions.). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over KUMAR (US 20190250891 A1), hereinafter referenced as KUMAR in view of DWARAKANATH (US 20180189170 A1), hereinafter referenced as DWARAKANATH. Regarding claim 5, KUMAR teaches the method of claim 1, KUMAR fails to explicitly teach further comprising: capturing a set of real-time images associated with a plurality of web pages of the application, wherein each of the set of real-time images may be captured based on test steps within the testcase; analysing each of the set of real-time images based on the text label; and selecting from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing. However, DWARAKANATH explicitly teaches capturing a set of real-time images (Fig. 2, Paragraph [0058] – DWARAKANATH discloses test automation platform 205 may perform a comparison of an image of the user interface and images of known elements from other user interfaces.) associated with a plurality of web pages of the application (Fig. 2, Paragraph [0082] – DWARAKANATH discloses test automation platform 205 enables a tester to create test scripts to test a web-based user interface.), wherein each of the set of real-time images (Fig. 2, Paragraph [0058]) may be captured based on test steps within the testcase (Fig. 4, Paragraph [0044] – DWARAKANATH discloses process 400 may include receiving a test script that includes information identifying an element of a user interface and/or a set of test steps to test the user interface (block 410). For example, test automation platform 205 may receive a test script that includes information identifying an element of a user interface and/or a set of test steps to perform related to testing the user interface.); analysing each of the set of real-time images based on the text label (Fig. 4, Paragraph [0053] - DWARAKANATH discloses process 400 may include processing the test script using a processing technique to identify the information included in the test script (block 420)); and selecting from the set of real-time images, the real-time image comprising the text label (Fig. 4, Paragraph [0057] – DWARAKANATH discloses process 400 may include identifying the element displayed on the user interface using the identified information included in the test script (block 430). For example, test automation platform 205 may identify the element on the user interface based on identifying a term, tag, and/or phrase included in the test script that identifies the element.) and the set of web elements, in response to analysing (Fig. 4, Paragraph [0058] – DWARAKANATH discloses the technique may permit test automation platform 205 to visually identify an element displayed by a user interface. 
For example, test automation platform 205 may perform a comparison of an image of the user interface and images of known elements from other user interfaces to identify a button, a text box, a dropdown menu, a label with text, and/or the like displayed on the user interface based on the elements of the user interface having similar features to the other known elements.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of KUMAR of having a method for managing applications, the method comprising: identifying, by a trained Artificial Intelligence (AI) model, a text label from a testcase and a real-time image associated with an application, wherein the application is one of the mobile application or a web application with the teachings of DWARAKANATH of having capturing a set of real-time images associated with a plurality of web pages of the application, wherein each of the set of real-time images may be captured based on test steps within the testcase; analysing each of the set of real-time images based on the text label; and selecting from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing. Wherein having KUMAR’s method for managing applications wherein further comprising: capturing a set of real-time images associated with a plurality of web pages of the application, wherein each of the set of real-time images may be captured based on test steps within the testcase; analysing each of the set of real-time images based on the text label; and selecting from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing. The motivation behind the modification would have been to obtain a method of managing applications that generates GUI implementations in an automated manner such that this level of automation can substantially speed up the application development cycle and reduce the development costs, since both KUMAR and DWARAKANATH relate to user interface development and testing processes, wherein KUMAR discloses techniques for automating the development of a graphic user interface (GUI) for an application from design documents, and DWARAKANATH describes methods and systems for automatic interaction with a user interface based on identifying elements displayed by the user interface, thereby reducing or eliminating the need for a tester to know, or have access to, program code underlying the user interface. Please see KUMAR (US 20190250891 A1), Paragraph [0083], and DWARAKANATH (US 20180189170 A1), Paragraph [0021]. Regarding claim 12, KUMAR teaches the system of claim 8, Although KUMAR further teaches wherein the processor executable instructions further cause the processor (Fig. 
1, Paragraph [0067] – KUMAR discloses these subsystems may be implemented in hardware, in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors or cores) of a computer system, or combinations thereof) to: KUMAR fails to explicitly teach capture a set of real-time images associated with a plurality of web pages of the application, wherein each of the set of real-time images may be captured based on test steps within the testcase; analyse each of the set of real-time images based on the text label; and select from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing. However, DWARAKANATH explicitly teaches capture a set of real-time images (Fig. 2, Paragraph [0058] – DWARAKANATH discloses test automation platform 205 may perform a comparison of an image of the user interface and images of known elements from other user interfaces.) associated with a plurality of web pages of the application (Fig. 2, Paragraph [0082] – DWARAKANATH discloses test automation platform 205 enables a tester to create test scripts to test a web-based user interface.), wherein each of the set of real-time images (Fig. 2, Paragraph [0058]) may be captured based on test steps within the testcase (Fig. 4, Paragraph [0044] – DWARAKANATH discloses process 400 may include receiving a test script that includes information identifying an element of a user interface and/or a set of test steps to test the user interface (block 410). For example, test automation platform 205 may receive a test script that includes information identifying an element of a user interface and/or a set of test steps to perform related to testing the user interface.); analyse each of the set of real-time images based on the text label (Fig. 4, Paragraph [0053] - DWARAKANATH discloses process 400 may include processing the test script using a processing technique to identify the information included in the test script (block 420)); and select from the set of real-time images, the real-time image comprising the text label (Fig. 4, Paragraph [0057] – DWARAKANATH discloses process 400 may include identifying the element displayed on the user interface using the identified information included in the test script (block 430). For example, test automation platform 205 may identify the element on the user interface based on identifying a term, tag, and/or phrase included in the test script that identifies the element.) and the set of web elements, in response to analysing (Fig. 4, Paragraph [0058] – DWARAKANATH discloses the technique may permit test automation platform 205 to visually identify an element displayed by a user interface. For example, test automation platform 205 may perform a comparison of an image of the user interface and images of known elements from other user interfaces to identify a button, a text box, a dropdown menu, a label with text, and/or the like displayed on the user interface based on the elements of the user interface having similar features to the other known elements.). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of KUMAR, which discloses a system for managing applications, the system comprising: a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor instructions, which when executed by the processor, cause the processor to: identify, by a trained Artificial Intelligence (AI) model, a text label from a testcase and a real-time image associated with an application, with the teachings of DWARAKANATH, which discloses capture a set of real-time images associated with a plurality of web pages of the application, wherein each of the set of real-time images may be captured based on test steps within the testcase; analyse each of the set of real-time images based on the text label; and select from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing. The combination yields KUMAR's system for managing applications wherein the processor-executable instructions further cause the processor to: capture a set of real-time images associated with a plurality of web pages of the application, wherein each of the set of real-time images may be captured based on test steps within the testcase; analyse each of the set of real-time images based on the text label; and select from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing.

The motivation for the modification would have been to obtain a system for managing applications that generates GUI implementations in an automated manner, such that this level of automation can substantially speed up the application development cycle and reduce development costs. Both KUMAR and DWARAKANATH relate to user interface development and testing: KUMAR discloses techniques for automating the development of a graphical user interface (GUI) for an application from design documents, and DWARAKANATH describes methods and systems for automatic interaction with a user interface based on identifying elements displayed by the user interface, thereby reducing or eliminating the need for a tester to know, or have access to, the program code underlying the user interface. Please see KUMAR (US 20190250891 A1), Paragraph [0083], and DWARAKANATH (US 20180189170 A1), Paragraph [0021].

Regarding claim 19, KUMAR teaches the non-transitory computer-readable medium of claim 15. KUMAR fails to explicitly teach further comprising: capturing a set of real-time images associated with a plurality of web pages of the application; analysing each of the set of real-time images based on the text label; and selecting from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing.

However, DWARAKANATH explicitly teaches further comprising: capturing a set of real-time images (Fig. 2, Paragraph [0058] – DWARAKANATH discloses test automation platform 205 may perform a comparison of an image of the user interface and images of known elements from other user interfaces.) associated with a plurality of web pages of the application (Fig. 2, Paragraph [0082] – DWARAKANATH discloses test automation platform 205 enables a tester to create test scripts to test a web-based user interface.); analysing each of the set of real-time images based on the text label (Fig. 4, Paragraph [0053] – DWARAKANATH discloses process 400 may include processing the test script using a processing technique to identify the information included in the test script (block 420).); and selecting from the set of real-time images, the real-time image comprising the text label (Fig. 4, Paragraph [0057] – DWARAKANATH discloses process 400 may include identifying the element displayed on the user interface using the identified information included in the test script (block 430). For example, test automation platform 205 may identify the element on the user interface based on identifying a term, tag, and/or phrase included in the test script that identifies the element.) and the set of web elements, in response to analysing (Fig. 4, Paragraph [0058] – DWARAKANATH discloses the technique may permit test automation platform 205 to visually identify an element displayed by a user interface. For example, test automation platform 205 may perform a comparison of an image of the user interface and images of known elements from other user interfaces to identify a button, a text box, a dropdown menu, a label with text, and/or the like displayed on the user interface based on the elements of the user interface having similar features to the other known elements.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of KUMAR, which discloses a non-transitory computer-readable medium storing computer-executable instructions for managing applications, the stored instructions, when executed by a processor, causing the processor to perform operations comprising: identifying, by a trained Artificial Intelligence (AI) model, a text label from a testcase and a real-time image associated with an application, with the teachings of DWARAKANATH, which discloses further comprising: capturing a set of real-time images associated with a plurality of web pages of the application; analysing each of the set of real-time images based on the text label; and selecting from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing. The combination yields KUMAR's non-transitory computer-readable medium for managing applications, the operations further comprising: capturing a set of real-time images associated with a plurality of web pages of the application; analysing each of the set of real-time images based on the text label; and selecting from the set of real-time images, the real-time image comprising the text label and the set of web elements, in response to analysing.

The motivation for the modification would have been to obtain a system of managing applications that generates GUI implementations in an automated manner, such that this level of automation can substantially speed up the application development cycle and reduce development costs. Both KUMAR and DWARAKANATH relate to user interface development and testing: KUMAR discloses techniques for automating the development of a graphical user interface (GUI) for an application from design documents, and DWARAKANATH describes methods and systems for automatic interaction with a user interface based on identifying elements displayed by the user interface, thereby reducing or eliminating the need for a tester to know, or have access to, the program code underlying the user interface. Please see KUMAR (US 20190250891 A1), Paragraph [0083], and DWARAKANATH (US 20180189170 A1), Paragraph [0021].
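Claim 19 turns on selecting the real-time image that contains both the text label and the set of web elements. One way to relate an OCR-detected label to nearby element regions is to use word-level bounding boxes; the sketch below assumes pytesseract's image_to_data output and hard-coded element boxes, and is purely illustrative rather than the claimed or cited implementation.

```python
# Hypothetical sketch: locate a text label inside a screenshot via OCR bounding
# boxes and pair it with the nearest detected element region. Element regions
# would normally come from a detector (e.g. the template matcher sketched above);
# here they are hard-coded for illustration.
import pytesseract
from PIL import Image


def label_box(image_path, text_label):
    """Return the (x, y, w, h) bounding box of the first OCR word matching the label."""
    data = pytesseract.image_to_data(Image.open(image_path),
                                     output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if word.strip().lower() == text_label.lower():
            return (data["left"][i], data["top"][i], data["width"][i], data["height"][i])
    return None


def map_label_to_element(label, element_boxes):
    """Map the label to the element whose centre is closest to the label's centre."""
    lx = label[0] + label[2] / 2
    ly = label[1] + label[3] / 2
    return min(element_boxes,
               key=lambda b: (b[0] + b[2] / 2 - lx) ** 2 + (b[1] + b[3] / 2 - ly) ** 2)


# Usage (illustrative):
# box = label_box("page.png", "Username")
# element = map_label_to_element(box, [(120, 80, 200, 30), (120, 160, 200, 30)])
```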
Conclusion

The prior art made of record and not relied upon, listed below, is considered pertinent to applicant's disclosure.

Dixon et al. (US 20180349730 A1) - User interface creation from screenshots is described. Initially, a user captures a screenshot of an existing graphical user interface (GUI). In one or more implementations, the screenshot is processed to generate different types of templates that are modifiable by users to create new GUIs. These different types of templates can include a snapping template, a wireframe template, and a stylized template. The described templates may aid GUI development in different ways depending on the type selected. To generate a template, the screenshot serving as the basis for the template is segmented into groups of pixels corresponding to components of the existing GUI. A type of component is identified for each group of pixels and locations in the screenshot are determined. Based on the identified types of GUI components and determined locations, the user-modifiable template for creating a new GUI is generated… Figs. 1, 2, 7, Abstract.

Saini et al. (US 20210294621 A1) - Disclosed is a method and system for improving accessibility of software applications on mobile devices. The method comprises capturing, in the background, images of different user interfaces of a software application when the software application is browsed on a mobile device, using an accessibility helper tool. A pre-trained data model may be used to identify elements and metadata of the elements present in the images. Based on the metadata, accessibility parameters of the elements may be analysed to generate a report for validation… Figs. 1-3, Abstract.

Aggarwal et al. (US 11610054 B1) - Techniques for template generation from image content include extracting information associated with an input image. The information comprises: 1) layout information indicating positions of content corresponding to a content type of a plurality of content types within the input image; and 2) text attributes indicating at least a font of text included in the input image. A user-editable template having the characteristics of the input image is generated based on the layout information and the text attributes… Figs. 1, 6, Abstract.

Singh et al. (US 11644960 B1) - A computer system configured to augment images of software objects is provided. The computer system includes a memory and at least one processor coupled to the memory. The at least one processor is configured to iteratively select an attribute value from a predetermined set of attribute values; modify an attribute of a software object according to the attribute value; and generate a respective augmented image of the software object with the attribute modified according to the attribute value. The software object may comprise an executable software object… Fig. 11, Abstract.

Rao et al. (US 20230195825 A1) - Described are methods and corresponding systems for generating and using selectors during web development. In some implementations, one or more natural language statements are obtained as input to a software application, for example, a web browser extension. The one or more input statements are analyzed, using natural language processing, to identify a first web element of a webpage and an action to be performed with respect to the first web element. A selector is then generated based on one or more attributes of the first web element. The selector operates as an address of the first web element and can, for example, be an XPath or CSS selector… Figs. 2, 3, Abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEZAWIT N SHIMELES, whose telephone number is (571) 272-7663. The examiner can normally be reached M-F, 7:30am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEZAWIT NOLAWI SHIMELES/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673
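Among the pertinent references listed in the Conclusion, Rao describes turning a natural-language statement into an XPath or CSS selector for a web element. Purely as orientation, a toy version of that idea might look like the sketch below; the regex pattern and selector templates are assumptions for illustration, not Rao's method.

```python
# Hypothetical sketch: derive a simple XPath selector from a natural-language
# test statement such as "Click the 'Submit' button". The regex and the XPath
# templates are illustrative only.
import re


def statement_to_xpath(statement):
    """Return (action, xpath) parsed from a statement, or None if it does not match."""
    match = re.match(r"(?i)\s*(click|type into|select)\s+the\s+'([^']+)'\s+(button|field|link)",
                     statement)
    if match is None:
        return None
    action, label, element_kind = match.groups()
    tag = {"button": "button", "field": "input", "link": "a"}[element_kind.lower()]
    if tag == "input":
        xpath = f"//label[normalize-space(text())='{label}']/following::input[1]"
    else:
        xpath = f"//{tag}[normalize-space(text())='{label}']"
    return action.lower(), xpath


print(statement_to_xpath("Click the 'Submit' button"))
# -> ('click', "//button[normalize-space(text())='Submit']")
```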

Prosecution Timeline

Apr 12, 2024: Application Filed
Feb 18, 2026: Non-Final Rejection under §101, §102, and §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 0% (-100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 1 resolved case by this examiner. Grant probability is derived from the examiner's career allow rate.
