Prosecution Insights
Last updated: April 19, 2026
Application No. 18/670,659

ELECTRONIC SYSTEMS GENERATING PRODUCT TESTING INSTRUCTIONS AND FOR PROVIDING AUTOMATED PRODUCT TESTING

Non-Final OA: §101, §103, §112
Filed: May 21, 2024
Examiner: CRANDALL, RICHARD W.
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Rainforest QA Inc.
OA Round: 3 (Non-Final)
Grant Probability: 30% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 1m
Grant Probability With Interview: 64%

Examiner Intelligence

Career Allow Rate: 30% (90 granted / 301 resolved; -22.1% vs TC avg)
Interview Lift: +33.8% (strong lift; allowance rate with vs. without an interview, among resolved cases)
Typical Timeline: 3y 1m avg prosecution
Career History: 343 total applications across all art units; 42 currently pending
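The headline examiner numbers above can be reproduced directly from the underlying counts. A minimal sketch in Python; note that the 30.2% "without interview" rate is an assumption back-computed from the reported 64% with-interview figure and +33.8% lift, not taken from raw data:

```python
# Career allow rate: granted / resolved, from the counts shown on the page.
granted, resolved = 90, 301
career_allow_rate = granted / resolved            # ~0.299, displayed as 30%

# Interview lift: allowance-rate difference between cases resolved with an
# interview and those without. The 0.302 "without" rate is a hypothetical
# value implied by the reported 64% and +33.8% figures.
rate_with_interview = 0.64
rate_without_interview = 0.302
interview_lift = rate_with_interview - rate_without_interview  # ~0.338

print(f"allow rate: {career_allow_rate:.1%}")     # allow rate: 29.9%
print(f"interview lift: +{interview_lift:.1%}")   # interview lift: +33.8%
```

The same arithmetic underlies the statute-specific deltas below: each per-statute overcome rate is compared against the Tech Center average estimate.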

Statute-Specific Performance

§101: 34.6% (-5.4% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 301 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to correspondence received February 10, 2026. Claims 1 and 34 are amended. Claims 1-34 are pending and have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 10, 2026 has been entered.

Drawings

New corrected drawings in compliance with 37 CFR 1.121(d) are required in this application because the following Figures are screenshots with illegible text and shading that is not used to indicate surface or shape, or showing parts in perspective (and therefore should be removed): 3D, 4C, 6A-B, 7A-B, 8A-8B. See 37 CFR 1.84(a)(1):

Drawings. There are two acceptable categories for presenting drawings in utility and design patent applications. Black ink. Black and white drawings are normally required. India ink, or its equivalent that secures solid black lines, must be used for drawings.

(l) Character of lines, numbers, and letters. All drawings must be made by a process which will give them satisfactory reproduction characteristics. Every line, number, and letter must be durable, clean, black (except for color drawings), sufficiently dense and dark, and uniformly thick and well-defined. The weight of all lines and letters must be heavy enough to permit adequate reproduction. This requirement applies to all lines however fine, to shading, and to lines representing cut surfaces in sectional views. 
Lines and strokes of different thicknesses may be used in the same drawing where different thicknesses have a different meaning.

(m) Shading. The use of shading in views is encouraged if it aids in understanding the invention and if it does not reduce legibility. Shading is used to indicate the surface or shape of spherical, cylindrical, and conical elements of an object. Flat parts may also be lightly shaded. Such shading is preferred in the case of parts shown in perspective, but not for cross sections. See paragraph (h)(3) of this section. Spaced lines for shading are preferred. These lines must be thin, as few in number as practicable, and they must contrast with the rest of the drawings. As a substitute for shading, heavy lines on the shade side of objects can be used except where they superimpose on each other or obscure reference characters. Light should come from the upper left corner at an angle of 45°. Surface delineations should preferably be shown by proper shading. Solid black shading areas are not permitted, except when used to represent bar graphs or color.

Applicant is advised to employ the services of a competent patent draftsperson outside the Office, as the U.S. Patent and Trademark Office no longer prepares new drawings. The corrected drawings are required in reply to the Office action to avoid abandonment of the application. The requirement for corrected drawings will not be held in abeyance. Unless applicant is otherwise notified in an Office action, objections to the drawings in a utility or plant application will not be held in abeyance, and a request to hold objections to the drawings in abeyance will not be considered a bona fide attempt to advance the application to final action (§ 1.135(c)).

The drawings, Figs. 9A-C, are objected to because black ink and black lines are not used. See 37 CFR 1.84(a)(1):

Drawings. There are two acceptable categories for presenting drawings in utility and design patent applications. Black ink. 
Black and white drawings are normally required. India ink, or its equivalent that secures solid black lines, must be used for drawings.

(l) Character of lines, numbers, and letters. All drawings must be made by a process which will give them satisfactory reproduction characteristics. Every line, number, and letter must be durable, clean, black (except for color drawings), sufficiently dense and dark, and uniformly thick and well-defined. The weight of all lines and letters must be heavy enough to permit adequate reproduction. This requirement applies to all lines however fine, to shading, and to lines representing cut surfaces in sectional views. Lines and strokes of different thicknesses may be used in the same drawing where different thicknesses have a different meaning.

Further, Figs. 3A-D, 4A-D, and 5A-H should be redone so that the words read left to right when the sheet is upright:

Arrangement of views. One view must not be placed upon another or within the outline of another. All views on the same sheet should stand in the same direction and, if possible, stand so that they can be read with the sheet held in an upright position. If views wider than the width of the sheet are necessary for the clearest illustration of the invention, the sheet may be turned on its side so that the top of the sheet, with the appropriate top margin to be used as the heading space, is on the right-hand side. Words must appear in a horizontal, left-to-right fashion when the page is either upright or turned so that the top becomes the right side, except for graphs utilizing standard scientific convention to denote the axis of abscissas (of X) and the axis of ordinates (of Y).

Here, this is simply text, and it is not necessary for the sheet to be turned on its side. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. 
Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

INFORMATION ON HOW TO EFFECT DRAWING CHANGES

Replacement Drawing Sheets

Drawing changes must be made by presenting replacement sheets which incorporate the desired changes and which comply with 37 CFR 1.84. An explanation of the changes made must be presented either in the drawing amendments section, or remarks section, of the amendment paper. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). A replacement sheet must include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of the amended drawing(s) must not be labeled as “amended.” If the changes to the drawing figure(s) are not accepted by the examiner, applicant will be notified of any required corrective action in the next Office action. 
No further drawing submission will be required, unless applicant is notified. Identifying indicia, if provided, should include the title of the invention, inventor’s name, and application number, or docket number (if any) if an application number has not been assigned to the application. If this information is provided, it must be placed on the front of each sheet and within the top margin.

Annotated Drawing Sheets

A marked-up copy of any amended drawing figure, including annotations indicating the changes made, may be submitted or required by the examiner. The annotated drawing sheet(s) must be clearly labeled as “Annotated Sheet” and must be presented in the amendment or remarks section that explains the change(s) to the drawings.

Timing of Corrections

Applicant is required to submit acceptable corrected drawings within the time period set in the Office action. See 37 CFR 1.85(a). Failure to take corrective action within the set period will result in ABANDONMENT of the application. If corrected drawings are required in a Notice of Allowability (PTOL-37), the new drawings MUST be filed within the THREE MONTH shortened statutory period set for reply in the “Notice of Allowability.” Extensions of time may NOT be obtained under the provisions of 37 CFR 1.136 for filing the corrected drawings after the mailing of a Notice of Allowability.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are all in the format: [nonce word] + “configured to” + [function]. 
In claim 1: a product retriever configured to access a product; one or more processing units configured to generate prompts for input to a neural network; an image capturer configured to capture one or more images…
In claim 2: a first prompt generator configured to generate the first set of prompts.
In claim 3: a second prompt generator configured to generate the second set of prompts.
In claim 4: a third prompt generator configured to generate the third set of prompts.
In claim 5: the one or more processing units are configured to obtain OCR data and/or DOM data, and to provide the OCR data and/or the DOM data along with the third set of prompts.
In claim 6: the one or more processing units are configured to access the neural network.
In claim 8: the one or more processing units are configured to access respective ones of the neural network models.
In claim 11: the one or more processing units are configured to obtain feedback from the neural network.
In claim 12: the one or more processing units are configured to generate additional prompts based on the feedback.
In claim 13: a product testing device configured to execute the product testing instruction to perform testing of the product; the product testing device is configured to perform the testing of the product by simulating human actions.
In claim 14: the product testing device is configured to move a cursor without input from a cursor control.
In claim 15: the product testing device is configured to make a selection of an object without input from a cursor control.
In claim 16: the product testing device is configured to insert a text in a field without input from a keyboard.
In claim 17: an interpreter configured to interpret the product testing instruction.
In claim 18: the one or more processing units are configured to access a first set of items and a second set of items for the neural network…
In claim 27: the product testing device is configured to check if an element is visible after the product testing device performs a testing action.
In claim 28: the product testing device is configured to check if an element is not visible after the product testing device performs a testing action.
In claim 29: the product testing device is configured to check if a specified page has loaded after the product testing device performs a testing action.
In claim 30: the product testing device is configured to: obtain a first image that is associated with the testing of the product, obtain a second image…

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Structural support for these elements must be found in the specification, and where found is noted below (paragraphs from the published application):

The product retriever is described as: “In some cases, the product retriever 110 may include a web-browser configured to access a website.” Par. 0178. 
The one or more processing units are described as: “In further embodiments, the processing units 130, 140, 150 may include respective neural networks.” Par. 0215. The processing unit is also described in pars. 0290-0291 as a processing system which includes a bus, RAM, ROM, etc. See also par. 0302, where it may refer to hardware or computer systems.

The prompt generator is described as: “In some cases, one or more of the processing units 130, 140, 150 may be implemented as one or more prompt generators.” Par. 0222.

No structure was found for the interpreter or the image capturer; they are described only in terms of their functions in the specification.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-34 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. 
The claim(s) contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

In claim 1, Applicant recites “an image capturer configured to capture one or more images….” This is interpreted as a means-plus-function substitute; see the interpretation section above. However, no structure for the image capturer is found in the specification, which is a requirement. Therefore, there is a lack of written description. See MPEP 2163.03:

A claim limitation expressed in means- (or step-) plus-function language "shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof." 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. If the specification fails to disclose sufficient corresponding structure, materials, or acts that perform the entire claimed function, then the claim limitation is indefinite because the applicant has in effect failed to particularly point out and distinctly claim the invention as required by 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. In re Donaldson Co., 16 F.3d 1189, 1195, 29 USPQ2d 1845, 1850 (Fed. Cir. 1994) (en banc). Such a limitation also lacks an adequate written description as required by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because an indefinite, unbounded functional limitation would cover all ways of performing a function and indicate that the inventor has not provided sufficient disclosure to show possession of the invention. See also MPEP § 2181.

Without a showing of structure, Applicant may either correct the claim and show where the structure for the image capturer is in the specification, or may amend to remove the means-plus-function aspect of the limitation. 
Similarly, claim 17 is rejected for not having structure for the interpreter; it may be overcome in the same ways as the image capturer, above. Claims 2-34 are rejected for being dependent on claim 1.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Per claims 1 and 17, the claim limitations “image capturer” and “interpreter” (see the 112(a) rejection above) invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)). 
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181. Claims 2-34 are rejected for being dependent on claim 1.

Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 5 recites “DOM” data, but there is no single accepted definition of what DOM data is. OCR is being interpreted as Optical Character Recognition. DOM data is not defined or clarified in the specification. Therefore, it is unclear what DOM data is, and the scope of the claim is unclear.

Claim 10 contains the trademark/trade name ChatGPT. Where a trademark or trade name is used in a claim as a limitation to identify or describe a particular material or product, the claim does not comply with the requirements of 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. See Ex parte Simpson, 218 USPQ 1020 (Bd. App. 1982). The claim scope is uncertain since the trademark or trade name cannot be used properly to identify any particular material or product. A trademark or trade name is used to identify a source of goods, and not the goods themselves. Thus, a trademark or trade name does not identify or describe the goods associated with the trademark or trade name. In the present case, the trademark/trade name is used to identify/describe the neural network and, accordingly, the identification/description is indefinite.

Therefore, claims 1-34 are rejected under 35 USC 112.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites: access a product; one or more images of the product; generate prompts for input to a neural network, wherein the prompts are configured to prompt the neural network to determine a feature of the product based on at least one of the one or more captured images, determine a suggested testing comprising a suggested-action identifier created by the neural network for the feature of the product, perform a verification to determine whether the suggested testing for the feature of the product can be performed, and generate product testing instruction after the verification is performed. 
Claim 34 recites: a method comprising: accessing a product; one or more images of the product; and generating prompts for input to a neural network, wherein the prompts are configured to prompt the neural network to: determine a feature of the product based on at least one of the one or more captured images, determine a suggested testing for the feature of the product, wherein the determined suggested testing comprises a suggested-action identifier created by the neural network, perform a verification to determine whether the suggested testing for the feature of the product can be performed, and generate product testing instruction after the verification is performed.

Claims 1 and 34 recite an abstract idea that is a mental process, because the steps access (retrieve) information: accessing a product is interpreted as accessing information about the product and retrieving images of the product. One could perform these steps mentally by looking at a product and images of a product. Then, prompts are generated for a neural network. Prompts are simply plain-English instructions that one writes for a neural network. The neural network is not positively claimed; it is claimed only in terms of what the prompt is configured to do (the prompt is configured to prompt the neural network; the neural network is not itself prompted). One could generate a prompt mentally by thinking of what one wants a neural network to do. These steps can easily be performed in the mind: receiving information about something and then composing a prompt is a mental process of taking in information and deciding what one wants ChatGPT to do. Therefore, the steps are a mental process.

The steps also describe a certain method of organizing human activity: following rules or instructions. 
This is because the steps of taking in information and then writing a prompt that performs certain steps to generate a product testing instruction (all of which is the requested desire put into ChatGPT) are rules or instructions for the neural network. This is similar to considering historical usage information while inputting data, BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281, 1286, 127 USPQ2d 1688, 1691 (Fed. Cir. 2018); and to a mental process that a neurologist should follow when testing a patient for nervous system malfunctions, In re Meyer, 688 F.2d 789, 791-93, 215 USPQ 193, 194-96 (CCPA 1982). Therefore, the steps also describe a certain method of organizing human activity.

This judicial exception is not integrated into a practical application. The additional elements, alone and in combination, are instructions to apply the abstract idea to computers or other machinery. See MPEP 2106.05(f)(2). The additional elements are:

Claim 1: an electronic system, comprising: a product retriever configured to…; an image capturer configured to capture…; and one or more processing units configured to…
Claim 34: by an electronic system; capturing by an image capturer of the electronic system; by one or more processing units of the electronic system.

The electronic system is taught by a generic computer; the product retriever by a web browser; the image capturer has no structure but for present purposes could be a screen grab; and the processing units are a neural network. In combination, they amount to a generic computing device such as a smartphone or laptop with conventional “machinery” such as screen grab (native to iOS, for example) and a neural network, which is also conventional considering that ChatGPT teaches this and is accessible by app. 
In combination, therefore, this is taught by a MacBook with ChatGPT loaded into a web browser, which is ordinary machinery, and therefore “apply it.” There is not a practical application of an abstract idea because the additional elements are recited at a high level and, in combination, amount to a computer and ordinary machinery on that computer. See MPEP 2106.05(f)(2).

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, for the same reasons that there is not a practical application, there is not significantly more than the abstract idea. The additional elements in combination are instructions to apply the abstract idea to a generic computer running ChatGPT with a web browser open, which is also not significantly more than the abstract idea.

Per the dependent claims:

Claims 2-4 recite that there is a first/second/third set of prompts, which is further abstract-idea detail, and causing the neural network to determine something is applying the neural network to the abstract idea of the prompt. There is no technical detail as to how the neural network performs this function: claims 2-4 recite a desired outcome or result of the neural network. See MPEP 2106.05(f)(1).

Claim 5 recites OCR and DOM data being obtained and provided along with prompts. This data is insignificant extra-solution activity as it is mere data gathering, see MPEP 2106.05(g), and is well-understood, routine, and conventional because OCR is electronically scanning or extracting data from a physical document; see Content Extraction, MPEP 2106.05(d). Similar to claims 2-4, this “causes the neural network to generate,” which is a desired outcome or result of the neural network. Therefore, this is not a practical application of, or significantly more than, the abstract idea of claim 1.

Claims 6-11 recite various arrangements of processing units and neural networks. 
As these elements are “apply it” elements to the abstract idea of writing a prompt, they in combination recite that units and neural networks are connected to or comprise each other. This is similar to using an abstract idea on the internet, Ultramercial v. Hulu, which is also a network of computing elements. Therefore, these are “apply it” limitations. The limitation that the neural network is ChatGPT merely narrows the neural network to a particular company's product. Claims 12-16 recite further abstract-idea details, such as obtaining feedback to determine another feature of the product for which to suggest testing. These are performed by the “apply it” elements in that, similar to the above analysis, there is no technical detail claimed as to how these steps are performed, only that they are performed by the elements. The steps of simulating human actions, moving a cursor, selecting an object, or inserting text are “apply it” steps because there is no detail as to how these steps are performed, only that they are performed. These are claims solely to the desired outcome or result without limitations as to how it is accomplished. See MPEP 2106.05(f)(1). Claim 17 recites further detail of the abstract idea with interpreting a result, which is a mental process (a mental judgment). Claim 18 recites accessing items that are action or object identifiers, which further describes the abstract idea. These steps are performed by “apply it” elements without technical detail as to how they are performed. Claims 19-25 recite further details of object and action identifiers, which further define the abstract idea because they are observable information elements that are a part of the mental process and/or certain method of organizing human activity. Claim 26 recites storing the instruction on a medium, which is an “apply it” step of storing data. See MPEP 2106.05(f)(2). 
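To make concrete what the simulated human actions recited in claims 12-16 entail (moving a cursor, selecting an object, inserting text, each without a physical input device), the following minimal Python sketch illustrates the idea. The class and method names are illustrative assumptions, not the claimed implementation or any cited reference's code:

```python
# Illustrative sketch only: names and structure are assumptions,
# not the claimed implementation.

class SimulatedTester:
    """Drives a product under test by simulating human UI actions,
    without input from a physical cursor control or keyboard."""

    def __init__(self):
        self.cursor = (0, 0)   # virtual cursor position
        self.log = []          # record of simulated actions

    def move_cursor(self, x, y):
        # Virtually move the cursor, without input from a cursor control.
        self.cursor = (x, y)
        self.log.append(("move", x, y))

    def select_object(self, object_id):
        # Virtually select an object, without input from a user control.
        self.log.append(("select", object_id))

    def insert_text(self, field_id, text):
        # Virtually type text into a field, without input from a keyboard.
        self.log.append(("type", field_id, text))

tester = SimulatedTester()
tester.move_cursor(120, 45)
tester.select_object("login_button")
tester.insert_text("username_field", "test_user")
```

As the sketch shows, each step is specified only as the action to be taken on an object, which is the level of generality the analysis above attributes to the claims.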
Claims 27-29 recite a further abstract-idea step of checking whether an element is visible/not visible/has loaded, because one could do that by mental observation. Claim 30 recites retrieving-information and comparing-information steps, which are abstract-idea steps and similar to Electric Power Group: collecting, analyzing, and displaying the results of the analysis. Claims 31-33 further define the abstract idea with descriptions of information elements, which further describe the mental process of observation (one could mentally observe these elements, such as an image of completion of a product testing task, a web page, or past testing information). Providing information to a neural network is an “apply it” step of the neural network, as there is no technical detail about how the provided information “guides an operation of the neural network.” Therefore, the dependent claims, if incorporated into independent claim 1, would not overcome the 101 rejection. Therefore, claims 1-34 are rejected under 35 USC 101. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. 
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-4, 7, and 13-33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gryka, US PGPUB 20220237110 A1 (“Gryka”) in view of Liu et al., “Chatting with GPT-3 for Zero-Shot Human-Like Mobile Automated GUI Testing,” published May 16, 2023, available at: < https://arxiv.org/pdf/2305.09434 > (“Liu”), further in view of Fernando et al., “Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution,” published September 28, 2023, available at: < https://arxiv.org/pdf/2309.16797 > (“Fernando”). Per claims 1 and 34, which are similar in scope, Gryka teaches An electronic system, comprising: a product retriever configured to access a product in par 0199: “In some cases, if there are multiple electronic files in the non-transitory medium 1520 that are associated with the same tested product, the retrieving module 1530 may then be configured to select one of the electronic file for use by the product testing machine 1540. For example, there may be a first electronic file having data regarding tracked actions of a first tester 14 who performed product testing on a product, and a second electronic file having data regarding tracked actions of a second tester 14 who performed product testing on the same product. In such cases, the retrieving module 1530 may be configured to select one of the electronic files in the non-transitory medium 1520 having a latest time stamp for use by the product testing machine 1540.” Gryka then teaches an image capturer configured to capture one or more images of the product in par 0222: “In one implementation, the image capturing feature described above may be performed by an image capturer 1550 (shown in FIG. 
12).” Gryka then teaches determine a suggested testing comprising a suggested-action identifier for the feature of the product in par 0386: “In such cases, the product testing device 1540 may provide a suggestion comprising a portion of an image (input image) of the page. In particular, in some embodiments, the product testing device 1540 may be configured to provide a screenshot (screen capture) of an image of an object as suggestion of an element to be searched for in a product testing. For example, the product testing device 1540 may fail to detect a “Login” button with certain specific visual features (e.g., size, shape, color, font size, font type, etc.), but the product testing system may find something very similar (e.g., a “Login” button with a slightly larger size, different color, and/or different font size).” perform a verification to determine whether the suggested testing for the feature of the product can be performed in par 0386: “In such cases, there is a good chance that the application providing the webpage has changed, and accordingly, the testing parameter (e.g., the image of the design of the “Login” button) for testing the webpage will need to be updated. The product testing device 1540 makes this easy for user by providing a suggestion of an image of an object being searched for, which may be a screenshot of an image of the new “Login” button in the above example. The product testing device 1540 may inform the user that the original design of the “Login” button cannot be found, but the product testing system found a similar object (shown as the suggestion). If the user accepts the suggestion, the product testing system then stores the image of the suggestion as the new target object to be searched for in future product testing of the product. 
This feature provides a convenient and effective way for product testing parameters to be updated without requiring user to perform a significant amount of work.” Gryka then teaches and generate product testing instruction after the verification is performed in par 0390: “If the user accepts the suggestion, then the system 10 may update testing parameter that is stored as a part of the testing instruction (item 3516). For example, the system 10 may store the suggested image as a new reference image, which the product testing device 1540 may use to match against image of the page in future testing for determining whether the page contains the reference image.” Gryka does not teach wherein the prompts are configured to prompt the neural network to determine a feature of the product based on at least one of the one or more captured images, determine a suggested testing comprising a suggested-action identifier created by the neural network for the feature of the product, Liu teaches using an LLM to test content on a GUI page. See abstract. Liu teaches wherein the prompts are configured to prompt the neural network to determine a feature of the product based on at least one of the one or more captured images in page 4: “Page GUI information provides the semantics of the current page under testing during the interactive process, which facilitates the LLM to capture the current snapshot. We extract the activity name of the page, all the widgets represented by the “text” field or “resource-id” field (the first non-empty one in order), and the widget position of the page. For the position, inspired by the screen reader [83, 88, 103], we first obtain the coordinates of each widget in order from top to bottom and from left to right, and the widgets whose ordinate is below the middle of the page is marked as lower, and the rest is marked as upper.” See also Table 1: Widgets and Position. Neural network is taught in page 6. 
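The widget-ordering step Liu describes on page 4 (widgets ordered top-to-bottom and left-to-right, then labeled "upper" or "lower" relative to the vertical middle of the page) can be sketched as follows. The data shapes and function name here are assumptions for illustration, not Liu's actual code:

```python
# Sketch of Liu's widget ordering and upper/lower labeling (page 4).
# Widget dicts and field names are illustrative assumptions.

def label_widgets(widgets, page_height):
    """widgets: list of dicts with 'name', 'x', 'y' screen coordinates.
    Returns widgets ordered top-to-bottom, left-to-right, each labeled
    'upper' or 'lower' relative to the middle of the page."""
    ordered = sorted(widgets, key=lambda w: (w["y"], w["x"]))
    middle = page_height / 2
    return [
        # A larger y coordinate means further down the screen,
        # i.e., below the middle of the page -> "lower".
        {**w, "position": "lower" if w["y"] > middle else "upper"}
        for w in ordered
    ]

widgets = [
    {"name": "save",   "x": 10,  "y": 900},
    {"name": "title",  "x": 10,  "y": 50},
    {"name": "cancel", "x": 200, "y": 900},
]
labeled = label_widgets(widgets, page_height=1000)
```

In this example "title" sorts first and is labeled upper, while "save" precedes "cancel" on the same row and both are labeled lower, matching the ordering Liu describes.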
Liu then teaches determine a suggested testing comprising a suggested-action identifier created by the neural network for the feature of the product in page 6: “After inputting the generated prompt, LLM would output a natural language sentence describing the operation steps for the testing, e.g., click the save button. We need to convert the natural language described operation steps to the GUI events (i.e., widgets) of the app to enable it to be automatically executed. This is non-trivial considering the natural language description can be arbitrary, and inherently imprecise. We design a neural matching network to predict which widget can be most likely to be mapped to the operation step. Since training the neural network usually requires a large amount of labeled data, we develop a heuristic-based automated training data generation method to facilitate the model training in Section 3.3.2.” Neural Network taught in page 6 (“neural matching network”). It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing of Gryka with the LLM prompt based testing because Liu teaches in page 2 that the prompt based GUI testing can perform testing “without any training data or corresponding computational resources for training the model.” This would enable faster results, essentially as fast as the prompts are given, while eliminating training steps and computational use. Therefore this is more efficient. Further, as evaluated on page 10, in the conclusion, this replaces human like actions and therefore automates the testing process. For these reasons one would be motivated to combine Gryka with Liu. Gryka does not teach and one or more processing units configured to generate prompts for input to a neural network. Fernando teaches a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain, driven by an LLM. See abstract. 
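Liu's cited passage maps a natural-language operation step (e.g., "click the save button") to a concrete GUI widget using a trained neural matching network. As a rough stand-in for illustration only, a simple token-overlap heuristic conveys the matching idea; this is not Liu's actual model, and the widget identifiers are hypothetical:

```python
# Illustrative stand-in for Liu's neural matching network: pick the
# widget whose identifier shares the most tokens with the natural-
# language operation step. Names are hypothetical.

def match_widget(operation_step, widget_ids):
    step_tokens = set(operation_step.lower().split())

    def overlap(widget_id):
        # Widget ids like "save_btn" are split on underscores.
        return len(step_tokens & set(widget_id.lower().split("_")))

    return max(widget_ids, key=overlap)

best = match_widget("click the save button", ["cancel_btn", "save_btn", "menu"])
```

A trained matcher is needed in practice because, as Liu notes, the natural-language description "can be arbitrary, and inherently imprecise"; the heuristic above would fail on paraphrases a learned model handles.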
Fernando teaches and one or more processing units configured to generate prompts for input to a neural network in page 5: “Promptbreeder makes use of the observation that LLMs can be used to generate variations of input text (Lehman et al., 2022; Meyerson et al., 2023; Chen et al., 2023). Figure 1 gives an overview of our method. We are interested in evolving task-prompts. A task-prompt P is a string used to condition the context of an LLM in advance of some further input Q, intended to ensure a better response than if Q had been presented in the absence of P. To evaluate the fitness of each evolved task-prompt, we sample a batch of 100 Q&A pairs from the entire training set of the domain at hand. Fernando generates task-prompts according to an evolutionary algorithm. The mutation operator for this algorithm is itself an LLM, conditioned on a mutation-prompt M. That is, a mutated task prompt P′ is defined by P′ = LLM(M + P) where ‘+‘ corresponds to string concatenation. A variety of such mutation-prompts are described in Section 3.2.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing teaching of Gryka as modified by the prompt testing teaching of Liu with the prompt generation teaching of Fernando because Fernando teaches in page 9 that: “We introduced PROMPTBREEDER (PB), a self-referential self-improving system that can automatically evolve effective domain-specific prompts for a domain at hand. PB is self-referential in that it not only evolves task-prompts, but it also evolves mutation-prompts that govern the way PB modifies task-prompts. Thus, it is not only improving prompts but it also improves the way it is improving prompts.” This teaches an improvement that would motivate one ordinarily skilled as it would not only automate the human task of prompting but also would improve the prompts entered into the system. 
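Fernando's mutation step, P′ = LLM(M + P) with '+' as string concatenation, can be sketched directly. The `llm` stub below is a placeholder assumption standing in for a real model call:

```python
# Sketch of Fernando's prompt mutation, P' = LLM(M + P), where an LLM
# conditioned on a mutation-prompt M rewrites a task-prompt P.

def llm(text):
    # Placeholder assumption: a real system would call a language
    # model here; upper-casing merely stands in for a mutation.
    return text.upper()

def mutate_task_prompt(mutation_prompt, task_prompt):
    # P' = LLM(M + P), with '+' as string concatenation.
    return llm(mutation_prompt + " " + task_prompt)

mutated = mutate_task_prompt(
    "Rephrase the following instruction to be more specific:",
    "Test the login page.",
)
```

In Promptbreeder the mutation-prompts M are themselves evolved, which is the self-referential aspect the motivation statement relies on.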
For these reasons one would be motivated to modify Gryka as modified by Liu with Fernando. Per claim 2, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka does not teach wherein the prompts comprise a first set of prompts, and wherein; the first set of prompts to cause the neural network to determine the feature of the product based on the at least one of the one or more captured images. Liu teaches wherein the prompts comprise a first set of prompts, in Table 2 page 5, items 1-3. Liu then teaches and wherein; the first set of prompts to cause the neural network to determine the feature of the product based on the at least one of the one or more captured images in page 4: “Page GUI information provides the semantics of the current page under testing during the interactive process, which facilitates the LLM to capture the current snapshot. We extract the activity name of the page, all the widgets represented by the “text” field or “resource-id” field (the first non-empty one in order), and the widget position of the page. For the position, inspired by the screen reader [83, 88, 103], we first obtain the coordinates of each widget in order from top to bottom and from left to right, and the widgets whose ordinate is below the middle of the page is marked as lower, and the rest is marked as upper.” See also Table 1: Widgets and Position. Neural network is taught in page 6. It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing of Gryka with the LLM prompt based testing of Liu because Liu teaches in page 2 that the prompt based GUI testing can perform testing “without any training data or corresponding computational resources for training the model.” This would enable faster results, essentially as fast as the prompts are given, while eliminating training steps and computational use. Therefore this is more efficient. 
Further, as evaluated on page 10, in the conclusion, this replaces human like actions and therefore automates the testing process. For these reasons one would be motivated to combine Gryka with Liu. Gryka does not teach and wherein the one or more processing units comprises a first prompt generator configured to generate the first set of prompts. Fernando teaches and wherein the one or more processing units comprises a first prompt generator configured to generate the first set of prompts in page 5: “Promptbreeder makes use of the observation that LLMs can be used to generate variations of input text (Lehman et al., 2022; Meyerson et al., 2023; Chen et al., 2023). Figure 1 gives an overview of our method. We are interested in evolving task-prompts. A task-prompt P is a string used to condition the context of an LLM in advance of some further input Q, intended to ensure a better response than if Q had been presented in the absence of P. To evaluate the fitness of each evolved task-prompt, we sample a batch of 100 Q&A pairs from the entire training set of the domain at hand. Fernando generates task-prompts according to an evolutionary algorithm. The mutation operator for this algorithm is itself an LLM, conditioned on a mutation-prompt M. That is, a mutated task prompt P′ is defined by P′ = LLM(M + P) where ‘+‘ corresponds to string concatenation. A variety of such mutation-prompts are described in Section 3.2.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing teaching of Gryka as modified by the prompt testing teaching of Liu with the prompt generation teaching of Fernando because Fernando teaches in page 9 that: “We introduced PROMPTBREEDER (PB), a self-referential self-improving system that can automatically evolve effective domain-specific prompts for a domain at hand. 
PB is self-referential in that it not only evolves task-prompts, but it also evolves mutation-prompts that govern the way PB modifies task-prompts. Thus, it is not only improving prompts but it also improves the way it is improving prompts.” This teaches an improvement that would motivate one ordinarily skilled as it would not only automate the human task of prompting but also would improve the prompts entered into the system. For these reasons one would be motivated to modify Gryka as modified by Liu with Fernando. Per claim 3, Gryka, Liu, and Fernando teach the limitations of claim 2, above. Gryka further teaches to perform the verification to determine whether the suggested testing for the feature of the product can be performed in par 0386: “In such cases, there is a good chance that the application providing the webpage has changed, and accordingly, the testing parameter (e.g., the image of the design of the “Login” button) for testing the webpage will need to be updated. The product testing device 1540 makes this easy for user by providing a suggestion of an image of an object being searched for, which may be a screenshot of an image of the new “Login” button in the above example. The product testing device 1540 may inform the user that the original design of the “Login” button cannot be found, but the product testing system found a similar object (shown as the suggestion). If the user accepts the suggestion, the product testing system then stores the image of the suggestion as the new target object to be searched for in future product testing of the product. This feature provides a convenient and effective way for product testing parameters to be updated without requiring user to perform a significant amount of work.” Gryka does not teach the prompts comprise a second set of prompts; and wherein; the second set of prompts to cause the neural network to perform a step. 
Liu teaches wherein the prompts comprise a second set of prompts, in Table 2 page 5, item 4. Liu then teaches and wherein; the second set of prompts to cause the neural network to perform a step in page 4: “Page GUI information provides the semantics of the current page under testing during the interactive process, which facilitates the LLM to capture the current snapshot. We extract the activity name of the page, all the widgets represented by the “text” field or “resource-id” field (the first non-empty one in order), and the widget position of the page. For the position, inspired by the screen reader [83, 88, 103], we first obtain the coordinates of each widget in order from top to bottom and from left to right, and the widgets whose ordinate is below the middle of the page is marked as lower, and the rest is marked as upper.” See also Table 1: Widgets and Position. Neural network is taught in page 6. It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing of Gryka with the LLM prompt based testing of Liu because Liu teaches in page 2 that the prompt based GUI testing can perform testing “without any training data or corresponding computational resources for training the model.” This would enable faster results, essentially as fast as the prompts are given, while eliminating training steps and computational use. Therefore this is more efficient. Further, as evaluated on page 10, in the conclusion, this replaces human like actions and therefore automates the testing process. For these reasons one would be motivated to combine Gryka with Liu. Gryka does not teach and wherein the one or more processing units comprises a second prompt generator configured to generate the second set of prompts. 
Fernando teaches and wherein the one or more processing units comprises a second prompt generator configured to generate the second set of prompts in page 5: “Promptbreeder makes use of the observation that LLMs can be used to generate variations of input text (Lehman et al., 2022; Meyerson et al., 2023; Chen et al., 2023). Figure 1 gives an overview of our method. We are interested in evolving task-prompts. A task-prompt P is a string used to condition the context of an LLM in advance of some further input Q, intended to ensure a better response than if Q had been presented in the absence of P. To evaluate the fitness of each evolved task-prompt, we sample a batch of 100 Q&A pairs from the entire training set of the domain at hand. Fernando generates task-prompts according to an evolutionary algorithm. The mutation operator for this algorithm is itself an LLM, conditioned on a mutation-prompt M. That is, a mutated task prompt P′ is defined by P′ = LLM(M + P) where ‘+‘ corresponds to string concatenation. A variety of such mutation-prompts are described in Section 3.2.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing teaching of Gryka as modified by the prompt testing teaching of Liu with the prompt generation teaching of Fernando because Fernando teaches in page 9 that: “We introduced PROMPTBREEDER (PB), a self-referential self-improving system that can automatically evolve effective domain-specific prompts for a domain at hand. PB is self-referential in that it not only evolves task-prompts, but it also evolves mutation-prompts that govern the way PB modifies task-prompts. Thus, it is not only improving prompts but it also improves the way it is improving prompts.” This teaches an improvement that would motivate one ordinarily skilled as it would not only automate the human task of prompting but also would improve the prompts entered into the system. 
For these reasons one would be motivated to modify Gryka as modified by Liu with Fernando. Per claim 4, Gryka, Liu, and Fernando teach the limitations of claim 3, above. Gryka further teaches and generate product testing instruction in par 0390: “If the user accepts the suggestion, then the system 10 may update testing parameter that is stored as a part of the testing instruction (item 3516). For example, the system 10 may store the suggested image as a new reference image, which the product testing device 1540 may use to match against image of the page in future testing for determining whether the page contains the reference image.” Gryka does not teach the prompts comprise a third set of prompts; and wherein; the third set of prompts to cause the neural network to perform a step. Liu teaches wherein the prompts comprise a third set of prompts, in Table 2 page 5, items 5 and 6. Liu then teaches and wherein; the third set of prompts to cause the neural network to perform a step in page 4: “Page GUI information provides the semantics of the current page under testing during the interactive process, which facilitates the LLM to capture the current snapshot. We extract the activity name of the page, all the widgets represented by the “text” field or “resource-id” field (the first non-empty one in order), and the widget position of the page. For the position, inspired by the screen reader [83, 88, 103], we first obtain the coordinates of each widget in order from top to bottom and from left to right, and the widgets whose ordinate is below the middle of the page is marked as lower, and the rest is marked as upper.” See also Table 1: Widgets and Position. Neural network is taught in page 6. 
It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing of Gryka with the LLM prompt based testing of Liu because Liu teaches in page 2 that the prompt based GUI testing can perform testing “without any training data or corresponding computational resources for training the model.” This would enable faster results, essentially as fast as the prompts are given, while eliminating training steps and computational use. Therefore this is more efficient. Further, as evaluated on page 10, in the conclusion, this replaces human like actions and therefore automates the testing process. For these reasons one would be motivated to combine Gryka with Liu. Gryka does not teach and wherein the one or more processing units comprises a third prompt generator configured to generate the third set of prompts. Fernando teaches and wherein the one or more processing units comprises a third prompt generator configured to generate the third set of prompts in page 5: “Promptbreeder makes use of the observation that LLMs can be used to generate variations of input text (Lehman et al., 2022; Meyerson et al., 2023; Chen et al., 2023). Figure 1 gives an overview of our method. We are interested in evolving task-prompts. A task-prompt P is a string used to condition the context of an LLM in advance of some further input Q, intended to ensure a better response than if Q had been presented in the absence of P. To evaluate the fitness of each evolved task-prompt, we sample a batch of 100 Q&A pairs from the entire training set of the domain at hand. Fernando generates task-prompts according to an evolutionary algorithm. The mutation operator for this algorithm is itself an LLM, conditioned on a mutation-prompt M. That is, a mutated task prompt P′ is defined by P′ = LLM(M + P) where ‘+‘ corresponds to string concatenation. 
A variety of such mutation-prompts are described in Section 3.2.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing teaching of Gryka as modified by the prompt testing teaching of Liu with the prompt generation teaching of Fernando because Fernando teaches in page 9 that: “We introduced PROMPTBREEDER (PB), a self-referential self-improving system that can automatically evolve effective domain-specific prompts for a domain at hand. PB is self-referential in that it not only evolves task-prompts, but it also evolves mutation-prompts that govern the way PB modifies task-prompts. Thus, it is not only improving prompts but it also improves the way it is improving prompts.” This teaches an improvement that would motivate one ordinarily skilled as it would not only automate the human task of prompting but also would improve the prompts entered into the system. For these reasons one would be motivated to modify Gryka as modified by Liu with Fernando. Per claim 7, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka does not teach the one or more processing units comprise the neural network. Fernando teaches the one or more processing units comprise the neural network in page 2: “PB generates variations of the task-prompts and mutation-prompts, exploiting the fact that LLMs can be prompted to act as mutation operators (Meyerson et al., 2023)” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing teaching of Gryka as modified by the prompt testing teaching of Liu with the prompt generation teaching of Fernando because Fernando teaches in page 9 that: “We introduced PROMPTBREEDER (PB), a self-referential self-improving system that can automatically evolve effective domain-specific prompts for a domain at hand. 
PB is self-referential in that it not only evolves task-prompts, but it also evolves mutation-prompts that govern the way PB modifies task-prompts. Thus, it is not only improving prompts but it also improves the way it is improving prompts.” This teaches an improvement that would motivate one ordinarily skilled as it would not only automate the human task of prompting but also would improve the prompts entered into the system. For these reasons one would be motivated to modify Gryka as modified by Liu with Fernando. Per claim 13, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka further teaches a product testing device configured to execute the product testing instruction to perform testing of the product based on the product testing instruction, wherein the product testing device is configured to perform the testing of the product by simulating human actions based on the product testing instruction in par 0197: “The tracker 1300 may also track a movement of a finger swipe, or a simulated finger swipe.” See also par 0203, Table 1. Par 0207. Per claim 14, Gryka, Liu, and Fernando teach the limitations of claim 13, above. Gryka further teaches the product testing device is configured to move a cursor without input from a cursor control in par 0207: “For example, the product testing machine 1540 may be configured to virtually move a cursor with respect to a testing interface (e.g., the testing interface 402) without input from a cursor control.” Per claim 15, Gryka, Liu, and Fernando teach the limitations of claim 13, above. Gryka further teaches the product testing device is configured to make a selection of an object without input from a cursor control in par 0207: “As another example, the product testing machine 1540 may be configured to virtually make a selection in a testing interface without input from a user control” Per claim 16, Gryka, Liu, and Fernando teach the limitations of claim 13, above. 
Gryka further teaches the product testing device is configured to insert a text in a field without input from a keyboard in par 0207: “As a further example, the product testing machine 1540 may be configured to virtually type a text in a field of a testing interface without input from a keyboard.” Per claim 17, Gryka, Liu, and Fernando teach the limitations of claim 13, above. Gryka further teaches the product testing device comprises an interpreter configured to interpret the product testing instruction in par 0306: “Also, in some embodiments, the processing unit of the product testing device 1540 may include an interpreter configured to interpret the product testing instruction in the electronic file. In one implementation, the interpreter is configured to identify pre-defined words (e.g., commands) such as action identifiers and object identifiers, and the processing unit of the product testing device 1540 then executes a corresponding function or routine to perform a task to test the product based on the interpreted words. The processing unit of the product testing device 1540 may include a selector that is configured to select the function or routine based on a map (e.g., a table) that maps or associates pre-defined words with respective functions or routines.” Per claim 18, Gryka, Liu, and Fernando teach the limitations of claim 1, above. 
Gryka further teaches the one or more processing units are configured to access a first set of items and a second set of items or to provide the first set of items and the second set of items for access, wherein the first set of items comprises a plurality of action identifiers, and the second set of items comprises a plurality of objects in par 0309: “In some embodiments, the product testing instruction in the electronic file has a data structure that associates an action identifier with a corresponding object identifier: The action identifier identifies an action to be performed by the testing device, and the object identifier identifies an object on which the action is to be performed by the testing device.” Gryka does not teach providing items for the neural network and access by the neural network. Liu teaches providing items for the neural network and access by the neural network in page 5: “required. And for the feedback question, after deciding the previous operation is not applicable (as described in Section 3.3), we inform the LLM that there is no such widget on the current page, and let it re-try.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the product testing of Gryka with the LLM prompt based testing of Liu because Liu teaches in page 2 that the prompt based GUI testing can perform testing “without any training data or corresponding computational resources for training the model.” This would enable faster results, essentially as fast as the prompts are given, while eliminating training steps and computational use. Therefore this is more efficient. Further, as evaluated on page 10, in the conclusion, this replaces human like actions and therefore automates the testing process. For these reasons one would be motivated to combine Gryka with Liu. Per claim 19, Gryka, Liu, and Fernando teach the limitations of claim 18, above. 
Gryka further teaches one of the action identifiers identifies an action to be performed by the product testing device, and one of the object identifiers identifies an object on which the action is to be performed by the product testing device in par 0309: “In some embodiments, the product testing instruction in the electronic file has a data structure that associates an action identifier with a corresponding object identifier: The action identifier identifies an action to be performed by the testing device, and the object identifier identifies an object on which the action is to be performed by the testing device.” Per claim 20, Gryka, Liu, and Fernando teach the limitations of claim 18, above. Gryka further teaches one of the action identifiers identifies a click action, a fill action, a type action, a press key action, a hover action, a dropdown select action, a checkbox check action, a checkbox uncheck action, a refresh action, a navigate action, a new tab action, a close tab action, a scroll action, a drag and drop action, or a click and hold action in par 0321: “The examples of the action identifiers are for a click action, a fill action, a type action, a press key action, a hover action, a dropdown select action, a checkbox check action, a checkbox uncheck action, a refresh action, a navigate action, a new tab action, a close tab action, a scroll cation, a drag and drop action, and a click and hold action.” Per claim 21, Gryka, Liu, and Fernando teach the limitations of claim 18, above.
Gryka further teaches one of the object identifiers identifies a button, a field, a dropdown menu, a dropdown option, a link, an icon, a checkbox, a header, a window, a text, a modal, or a user interface element in par 0321: “The examples of the object identifiers are for a button, a field, a dropdown menu, a dropdown option, a link, an icon, a checkbox, a header, a window, a text, a modal, and other user interface element.” Per claim 22, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka further teaches the product testing instruction has a data structure that associates an action identifier with a corresponding object identifier in par 0309: “In some embodiments, the product testing instruction in the electronic file has a data structure that associates an action identifier with a corresponding object identifier” Per claim 23, Gryka, Liu, and Fernando teach the limitations of claim 22, above. Gryka further teaches the action identifier identifies an action to be performed by the product testing device, and the object identifier identifies an object on which the action is to be performed by the product testing device in par 0309: “In some embodiments, the product testing instruction in the electronic file has a data structure that associates an action identifier with a corresponding object identifier: The action identifier identifies an action to be performed by the testing device, and the object identifier identifies an object on which the action is to be performed by the testing device.” Per claim 24, Gryka, Liu, and Fernando teach the limitations of claim 22, above.
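The data structure described in Gryka par 0309, a testing instruction that associates an action identifier with the object identifier on which the action is performed, can be sketched as a simple record. The class and field names below are hypothetical, not taken from the reference:

```python
# Hypothetical sketch of the action-identifier/object-identifier
# association in par 0309; names are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class TestInstruction:
    action_id: str  # e.g. "click", "fill", "hover" (par 0321 examples)
    object_id: str  # e.g. "button", "field", "link" (par 0321 examples)

step = TestInstruction(action_id="click", object_id="button")
print(f"{step.action_id} -> {step.object_id}")  # prints: click -> button
```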
Gryka further teaches the action identifier identifies a click action, a fill action, a type action, a press key action, a hover action, a dropdown select action, a checkbox check action, a checkbox uncheck action, a refresh action, a navigate action, a new tab action, a close tab action, a scroll action, a drag and drop action, or a click and hold action in par 0321: “The examples of the action identifiers are for a click action, a fill action, a type action, a press key action, a hover action, a dropdown select action, a checkbox check action, a checkbox uncheck action, a refresh action, a navigate action, a new tab action, a close tab action, a scroll cation, a drag and drop action, and a click and hold action.” Per claim 25, Gryka, Liu, and Fernando teach the limitations of claim 22, above. Gryka further teaches the object identifier identifies a button, a field, a dropdown menu, a dropdown option, a link, an icon, a checkbox, a header, a window, a text, a modal, or a user interface element in par 0321: “The examples of the object identifiers are for a button, a field, a dropdown menu, a dropdown option, a link, an icon, a checkbox, a header, a window, a text, a modal, and other user interface element.” Per claim 26, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka further teaches a non-transitory medium storing the product testing instruction in association with an identity of the product in par 0310: “Also, in some embodiments, the database 2404 may be configured to store the electronic file (with the product testing instruction) in association with an identity of the product. The database 2404 may be one or more non-transitory mediums.” Per claim 27, Gryka, Liu, and Fernando teach the limitations of claim 13, above.
Gryka further teaches the product testing device is configured to check if an element is visible after the product testing device performs a testing action in par 0311: “In some embodiments, the product testing device 1540 may include a checker configured to check if an element is visible after the processing unit of the product testing device 1540 performs a testing action.” Per claim 28, Gryka, Liu, and Fernando teach the limitations of claim 13, above. Gryka further teaches the product testing device is configured to check if an element is not visible after the product testing device performs a testing action in par 0311: “Also, in some embodiments, the checker of the product testing device 1540 may be configured to check if an element is not visible after the processing unit of the product testing device 1540 performs a testing action.” Per claim 29, Gryka, Liu, and Fernando teach the limitations of claim 13, above. Gryka further teaches the product testing device is configured to check if a specified page has loaded after the product testing device performs a testing action in par 0311: “The checker of the product testing device 1540 may also be configured to check if a specified page has loaded after the processing unit performs a testing action.” Per claim 30, Gryka, Liu, and Fernando teach the limitations of claim 13, above. Gryka further teaches the product testing device is configured to: obtain a first image that is associated with the testing of the product, obtain a second image, the second image being a reference image that is pre-determined before the first image is obtained, perform a comparison based on the first image and the second image, and determine whether the product passes or fails a product testing task based on a result of the comparison in par 0312: “obtaining a first image that is associated with the testing of the product, obtaining a second image, and comparing the first and second images to determine if there is a match or not.
The first image is based on a completion of a first task performed during the testing of the product. For example, the first image may comprise a first content of the product, the first content indicating a first result of a first task for testing the product. The second image may be a reference image that was obtained previously (e.g., via screen capture).” See also par 0380: “If the page contains text matching the reference text, then the product testing device 1540 may determine that the test (to determine the presence of the reference text) passes (item 3308). Otherwise, the product testing device 1540 may determine that the test fails (item 3310).” Per claim 31, Gryka, Liu, and Fernando teach the limitations of claim 30, above. Gryka further teaches the first image is based on a completion of the product testing task performed during the testing of the product in par 0238: “In some embodiments, with respect to the method 1900, the first image is based on a completion of a first task performed during the testing of the first product” Per claim 32, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka further teaches the product comprises a web page, a web site, a computer application, a mobile device application, or a processor application in par 0249: “with respect to the method 1900, the first product comprises a web page, a web site, a computer application, a mobile device application, or a processor application.” Claim(s) 5, 6, 11, and 12 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Gryka, US PGPUB 20220237110 A1 (“Gryka”) in view of Liu et al., “Chatting with GPT-3 for Zero-Shot Human-Like Mobile Automated GUI Testing,” published May 16, 2023, available at: < https://arxiv.org/pdf/2305.09434 > (“Liu”), further in view of Fernando et al., “Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution,” published September 28, 2023, available at: < https://arxiv.org/pdf/2309.16797 > (“Fernando”), further in view of Jenson et al., US PGPUB 20230092488 A1 (“Jenson”). Per claim 5, Gryka, Liu, and Fernando teach the limitations of claim 4, above. Gryka does not teach the one or more processing units are configured to obtain OCR data and/or DOM data, and to provide the OCR data and/or the DOM data along with the third set of prompts to cause the neural network to generate the product testing instruction based on at least a part of the OCR data and/or at least a part of the DOM data. Jenson teaches the one or more processing units are configured to obtain OCR data and/or DOM data, and to provide the OCR data and/or the DOM data along with the third set of prompts to cause the neural network to generate the product testing instruction based on at least a part of the OCR data and/or at least a part of the DOM data in par 0101: “For instance, the one or more conditions 102 may include that the user 100 posts a quote from a famous author on a social media page, where the condition data 104 submitted as evidence is a uniform resource locator (URL) (e.g., a link) to the posting and/or screenshot. The text of the html page pointed to by the URL may be digested, and/or an optical character recognition process may be applied to the screenshot, where the name of the famous author is read from the quote and compared to a preexisting list stored in a database.
A match may indicate at least one of the one or more conditions 102 exists and cause issuance of one or more existence values 109.1.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the testing of features teaching of Gryka with the verification of whether features can be tested teaching of Jenson because in par 003 the problem in the field is that technology changes over time with different inputs from different people. Jenson’s teaching would overcome these inefficiencies and therefore make testing more efficient, see par 004. These taught motivations in pars 003-004 would motivate one ordinarily skilled to combine Jenson with Gryka to prevent inefficiencies from humans performing all of these steps. Per claim 6, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka does not teach the neural network is separate from the one or more processing units, and wherein the one or more processing units are configured to access the neural network. Jenson teaches the neural network is separate from the one or more processing units, and wherein the one or more processing units are configured to access the neural network in Fig 1.1 where the client devices and device (200A…200N) access the coordination server 300 through the network 101. It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the testing of features teaching of Gryka with the verification of whether features can be tested teaching of Jenson because in par 003 the problem in the field is that technology changes over time with different inputs from different people. Jenson’s teaching would overcome these inefficiencies and therefore make testing more efficient, see par 004.
These taught motivations in pars 003-004 would motivate one ordinarily skilled to combine Jenson with Gryka to prevent inefficiencies from humans performing all of these steps. Per claim 11, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka does not teach the one or more processing units are configured to obtain feedback from the neural network indicating that the suggested testing for the feature of the product cannot be performed. Jenson teaches the one or more processing units are configured to obtain feedback from the neural network indicating that the suggested testing for the feature of the product cannot be performed in par 079: “Continuing with the present example, the evaluation node 120.3 may define one or more determinations 122.3A which may be one or more indeterminate outcomes, which may each return along the reassessment reference 126 to the evaluation node 120.3. Reapplication of the evaluation node 120.3 may again generate a call to the panel coordination engine 340, either initiating a new panel and/or requiring any panel that reached at least one of the one or more determinations 122.3A” The indeterminate outcomes are those where a test cannot be performed. See par 0101 where a call to the neural network is then returned to the evaluation engine including an indeterminate value: “Other automated processes may be more complex or utilize more sophisticated tools, for example a call to the artificial neural network 352. 
The one or more determination values 108.1 (e.g., the one or more existence values 109, the one or more non-existence values 111, and/or one or more indeterminate values) may be returned to the condition evaluation engine 304 for comparison to the determination 122, initiating one or more response actions 419, and/or possible progression through to another evaluation tier 112 of the evaluation hierarchy data 115.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the testing of features teaching of Gryka with the verification of whether features can be tested teaching of Jenson because in par 003 the problem in the field is that technology changes over time with different inputs from different people. Jenson’s teaching would overcome these inefficiencies and therefore make testing more efficient, see par 004. These taught motivations in pars 003-004 would motivate one ordinarily skilled to combine Jenson with Gryka to prevent inefficiencies from humans performing all of these steps. Per claim 12, Gryka, Liu, Fernando, and Jenson teach the limitations of claim 11, above. Gryka does not teach the one or more processing units are configured to generate additional prompts based on the feedback to cause the neural network to determine another feature of the product and/or to determine another suggested testing. Jenson teaches the one or more processing units are configured to generate additional prompts based on the feedback to cause the neural network to determine another feature of the product and/or to determine another suggested testing in par 0101: “Other automated processes may be more complex or utilize more sophisticated tools, for example a call to the artificial neural network 352. 
The one or more determination values 108.1 (e.g., the one or more existence values 109, the one or more non-existence values 111, and/or one or more indeterminate values) may be returned to the condition evaluation engine 304 for comparison to the determination 122, initiating one or more response actions 419, and/or possible progression through to another evaluation tier 112 of the evaluation hierarchy data 115.” Progression through to another evaluation tier is based on the feedback which was taught in par 0101 above, see claim 11. It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the testing of features teaching of Gryka with the verification of whether features can be tested teaching of Jenson because in par 003 the problem in the field is that technology changes over time with different inputs from different people. Jenson’s teaching would overcome these inefficiencies and therefore make testing more efficient, see par 004. These taught motivations in pars 003-004 would motivate one ordinarily skilled to combine Jenson with Gryka to prevent inefficiencies from humans performing all of these steps. Claim(s) 8 and 33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gryka, US PGPUB 20220237110 A1 (“Gryka”), in view of Liu et al., “Chatting with GPT-3 for Zero-Shot Human-Like Mobile Automated GUI Testing,” published May 16, 2023, available at: < https://arxiv.org/pdf/2305.09434 > (“Liu”), further in view of Fernando et al., “Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution,” published September 28, 2023, available at: < https://arxiv.org/pdf/2309.16797 > (“Fernando”), further in view of Chandra et al., US PGPUB 20200134381 A1 (“Chandra”). Per claim 8, Gryka, Liu, and Fernando teach the limitations of claim 1, above. 
Gryka does not teach the neural network comprises a plurality of neural network models, and wherein the one or more processing units are configured to access respective ones of the neural network models. Chandra teaches a method in evaluating test subjects. See abstract. Chandra teaches the neural network comprises a plurality of neural network models, and wherein the one or more processing units are configured to access respective ones of the neural network models in Fig 1 and par 0035: “FIG. 1 an example of an embodiment of a test subject evaluation system 100 in computing environments. In an example embodiment, multiple images of the test subject 140 are obtained. The images (i.e., Image 1, Image 2 . . . Image N) are taken of the test subject 140 from multiple angles. The multiple images are used to create a three-dimensional image 150 of the test subject 140. A first machine learning system, such as a convolutional neural network 120, generates test subject features. A second machine learning system, such as a generative adversarial network 110, analyzes the test subject to detect distinguishing features of the test subject such as normal or abnormal features. A third machine learning system, such as a recurrent neural network 130, performs natural language processing on the test subject features to create evaluation information, such as a test subject evaluation 160, associated with the test subject 140. The test subject evaluation system 100 provides an evaluation of the test subject 140 based on the distinguishing features and the evaluation information.” See Fig 6 where the one or more processing units (the laptop) accesses item 120 and 130 the different neural networks. 
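The plurality-of-models arrangement cited from Chandra (processing units accessing respective neural network models) can be illustrated with a short sketch. This is not Chandra's implementation: the registry, stage names, and stand-in callables below are invented, loosely mirroring the CNN (feature extraction), GAN (distinguishing features), and RNN (evaluation) roles described in par 0035.

```python
# Hypothetical sketch of processing units accessing respective models
# out of a plurality; plain callables stand in for real networks.

from typing import Callable

MODEL_REGISTRY: dict[str, Callable[[str], str]] = {
    "features":   lambda x: f"features({x})",    # CNN-like role
    "anomalies":  lambda x: f"anomalies({x})",   # GAN-like role
    "evaluation": lambda x: f"evaluation({x})",  # RNN/NLP-like role
}

def evaluate_subject(images: str) -> str:
    result = images
    for stage in ("features", "anomalies", "evaluation"):
        model = MODEL_REGISTRY[stage]  # access the respective model
        result = model(result)
    return result

print(evaluate_subject("img"))  # prints: evaluation(anomalies(features(img)))
```

Each stage is looked up independently, so a processing unit can access any respective model rather than a single monolithic network.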
It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the test instruction generation teaching of Gryka with the neural network determining a feature of the product and performing other steps teaching of Chandra because Chandra teaches in par 016 that manual inspection is time consuming and relies on human skill and therefore Chandra’s teaching would remove these cost and people inefficiencies by using neural networks. For these reasons one would be motivated to modify Gryka with Chandra. Per claim 33, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka does not teach the one or more processing units comprise an action retriever configured to obtain past testing information regarding past human-testing action and/or past machine-testing action, and to provide the past testing information to the neural network for guiding an operation of the neural network. Chandra teaches the one or more processing units comprise an action retriever configured to obtain past testing information regarding past human-testing action and/or past machine-testing action, and to provide the past testing information to the neural network for guiding an operation of the neural network in par 038: “In an example embodiment, an algorithm is used to train the convolutional neural network 120 by passing features of, for example, an example test subject into the convolutional neural network 120, such as dimensions of the example test subject, etc. In an example embodiment, the algorithm detects features associated with the example test subject. During the training, the convolutional neural network 120 is trained to detect test subject features of the example test subject. 
For example, the algorithm may train the convolutional neural network 120 to detect test subject features such as color, dimensions, measurements, sharp edges, etc.” The training steps teach past testing information including information regarding past human testing and/or machine testing because the training is the features that one would look for in a device. It guides the operation of the neural network because it trains the neural network on what to look for. It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the test instruction generation teaching of Gryka with the neural network determining a feature of the product and performing other steps teaching of Chandra because Chandra teaches in par 016 that manual inspection is time consuming and relies on human skill and therefore Chandra’s teaching would remove these cost and people inefficiencies by using neural networks. For these reasons one would be motivated to modify Gryka with Chandra. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gryka, US PGPUB 20220237110 A1 (“Gryka”) in view of Liu et al., “Chatting with GPT-3 for Zero-Shot Human-Like Mobile Automated GUI Testing,” published May 16, 2023, available at: < https://arxiv.org/pdf/2305.09434 > (“Liu”), further in view of Fernando et al., “Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution,” published September 28, 2023, available at: < https://arxiv.org/pdf/2309.16797 > (“Fernando”), further in view of Krizhevsky et al., US PGPUB 20140180989 A1 (“Krizhevsky”). Per claim 9, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka does not teach the neural network comprises a plurality of neural network models, and wherein the one or more processing units comprise respective ones of the neural network models. 
Krizhevsky teaches a CNN with a plurality of convolutional neural networks each on a respective processing node. See abstract. Krizhevsky teaches the neural network comprises a plurality of neural network models, and wherein the one or more processing units comprise respective ones of the neural network models in Fig 2 and Pars 031-034: “Referring now to FIG. 2, in one aspect, a system for parallelizing a CNN is provided. The system comprises a plurality of CNNs instantiated on a plurality of computing nodes. Each computing node is a processor such as a CPUs or GPUs. It will be appreciated that a set of nodes may comprise combinations of CPUs and GPUs as well as other processors. It will also be appreciated that the described CNN need not be applied only to image processing, but can be applied to other suitable tasks. In one aspect, the system comprises interconnections initiated at a predetermined subset of layers for which activations will be communicated to other CNNs. The activations may be communicated to the subsequent adjacent layer of the other CNNs. For example, activations of nodes at layer i are communicated to cells of layer i+1 in other nodes. In the example shown in FIG. 2, for example, activations of layer 2 and 4 in each node are communicated to layer 3 and 5, respectively, of the other nodes. The layers selected for interconnection are a subset of all layers. In an example, which is to be understood as non-limiting, activations may be communicated across all nodes of particular pair of adjacent layers at predetermined intervals (i.e., nodes of layer xi+k are communicated to nodes of layer xi+k+1, where x is an integer and k is an offset constant to define the first such interconnected layer). In a specific example, the selected layers are every third or fourth layer (i.e., x=3 or 4). 
In another example, the interval of such layers is irregular, such that the layers whose activations are to be communicated are selected arbitrarily, or selected based on additional considerations. In another aspect, activations of a particular node may be communicated to a subset of the other nodes. For example, when the number of computing nodes is large, such as being greater than 10 for example, the cost of communicating the activation of every CNN at the predetermined layers to each other CNN at the respective subsequent layers may be impractically or prohibitively expensive. In such a case, the activations may be communicated to a predetermined subset (that may be selected randomly or in some other way prior to training) of the other CNNs. In an example, activations for node 1 layer 1 may be interconnected to node 2 layer 2 but not node 3 layer 2.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the testing teaching of Gryka with the plurality of neural networks each on processing units teaching of Krizhevsky because Krizhevsky teaches in par 004 that communication cost can be minimized through parallelizing CNN across processors but Krizhevsky teaches in par 012 that various issues discussed in pars 004-010 can be minimized by interconnecting between processing nodes to create a subset that is not so interconnected. This would prevent a limitation of attainable acceleration. See par 010. As this would improve processing one would be motivated to modify Gryka with Krizhevsky. Claim(s) 10 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Gryka, US PGPUB 20220237110 A1 (“Gryka”) in view of Liu et al., “Chatting with GPT-3 for Zero-Shot Human-Like Mobile Automated GUI Testing,” published May 16, 2023, available at: < https://arxiv.org/pdf/2305.09434 > (“Liu”), further in view of Fernando et al., “Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution,” published September 28, 2023, available at: < https://arxiv.org/pdf/2309.16797 > (“Fernando”), further in view of Stonehocker et al., US PGPUB 20250029114 A1 (“Stonehocker”). Per claim 10, Gryka, Liu, and Fernando teach the limitations of claim 1, above. Gryka does not teach the neural network comprises ChatGPT. Stonehocker teaches an automated answering system for providing customer service. See abstract. Stonehocker teaches the neural network comprises ChatGPT in par 024: “GAI engine 120 may be an existing generative neural network, such as ChatGPT-3, ChatGPT-4, or other known models.” It would have been obvious to one ordinarily skilled in the art before the effective filing date of the claimed invention to modify the interactive testing teaching of Gryka with the ChatGPT teaching of Stonehocker because Stonehocker teaches in par 0024 that: “These models have been trained on extensive datasets and possess the ability to generate coherent and contextually relevant text based on provided input.” Stonehocker’s teaching would give Gryka the ability to generate coherent and contextually relevant text based on provided input, which would improve Gryka. For these reasons one would be motivated to modify Gryka with Stonehocker. Therefore, claims 1-34 are rejected under 35 USC 103. Response to arguments 35 USC 103 Applicant argues that in Gryka (the prior art), an alternative object is used instead of a previous one while the testing action itself remains the same.
Applicant then argues that suggesting a new object for testing using the same prior testing action in Gryka is not the same as creating an identifier of a suggested action in a newly suggested testing. Applicant’s arguments are unpersuasive. First, Gryka teaches a suggestion, par 0386. Therefore, a suggestion is taught. Applicant then characterizes the prior art by introducing terms and ideas that are not found in it. “Alternative object” is nowhere found in the cited art. What is actually taught is that the object being tested is slightly different than expected but is still the same, as can be clearly shown in par 0386 (Login = Login). There is no new object; it is the object that is being looked for (Login = Login), but it is not exactly where the testing procedure thinks it is, or not quite the same shape. Gryka (prior art) renders the above limitations obvious. At any rate, Liu teaches this limitation on page 6. Therefore the rejection is maintained. Applicant then argues against Chandra; however, new art was found and applied for the new limitations. Therefore, Applicant’s arguments against Chandra in the independent claims are moot. The arguments for the dependent claims depend only on the arguments for the independent claims and therefore are also unpersuasive. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD W. CRANDALL whose telephone number is (313)446-6562. The examiner can normally be reached M - F, 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe, can be reached at (571) 270-3614.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RICHARD W. CRANDALL/ Primary Examiner, Art Unit 3619

Prosecution Timeline

May 21, 2024: Application Filed
Jul 09, 2025: Non-Final Rejection — §101, §103, §112
Oct 13, 2025: Response Filed
Dec 06, 2025: Final Rejection — §101, §103, §112
Feb 10, 2026: Response after Non-Final Action
Feb 20, 2026: Request for Continued Examination
Mar 11, 2026: Response after Non-Final Action
Mar 19, 2026: Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602666: INFORMATION HANDLING SYSTEM MICRO MANUFACTURING CENTER FOR REUSE AND RECYCLING FACTORING INVENTORY (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591589: DECENTRALIZED WILL MANAGEMENT APPARATUS, SYSTEMS AND RELATED METHODS OF USE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12541382: USER PERSONA INJECTION FOR TASK-ORIENTED VIRTUAL ASSISTANTS (granted Feb 03, 2026; 2y 5m to grant)
Patent 12537090: METHOD AND SYSTEM FOR RULE-BASED ANONYMIZED DISPLAY AND DATA EXPORT (granted Jan 27, 2026; 2y 5m to grant)
Patent 12530694: USING ENTITLEMENTS DEPLOYED ON BLOCKCHAIN TO MANAGE CUSTOMER EXPERIENCES (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 30%
With Interview (+33.8%): 64%
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 301 resolved cases by this examiner. Grant probability derived from career allow rate.
