Prosecution Insights
Last updated: April 19, 2026
Application No. 17/907,157

SYSTEM AND METHOD FOR PROVIDING ROBUST ARTIFICIAL INTELLIGENCE INFERENCE IN EDGE COMPUTING DEVICES

Non-Final OA: §103, §112
Filed: Sep 23, 2022
Examiner: JOHNSON, CEDRIC D
Art Unit: 2186
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Siemens Aktiengesellschaft
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; 529 granted / 645 resolved; +27.0% vs TC avg)
Interview Lift: +23.5% (strong), comparing resolved cases with vs. without interview
Typical Timeline: 3y 1m avg prosecution; 24 currently pending
Career History: 669 total applications across all art units
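The career figures above are internally consistent; the following sketch (using only the numbers reported in this dashboard, with variable names of my own choosing, and assuming the "+27.0%" delta is in absolute percentage points) verifies the rounding:

```python
# Career statistics as reported above.
granted = 529
resolved = 645

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 82.0%

# "+27.0% vs TC avg" then implies an estimated Tech Center average of:
tc_avg = allow_rate - 0.270
print(f"Implied TC average allow rate: {tc_avg:.1%}")  # 55.0%
```

So 529/645 rounds to the displayed 82%, and the delta implies a Tech Center baseline of roughly 55%.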

Statute-Specific Performance

§101: 20.9% (-19.1% vs TC avg)
§103: 37.6% (-2.4% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)
Tech Center averages are estimates based on career data from 645 resolved cases.
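The per-statute deltas can be cross-checked against the reported rates; notably, every statute implies the same Tech Center average estimate. A sketch using only the numbers in the table above (assuming the deltas are absolute percentage points):

```python
# Reported (rate, delta vs. TC avg) pairs from the table above, in percent.
stats = {
    "101": (20.9, -19.1),
    "103": (37.6, -2.4),
    "102": (9.9, -30.1),
    "112": (25.6, -14.4),
}

for statute, (rate, delta) in stats.items():
    # rate = TC avg + delta, so the implied baseline is rate - delta.
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")
# Each statute implies a 40.0% baseline, i.e. all four deltas were
# apparently computed against a single 40% Tech Center average.
```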

Office Action

§103 §112
DETAILED ACTION

This Office Action is a first Office Action on the merits of the application. Claims 1-20 are presented for examination. Claims 1-20 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings Objection

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: Element 144 in FIG. 1 is not disclosed in the specification. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification Objection

The disclosure is objected to because of the following informalities: Page 9 recites paragraph [0034] “Block 216 of…”, but the next paragraph is numbered [0001] instead of [0035]. There is a paragraph numbering issue in the specification, and it is recommended that the paragraphs be numbered correctly. Appropriate correction is required.

Claim Objections

Claims 2, 3, 12, and 13 are objected to because of the following informalities: Claims 2, 3, 12, and 13 recite “failure data”, but it is recommended to remove the quotation marks around the phrase.
Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “neural network training module configured to train”, “neural network testing module configured to assess”, and “neural network testing module is configured to provide…and validate” in claim 1.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The claim limitations “a neural network training module configured to train at least one neural network model for deployment to the edge computing device based on data comprising baseline training data and field data received from the edge computing device”, “a neural network testing module configured to assess a readiness of the trained neural network model prior to deployment to the edge computing device”, “a digital twin of the physical process or plant, the digital twin comprising a simulation platform configured to execute a simulation of the physical process or plant”, and “wherein the neural network testing module is configured to: provide a simulation input to the digital twin, the simulation input comprising one or more test scenarios involving the trained neural network model, the test scenarios being generated exploiting the field data, and validate the trained neural network model based on a simulation output obtained from the digital twin” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification is devoid of adequate structure to perform the claimed functions. There is no disclosure of any particular structure, either explicitly or inherently, to perform the claimed functions. As would be recognized by those of ordinary skill in the art, the claimed functions can be performed in any number of ways in hardware, software, or a combination of the two. The specification does not provide sufficient details such that one of ordinary skill in the art would understand which structure or structures perform(s) the claimed functions. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Dependent claims 2-10 are rejected due to inherited claim deficiencies of claim 1.

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. As described above, the disclosure does not provide adequate structure to perform the claimed functions. The specification does not demonstrate that applicant has made an invention that achieves the claimed functions because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention.

Dependent claims 2-10 are rejected due to inherited claim deficiencies of claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4, 10, 11, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yoginath et al. (“On the Effectiveness of Recurrent Neural Networks for Live Modeling of Cyber-Physical Systems”), hereinafter “Yoginath”, in view of Waschneck et al. (“Deep Reinforcement Learning for Semiconductor Production Scheduling”), hereinafter “Waschneck”, and further in view of Naser et al. (EP 3611578 A1), hereinafter “Naser”.

As per claim 1, Yoginath discloses: a system for supporting artificial intelligence inference in an edge computing device associated with a physical process or plant (Yoginath, page 314, left column, lines 20-24 discloses using data to build neural network models for efficiently predicting variables of a physical system (Canal Lock CPS) in terms of water levels, provided in scenarios.)
the system comprising: a neural network training module configured to train at least one neural network model for deployment to the edge computing device based on data comprising baseline training data and field data received from the edge computing device (Yoginath, page 313, left column, lines 6-8 discloses collecting data; page 313, left col, ln 23-24 through rt col, ln 1-11 adds using the collected information to train a model to obtain neural network model parameters, with state data computed in a layer being used in subsequent computations; and page 314, left column, lines 12-35 adds training models to a certain consistent loss value, with the trained models found to be sufficient for predictions based on the data from certain scenarios.)

Yoginath does not expressly disclose: a neural network testing module configured to assess a readiness of the trained neural network model prior to deployment to the edge computing device; and a digital twin of the physical process or plant, the digital twin comprising a simulation platform configured to execute a simulation of the physical process or plant; wherein the neural network testing module is configured to: provide a simulation input to the digital twin, the simulation input comprising one or more test scenarios involving the trained neural network model, the test scenarios being generated exploiting the field data, and validate the trained neural network model based on a simulation output obtained from the digital twin.
Waschneck, however, discloses: a neural network testing module configured to assess a readiness of the trained neural network model prior to deployment to the edge computing device, and a digital twin of the physical process or plant, the digital twin comprising a simulation platform configured to execute a simulation of the physical process or plant (Waschneck, page 304, left col, ln 14-16 discloses a digital twin of a factory, and page 304, left column, lines 20-28 discloses training an algorithm offline in a simulation to find an optimum solution, which is sent to a neural network brought online, with the digital twin indicating an update for the neural network based on changes regarding production in the factory.); wherein the neural network testing module is configured to provide a simulation input to the digital twin, the simulation input comprising one or more test scenarios involving the trained neural network model, the test scenarios being generated exploiting the field data (Waschneck, page 304, FIG. 2 discloses the production control of a factory with a neural network sending trained data to a simulation of the factory in a training layer, and a deployment layer with the simulation sending data to the digital twin of the factory.)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath with the simulation of a factory, digital twin, and neural network teaching of Waschneck. The motivation to do so would have been that Waschneck discloses the benefit of a Deep Q Network environment including a digital twin of a factory, a simulation, and a neural network, with agents continuing to learn after deployment of results if a significant deviation between simulation and reality is determined in production, to account for and adapt to differences (Waschneck, page 304, left column, lines 16-19).
The combination of Yoginath and Waschneck does not expressly disclose: validate the trained neural network model based on a simulation output obtained from the digital twin.

Naser, however, discloses: validate the trained neural network model based on a simulation output obtained from the digital twin (Naser, par. [0066] discloses an improvement in the virtual control of a machine determined in a neural network algorithm (AI2) according to the simulation performed with the digital twin.)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath and the simulation of a factory, digital twin, and neural network teaching of Waschneck with the teaching of Naser of a simulation of a digital twin indicating the performance of a neural network algorithm that provides an improvement in a virtual control. The motivation to do so would have been that Naser discloses the benefit of testing high-risk operations of the physical factory device using the digital twin, avoiding the use of the high-risk operation and thereby improving security (Naser, par. [0031]).

As per claim 11, Yoginath discloses: a computer-implemented method for supporting artificial intelligence inference in an edge computing device associated with a physical process or plant (Yoginath, page 314, left column, lines 20-24 discloses using data to build neural network models for efficiently predicting variables of a physical system (Canal Lock CPS) in terms of water levels, provided in scenarios.)
the method comprising training at least one neural network model for deployment to the edge computing device based on data comprising baseline training data and field data received from the edge computing device (Yoginath, page 313, left column, lines 6-8 discloses collecting data; page 313, left col, ln 23-24 through rt col, ln 1-11 adds using the collected information to train a model to obtain neural network model parameters, with state data computed in a layer being used in subsequent computations; and page 314, left column, lines 12-35 adds training models to a certain consistent loss value, with the trained models found to be sufficient for predictions based on the data from certain scenarios.)

Yoginath does not expressly disclose: assessing a readiness of the trained neural network model prior to deployment to the edge computing device by employing a digital twin of the physical process or plant, the digital twin comprising a simulation platform configured to execute a simulation of the physical process or plant; wherein assessing the readiness of the trained neural network model comprises: providing a simulation input to the digital twin, the simulation input comprising one or more test scenarios involving the trained neural network model, the test scenarios being generated exploiting the field data, and validating the trained neural network model based on a simulation output obtained from the digital twin.
Waschneck, however, discloses: assessing a readiness of the trained neural network model prior to deployment to the edge computing device by employing a digital twin of the physical process or plant, the digital twin comprising a simulation platform configured to execute a simulation of the physical process or plant (Waschneck, page 304, left col, ln 14-16 discloses a digital twin of a factory, and page 304, left column, lines 20-28 discloses training an algorithm offline in a simulation to find an optimum solution, which is sent to a neural network brought online, with the digital twin indicating an update for the neural network based on changes regarding production in the factory.); wherein assessing the readiness of the trained neural network model comprises providing a simulation input to the digital twin, the simulation input comprising one or more test scenarios involving the trained neural network model, the test scenarios being generated exploiting the field data (Waschneck, page 304, FIG. 2 discloses the production control of a factory with a neural network sending trained data to a simulation of the factory in a training layer, and a deployment layer with the simulation sending data to the digital twin of the factory.)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath with the simulation of a factory, digital twin, and neural network teaching of Waschneck. The motivation to do so would have been that Waschneck discloses the benefit of a Deep Q Network environment including a digital twin of a factory, a simulation, and a neural network, with agents continuing to learn after deployment of results if a significant deviation between simulation and reality is determined in production, to account for and adapt to differences (Waschneck, page 304, left column, lines 16-19).
The combination of Yoginath and Waschneck does not expressly disclose: validating the trained neural network model based on a simulation output obtained from the digital twin.

Naser, however, discloses: validating the trained neural network model based on a simulation output obtained from the digital twin (Naser, par. [0066] discloses an improvement in the virtual control of a machine determined in a neural network algorithm (AI2) according to the simulation performed with the digital twin.)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath and the simulation of a factory, digital twin, and neural network teaching of Waschneck with the teaching of Naser of a simulation of a digital twin indicating the performance of a neural network algorithm that provides an improvement in a virtual control. The motivation to do so would have been that Naser discloses the benefit of testing high-risk operations of the physical factory device using the digital twin, avoiding the use of the high-risk operation and thereby improving security (Naser, par. [0031]).

As per claim 20, Yoginath discloses: a non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that, when executed by a computer (Yoginath, page 309, right column, lines 15-22 discloses emulation and use of software for monitoring a cyber-physical system using a digital twin; page 310, right column, lines 24-26 recites specific software for use in implementing a logic controller; the emulation and use of software is interpreted as occurring on a computing device, which typically includes at least one type of processor and one form of memory for storing data.)
cause the computer to: train at least one neural network model for deployment to the edge computing device associated with a physical process or plant based on data comprising baseline training data and field data received from the edge computing device (Yoginath, page 313, left column, lines 6-8 discloses collecting data from a PLC, as well as data associated with a Canal Lock CPS (cyber-physical system); page 313, left col, ln 23-24 through rt col, ln 1-11 adds using the collected information to train a model to obtain neural network model parameters, with state data computed in a layer being used in subsequent computations; and page 314, left column, lines 12-35 adds training models to a certain consistent loss value, with the trained models found to be sufficient for predictions based on the data from certain scenarios.)

Yoginath does not expressly disclose: assess a readiness of the trained neural network model prior to deployment to the edge computing device by employing a digital twin of the physical process or plant, the digital twin comprising a simulation platform configured to execute a simulation of the physical process or plant, wherein assessing the readiness of the trained neural network model comprises: providing a simulation input to the digital twin, the simulation input comprising one or more test scenarios involving the trained neural network model, the test scenarios being generated exploiting the field data, and validating the trained neural network model based on a simulation output obtained from the digital twin.
Waschneck, however, discloses: assess a readiness of the trained neural network model prior to deployment to the edge computing device by employing a digital twin of the physical process or plant, the digital twin comprising a simulation platform configured to execute a simulation of the physical process or plant (Waschneck, page 304, left col, ln 14-16 discloses a digital twin of a factory, and page 304, left column, lines 20-28 discloses training an algorithm offline in a simulation to find an optimum solution, which is sent to a neural network brought online, with the digital twin indicating an update for the neural network based on changes regarding production in the factory.); wherein assessing the readiness of the trained neural network model comprises: providing a simulation input to the digital twin, the simulation input comprising one or more test scenarios involving the trained neural network model, the test scenarios being generated exploiting the field data (Waschneck, page 304, FIG. 2 discloses the production control of a factory with a neural network sending trained data to a simulation of the factory in a training layer, and a deployment layer with the simulation sending data to the digital twin of the factory.)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath with the simulation of a factory, digital twin, and neural network teaching of Waschneck. The motivation to do so would have been that Waschneck discloses the benefit of a Deep Q Network environment including a digital twin of a factory, a simulation, and a neural network, with agents continuing to learn after deployment of results if a significant deviation between simulation and reality is determined in production, to account for and adapt to differences (Waschneck, page 304, left column, lines 16-19).
The combination of Yoginath and Waschneck does not expressly disclose: validating the trained neural network model based on a simulation output obtained from the digital twin.

Naser, however, discloses: validating the trained neural network model based on a simulation output obtained from the digital twin (Naser, par. [0066] discloses an improvement in the virtual control of a machine determined in a neural network algorithm (AI2) according to the simulation performed with the digital twin.)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath and the simulation of a factory, digital twin, and neural network teaching of Waschneck with the teaching of Naser of a simulation of a digital twin indicating the performance of a neural network algorithm that provides an improvement in a virtual control. The motivation to do so would have been that Naser discloses the benefit of testing high-risk operations of the physical factory device using the digital twin, avoiding the use of the high-risk operation and thereby improving security (Naser, par. [0031]).

For claim 4: The combination of Yoginath, Waschneck, and Naser discloses claim 4: The system of claim 1, wherein the digital twin is configured to generate the simulation output by executing a simulation of the physical process or plant based on an inference generated by the trained neural network model in connection with the one or more test scenarios (Yoginath, page 314, left column, lines 20-28 discloses efficiently predicting water level variables of a physical system (Canal Lock CPS) in scenarios to create neural network models; page 314, left column, lines 37-40 adds that the models created are trained to provide predictions; and page 314, right col, lines 42-44 adds using the scenarios in the digital twin to determine any anomalies.)
For claim 10: The combination of Yoginath, Waschneck, and Naser discloses claim 10: The system of claim 1, wherein the system is implemented in a cloud computing environment (Naser, par [0088] discloses receiving control data instructions from the cloud.) Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath and the simulation of a factory, digital twin, and neural network teaching of Waschneck with the simulation of a digital twin indicating a performance of a neural network algorithm providing the improvement in a virtual control teaching of Naser, along with the additional teaching of receiving control data from the cloud, also found in Naser. The motivation to do so would have been because Naser discloses the benefit of testing high risk operations of the physical factory device using the digital twin, avoiding the use of the high-risk operation, thereby improving security (Naser, par [0031]). As per claim 14, note the rejections of claim 4 above. The instant claim 14 recites substantially the same limitations as the above rejected claim 4 and is therefore rejected under the same prior art teachings. Claims 2, 3, 9, 12, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yoginath et al. (“On the Effectiveness of Recurrent Neural Networks for Live Modeling of Cyber-Physical Systems”), in view of Waschneck et al. (“Deep Reinforcement Learning for Semiconductor Production Scheduling”), in view of Naser et al. (EP 3611578 A1), and further in view of Jain et al. (“Towards Smart Manufacturing with Virtual Factory and Data Analytics”), hereinafter “Jain”. As per claim 2, the combination of Yoginath, Waschneck, and Naser discloses the system of claim 1. 
The combination of Yoginath, Waschneck, and Naser does not expressly disclose: wherein the field data received from the edge computing device comprises at least field data identified as "failure data" by the edge computing device. Jain however discloses: wherein the field data received from the edge computing device comprises at least field data identified as "failure data" by the edge computing device (Jain, page 3022, lines 8 - 12 discloses recording data from experiments to train a neural network model, with a simulation performed to produce data for training and validating the model, with page 3023, lines 12 - 13 discloses generating files obtained during a simulation of a virtual factory, and page 3024, lines 1 - 3 adds files of the simulation pertaining to data including machines with failures and machines without failure.) Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath, the simulation of a factory, digital twin, and neural network teaching of Waschneck, the simulation of a digital twin indicating a performance of a neural network algorithm providing the improvement in a virtual control teaching of Naser, and the scenario including failure, a neural network trained and validated regarding a virtual factory teaching of Jain. The motivation to do so would have been because Jain discloses the benefit of an integration of a virtual factory and neural network application capable of adapting to changing configurations of a production floor by updating a virtual factory model, regenerating scenarios and retraining the neural network with the data (Jain, page 3019, lines 19 - 22). 
For claim 3: The combination of Yoginath, Waschneck, Naser, and Jain discloses claim 3: The system of claim 2, wherein the test scenarios are generated using the "failure data" (Jain, page 3025, lines 26 - 32 discloses validating a neural network for predicting values using data collected, with prediction values compared to original values in a scenario without failures, and lines 39 - 41 through page 3026, lines 1 - 6 adds a scenario with failures used to provide a comparison between predicted and original values in FIG. 6, with the failures regarding the machine providing a less accurate prediction.) Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath, the simulation of a factory, digital twin, and neural network teaching of Waschneck, the simulation of a digital twin indicating a performance of a neural network algorithm providing the improvement in a virtual control teaching of Naser, and the scenario including failure, a neural network trained and validated regarding a virtual factory teaching of Jain, and the additional teaching of scenarios that include failures and scenarios without failures, also found in Jain. The motivation to do so would have been because Jain discloses the benefit of an integration of a virtual factory and neural network application capable of adapting to changing configurations of a production floor by updating a virtual factory model, regenerating scenarios and retraining the neural network with the data (Jain, page 3019, lines 19 - 22). 
For claim 9: The combination of Yoginath, Waschneck, Naser, and Jain discloses claim 9: The system of claim 1, wherein the digital twin is configured to directly provide a high-accuracy inference to the edge computing device, responsive to a request from the edge computing device, the high-accuracy inference being generated utilizing one or more undeployed neural network models in conjunction with a simulation of the physical process or plant by the digital twin (Jain, page 3019, lines 12 - 22 discloses using a virtual factory to provide data to train a neural network, with updates to the virtual factory model providing additional scenarios and data to retrain the neural network, and page 3021, lines 38 - 39 adding that a type of neural network is built to make a prediction for a new function.) Page 3019, lines 29 - 32 discloses virtual factory, digital factory, and digital twin being similar phrases in the teaching of Jain. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath, the simulation of a factory, digital twin, and neural network teaching of Waschneck, the simulation of a digital twin indicating a performance of a neural network algorithm providing the improvement in a virtual control teaching of Naser, and the scenario including failure, a neural network trained and validated regarding a virtual factory teaching of Jain, and the additional teaching of building new neural networks and virtual factories, also found in Jain. The motivation to do so would have been because Jain discloses the benefit of an integration of a virtual factory and neural network application capable of adapting to changing configurations of a production floor by updating a virtual factory model, regenerating scenarios and retraining the neural network with the data (Jain, page 3019, lines 19 - 22). 
As per claims 12, 13, and 19, note the rejections of claims 2, 3, and 9 above. The instant claims 12, 13, and 19 recite substantially the same limitations as the above rejected claims 2, 3, and 9 and are therefore rejected under the same prior art teachings. Claims 5 - 8 and 15 - 18 are rejected under 35 U.S.C. 103 as being unpatentable over Yoginath et al. (“On the Effectiveness of Recurrent Neural Networks for Live Modeling of Cyber-Physical Systems”), in view of Waschneck et al. (“Deep Reinforcement Learning for Semiconductor Production Scheduling”), in view of Naser et al. (EP 3611578 A1), and further in view of Han et al. (U.S. PG Pub 20200019858 A1), hereinafter “Han”. As per claim 5, the combination of Yoginath, Waschneck, and Naser discloses the system of claim 1. The combination of Yoginath, Waschneck, and Naser does not expressly disclose: wherein the simulation output from the digital twin comprises a performance metric of the trained neural network model, and wherein the neural network testing module is configured to validate trained neural network model by determining whether the performance metric is acceptable according to a defined threshold. Han however discloses: wherein the simulation output from the digital twin comprises a performance metric of the trained neural network model, and wherein the neural network testing module is configured to validate trained neural network model by determining whether the performance metric is acceptable according to a defined threshold (Han, par [0030] discloses a trained model is evaluated on its performance indicators and predicted performance indicators, and the model is retrained if the difference is greater than a threshold value.) 
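Claim 5's limitation reduces to a simple mechanism: a performance metric produced by the digital-twin simulation is compared against a defined threshold to decide whether the trained model is validated. A minimal sketch of that comparison, with purely illustrative names and values not drawn from the application or the Han reference:

```python
# Hypothetical sketch of claim 5's validation step: a performance metric
# from the digital-twin simulation is compared against a defined threshold.
# The function name and the example values are illustrative only.
def is_metric_acceptable(performance_metric, threshold):
    """Validate the trained model: acceptable when the metric meets the threshold."""
    return performance_metric >= threshold

print(is_metric_acceptable(0.93, 0.90))  # True: model passes validation
print(is_metric_acceptable(0.85, 0.90))  # False: re-training would be requested
```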
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath, the simulation of a factory, digital twin, and neural network teaching of Waschneck, the simulation of a digital twin indicating a performance of a neural network algorithm providing the improvement in a virtual control teaching of Naser, and the performance indicator to determine if a trained model needs to be retrained teaching of Han. The motivation to do so would have been because Han discloses the benefit of using up to date data to retrain the machine learning model (neural network) when actual performance indicators and predicted performance indicators deviate from each other (Han, par [0043]). For claim 6: The combination of Yoginath, Waschneck, Naser, and Han discloses claim 6: The system of claim 5, wherein, in the event that the performance metric is not acceptable, the neural network testing module is configured to request a re-training of the neural network model by the neural network training module based on a re-training specification (Han, par [0036], in which a state of a cyber-physical system is changed, and a difference between the actual and predicted performance indicators exceeds a threshold, and current parameters of the model are interpreted to be updated based on the changing of the state of the cyber-physical system.) 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath, the simulation of a factory, digital twin, and neural network teaching of Waschneck, the simulation of a digital twin indicating a performance of a neural network algorithm providing the improvement in a virtual control teaching of Naser, and the performance indicator to determine if a trained model needs to be retrained teaching of Han, and the additional teaching of retraining a model based on its performance indicator, also found in Han. The motivation to do so would have been because Han discloses the benefit of using up to date data to retrain the machine learning model (neural network) when actual performance indicators and predicted performance indicators deviate from each other (Han, par [0043]). For claim 7: The combination of Yoginath, Waschneck, Naser, and Han discloses claim 7: The system of claim 6, wherein the re-training specification comprises a request to perform data augmentation on under-performing data in the simulation input (Han, par [0032] - [0036] discloses steps of training a model, using a parameter indicating the number of blocks in the model and historical data to obtain optimized parameters, with optimized parameters and current parameters obtained to compare actual performance indicators and predicted performance indicators, and the model is to be retrained if a difference is found to exceed a threshold, with paragraphs [0031] - [0032] providing the number of blocks in the model for training, which are set to train the model and can be changed due to collected data being obtained after the historical data collection for the previous training of the model.) 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath, the simulation of a factory, digital twin, and neural network teaching of Waschneck, the simulation of a digital twin indicating a performance of a neural network algorithm providing the improvement in a virtual control teaching of Naser, and the performance indicator to determine if a trained model needs to be retrained teaching of Han, and the additional teaching of using collected data after the use of historical data in training and retraining a model, also found in Han. The motivation to do so would have been because Han discloses the benefit of using up to date data to retrain the machine learning model (neural network) when actual performance indicators and predicted performance indicators deviate from each other (Han, par [0043]). For claim 8: The combination of Yoginath, Waschneck, Naser, and Han discloses claim 8: The system of claim 6, wherein the neural network testing module is configured to iteratively request re-training of the neural network model and validate the re-trained neural network model prior to deployment to the edge computing device, until: a specified number of iterations is performed, or an acceptable performance metric is achieved (Han, par [0030] discloses retraining a model if the difference in performance indicators is greater than a threshold value, with FIG. 3 showing a repetition of the retraining of the model, in which the retraining is interpreted to end when the difference in performance indicators falls below the threshold value.) 
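Claim 8 recites an iterative loop with two exit conditions: re-train and re-validate until either an acceptable metric is achieved or a specified iteration count is reached. A hedged sketch under stated assumptions (`retrain` and `evaluate` are illustrative stand-ins, not functions from the application or the Han reference):

```python
# Hypothetical sketch of claim 8's iterative loop: request re-training and
# re-validate until an acceptable metric is achieved or a specified number
# of iterations has been performed. retrain and evaluate are illustrative
# stand-ins, not functions from the application or the cited art.
def retrain_until_acceptable(model, retrain, evaluate, threshold, max_iterations):
    """Return (model, metric, iterations_used) once either exit condition is met."""
    metric = evaluate(model)
    iterations = 0
    while metric < threshold and iterations < max_iterations:
        model = retrain(model)    # re-training requested by the testing module
        metric = evaluate(model)  # re-validation against the digital twin
        iterations += 1
    return model, metric, iterations

# Toy usage: each "re-training" step improves the metric by 0.05.
model, metric, used = retrain_until_acceptable(
    model=0.70, retrain=lambda m: m + 0.05, evaluate=lambda m: m,
    threshold=0.90, max_iterations=10)
print(metric >= 0.90, used)  # True 4
```

The iteration cap guarantees termination even when re-training never reaches the threshold, which mirrors the claim's "specified number of iterations" alternative.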
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the simulation model, neural network, and digital twin of a physical system of Yoginath, the simulation of a factory, digital twin, and neural network teaching of Waschneck, the simulation of a digital twin indicating a performance of a neural network algorithm providing the improvement in a virtual control teaching of Naser, and the performance indicator to determine if a trained model needs to be retrained teaching of Han, and the additional teaching of repeating steps of training a model until a threshold is met, also found in Han. The motivation to do so would have been because Han discloses the benefit of using up to date data to retrain the machine learning model (neural network) when actual performance indicators and predicted performance indicators deviate from each other (Han, par [0043]). As per claims 15 - 18, note the rejections of claims 5 - 8 above. The instant claims 15 - 18 recite substantially the same limitations as the above rejected claims 5 - 8, and are therefore rejected under the same prior art teachings. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to CEDRIC D JOHNSON whose telephone number is (571)270-7089. The examiner can normally be reached M-Th 4:30am - 2:00pm, F 4:30am - 11:30am. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Renee Chavez can be reached at 571-270-1104. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Cedric Johnson/Primary Examiner, Art Unit 2186 January 10, 2026

Prosecution Timeline

Sep 23, 2022
Application Filed
Jan 10, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596852
COMPUTING DEVICE AND METHOD FOR UPDATING A MODEL OF A BUILDING
2y 5m to grant Granted Apr 07, 2026
Patent 12579335
OVERALL HYDRAULIC PERFORMANCE PREDICTION METHOD FOR SINK-TYPE DISHWASHER
2y 5m to grant Granted Mar 17, 2026
Patent 12554900
AUDIT OF COMPUTER-AIDED DESIGN DOCUMENTS
2y 5m to grant Granted Feb 17, 2026
Patent 12547794
VIRTUAL INTEGRATION TEST SYSTEM AND METHOD
2y 5m to grant Granted Feb 10, 2026
Patent 12536344
AI-BASED METHOD FOR GENERATING BUILDING BLOCK IN COMMERCIAL DISTRICT
2y 5m to grant Granted Jan 27, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+23.5%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 645 resolved cases by this examiner. Grant probability derived from career allow rate.
