Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
This communication is in response to the application filed on 07/11/2023 in which Claims 1-19 are presented for examination.
Drawings
The applicant’s drawings submitted on 07/11/2023 are acceptable for examination purposes.
Objections
Claims 7 and 8 are objected to because of the following informalities: the terms "RNN" and "LSTM" are not defined, and it is unclear what they stand for. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 11,984,910 B1) in view of Masmoudi (US 11,922,314 B1), and further in view of Broaddus (US 12,131,539 B1).
As to claim 1, Fan teaches a system for augmenting a neural network comprising: the neural network further comprising (Fan Col. 6, lines 25-36) [neural networks]: an input layer; a hidden layer connected to the input layer; and an output layer joined to the hidden layer (Fan Col. 15, lines 44-65) [deep neural network (DNN) that includes an input layer, an output layer, and one or more hidden intermediate layers positioned between the input layer]; a pre-input layer attached to the input layer comprising (Fan Col. 15, lines 44-65) [pre-input layer (e.g., embedding and/or averaging network)]:
It is noted that Fan fails to disclose a layer for computing physics equations connected to the output layer.
However, Masmoudi discloses a layer for computing physics equations (Masmoudi Col. 1, lines 27-44) [equilibrium equations are computed by the physics solver] connected to the output layer (Masmoudi Col. 7, lines 47-61) [A topology of a neural network may include several layers with several cells (or units, nodes etc.) in each layer. These layers may include an input layer, one or more hidden layers and an output layer. An example topology of the neural network may comprise only three layers which can be enough to model the computation of a physical solver or a dynamic characteristic of a physical system].
Thus, it would have been recognized by one of ordinary skill in the art before the effective filing date of the claimed invention that applying the known technique taught by Masmoudi to the neural network system of Fan would have yielded predictable results and resulted in an improved system, namely, a system that would provide a trained computing structure to determine future output data of the physical system in real time. (Masmoudi Abstract)
It is noted the combination of Fan and Masmoudi fails to disclose a first encoder for handling spatial inputs; and a second encoder for handling temporal inputs.
However, Broaddus discloses a first encoder for handling spatial inputs; and a second encoder for handling temporal inputs (Broaddus Claim 1) [providing each of the first sequence of spatial-temporal features and the second sequence of spatial-temporal features as inputs to a transformer executed by the computer system].
Thus, it would have been recognized by one of ordinary skill in the art before the effective filing date of the claimed invention that applying the known technique taught by Broaddus to the neural network system of Fan and Masmoudi would have yielded predictable results and resulted in an improved system, namely, a system that would detect events from image features that are generated from sequences of images. (Broaddus Col. 1)
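For illustration only (not part of the prior-art record), the arrangement addressed in the claim 1 rejection may be sketched as follows. All dimensions, weights, and function names below are hypothetical and do not purport to reproduce any embodiment of Fan, Masmoudi, or Broaddus.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One fully connected layer with tanh activation."""
    return np.tanh(x @ w + b)

# hypothetical weights for each recited element
w_s, b_s = rng.normal(size=(4, 8)), np.zeros(8)      # spatial encoder
w_t, b_t = rng.normal(size=(4, 8)), np.zeros(8)      # temporal encoder
w_pre, b_pre = rng.normal(size=(16, 8)), np.zeros(8) # pre-input layer
w_in, b_in = rng.normal(size=(8, 8)), np.zeros(8)    # input layer
w_hid, b_hid = rng.normal(size=(8, 8)), np.zeros(8)  # hidden layer
w_out, b_out = rng.normal(size=(8, 2)), np.zeros(2)  # output layer

def physics_layer(y):
    """Stand-in for a layer computing a physics (equilibrium) equation;
    the cubic correction is purely illustrative."""
    return y - 0.1 * y**3

def forward(x_space, x_time):
    # first encoder handles spatial inputs, second handles temporal inputs
    z = np.concatenate([dense(x_space, w_s, b_s),
                        dense(x_time, w_t, b_t)])
    z = dense(z, w_pre, b_pre)   # pre-input layer attached to the input layer
    z = dense(z, w_in, b_in)     # input layer
    z = dense(z, w_hid, b_hid)   # hidden layer connected to the input layer
    y = z @ w_out + b_out        # output layer joined to the hidden layer
    return physics_layer(y)      # physics layer connected to the output layer

out = forward(rng.normal(size=4), rng.normal(size=4))
print(out.shape)
```

The sketch maps each claim element to one stage of the forward pass; it is offered solely to aid reading of the rejection.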
As to claim 2, Fan teaches wherein the neural network is an unsupervised learning neural network (Fan Col. 15, lines 44-65) [unsupervised learning].
As to claim 3, the combination of Fan, Masmoudi and Broaddus teaches wherein the pre-input layer (Fan Col. 15, lines 44-65) [pre-input layer (e.g., embedding and/or averaging network)] is a transformer (Broaddus Claim 1) [providing each of the first sequence of spatial-temporal features and the second sequence of spatial-temporal features as inputs to a transformer executed by the computer system].
Thus, it would have been recognized by one of ordinary skill in the art before the effective filing date of the claimed invention that applying the known technique taught by Broaddus to the neural network system of Fan and Masmoudi would have yielded predictable results and resulted in an improved system, namely, a system that would detect events from image features that are generated from sequences of images. (Broaddus Col. 1)
As to claim 4, the combination of Fan, Masmoudi and Broaddus teaches wherein the first encoder is concatenated to the second encoder (Broaddus Claim 1) [wherein the at least one layer of the transformer encoder has a multi-head self-attention module and a feedforward network].
Thus, it would have been recognized by one of ordinary skill in the art before the effective filing date of the claimed invention that applying the known technique taught by Broaddus to the neural network system of Fan and Masmoudi would have yielded predictable results and resulted in an improved system, namely, a system that would detect events from image features that are generated from sequences of images. (Broaddus Col. 1)
As to claim 5, the combination of Fan, Masmoudi and Broaddus teaches wherein the system further comprising computing the physics equation iteratively (Masmoudi Col. 1, lines 27-44) [equilibrium equations are computed by the physics solver].
Thus, it would have been recognized by one of ordinary skill in the art before the effective filing date of the claimed invention that applying the known technique taught by Masmoudi to the neural network system of Fan would have yielded predictable results and resulted in an improved system, namely, a system that would provide a trained computing structure to determine future output data of the physical system in real time. (Masmoudi Abstract)
As to claim 6, the combination of Fan, Masmoudi and Broaddus teaches wherein the physics equations comprise at least a partial differential equation (Masmoudi Col. 7, lines 15-36) [A physical solver could be any set of partial derivative equations discretized in time solver].
Thus, it would have been recognized by one of ordinary skill in the art before the effective filing date of the claimed invention that applying the known technique taught by Masmoudi to the neural network system of Fan would have yielded predictable results and resulted in an improved system, namely, a system that would provide a trained computing structure to determine future output data of the physical system in real time. (Masmoudi Abstract)
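For illustration only (not part of the prior-art record), the claim 5/6 limitations of iteratively computing a partial differential equation may be sketched with a standard explicit finite-difference iteration for the 1-D heat equation u_t = alpha * u_xx. The grid size, diffusivity, and step counts are hypothetical and chosen only to keep the explicit scheme stable (dt * alpha / dx**2 <= 0.5).

```python
import numpy as np

alpha, dx, dt = 0.01, 0.1, 0.1
u = np.zeros(21)
u[10] = 1.0                            # hypothetical initial heat spike

for _ in range(100):                   # iterate the discretized equation
    # second spatial difference (discrete Laplacian)
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0             # fixed (Dirichlet) boundaries
    u = u + dt * alpha * lap           # explicit Euler time step

print(round(float(u.max()), 6))
```

Each pass of the loop is one iteration of the discretized equation, which is the sense in which the physics equation is "computed iteratively."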
As to claim 7, Fan teaches wherein the pre-input layer is a RNN (Fan Col. 15, lines 44-65) [recurrent neural network (RNN)].
As to claim 8, Fan teaches wherein the pre-input layer is a LSTM (Fan Col. 4, lines 5-15) [neural networks LSTM can be based on discrete times].
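For illustration only (not part of the prior-art record), the recurrent pre-input layers recited in claims 7 and 8 may be sketched as a plain RNN step and an LSTM step, each mapping an input sequence to a final hidden state. All dimensions and weights below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 3, 4

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# plain RNN (claim 7): one recurrence with tanh activation
w_x = rng.normal(size=(d_in, d_h))
w_r = rng.normal(size=(d_h, d_h))

def rnn(seq):
    h = np.zeros(d_h)
    for x in seq:
        h = np.tanh(x @ w_x + h @ w_r)
    return h

# LSTM (claim 8): input, forget, output gates and a candidate,
# computed from one stacked weight matrix
w = rng.normal(size=(d_in + d_h, 4 * d_h))

def lstm(seq):
    h, c = np.zeros(d_h), np.zeros(d_h)
    for x in seq:
        z = np.concatenate([x, h]) @ w
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state
        h = sigmoid(o) * np.tanh(c)                    # hidden state
    return h

seq = rng.normal(size=(5, d_in))
print(rnn(seq).shape, lstm(seq).shape)
```

The LSTM differs from the plain RNN in maintaining a separate gated cell state, which is the distinction the objection to claims 7 and 8 asks the applicant to spell out.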
As to claim 9, Fan teaches further comprising processing data input to the input layer parallelly (Fan) [the LDPC decoder processes the data to generate LDPC state information].
As to claims 10 and 19, these claims recite limitations similar to those of claim 1; therefore, they are rejected under the same rationale.
As to claims 11-12, these claims respectively recite limitations similar to those of claims 2-3; therefore, they are rejected under the same rationale.
As to claims 13-14, the combination of Fan, Masmoudi and Broaddus teaches wherein the encoders further comprising a first encoder to handle time inputs; wherein the encoders further comprising a second encoder to handle space inputs (Broaddus Col. 8, lines 33-54) [provided as an input to the transformer encoder; note: the inputs can be time and/or space].
Thus, it would have been recognized by one of ordinary skill in the art before the effective filing date of the claimed invention that applying the known technique taught by Broaddus to the neural network system of Fan and Masmoudi would have yielded predictable results and resulted in an improved system, namely, a system that would detect events from image features that are generated from sequences of images. (Broaddus Col. 1)
As to claims 15-18, these claims respectively recite limitations similar to those of claims 6-9; therefore, they are rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVANS DESROSIERS whose telephone number is (571)270-5438. The examiner can normally be reached Monday - Friday, 8:00 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Korzuch can be reached at (571)272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EVANS DESROSIERS/Primary Examiner, Art Unit 2491