Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Final Rejection is in response to the Applicant's Arguments/Remarks made in the amendment filed 10/23/2025.
Claims 2-4, 12, and 17-20 are cancelled.
Claims 1, 5, 6, 10, and 11 are amended.
New Claims 21-30 are added.
Claims 1, 5-11, 13-16, and 21-30 remain pending.
Response to Arguments
Argument 1: Applicant argues in the Arguments/Remarks made in the amendment filed 10/23/2025, pgs. 9-11, that the claims provide an improved encoder module that efficiently finds a target balance between positive and negative training samples, which is used for robotic navigation, and as such overcome the 35 U.S.C. § 101 rejection.
Response to Argument 1: Applicant's arguments have been fully considered and are persuasive in light of the amendments. The 35 U.S.C. § 101 rejections are respectfully withdrawn.
Argument 2: Applicant argues in the Arguments/Remarks made in the amendment filed 10/23/2025, pgs. 12-15, that the prior art Perez fails to teach the primary claim limitations, “Determine a location of the navigating robot for propulsion control”, and “wherein the encoder module is trained to determine a target balance between positive and negative samples using the hyperparameters… generate a distance matrix including distance values between the candidate responses, respectively, and the input query… and a results module configured to select one of the candidate responses as a response to the input query based on the distance values”.
Response to Argument 2: In light of the amendments, a newly found combination of prior art (U.S. Patent Application Publication No. 20210174161 “Perez” in light of U.S. Patent Application Publication No. 20210141383 “Silander”) is applied in the updated rejections below.
However, the examiner notes that Perez teaches in para. [0017, 0034, 0054], “The neural network model, trained using distant supervision and distance based ranking loss, combines an interaction matrix, Weaver blocks and adaptive subsampling to map to a fixed size representation vector, which is used to emit a score… the input question and input sentence are tokenized using word embeddings (i.e., a set of language modeling and feature learning techniques in natural language processing where words from the vocabulary, and possibly phrases thereof, are mapped to vectors of real numbers in a low dimensional space… at runtime the distance model 205 is applied to all candidate sentences 203 to obtain estimated distances that are used as scores 402. After computing a runtime answer in step 404 based on the sentences associated with the highest of the runtime scores computed in step 402, the predicted answer, which includes identified sentences 203 associated with the documents 202 from the corpus 20, is returned in response to the runtime question”. The examiner notes that a target balance encompasses an optimization of a model trained on positive and negative samples, which is constrained by the dimension size n×m×e, where e is the dimension of the concatenated vector m.sub.ij, and wherein the estimated distances between all candidate sentences and the input query are used as scores and the highest-scoring candidate is selected as the answer in response to the runtime input question.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 5-6, 8, 11, and 13-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20210174161 “Perez” and further in light of U.S. Patent Application Publication No. 20210141383 “Silander”.
Claim 6:
Perez teaches a search system of a navigating robot, comprising: an encoder module configured to:
generate encodings based on an input query and candidate responses using parameters trained using hyperparameters (i.e. para. [0030], “distance based ranking loss (i.e., training loss) is computed …This distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205”, wherein a distance ranking module generates first positive encodings and second negative encodings for candidate responses to a user query using training loss parameters, and wherein the BRI for generating encodings encompasses how raw input data is converted into embeddings that are mapped to vectors of real numbers in a low dimensional space) optimized using coordinate descent (i.e. para. [0030], “This distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205”, wherein the BRI for coordinate descent encompasses how model parameters are optimized by minimizing loss) and line searching (i.e. para. [0053], “retrieving documents 202 concerning the runtime question using search engine 24; identifying sentences 102, or more generally portions of text, in the retrieved documents; computing a runtime score 402 for the identified sentences using the neural network model”, wherein the BRI for line searching encompasses searching and identifying sentences containing portions of text);
wherein the encoder module is trained to determine a target balance between positive and negative samples using the hyperparameters (i.e. para. [0017], “The neural network model, trained using distant supervision and distance based ranking loss, combines an interaction matrix, Weaver blocks and adaptive subsampling to map to a fixed size representation vector, which is used to emit a score”, wherein the BRI for a target balance encompasses a balance of positive and negative examples is optimized by an distance based ranking loss matrix that is computed using both positive and negative ranking loss scores);
a distance module configured to, based on the encodings (i.e. para. [0034], “the input question and input sentence are tokenized using word embeddings (i.e., a set of language modeling and feature learning techniques in natural language processing where words from the vocabulary, and possibly phrases thereof, are mapped to vectors of real numbers in a low dimensional space”, wherein the BRI for encodings encompasses the numerical representations of the positive and negative samples which are used in the distance based ranking loss), generate a distance matrix including distance values between the candidate responses, respectively, and the input query (i.e. para. [0036-0041], “The input of the neural network model (i.e., scoring model) 205 is an interaction tensor M between input question matrix Q and input sentence matrix S, it may be calculated as m.sub.ij=[q.sub.i; s.sub.j; q.sub.i⊙s.sub.j], where: [0037] 0<i<n, [0038] 0<j<m, [0039] u.sub.k=u[k, :] for u being q or s, [0040] the operator ⊙ represents the element-wise product, and [0041] ; represents the concatenation over the last dimension, giving a tensor having shape n×m×e, where e is the dimension of the concatenated vector m.sub.ij”, wherein the BRI for a distance module encompasses a scoring module that generates a matrix calculating a score for candidate sentence embeddings likely or not to contain an answer to an input query); and a results module configured to select one of the candidate responses as a response to the input query based on the distance values (i.e. para. [0053], “computing a runtime score 402 for the identified sentences using the neural network model 205 trained using distant supervision and distance based ranking loss; selecting the sentence corresponding to the highest score to provide an answer 404; and sending the answer to the client device 11”).
While Perez teaches generating encodings for input data and using optimized hyperparameters and contrastive losses to calculate a distance between positive and negative examples to find an optimal balance of similarity metrics, Perez may not explicitly teach to
Determine a location of the navigating robot for propulsion control.
However, Silander teaches to
Determine a location of the navigating robot for propulsion control (i.e. para. [0024, 0046], “a navigating robot is described and includes: a camera configured to capture images within a field of view in front of the navigating robot… the control module 112 may actuate the propulsion devices 108 to turn the navigating robot 100 to the right by the predetermined angle in response to the output of the trained model 116 being in the second state. The control module 112 may actuate the propulsion devices 108 to turn the navigating robot 100 to the left by the predetermined angle in response to the output of the trained model 116 being in the third state. The control module 112 may not actuate the propulsion devices 108 to not move the navigating robot 100 in response to the output of the trained model 116 being in the fourth state”, wherein the propulsion in a certain direction is based on the identified state of the robot at its present location as determined from the updated camera images).
It would have been obvious to one of ordinary skill in the art at the time of filing to add determining a location of the navigating robot for propulsion control to Perez’s hyperparameter optimization and candidate response selection, using the base robot chassis and propulsion methods as taught by Silander. One would have been motivated to combine the candidate decision-making formulas of Perez with the image machine learning model for recognition of Silander in order to create a better autonomous decision-making robot with less supervision, thus saving a user time and effort.
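For context on the examiner's BRI of “coordinate descent” and “line searching” above, the following is a minimal illustrative sketch only (hypothetical loss surface and parameter names, not drawn from Perez or the claims): parameters are optimized one coordinate at a time, with a backtracking line search choosing each step size.

```python
import numpy as np

def loss(p):
    # hypothetical loss surface over two parameters p = (p0, p1)
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2

def numerical_grad(p, i, eps=1e-6):
    # central-difference gradient along coordinate i only
    e = np.zeros_like(p)
    e[i] = eps
    return (loss(p + e) - loss(p - e)) / (2 * eps)

def line_search(p, i, grad, step=1.0, beta=0.5):
    # backtracking: shrink the step until the loss actually decreases
    e = np.zeros_like(p)
    e[i] = 1.0
    while step > 1e-8 and loss(p - step * grad * e) >= loss(p):
        step *= beta
    return step

def coordinate_descent(p, iters=50):
    p = np.asarray(p, dtype=float)
    for _ in range(iters):
        for i in range(p.size):          # optimize one coordinate at a time
            grad = numerical_grad(p, i)
            step = line_search(p, i, grad)
            p[i] -= step * grad
    return p

p_opt = coordinate_descent([0.0, 0.0])   # converges toward (1.0, -0.5)
```

The sketch is generic: it only illustrates the two named optimization techniques, not the actual training procedure of the scoring model 205.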
Claim 1:
Perez and Silander teach a training system comprising: the search system of claim 6. Perez further teaches
a training module configured to: train the parameters using the hyperparameters (i.e. para. [0017], “The method presented here generally comprises these aspects: (i) learning parameters of the neural network model (i.e., a scoring model); and (ii) using the scoring model to determine the most relevant portion of text to answer a given question”, wherein the BRI for hyperparameters encompasses parameters that control how the model learns); wherein the hyperparameters include: a first hyperparameter indicative of a first weight value to apply based on positive interactions of entries of a distance matrix based on encodings (i.e. para. [0054], “More specifically at training time, a distance model is applied using one selected positive sentence 206a while at runtime the distance model 205 is applied to all candidate sentences 203 to obtain estimated distances that are used as scores 402”, wherein the BRI for positive interactions encompasses sentence entries of potential responses that rank as close on a distance matrix for likelihood of containing an answer response to the question);
a second hyperparameter indicative of a second weight value to apply based on negative interactions of entries of the distance matrix generated (i.e. para. [0054], “and one selected negative sentence 206b to obtain estimated distances”, wherein the BRI for negative interactions encompasses sentence entries of potential responses that rank as far on a distance matrix for likelihood of containing an answer response to the question);
and a third hyperparameter corresponding to a dimension of the distance matrix (i.e. para. [0034, 0054], “the token embeddings for each word of the input question are gathered in a matrix Q=[q.sub.0, . . . , q.sub.n] where n is the number of words in the input question and q.sub.i is the i-th word of the input question, and the token embeddings for each word of the input sentence are gathered in a matrix S=[s.sub.0, . . . , s.sub.m] where m is the number of words in the input sentence and q.sub.i is the i-th word of the input sentence”, wherein the BRI for a dimension of a distance matrix encompasses how distance scores for each of the positive and negative sentences are calculated using a size dependent on the input query words).
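The interaction tensor cited from Perez above, m.sub.ij=[q.sub.i; s.sub.j; q.sub.i⊙s.sub.j] with shape n×m×e, can be sketched as follows (a minimal sketch for illustration only; the embedding size d and matrix dimensions are hypothetical, and e = 3d because three d-sized vectors are concatenated):

```python
import numpy as np

d = 4                                   # hypothetical word-embedding size
n, m = 3, 5                             # words in the question / sentence
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, d))         # question word embeddings, n x d
S = rng.standard_normal((m, d))         # sentence word embeddings, m x d

# broadcast q_i against every s_j, then concatenate on the last axis:
# m_ij = [q_i; s_j; q_i (element-wise product) s_j]
Qe = Q[:, None, :].repeat(m, axis=1)    # n x m x d
Se = S[None, :, :].repeat(n, axis=0)    # n x m x d
M = np.concatenate([Qe, Se, Qe * Se], axis=-1)   # n x m x 3d
```

Each entry M[i, j] is the concatenated vector for question word i and sentence word j, consistent with the quoted shape n×m×e.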
Claim 5:
Perez and Silander teach the search system of claim 6.
Perez further teaches wherein the parameters are trained: based on minimizing a total contrastive loss determined based on a positive loss and an entropy loss (i.e. para. [0028], “a distance based ranking loss (i.e., training loss) is computed in using the scores of the positive sentences 204a and the negative sentences 204b. This distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205”, wherein the BRI for a positive loss encompasses a ranking loss that gives a higher score to sentences containing a correct answer, and the BRI for an entropy loss encompasses a ranking loss that gives a lower score to sentences not containing an answer to the given question. It is noted the model training is optimized based on a comparison of the two losses, as both ranking losses are used and backpropagated as part of optimizing the model); and balance the positive loss and the entropy loss based on one of the hyperparameters corresponding to a dimension of the distance matrix (i.e. para. [0017], “The neural network model, trained using distant supervision and distance based ranking loss, combines an interaction matrix, Weaver blocks and adaptive subsampling to map to a fixed size representation vector, which is used to emit a score”, wherein the BRI for balance encompasses how a distance based ranking loss is computed using both positive and negative ranking loss scores).
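The balancing of a positive loss against an entropy loss discussed above can be illustrated with the following sketch. It is illustrative only: the hinge margin, the softmax cross-entropy form of the entropy term, and the `alpha` balance hyperparameter are assumptions for illustration, not the actual loss of Perez or of the claims.

```python
import numpy as np

def ranking_loss(pos_score, neg_scores, margin=1.0, alpha=0.5):
    neg_scores = np.asarray(neg_scores, dtype=float)
    # positive (hinge) term: push the positive score above every
    # negative score by at least `margin`
    pos_term = np.maximum(0.0, margin - (pos_score - neg_scores)).mean()
    # entropy (softmax cross-entropy) term: negative log-probability
    # assigned to the positive candidate among all candidates
    scores = np.concatenate([[pos_score], neg_scores])
    logz = np.log(np.exp(scores - scores.max()).sum()) + scores.max()
    ent_term = -(pos_score - logz)
    # `alpha` (hypothetical balance hyperparameter) trades the terms off
    return alpha * pos_term + (1.0 - alpha) * ent_term
```

As expected for a ranking loss, a well-separated positive candidate (e.g. `ranking_loss(5.0, [0.0, 1.0])`) yields a much smaller loss than a positive ranked below a negative (e.g. `ranking_loss(0.0, [5.0, 1.0])`).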
Claim 8:
Perez and Silander teach the search system of claim 6.
Perez further teaches wherein the encoder module includes a neural network configured to generate the encodings (i.e. para. [0030], “distance based ranking loss (i.e., training loss) is computed …This distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205”, wherein a neural network model generates first positive encodings and second negative encodings for candidate responses to a user query using training loss parameters) using the parameters trained using hyperparameters (i.e. para. [0017], “The method presented here generally comprises these aspects: (i) learning parameters of the neural network model (i.e., a scoring model); and (ii) using the scoring model to determine the most relevant portion of text to answer a given question”, wherein the BRI for hyperparameters encompasses parameters that control how the model learns) optimized using coordinate descent (i.e. para. [0030], “This distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205”, wherein the BRI for coordinate descent encompasses how model parameters are optimized by minimizing loss) and line searching (i.e. para. [0054], “More specifically at training time, a distance model is applied using one selected positive sentence 206a while at runtime the distance model 205 is applied to all candidate sentences 203 to obtain estimated distances that are used as scores 402”, wherein the BRI for line searching encompasses applying the distance model across the candidate sentences to search for and identify the sentences most likely to contain an answer).
Claim 11:
Perez and Silander teach the search system of claim 6.
Perez further teaches wherein the hyperparameters include:
a first hyperparameter indicative of a first weight value to apply based on positive interactions of entries of the distance matrix (i.e. para. [0054], “More specifically at training time, a distance model is applied using one selected positive sentence 206a while at runtime the distance model 205 is applied to all candidate sentences 203 to obtain estimated distances that are used as scores 402”, wherein the BRI for positive interactions encompasses sentence entries of potential responses that rank as close on a distance matrix for likelihood of containing an answer response to the question);
a second hyperparameter indicative of a second weight value to apply based on negative interactions of entries of the distance matrix (i.e. para. [0054], “and one selected negative sentence 206b to obtain estimated distances”, wherein the BRI for negative interactions encompasses sentence entries of potential responses that rank as far on a distance matrix for likelihood of containing an answer response to the question); and
a third hyperparameter corresponding to a dimension of the distance matrix (i.e. para. [0034, 0054], “the token embeddings for each word of the input question are gathered in a matrix Q=[q.sub.0, . . . , q.sub.n] where n is the number of words in the input question and q.sub.i is the i-th word of the input question, and the token embeddings for each word of the input sentence are gathered in a matrix S=[s.sub.0, . . . , s.sub.m] where m is the number of words in the input sentence and q.sub.i is the i-th word of the input sentence”, wherein the BRI for a dimension of a distance matrix encompasses how distance scores for each of the positive and negative sentences are calculated using a size dependent on the input query words).
Claim 13:
Perez and Silander teach the search system of claim 6.
Silander further teaches
wherein the candidate responses include images (i.e. para. [0024, 0046], “a navigating robot is described and includes: a camera configured to capture images within a field of view in front of the navigating robot… the control module 112 may actuate the propulsion devices 108 to turn the navigating robot 100 to the right by the predetermined angle in response to the output of the trained model 116 being in the second state. The control module 112 may actuate the propulsion devices 108 to turn the navigating robot 100 to the left by the predetermined angle in response to the output of the trained model 116 being in the third state. The control module 112 may not actuate the propulsion devices 108 to not move the navigating robot 100 in response to the output of the trained model 116 being in the fourth state”, wherein the propulsion in a certain direction is based on the identified state of the robot at its present location as determined from the updated camera images).
Claim 14:
Perez teaches the search system of claim 6 wherein the candidate responses include text (i.e. para. [0054], “After computing a runtime answer in step 404 based on the sentences associated with the highest of the runtime scores computed in step 402, the predicted answer, which includes identified sentences 203 associated with the documents 202 from the corpus 20, is returned in response to the runtime question”).
Claim 15:
Perez and Silander teach the search system of claim 6.
Perez further teaches wherein: the encoder module is configured to receive the input query from a computing device via a network (i.e. para. [0023], “The client equipment 11 has one or more question for querying the large-scale text or corpus stored in the first server 10a to obtain answers thereto in an identified collection of documents”); and the search system further includes a transceiver module configured to transmit the response including the one of the candidate responses to the computing device via the network (i.e. para. [0054], “is returned in response to the runtime question (for example, from server 10a to client equipment 11)”).
Claim 16:
Perez and Silander teach the search system of claim 6.
Perez further teaches wherein the results module is configured to select one of the candidate responses as a response to the input query based on the distance values (i.e. para. [0030], this distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205).
Claim(s) 21, 23, and 26-30 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20210141383 “Silander” in light of U.S. Patent Application Publication No. 20210174161 “Perez”, and further in light of U.S. Patent Application Publication No. 20210109537 “Li”.
Claim 21:
Silander teaches a navigating robot (i.e. para. [0030], FIG. 1 is a functional block diagram of an example implementation of a navigating robot), comprising:
a camera configured to capture one or more images within a predetermined field of view in front of the navigating robot (i.e. para. [0024], a navigating robot is described and includes: a camera configured to capture images within a field of view in front of the navigating robot);
an encoder module configured to generate encodings based on an input query and candidate responses (i.e. para. [0050], “The Visual navigation may be modeled as a Partially Observed Markov Decision Process (POMDP) as a tuple P:=⟨S, A, Ω, R, T, O, P.sub.O⟩, where S is the set of states, A is the set of actions, Ω is the set of observations, all of which may be finite sets”, wherein the BRI for an input query encompasses the input image encoded as observations and wherein the BRI for candidate responses encompasses a set of actions that may be candidate movement responses for the robot), wherein the encodings are used to determine a present location of the navigating robot (i.e. para. [0045], “The trained model 116 may generate an output each time the input from the camera 104 is updated. The trained model 116 may be configured to set the output at a given time to one of a group consisting of: a first state (corresponding to moving forward by a predetermined distance, such as 1 foot or ⅓ of a meter), a second state (corresponding to turning right by a predetermined angle, such as 45 or 90 degrees), a third state (corresponding to turning left by a predetermined angle, such as 45 or 90 degrees), and a fourth state (corresponding to not moving)”, wherein an updated camera image is input to the model, which encodes observations to determine a present location on the route from a starting location to a goal location and calculates a movement state for the propulsion motor);
a results module configured to select one of the candidate responses as a response to the input query (i.e. para. [0045], “The trained model 116 may generate an output indicative of an action to be taken by the navigating robot 100 based on the input from the camera 104”, wherein a propulsion direction may be determined based on the identified state of the robot at a current camera location); and
a control module configured to control propulsion of the navigating robot based on the present location of the navigating robot (i.e. para. [0046], “the control module 112 may actuate the propulsion devices 108 to turn the navigating robot 100 to the right by the predetermined angle in response to the output of the trained model 116 being in the second state. The control module 112 may actuate the propulsion devices 108 to turn the navigating robot 100 to the left by the predetermined angle in response to the output of the trained model 116 being in the third state. The control module 112 may not actuate the propulsion devices 108 to not move the navigating robot 100 in response to the output of the trained model 116 being in the fourth state”, wherein the propulsion in a certain direction is based on the identified state of the robot at a present location).
While Silander teaches an encoder module configured to generate encodings based on an input query of images and candidate propulsion responses, and selecting propulsion control based on an identified location en route to a goal location, Silander may not explicitly teach
using parameters trained using hyperparameters optimized using coordinate descent and line searching, wherein the encoder module is configured to identify a closest image to the one or more images captured from the camera, wherein the closest image is used to determine a present location of the navigating robot;
a distance module configured to generate a distance matrix including distance values between the candidate responses, respectively, and the input query; and a
results module configured to select one of the candidate responses as a response to the input query based on the distance values.
However, Perez teaches to
generate encodings based on an input query and candidate responses using parameters trained using hyperparameters (i.e. para. [0030], “distance based ranking loss (i.e., training loss) is computed …This distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205”, wherein a distance ranking module generates first positive encodings and second negative encodings for candidate responses to a user query using training loss parameters, and wherein the BRI for generating encodings encompasses how raw input data is converted into embeddings that are mapped to vectors of real numbers in a low dimensional space) optimized using coordinate descent (i.e. para. [0030], “This distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205”, wherein the BRI for coordinate descent encompasses how model parameters are optimized by minimizing loss) and line searching (i.e. para. [0053], “retrieving documents 202 concerning the runtime question using search engine 24; identifying sentences 102, or more generally portions of text, in the retrieved documents; computing a runtime score 402 for the identified sentences using the neural network model”, wherein the BRI for line searching encompasses searching and identifying sentences containing portions of text).
It would have been obvious to one of ordinary skill in the art at the time of filing to add using parameters trained using hyperparameters optimized using coordinate descent and line searching to Silander’s image encoding and propulsion determination, with the specific hyperparameter optimization formulas as taught by Perez. One would have been motivated to combine the coordinate descent and line searching of text of Perez with the image machine learning model for recognition of Silander in order to further cover different types of visual content found in images, such as text, and thus achieve faster object detection in the field of image recognition.
While Silander-Perez teach an encoder module configured to generate encodings based on an input query and candidate responses using parameters trained using hyperparameters optimized using coordinate descent and line searching, Silander-Perez may not explicitly teach
wherein the encoder module is configured to identify a closest image to the one or more images captured from the camera, wherein the closest image is used to determine a present location of the navigating robot;
a distance module configured to generate a distance matrix including distance values between the candidate responses, respectively, and the input query; and a
results module configured to select one of the candidate responses as a response to the input query based on the distance values.
However, Li teaches
wherein the encoder module is configured to identify a closest image to the one or more images captured from the camera, wherein the closest image is used to determine a present location of the navigating robot (i.e. para. [0013-0014], “obtaining a Next-Best-View and planning a global path from the robot to the Next-Best-View, which comprises: 2.1) obtaining an edge e closest to a current location of the robot and two nodes v.sub.e.sup.1 and v.sub.e.sup.2 of the e in the topological map G by taking all leaf nodes V.sub.leaf in the topological map G as initial candidate frontiers”, wherein the BRI for an encoder module encompasses how camera image data for an autonomous robot has features encoded as distance feature vectors to determine a robot's current location when trying to find a current path with a number of candidate frontiers);
a distance module configured to generate a distance matrix including distance values between the candidate responses, respectively, and the input query (i.e. para. [0012, 0018], “recording edge lengths in a distance matrix M.sub.dist, to obtain a topological map G={V, E, M.sub.dist} of the passable region… valuating each candidate point in the candidate point set P.sub.candidate by a Multi-Criteria-Decision-Making approach based on a fuzzy measure function, taking the candidate point with the highest score as the Next-Best-View p.sub.NBV”, wherein the candidate point with the highest distance score is used to obtain the Next-Best-View when planning the global path); and a
results module configured to select one of the candidate responses as a response to the input query based on the distance values (i.e. para. [0018], obtaining the global path R={r.sub.0, r.sub.1, r.sub.2, . . . , p.sub.NBV} from the current location of the robot to the Next-Best-View by tracing back in the result of in 2.2)).
It would have been obvious to one of ordinary skill in the art at the time of filing to add wherein the encoder module is configured to identify a closest image to the one or more images captured from the camera, wherein the closest image is used to determine a present location of the navigating robot; a distance module configured to generate a distance matrix including distance values between the candidate responses, respectively, and the input query; and a results module configured to select one of the candidate responses as a response to the input query based on the distance values, to Silander-Perez’s hyperparameter optimization and image encoding with propulsion determination, with the distance matrix and distance calculations between potential candidate image data points that result in a candidate direction for the robot being selected, as taught by Li. One would have been motivated to combine the distance matrix calculations of Li with the image machine learning model for recognition of Silander-Perez in order to further process different features found in input images by a navigational robot, which can enable faster indoor autonomous robotic exploration.
Claim 23:
Silander, Perez, and Li teach the navigating robot of claim 21.
Perez further teaches
wherein the encoder module includes a neural network that generates the encodings using the parameters trained using hyperparameters optimized using coordinate descent (i.e. para. [0030], “This distance based ranking loss aims at giving a higher score to sentences containing an answer to the given question than sentences not containing an answer to the given question, in order to use it to optimize the scoring model 205”, wherein the BRI for coordinate descent encompasses how model parameters are optimized by minimizing loss) and line searching (i.e. para. [0053], “retrieving documents 202 concerning the runtime question using search engine 24; identifying sentences 102, or more generally portions of text, in the retrieved documents; computing a runtime score 402 for the identified sentences using the neural network model”, wherein the BRI for line searching encompasses searching and identifying sentences containing portions of text).
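For illustration only (forming no part of the record, with all names hypothetical), hyperparameter optimization by coordinate descent — cycling through one hyperparameter at a time and searching along that single coordinate while holding the others fixed — can be sketched as:

```python
def coordinate_descent(loss, x, grids, sweeps=3):
    """Minimize loss(x) by cycling through coordinates; for each
    coordinate i, search over the candidate values in grids[i]
    while holding the other coordinates fixed."""
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            # one-dimensional search along coordinate i
            x[i] = min(grids[i], key=lambda v: loss(x[:i] + [v] + x[i + 1:]))
    return x
```

The inner one-dimensional search is where a line-searching method (such as the bounded golden-section search discussed for claim 7) could be substituted for the simple grid scan shown here.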
Claim 26:
Claim 26 is the robot claim reciting similar limitations to Claim 1 and is rejected for similar reasons.
Claim 27:
Claim 27 is the robot claim reciting similar limitations to Claim 13 and is rejected for similar reasons.
Claim 28:
Claim 28 is the robot claim reciting similar limitations to Claim 14 and is rejected for similar reasons.
Claim 29:
Claim 29 is the robot claim reciting similar limitations to Claim 15 and is rejected for similar reasons.
Claim 30:
Claim 30 is the robot claim reciting similar limitations to Claim 16 and is rejected for similar reasons.
Claim(s) 7 & 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication NO. 20210174161 “Perez” in light of U.S. Patent Application Publication NO. 20210141383 “Silander”, as applied to Claims 1 and 21 above, and further in light of U.S. Patent Application Publication NO. 20220235721 “Williams”.
Claim 7:
Perez and Silander teach the search system of claim 6.
Perez may not explicitly teach
wherein the line searching includes bounded golden section line searching.
However, Williams teaches
wherein the line searching includes bounded golden section line searching (i.e. para. [0102], “a golden-section line search method may be used to locate the minima. A golden-section line search is a form of sectioning algorithm wherein the golden ratio ((1+√5)/2) is used to select the next point (group of actuator setpoints) to be evaluated”, wherein the BRI for bounded golden section line searching encompasses how an optimizer module may have a line searching direction or vector within a setpoint search space and execute golden line search method).
It would have been obvious to one of ordinary skill in the art at the time of filing to add wherein the line searching includes bounded golden section line searching to Perez-Silander’s contrastive loss optimization, as taught by Williams. One would have been motivated to combine the optimization searching techniques of Williams with the line searching of Perez-Silander in order to optimize line searching in a computationally efficient manner.
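For illustration only (forming no part of the record, with all names hypothetical), a bounded golden-section line search of the kind Williams describes — repeatedly shrinking a bracket [a, b] using the golden ratio to locate the minimum of a unimodal function — can be sketched as:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal f on the bounded
    interval [a, b] by shrinking the bracket with the golden
    ratio until it is narrower than tol."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c            # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d            # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2
```

Because each iteration reuses one interior evaluation point, the bracket shrinks by a constant factor with only one new function evaluation per step, which is the computational-efficiency rationale noted above.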
Claim 22:
Claim 22 is the robot claim reciting similar limitations to Claim 7 and is rejected for similar reasons.
Claim(s) 9-10 & 24-25 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication NO. 20210174161 “Perez” in light of U.S. Patent Application Publication NO. 20210141383 “Silander”, as applied to claim 6 above, and further in light of U.S. Patent Application Publication NO. 20220114444 “Weinzaepfel”.
Claim 9:
Perez and Silander teach the search system of claim 6.
Perez further teaches wherein the neural network is a multi-layered neural network (i.e. para. [0035], “FIG. 2 is adapted from the Weaver model for machine reading, where the answering parts are removed and a different pooling layer is added for reducing a variable-size tensor into a fixed-size tensor, which is followed by a fully connected neural network (FCNN), for example a multilayer perceptron (MLP)”, wherein Perez sets the stage for a convolutional neural network as the scoring model, a multi-layered neural network for classifying and optimizing lines of data).
While Perez teaches a training system with a neural network utilizing contrastive losses, Perez may not explicitly teach that the neural network is a
convolutional neural network.
However, Weinzaepfel teaches that the neural network is a
convolutional neural network (i.e. para. [0110], “Regarding object detection, the SuperLoss function may be applied on the box classification component of two object detection frameworks, such as the faster recursive convolutional neural network (Faster R-CNN) framework”, wherein a convolutional neural network may be used for faster classification).
It would have been obvious to one of ordinary skill in the art at the time of filing to add a convolutional neural network to Perez-Silander’s contrastive loss optimization, as taught by Weinzaepfel. One would have been motivated to combine the use of a CNN of Weinzaepfel with the contrastive loss optimization of Perez-Silander in order to achieve faster object detection in the field of image recognition.
Claim 10:
Perez, Silander, and Weinzaepfel teach the search system of claim 8.
Weinzaepfel further teaches wherein the neural network includes a ResNet-18 neural network (i.e. para. [0118], “A ResNet-18 model (with a single output) is used, initialized on ImageNet as predictor and trained for 100 epochs using SGD”).
Claim 24:
Claim 24 is the robot claim reciting similar limitations to claim 9 and is rejected for similar reasons.
Claim 25:
Claim 25 is the robot claim reciting similar limitations to claim 10 and is rejected for similar reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent Application Publication NO. 20210374603 “Xia”, teaches in para. [0030] that the contrastive loss is to constrain the model to generate the positive example with a higher probability than the negative example with a certain margin. With the contrastive loss, the CLANG model 130 is regularized to focus on the given domain and intent, and the probability of generating negative examples is reduced.
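For illustration only (forming no part of the record, with all names hypothetical), a margin-based contrastive loss of the kind Xia describes — zero when the positive example outscores the negative example by at least the margin, and penalizing the shortfall otherwise — can be sketched as:

```python
def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    """Hinge-style contrastive loss: returns 0.0 when the positive
    example's score exceeds the negative example's score by at
    least `margin`; otherwise returns the remaining shortfall."""
    return max(0.0, margin - (pos_score - neg_score))
```

Minimizing this quantity pushes the model to score positive training samples above negative ones by the chosen margin, which is the balancing behavior discussed in the record.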
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TAN whose telephone number is (571)272-7433. The examiner can normally be reached M-F 7:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.T./ Examiner, Art Unit 2145
/CESAR B PAULA/ Supervisory Patent Examiner, Art Unit 2145