Prosecution Insights
Last updated: April 19, 2026
Application No. 18/322,898

MACHINE LEARNING FOR OPERATING A MOVABLE DEVICE

Non-Final OA: §101, §102, §103

Filed: May 24, 2023
Examiner: STANLEY, JEREMY L
Art Unit: 2127
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Ford Global Technologies LLC
OA Round: 1 (Non-Final)

Grant Probability: 48% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 48% (131 granted / 276 resolved; -7.5% vs TC avg)
Interview Lift: +44.7% allowance for resolved cases with interview (strong)
Typical Timeline: 3y 2m average prosecution; 28 applications currently pending
Career History: 304 total applications across all art units

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Based on career data from 276 resolved cases; comparison values are Tech Center average estimates.

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on May 24, 2023. Claims 1-20 are pending in the case. Claims 1 and 13 are the independent claims. This action is non-final.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental steps) without significantly more. This judicial exception is not integrated into a practical application because any additional elements amount to implementing the abstract idea on a generic computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding independent claims 1 and 13, and relying on the evaluation flowchart in MPEP 2106:

Step 1 (Is the claim to a process, machine, manufacture, or composition of matter?): Yes. Claim 1 is a system (machine). Claim 13 is a method (process).

Step 2a Prong One (Does the claim recite an abstract idea?): Yes.
Claims 1 and 13 recite: …outputs a first prediction based on the image…outputting the first prediction (a mental process of observation, such as of an observed image, and a mental process of determination, such as making a prediction based on the image); …in a first neural network that outputs a first prediction…wherein weights applied to layers in the first neural network are determined by minimizing a sum of a first loss function and a second loss function (a recitation of a mathematical formula/concept; see paragraph 0038 of the specification of the instant application, describing the corresponding equation); wherein the first loss function is determined from first features determined in the first neural network trained to output a first prediction and from second features determined in a second neural network trained to output a second prediction (a recitation of a mathematical formula/concept; see paragraph 0038 of the specification of the instant application, describing the corresponding equation); wherein the second loss function is determined based on comparing the first prediction to ground truth (a recitation of a mathematical formula/concept; see paragraph 0038 of the specification of the instant application, describing the corresponding equation).

Under the broadest reasonable interpretation, these steps may be performed mentally, using mental observation and mental determination, including by a human using a physical aid such as pen and paper, including a human mentally performing observations and mentally performing mathematical calculations, and therefore correspond to the Mental Processes grouping.

Step 2a Prong Two (Does the claim recite additional elements that integrate the judicial exception into a practical application?): No.
Claims 1 and 13 additionally recite: a computer that includes a processor and a memory, the memory including instructions executable by the processor to… (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); receiving an image (insignificant extra-solution activity as discussed in MPEP 2106.05(g)).

Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with implementing the abstract idea using generic computer components.

Step 2b (Does the claim recite additional elements that amount to significantly more than the judicial exception?): No. Relying on the same analysis as Step 2a Prong Two (see MPEP 2106.05.I.A: Limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include: …Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP 2106.05(f)); …Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception…; Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g); …), claims 1 and 13 do not recite any additional elements that amount to significantly more than the abstract idea.
As discussed above, claims 1 and 13 additionally recite: a computer that includes a processor and a memory, the memory including instructions executable by the processor to… (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); receiving an image (insignificant extra-solution activity as discussed in MPEP 2106.05(g), reevaluated as including well-understood, routine, conventional activity such as receiving or transmitting data over a network as discussed in MPEP 2106.05(d)). The additional elements as discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity combined with generic computer functions and components used to implement the abstract idea.

Regarding dependent claims 2 and 14: Step 2a Prong One: incorporates the rejection of claims 1 and 13. The claims additionally recite wherein determining weights includes backpropagating the sum of the first loss function and the second loss function to layers of the first neural network while varying the weights (a recitation of a mathematical calculation/concept). Step 2a Prong Two: the claims do not recite any additional limitations. Step 2b: the claims do not recite any additional limitations.

Regarding dependent claims 3 and 15: Step 2a Prong One: incorporates the rejection of claims 1 and 13. Step 2a Prong Two: the claims additionally recite wherein the image received in the first neural network is a video image, and the second neural network receives a second video image and one or more of a thermal infrared image or a gated infrared image (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and field of use and technological environment as discussed in MPEP 2106.05(h)). Step 2b: the claims additionally recite wherein the image received in the first neural network is a video image, and the second neural network receives a second video image and one or more of a thermal infrared image or a gated infrared image (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and field of use and technological environment as discussed in MPEP 2106.05(h)).

Regarding dependent claims 4 and 16: Step 2a Prong One: incorporates the rejection of claims 1 and 13. Step 2a Prong Two: the claims additionally recite wherein the first neural network includes a first backbone that includes one or more convolutional layers and a first head that includes one or more fully connected layers (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)). Step 2b: the claims additionally recite wherein the first neural network includes a first backbone that includes one or more convolutional layers and a first head that includes one or more fully connected layers (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Regarding dependent claims 5 and 17: Step 2a Prong One: incorporates the rejection of claims 4 and 16. Step 2a Prong Two: the claims additionally recite wherein the second neural network includes a second backbone that includes one or more convolutional layers and a second head that includes one or more fully connected layers (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)). Step 2b: the claims additionally recite wherein the second neural network includes a second backbone that includes one or more convolutional layers and a second head that includes one or more fully connected layers (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).
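Claims 4-6 and 16-18 recite a backbone/head split: convolutional layers producing intermediate features, and fully connected layers turning those features into a prediction. As a reading aid only, the structure can be sketched as follows; the toy operations (a 1-D moving average standing in for the convolutional backbone, a weighted sum standing in for the fully connected head) and all names are hypothetical illustrations, not the applicant's or any reference's implementation.

```python
# Toy stand-in for the backbone/head split recited in claims 4-6.
# ASSUMPTIONS: the moving average ("convolutional backbone") and the
# weighted sum ("fully connected head") are invented illustrations.

def backbone(image: list[float]) -> list[float]:
    # Width-3 moving average as a toy convolutional layer producing features
    return [(image[i] + image[i + 1] + image[i + 2]) / 3.0
            for i in range(len(image) - 2)]

def head(features: list[float], weights: list[float]) -> float:
    # Weighted sum as a toy fully connected layer producing a prediction
    return sum(f * w for f, w in zip(features, weights))

image = [1.0, 2.0, 3.0, 4.0, 5.0]
features = backbone(image)   # per claims 6/18, features are taken from the backbone
prediction = head(features, [0.5, 0.25, 0.25])
assert features == [2.0, 3.0, 4.0]
```

The claim 6/18 limitation that the "first features are output by the first backbone" corresponds to reading `features` at the backbone/head boundary rather than at the final output.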
Regarding dependent claims 6 and 18: Step 2a Prong One: incorporates the rejection of claims 5 and 17. Step 2a Prong Two: the claims additionally recite wherein the first features are output by the first backbone of the first neural network and the second features are output by the second backbone of the second neural network (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)). Step 2b: the claims additionally recite wherein the first features are output by the first backbone of the first neural network and the second features are output by the second backbone of the second neural network (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Regarding dependent claims 7 and 19: Step 2a Prong One: incorporates the rejection of claims 1 and 13; the claims further recite wherein a first location at which the first features are output by the first neural network and a second location at which the second features are output by the second neural network are determined by comparing rates at which the sum of the first loss function and second loss function converges on a minimal value (a mental process of evaluation, such as a human mentally determining locations at which features are output by neural networks (such as during design of the networks) based on a mental comparison of loss function convergence rates). Step 2a Prong Two: the claims do not recite any additional limitations. Step 2b: the claims do not recite any additional limitations.
Regarding dependent claims 8 and 20: Step 2a Prong One: incorporates the rejection of claims 1 and 13; the claims additionally recite wherein determining the first and second locations includes determining a rate at which the sum of the first loss function and the second loss function is minimized (a mental process of evaluation/determination, such as a human mentally determining locations by determining a rate at which loss functions are minimized). Step 2a Prong Two: the claims do not recite any additional limitations. Step 2b: the claims do not recite any additional limitations.

Regarding dependent claim 9: Step 2a Prong One: incorporates the rejection of claim 1; the claim additionally recites wherein the first loss function is determined by determining a mean square error between the first features and the second features (a recitation of a mathematical formula/concept; see, e.g., paragraph 0017 of the specification of the instant application, providing the mathematical equation utilized to determine the mean square error). Step 2a Prong Two: the claim does not recite any additional limitations. Step 2b: the claim does not recite any additional limitations.

Regarding dependent claim 10: Step 2a Prong One: incorporates the rejection of claim 1. The claim additionally recites wherein the first loss function is determined by a binary classifier that determines binary cross entropy between the first features and the second features (a recitation of a mathematical calculation/concept). Step 2a Prong Two: the claim recites no additional limitations. Step 2b: the claim recites no additional limitations.

Regarding dependent claim 11: Step 2a Prong One: incorporates the rejection of claim 1. Step 2a Prong Two: the claim additionally recites wherein the trained second neural network is output to a second computing system included in a vehicle (insignificant extra-solution activity as discussed in MPEP 2106.05(g)). Step 2b: the claim additionally recites wherein the trained second neural network is output to a second computing system included in a vehicle (insignificant extra-solution activity as discussed in MPEP 2106.05(g), reevaluated as including well-understood, routine, conventional activity such as receiving or transmitting data over a network as discussed in MPEP 2106.05(d)).

Regarding dependent claim 12: Step 2a Prong One: incorporates the rejection of claim 1. Step 2a Prong Two: the claim additionally recites wherein the trained second neural network is used to operate the vehicle (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)). Step 2b: the claim additionally recites wherein the trained second neural network is used to operate the vehicle (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and field of use and technological environment as discussed in MPEP 2106.05(h)).

Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements as recited in the dependent claims discussed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity, combined with implementing the abstract idea using generic computer components, and limitations describing a field of use or technological environment. The additional elements as discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity combined with generic computer functions and components used to implement the abstract idea, and limitations describing a field of use or technological environment.
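The training scheme recited in the independent claims (weights determined by minimizing the sum of a feature-comparison loss and a ground-truth loss) can be illustrated with a deliberately tiny numerical sketch. Everything here is a hypothetical stand-in: a one-weight "network", scalar features, and plain gradient descent in place of full backpropagation. It shows only how two loss terms are summed and jointly minimized, not how the claimed networks are actually built.

```python
# Toy illustration of the claimed objective: minimize first_loss + second_loss.
# ASSUMPTIONS: one-weight model, scalar "features", analytic gradient descent.

def first_loss(student_feat: float, teacher_feat: float) -> float:
    # Feature-comparison term (squared error; cf. the mean-square-error of claim 9)
    return (student_feat - teacher_feat) ** 2

def second_loss(prediction: float, ground_truth: float) -> float:
    # Supervised term comparing the first prediction to ground truth
    return (prediction - ground_truth) ** 2

def total_loss(w: float, x: float, teacher_feat: float, ground_truth: float) -> float:
    feat = w * x   # stand-in for intermediate features of the first network
    pred = feat    # stand-in for the first prediction
    return first_loss(feat, teacher_feat) + second_loss(pred, ground_truth)

def train(w: float, x: float, teacher_feat: float, ground_truth: float,
          lr: float = 0.05, steps: int = 200) -> float:
    # Gradient descent on the summed loss; the analytic gradient is
    # d/dw [(wx - t)^2 + (wx - g)^2] = 2(wx - t)x + 2(wx - g)x
    for _ in range(steps):
        feat = w * x
        w -= lr * (2 * (feat - teacher_feat) * x + 2 * (feat - ground_truth) * x)
    return w

w_trained = train(0.0, x=1.0, teacher_feat=2.0, ground_truth=2.2)
# Training drives the summed loss below its initial value.
assert total_loss(w_trained, 1.0, 2.0, 2.2) < total_loss(0.0, 1.0, 2.0, 2.2)
```

Because both terms are quadratic in the same weight, the minimizer lands between the teacher feature and the ground truth, which is the balancing effect the summed objective is meant to achieve.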
Claim Rejections – 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 9, and 11-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kumar et al. (US 20230042750 A1).

With respect to claims 1 and 13, Kumar teaches a system, comprising: a computer that includes a processor and a memory, the memory including instructions executable by the processor to perform a method (e.g. paragraph 0016, hardware processor executing code/instructions; memory storing data and the code); and the method, comprising: receiving an image in a first neural network that outputs a first prediction based on the image (e.g. paragraphs 0077-0078, teacher network making scaled depth predictions based on input images; encoding input image into depth prediction; paragraph 0081, student network receiving monocular image/images and producing depth prediction; i.e. where a neural network, such as a student neural network, is configured to receive images and produce/output corresponding depth predictions), wherein weights applied to layers in the first neural network are determined by minimizing a sum of a first loss function and a second loss function (e.g. paragraph 0079, adjusting weights and biases of network by minimizing gradient of loss function with respect to each weight and bias; paragraph 0089, image consistency loss 832 and scale loss 834 combined by weighted sum to generate total loss function 836; paragraph 0090, identifying parameters for which total loss function 836 is minimized for particular input image; paragraph 0091, calculating gradient of total loss function with respect to parameters of each node in network, and altering parameters to minimize the value of the gradient; i.e. where parameters of the student neural network are determined/altered/adjusted based on minimizing a total loss function, which is a sum of image consistency loss and scale loss (first and second loss functions); further, as described in paragraph 0079, the parameters adjusted according to loss functions in neural networks include adjusting/determining weights and biases of the layers of the neural network, such that the determining/adjusting of the parameters of the student network by minimizing the sum of first and second loss functions includes determining/adjusting of at least weights applied to layers of the student neural network); wherein the first loss function is determined from first features determined in the first neural network trained to output a first prediction and from second features determined in a second neural network trained to output a second prediction (e.g. paragraph 0081, student network 804 produces depth prediction 822; paragraph 0083, teacher network 806B generates scaled depth prediction 810B; paragraph 0088, comparing depth predictions 810B and 822 to generate scale loss 834); wherein the second loss function is determined based on comparing the first prediction to ground truth (e.g. paragraph 0077, teacher network trained with ground truth LiDAR supervision to make scaled depth predictions based on input images; output of frozen teacher network then used to train student network; paragraph 0078, network trained using LiDAR data supervision, by minimizing loss function between predicted depth and ground truth data; paragraph 0081, output of teacher network 802 used to train student network 804; paragraph 0083, network 804 trained to generate scaled depth predictions utilizing outputs from teacher network; teacher network generates scaled depth prediction given input image; paragraph 0084, output of depth prediction network 804 compared to output of networks 806B and 824 (i.e. including at least outputs of the teacher network), and network 804 is then altered based on this comparison; paragraph 0086, utilizing pose prediction 828 (i.e. generated by separate pose prediction network) and depth prediction 822 (i.e. of student network) to recreate input image and comparing recreated image to calculate image consistency loss 832; i.e. where the first and second loss functions are the image consistency loss and scale loss as cited above, and these are summed to generate a total loss function; one of these loss functions, such as the image consistency loss, is analogous to the second loss function as recited in the claims, and this second loss function is determined based on a comparison of a prediction to ground truth, such as a comparison of a recreated image (i.e. a prediction of the neural network) to a ground truth image (i.e. the original input image or other image serving as ground truth data) and/or a comparison of an output/prediction of the student network (i.e. such as a depth prediction of the student neural network) with ground truth data (i.e. such as an output of a separate neural network such as a depth prediction of the teacher network, which is ultimately based upon additional ground truth sensor data and/or pose predictions of another network, as described in paragraph 0085)); and outputting the first prediction (e.g. paragraphs 0077-0078, teacher network making scaled depth predictions based on input images; encoding input image into depth prediction; paragraph 0081, student network receiving monocular image/images and producing depth prediction).

With respect to claims 2 and 14, Kumar teaches all of the limitations of claims 1 and 13 as previously discussed, and further teaches wherein determining weights includes backpropagating the sum of the first loss function and the second loss function to layers of the first neural network while varying the weights (e.g. paragraph 0077, backpropagation of error/loss functions for training teacher network/student network; paragraph 0090, backpropagation of total loss function 836 into student network; paragraph 0091, altering values of network parameters in order to minimize value of gradient of total loss function).

With respect to claim 9, Kumar teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the first loss function is determined by determining a mean square error between the first features and the second features (e.g. paragraph 0088, mean squared error function utilized to calculate image consistency loss 832 and scale loss 834).

With respect to claim 11, Kumar teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the trained second neural network is output to a second computing system included in a vehicle (e.g.
paragraph 0047, vehicle computer includes depth perception model 220; paragraph 0049, depth perception model 220 received from vehicle communications server and utilized by vehicle; paragraph 0060, teacher vehicle data utilized to generate model 220; paragraph 0064, trained student network (which may comprise a portion of model 220) uploaded to vehicle and used to predict depth information in scenes captured by single camera of student vehicle; paragraph 0065, periodically updating and uploading updated network to vehicle; i.e. both student and teacher models may be provided for use in a vehicle).

With respect to claim 12, Kumar teaches all of the limitations of claim 11 as previously discussed, and further teaches wherein the trained second neural network is used to operate the vehicle (e.g. paragraph 0057, control module of vehicle using depth perception model 220 to determine steering for moving vehicle, etc.; paragraph 0064, following upload, student model utilized to predict depth information in scenes captured by single camera of student vehicle; i.e. once provided, the trained model is utilized to operate the vehicle).

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kumar in view of O’Connor (US 20230073669 A1).

With respect to claim 10, Kumar teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the first loss function may be determined using a variety of different error functions (e.g. paragraph 0088, any applicable error function can be utilized). Kumar does not explicitly disclose wherein the first loss function is determined by a binary classifier that determines binary cross entropy between the first features and the second features. However, O’Connor teaches wherein the first loss function is determined by a binary classifier that determines binary cross entropy between the first features and the second features (e.g.
paragraph 0024, various types of neural networks used to perform classification of image data; paragraphs 0039-0045, image classification neural network; previously trained neural network generating image classifications; previously trained neural network considered to represent teacher network; student neural network; second data input to previously trained neural network and also into student neural network; previously trained neural network generates reference output data, i.e. image classifications, and student neural network generates second output data, i.e. image classifications; difference between reference output data and second output data computed, such as using binary cross-entropy loss; i.e. a classification network determines a binary (i.e. where the classifier is therefore binary) cross-entropy loss corresponding to a classification difference between outputs of two neural networks (and corresponding features classified by the networks), such as between teacher and student networks).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Kumar and O’Connor in front of him, to have modified the teachings of Kumar (directed to training a self-supervised ego vehicle) to incorporate the teachings of O’Connor (directed to optimizing a neural network) to include the capability to utilize a classification network/classifier to determine the loss function (i.e. loss/difference between outputs of two networks according to corresponding features) using binary cross entropy (i.e. where the classifier is therefore a binary classifier). One of ordinary skill would have been motivated to perform such a modification in order to optimize a neural network as described in O’Connor (paragraph 0009).

Claims 7, 8, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar in view of Mavroeidis et al. (US 20200251224 A1).
With respect to claims 7 and 19, Kumar teaches all of the limitations of claims 1 and 13 as previously discussed. Kumar does not explicitly disclose wherein a first location at which the first features are output by the first neural network and a second location at which the second features are output by the second neural network are determined by comparing rates at which the sum of the first loss function and second loss function converges on a minimal value. However, Mavroeidis teaches wherein a first location at which the first features are output by the first neural network and a second location at which the second features are output by the second neural network are determined by comparing rates at which the sum of the first loss function and second loss function converges on a minimal value (e.g. paragraph 0090, deep learning algorithm comprises input layer, output layer, and plurality of hidden layers; paragraph 0095, computing convergence rate of loss function of deep learning algorithm and selecting number of hidden layers based on the convergence rate; paragraph 0098, training several different architecture choices and using convergence rate of the loss function as criterion to select the optimal one, with faster convergence signifying better architecture; paragraph 0117, neural network of Fig. 2 comprising input layer 250, plurality of hidden layers 260, and output layer 270; i.e. where loss function convergence rates of different architecture configurations, such as a number of hidden layers, of a neural network are compared in order to find an optimal one, where this may include varying a number of hidden layers between an input layer and an output layer of a neural network, such that the location of the output layer with respect to the input layer within the architecture (and therefore the location at which features of the network are output) is determined based on the analysis/comparison of the convergence rates of the loss functions for the different architectures).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Kumar and Mavroeidis in front of him, to have modified the teachings of Kumar (directed to training a self-supervised ego vehicle) to incorporate the teachings of Mavroeidis (directed to evaluating input data using a deep learning algorithm) to include the capability to determine an output location for neural network features (i.e. the location of an output layer of the neural network within the network’s architecture) based on an analysis/comparison of loss function convergence rates for a given set of neural network architectures (i.e. including first and second networks as taught by Kumar). One of ordinary skill would have been motivated to perform such a modification in order to train a deep learning algorithm to produce consistently accurate results in the absence of a large set of labelled training data, and to obtain user input regarding progress of a deep learning algorithm without requiring extensive effort on the user’s part, as described in Mavroeidis (paragraph 0005).
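The architecture-selection criterion attributed to Mavroeidis above (train several candidate architectures, compare how fast each one's loss converges, keep the fastest) can be sketched as follows. The candidate names and the geometric loss curves are invented stand-ins for real training runs; only the compare-convergence-rates logic mirrors the cited teaching.

```python
# Toy illustration of convergence-rate-based architecture selection.
# ASSUMPTIONS: candidate names and geometric loss decay are hypothetical.

def convergence_rate(losses: list[float]) -> float:
    # Average per-step decay ratio of the loss; smaller means faster convergence
    ratios = [losses[i + 1] / losses[i] for i in range(len(losses) - 1)]
    return sum(ratios) / len(ratios)

def simulate_training(decay: float, steps: int = 10, start: float = 1.0) -> list[float]:
    # Stand-in for an actual training run: loss decays geometrically toward zero
    return [start * decay ** t for t in range(steps)]

candidates = {
    "two_hidden_layers": simulate_training(decay=0.9),   # slower convergence
    "four_hidden_layers": simulate_training(decay=0.7),  # faster convergence
}
# Select the architecture whose loss converges fastest (smallest decay ratio)
best = min(candidates, key=lambda name: convergence_rate(candidates[name]))
assert best == "four_hidden_layers"
```

In the combination asserted by the examiner, the loss curve fed to such a comparison would be Kumar's total loss (the sum of the first and second loss functions) rather than a single-term loss.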
With respect to claims 8 and 20, Kumar in view of Mevroeidis teaches all of the limitations of claims 7 and 19 as previously discussed, and Mevroeidis further teaches wherein determining the first and second locations includes determining a rate at which the sum of the first loss function and the second loss function is minimized ( e.g. paragraph 0090, deep learning algorithm comprises input layer, output layer, and plurality of hidden layers; paragraph 0095, computing convergence rate of loss function of deep learning algorithm and selecting number of hidden layers based on the convergence rate; paragraph 0098, training several different architecture choices and using convergence rate of the loss function as criterion to select the optimal one, with f aster convergence signifying better architecture ; paragraph 0117, neural network of Fig. 2 comprising input layer 250, plurality of hidden layers 260, and output layer 270; i.e. where loss function convergence rates of different architecture configurations, such as a number of hidden layers, of a neural network are compared in order to find an optimal one, where this may include varying a number of hidden layers between an input layer and an output layer of a neural network, such that the location of the output layer with respect to the input layer within the architecture (and therefore location at which features of the network are output) is determined based on the analysis/comparison of the convergence rates of the loss functions for the different architectures, where these convergence rates indicate a rate at which the loss functions are minimized; therefore, it is understood that a determination of a given convergence rate of a loss function is also indicative of a determination of a rate at which the loss function is minimized ) . 
Although Mevroeidis does not explicitly disclose that its loss functions are a sum of a first loss function and a second loss function, Kumar already cites a loss function which is a sum of a first loss function and a second loss function (as cited above). Therefore, when the teachings of Mevroeidis are applied to those of Kumar, the loss function convergence rate/rate of minimization determination as taught by Mevroeidis would be applied to the total loss function of Kumar (which is a sum of a first loss function and a second loss function). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kumar and Mevroeidis in front of him to have modified the teachings of Kumar (directed to training a self-supervised ego vehicle) to incorporate the teachings of Mevroeidis (directed to evaluating input data using a deep learning algorithm) to include the capability to determine an output location for neural network features (i.e. the location of an output layer of the neural network within the network's architecture) based on an analysis/comparison of loss function convergence rates for a given set of neural network architectures (i.e. including first and second networks as taught by Kumar). One of ordinary skill would have been motivated to perform such a modification in order to train a deep learning algorithm to produce consistently accurate results in the absence of a large set of labelled training data, and to obtain user input regarding progress of a deep learning algorithm without requiring extensive effort on the user's part as described in Mevroeidis (paragraph 0005).

Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar in view of Nagauker et al. (US 20220392054 A1).
With respect to claims 3 and 15, Kumar teaches all of the limitations of claims 1 and 13 as previously discussed, and further teaches wherein the image received in the first neural network is a video image and the second neural network receives a second video image (e.g. paragraph 0046, obtaining video/image data and utilizing it to generate and train models for use in autonomous vehicles; paragraph 0063, student network trained using video data; paragraph 0065, using video data to further fine tune networks; paragraph 0068, image sourced from database of video data; paragraph 0083, receiving monocular images such as frames of video data). Kumar does not explicitly disclose that the second neural network receives one or more of a thermal infrared image or a gated infrared image. However, Nagauker teaches that the second neural network receives one or more of a thermal infrared image or a gated infrared image (e.g. paragraph 0065, neural network for performing target detection on images provided with infrared images, including thermal infrared images). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kumar and Nagauker in front of him to have modified the teachings of Kumar (directed to training a self-supervised ego vehicle) to incorporate the teachings of Nagauker (directed to testing analytics of imaging systems) to include the capability for the second neural network to receive thermal infrared images (as taught by Nagauker). One of ordinary skill would have been motivated to perform such a modification in order to facilitate testing analytics of imaging systems, including vehicle imaging systems as described in Nagauker (abstract, paragraph 0031).

Claims 4-6 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar in view of Chen et al. (US 20230186436 A1).
With respect to claims 4 and 16, Kumar teaches all of the limitations of claims 1 and 13 as previously discussed, and further teaches wherein the first neural network includes a first backbone that includes one or more convolutional layers (e.g. paragraph 0078, network includes backbone network having deformable convolutional layers). Kumar does not explicitly disclose that the first neural network includes a first head that includes one or more fully connected layers. However, Chen teaches that the first neural network includes a first head that includes one or more fully connected layers (e.g. paragraph 0072, projection head of model including fully connected layers; paragraph 0075, image inputted to backbone network to obtain feature maps, feature maps inputted into projection head to obtain feature vectors). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kumar and Chen in front of him to have modified the teachings of Kumar (directed to training a self-supervised ego vehicle) to incorporate the teachings of Chen (directed to fine-grained detection of driver distraction based on unsupervised learning) to include the capability for the first neural network to include a head with fully connected layers (as taught by Chen). One of ordinary skill would have been motivated to perform such a modification in order to overcome deficiencies in prior art solutions, such as CNNs which achieve strong local perception but weak global perception, which results in difficulties in various scenarios including characterization of driving states of drivers, as described in Chen (paragraphs 0007-0009).

With respect to claims 5 and 17, Kumar teaches all of the limitations of claims 4 and 16 as previously discussed, and further teaches wherein the second neural network includes a second backbone that includes one or more convolutional layers (e.g.
paragraph 0064, trained student network comprises at least a portion of model 220; paragraph 0078, network includes backbone network having deformable convolutional layers; i.e. both the teacher and student models may include the same portions, including the backbone network and its convolutional layers). Kumar does not explicitly disclose that the second neural network includes a second head that includes one or more fully connected layers. However, Chen teaches that the second neural network includes a second head that includes one or more fully connected layers (e.g. paragraph 0072, projection head of model including fully connected layers; paragraph 0075, image inputted to backbone network to obtain feature maps, feature maps inputted into projection head to obtain feature vectors). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Kumar and Chen in front of him to have modified the teachings of Kumar (directed to training a self-supervised ego vehicle) to incorporate the teachings of Chen (directed to fine-grained detection of driver distraction based on unsupervised learning) to include the capability for the second neural network to include a head with fully connected layers (as taught by Chen). One of ordinary skill would have been motivated to perform such a modification in order to overcome deficiencies in prior art solutions, such as CNNs which achieve strong local perception but weak global perception, which results in difficulties in various scenarios including characterization of driving states of drivers, as described in Chen (paragraphs 0007-0009).
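As a purely structural sketch of the backbone/head arrangement mapped above (the class, layer stand-ins, and counts below are hypothetical and not drawn from Kumar or Chen):

```python
# Toy stand-in: each network pairs a convolutional "backbone" with a fully
# connected "head". Layers are represented as strings, not real framework
# layers, to show only the claimed structure.
class Network:
    def __init__(self, n_conv, n_fc):
        self.backbone = ["conv"] * n_conv  # one or more convolutional layers
        self.head = ["fc"] * n_fc          # one or more fully connected layers

    def backbone_features(self, image):
        # In the cited mapping, the features compared by the loss function
        # are taken here, at the backbone output, before the head is applied.
        return ("features", image)

first = Network(n_conv=3, n_fc=2)   # first neural network (claims 4 and 16)
second = Network(n_conv=3, n_fc=2)  # second neural network (claims 5 and 17)
```

The two networks share the same structure, consistent with the examiner's reading that teacher and student models may include the same portions.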
With respect to claims 6 and 18, Kumar teaches all of the limitations of claims 5 and 17 as previously discussed, and further teaches wherein the first features are output by the first backbone of the first neural network and the second features are output by the second backbone of the second neural network (e.g. paragraph 0064, trained student network comprises at least a portion of model 220; paragraph 0078, backbone network encodes input image into depth prediction; i.e. both the teacher and student models may include the same portions, including backbone networks, which each output the features/predictions utilized in the loss function).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. "The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain." In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY whose telephone number is (469) 295-9105. The examiner can normally be reached on Monday-Friday from 9:00 AM to 5:00 PM CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at telephone number (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/JEREMY L STANLEY/
Primary Examiner, Art Unit 2127

Prosecution Timeline

May 24, 2023
Application Filed
Feb 21, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591827
ETHICAL CONFIDENCE FABRICS: MEASURING ETHICAL ALGORITHM DEVELOPMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12580783
CONFIGURING 360-DEGREE VIDEO WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12572266
ACCESSING AND DISPLAYING INFORMATION CORRESPONDING TO PAST TIMES AND FUTURE TIMES
2y 5m to grant Granted Mar 10, 2026
Patent 12561041
Systems, Methods, and Graphical User Interfaces for Interacting with Virtual Reality Environments
2y 5m to grant Granted Feb 24, 2026
Patent 12555684
ASSESSING A TREATMENT SERVICE BASED ON A MEASURE OF TRUST DYNAMICS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
48%
Grant Probability
92%
With Interview (+44.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
