Prosecution Insights
Last updated: April 19, 2026
Application No. 17/868,267

SYSTEM AND METHOD FOR TEST-TIME ADAPTATION VIA CONJUGATE PSEUDOLABELS

Non-Final OA (§103)

Filed: Jul 19, 2022
Examiner: SITIRICHE, LUIS A
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Carnegie Mellon University
OA Round: 3 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78%, above average (363 granted / 468 resolved; +22.6% vs TC avg)
Interview Lift: +22.1% higher allowance among resolved cases with an interview
Typical Timeline: 3y 7m average prosecution; 24 applications currently pending
Career History: 492 total applications across all art units

Statute-Specific Performance

§101: 24.2% (-15.8% vs TC avg)
§103: 39.1% (-0.9% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 468 resolved cases.

Office Action

§103
DETAILED ACTION

Claims 1, 4-5, 7-8, 11-12, 14-16 are amended. Claims 3, 10, 17 and 20 are cancelled. Claim 21 is newly added. Claims 1-2, 4-9, 11-16, 18-19, 21 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/15/2025 has been entered.

Response to Arguments

Applicant's arguments filed on 08/15/2025 have been fully considered.

- Claim rejections under 35 USC 101. Examiner's response: The rejections are withdrawn in view of the amendments and applicant's arguments.
- Claim rejections under 35 USC 102 and 103. Examiner's response: These arguments have been fully considered, but are moot in view of the new grounds of rejection.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 4-5, 7, 15-16, 19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Caine et al. (NPL: "Pseudo-labeling for Scalable 3D Object Detection"; hereinafter "Caine") in view of Murez et al. (US 2019/0244107; hereinafter "Murez").
Regarding Claim 1, Caine teaches:

"A computer-implemented method for adapting a machine learning system that is trained with training data in a first domain to operate with sensor data in a second domain, the computer-implemented method comprising:" (Caine at Abstract: "Lastly, we show that these student models generalize better than supervised models to a new domain in which we only have unlabeled data, making pseudo-label training an effective form of unsupervised domain adaptation");

"obtaining the sensor data from the second domain via one or more sensors;" (Caine at section 3.1 Data Setup: "The Waymo Open Dataset [44] is organized as a collection of run segments. Each run segment is a 200 frame sequence of LiDAR and camera data collected at 10Hz". Further at section 3.4 Pseudo-Label Training: "Once we train the teacher, we select the best teacher model based on validation set performance on the Waymo Open Dataset and use to pseudo-label the unlabeled run segments. Next, we train a student model on the same labeled data the teacher saw, plus all the pseudo-labeled run segments. The mixing ratio of labeled to pseudo-labeled data is determined by the percentage of data the teacher was trained on") [The unlabeled run segments are the second domain, as they are used to train the student model for a new domain using data from LiDAR and camera, which are the sensor data];

"generating, via the machine learning system, prediction data based on the sensor data;" (Caine at sections 3.1 and 3.4, quoted above) [The student model trained for the new domain on LiDAR and camera data, being the sensor data, generates the prediction data based on the sensor data];

"generating pseudo-reference data as a conjugate pseudo-label for the sensor data, the conjugate pseudo-label being generated via a gradient of a predetermined loss function evaluated with the prediction data, the conjugate pseudo-label being generated with respect to the sensor data that is unlabeled in the second domain using the same predetermined loss function that was used to train the machine learning system with the training data in the first domain;" (Caine at Abstract: "We demonstrate that pseudo-labeling for 3D object detection is an effective way to exploit less expensive and more widely available unlabeled data, and can lead to performance gains across various architectures, data augmentation strategies, and sizes of the labeled dataset". Further at p. 3, Figure 2: "We use the teacher to pseudo-label all unseen run segments, and then train a student on the union of labeled and pseudo-labeled run segments. Finally, we evaluate both teacher and student models on the original Waymo Open Dataset and Kirkland validation splits") [After pseudo-labeling, both models are validated using the same original validation splits, interpreted as the same predetermined loss function].
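As context for the "conjugate pseudo-label" language: for training losses expressible in the form l(h, y) = f(h) - y.h, where h is the logit vector (cross-entropy corresponds to f(h) = logsumexp(h)), the conjugate pseudo-label is the gradient of f evaluated at the model's prediction. A minimal sketch, assuming cross-entropy as the predetermined loss; the function names are illustrative and not drawn from the application or the cited references:

```python
import numpy as np

def logsumexp(h):
    # f(h) = log sum_i exp(h_i): the convex part of cross-entropy when it
    # is written as l(h, y) = f(h) - y . h
    m = h.max()
    return m + np.log(np.exp(h - m).sum())

def conjugate_pseudo_label(h):
    # The conjugate pseudo-label is grad f evaluated at the logits h.
    # For cross-entropy, grad f(h) = softmax(h): the model's own soft
    # prediction on the unlabeled second-domain sample.
    e = np.exp(h - h.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])  # prediction data for one unlabeled sample
pl = conjugate_pseudo_label(logits)  # soft pseudo-reference data, sums to 1
```

A finite-difference check confirms that this pseudo-label is the gradient of f, matching the claim phrase "generated via a gradient of a predetermined loss function."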
However, Caine fails to teach:

"generating pseudo-reference data as a conjugate pseudo-label for the sensor data, the conjugate pseudo-label being generated via a gradient of a predetermined loss function evaluated with the prediction data, the conjugate pseudo-label being generated with respect to the sensor data that is unlabeled in the second domain using the same predetermined loss function that was used to train the machine learning system with the training data in the first domain;"
"generating loss data based on the pseudo-reference data and the prediction data;"
"updating parameter data of the machine learning system based on the loss data;"
"performing, via the machine learning system, a task in the second domain after the parameter data has been updated; and"
"controlling an actuator based on the task performed in the second domain."

Murez teaches, in an analogous system:

"generating pseudo-reference data as a conjugate pseudo-label for the sensor data, the conjugate pseudo-label being generated via a gradient of a predetermined loss function evaluated with the prediction data, the conjugate pseudo-label being generated with respect to the sensor data that is unlabeled in the second domain using the same predetermined loss function that was used to train the machine learning system with the training data in the first domain;" (see Murez at [0071], equation image [not reproduced in this extract]);

"generating loss data based on the pseudo-reference data and the prediction data;" (see Murez at [0071], equation image [not reproduced in this extract]);

"updating parameter data of the machine learning system based on the loss data;" (see Murez at [0071]: "The above general loss function is then optimized via the Stochastic Gradient Descent (SGD) method with adaptive learning rate, in an end-to-end manner. FIG. 4 shows the pathways for each loss function defined above") [The optimization with an adaptive learning rate corresponds to updating the parameter data based on the loss];

"performing, via the machine learning system, a task in the second domain after the parameter data has been updated; and" (see Murez at [0077]: "The annotations for the target image domain obtained by the TS.sup.2 framework can be used for detection and recognition of objects, such as vehicles, pedestrians, and traffic signs, under different weather conditions (e.g., rain, snow, fog) and lighting conditions (e.g., low light, bright light). Thus, the annotations can then be utilized to cause an automatic operation related to controlling a component of the autonomous vehicle") [The detection of objects under different weather conditions is interpreted as the task performed in the second domain];

"controlling an actuator based on the task performed in the second domain." (see Murez at [0077], quoted above) [The control of a component of the autonomous vehicle is interpreted as the actuator being controlled].

Caine and Murez are analogous arts in the field of machine learning, in the field of endeavor of domain adaptation for autonomous vehicles.
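The recited sequence (predict, generate a pseudo-label, compute loss data, update parameter data) can be sketched end to end. A minimal illustration with a toy linear classifier, assuming cross-entropy as the predetermined loss, for which the conjugate pseudo-label objective is known to reduce to minimizing the entropy of the model's softmax output; all names and values are illustrative, not from the record:

```python
import numpy as np

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

def entropy(p):
    # For cross-entropy training, the conjugate pseudo-label loss on an
    # unlabeled sample reduces to the Shannon entropy of the prediction.
    return -(p * np.log(p)).sum()

def adapt_step(W, x, lr=0.1):
    # One test-time update: predict, form the conjugate loss, and take a
    # gradient step on the parameter matrix W.
    h = W @ x                        # prediction data (logits)
    p = softmax(h)
    # Analytic gradient of H(softmax(Wx)) with respect to the logits:
    grad_h = -p * (np.log(p) + entropy(p))
    W -= lr * np.outer(grad_h, x)    # update parameter data
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))          # "source-trained" parameters (toy)
x = rng.normal(size=4)               # one unlabeled second-domain sample
before = entropy(softmax(W @ x))
for _ in range(20):
    W = adapt_step(W, x)
after = entropy(softmax(W @ x))      # entropy drops as predictions sharpen
```

After adaptation, the sharpened model would perform its task (here, classification) in the second domain, and the resulting output could drive downstream control.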
A person having ordinary skill in the art, before the effective filing date of the present application, would have been motivated to modify Caine with Murez to control an actuator based on the task performed in the second domain, the motivation being (Murez at [0076]): "[t]he invention according to embodiments of the present disclosure is of particular value to fully autonomous navigation systems for vehicle manufacturers. TS2 will significantly reduce the amount of annotated real-world training data needed to train their perception and sensing algorithms. Furthermore, thanks to its domain agnostic feature extraction capability, TS2 produces more robust results when navigating in novel or unseen conditions, such as a new city or in rare weather conditions (e.g., snow, fog, rain)."

Regarding Claim 2, Caine in view of Murez teaches "The computer-implemented method of claim 1," and Caine further teaches "wherein the machine learning system is a classifier configured to perform the task of generating output data that classifies input data." (Caine at p. 2, left column: "We show pseudo-labeling is extremely effective for 3D object detection, and provide a systematic analysis of how to maximize its performance benefits". Further at p. 3, left column: "We focus on one of the open challenges for the Waymo Open Dataset 2: accurate 3D detection in a new city (Kirkland) with changing environmental conditions (rain) and limited human-labeled data").

Regarding Claim 4, Caine in view of Murez teaches "The computer-implemented method of claim 1," and Murez further recites "wherein the predetermined loss function relates to the task performed by the machine learning system." (see Murez at [0071]: "The above general loss function is then optimized via the Stochastic Gradient Descent (SGD) method with adaptive learning rate, in an end-to-end manner. FIG. 4 shows the pathways for each loss function defined above". Further, Murez at [0077]: "The annotations for the target image domain obtained by the TS.sup.2 framework can be used for detection and recognition of objects, such as vehicles, pedestrians, and traffic signs, under different weather conditions (e.g., rain, snow, fog) and lighting conditions (e.g., low light, bright light). Thus, the annotations can then be utilized to cause an automatic operation related to controlling a component of the autonomous vehicle") [The models optimized by Murez are directed to autonomous vehicles, and the loss function is therefore related to this task].

Caine and Murez are analogous arts in the field of machine learning, in the field of endeavor of domain adaptation for autonomous vehicles. A person having ordinary skill in the art, before the effective filing date of the present application, would have been motivated to modify Caine with Murez for the reasons given for claim 1 (Murez at [0076]).

Regarding Claim 5, Caine in view of Murez teaches "The computer-implemented method of claim 4," and Murez recites "wherein the predetermined loss function is a cross-entropy loss function, a squared loss function, a hinge loss function, a tangent loss function, a polyloss function, or a logistic loss function." (Murez at [0013]: "A cross entropy loss function that is defined as a number of correct classifications of the discriminator is optimized") [The loss function comprises a cross-entropy loss function].

A person having ordinary skill in the art, before the effective filing date of the present application, would have been motivated to modify Caine with Murez to recite wherein the loss function is a cross-entropy loss function, a squared loss function, a hinge loss function, a tangent loss function, a polyloss function, or a logistic loss function, the motivation being (Murez at [0013]): "[a] cross entropy loss function that is defined as a number of correct classifications of the discriminator is optimized."

Regarding Claim 7, Caine in view of Murez teaches "The computer-implemented method of claim 1," and Caine teaches "wherein the sensor data includes digital image data or digital audio data obtained from the one or more sensors." (Caine at section 3.1 Data Setup: "The Waymo Open Dataset [44] is organized as a collection of run segments. Each run segment is a 200 frame sequence of LiDAR and camera data collected at 10Hz") [The LiDAR and camera data are digital image data].

Referring to independent Claim 15, it is rejected on the same basis as independent claim 1, mutatis mutandis, since they are analogous claims. Since Claim 15 is a system claim, Murez further teaches the limitations "a processor; a non-transitory computer readable medium in data communication with the processor, the non-transitory computer readable medium having computer readable data including instructions stored thereon that, when executed by the processor, cause the processor to perform a method for adapting a machine learning system that is trained with training data in a first domain to operate with sensor data in a second domain" at [0009]: "a system for adapting a deep convolutional neural network trained on a source domain with labels to a target domain without requiring any new labels. The system comprises one or more processors and a non-transitory computer-readable medium having executable instructions encoded thereon such that when executed, the one or more processors perform multiple operations".
Regarding Claim 16, Caine in view of Murez teaches the system of claim 15, wherein: the machine learning system is a classifier configured to perform the task of generating output data that classifies input data (Caine at p. 2 and p. 3, left columns, quoted above for claim 2); and the predetermined loss function relates to the task (Murez at [0071] and [0077], quoted above for claim 4) [The models optimized by Murez are directed to autonomous vehicles, and the loss function is therefore related to the task]. Caine and Murez are analogous arts in the field of machine learning, in the field of endeavor of domain adaptation for autonomous vehicles.

A person having ordinary skill in the art, before the effective filing date of the present application, would have been motivated to modify Caine with Murez to control an actuator based on the task performed in the second domain, for the reasons given for claim 1 (Murez at [0076]).

Regarding Claim 19, Caine and Murez teach the system of claim 15, further comprising: "an image sensor or a microphone; wherein the sensor data includes digital image data from the image sensor or digital audio data obtained from the microphone." (Caine at section 3.1 Data Setup: "The Waymo Open Dataset [44] is organized as a collection of run segments. Each run segment is a 200 frame sequence of LiDAR and camera data collected at 10Hz") [The LiDAR and camera are image sensors, and the LiDAR and camera data are digital image data].

Referring to dependent Claim 21, it is rejected on the same basis as dependent claim 5, mutatis mutandis, since they are analogous claims.

Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Caine in view of Murez, and further in view of Munoz Delgado et al. (US 2021/0019572; hereinafter "Munoz").
Regarding Claim 6, Caine in view of Murez teaches "The computer-implemented method of claim 1," but fails to teach "wherein the parameter data is updated using a scaled gradient of the loss data." Munoz teaches, in an analogous system, "wherein the parameter data is updated using a scaled gradient of the loss data" (see Munoz at [0021]: "The first and second volumes may be related by scaling, e.g., a sub-volume of the first volume having a particular part of the input image having a particular receptive field, may correspond by scaling to a sub-volume of the second volume that affects largely the part of the synthetic image corresponding to that receptive field. This way, a particularly direct feedback mechanism may be achieved in which the use of the discriminator score leads to a particularly relevant update to the intermediate representation. The gradient of the loss may be updated by scaling the discriminator scores from the first volume to the respective second volumes and updating respective partial derivatives based on respective scaled discriminator scores").

A person having ordinary skill in the art, before the effective filing date of the present application, would have been motivated to modify Caine and Murez with Munoz to use a scaled gradient of the loss data, the motivation being (Munoz at [0021]): "a particularly direct feedback mechanism may be achieved in which the use of the discriminator score leads to a particularly relevant update to the intermediate representation"; and (at [0029]): "Accordingly, the generative model, improved with the various features described herein, may be put to use to improve the training of the further machine learning model, e.g., an image classifier, and finally obtain more accurate model outputs, e.g., classifications".

Referring to dependent Claim 18, it is rejected on the same basis as dependent claim 6, mutatis mutandis, since they are analogous claims.

Claims 8-9, 11-12, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Caine in view of Wei et al. (US 2011/0281253 A1; hereinafter "Wei"), further in view of Murez.

Regarding Claim 8, Caine teaches:

"A computer-implemented method for test-time adaptation of a machine learning system from a source domain to a target domain, the machine learning system having been trained with training data of the source domain, the computer-implemented method comprising:" (Caine at Abstract: "Lastly, we show that these student models generalize better than supervised models to a new domain in which we only have unlabeled data, making pseudo-label training an effective form of unsupervised domain adaptation");

"obtaining sensor data from the target domain via one or more sensors;" (Caine at section 3.1 Data Setup: "The Waymo Open Dataset [44] is organized as a collection of run segments. Each run segment is a 200 frame sequence of LiDAR and camera data collected at 10Hz". Further at section 3.4 Pseudo-Label Training: "Once we train the teacher, we select the best teacher model based on validation set performance on the Waymo Open Dataset and use to pseudo-label the unlabeled run segments. Next, we train a student model on the same labeled data the teacher saw, plus all the pseudo-labeled run segments. The mixing ratio of labeled to pseudo-labeled data is determined by the percentage of data the teacher was trained on") [The unlabeled run segments are the target domain, as they are used to train the student model for a new domain using data from LiDAR and camera, which are the sensor data];

"generating, via the machine learning system, prediction data based on the sensor data;" (Caine at section 3.1 Data Setup: "The Waymo Open Dataset [44] is organized as a collection of run segments. Each run segment is a 200 frame sequence of LiDAR and camera data collected at 10Hz".
Further at section 3.4 Pseudo-Label Training: "Once we train the teacher, we select the best teacher model based on validation set performance on the Waymo Open Dataset and use to pseudo-label the unlabeled run segments. Next, we train a student model on the same labeled data the teacher saw, plus all the pseudo-labeled run segments. The mixing ratio of labeled to pseudo-labeled data is determined by the percentage of data the teacher was trained on") [The student model trained for the new domain on LiDAR and camera data, being the sensor data, generates the prediction data based on the sensor data].

However, Caine fails to teach:

"generating loss data based on a negative convex conjugate of a predetermined loss function applied to a gradient of the predetermined loss function, the predetermined loss function being evaluated based on the prediction data, the predetermined loss function being the same loss function that was used when training the machine learning system with the training data of the source domain;"
"updating parameter data of the machine learning system based on the loss data;"
"performing, via the machine learning system, a task in the second domain after the parameter data has been updated; and"
"controlling an actuator based on the task performed in the target domain."

On the other hand, Wei teaches generating loss data based on a negative convex conjugate of a predetermined loss function applied to a gradient of the predetermined loss function, the predetermined loss function being evaluated based on the prediction data, the predetermined loss function being the same loss function that was used when training the machine learning system with the training data of the source domain (Wei at [0031]: "The conjugate gradient methods may be the best suitable for global optimizations, especially when the objective function to be maximized, here the learning efficacy p(T1, T2, . . . Tm), is a linear or concave function in the convex body defined by the constraints (9) and (10), that is, the optimization problem is convex programming, in which case a conjugate gradient algorithm is guaranteed to converge to the optimal solution within m iteration steps of line search and search direction update…") [A concave conjugate is functionally equivalent to a negative convex conjugate].

Caine and Wei are analogous arts in the field of machine learning, in the field of endeavor of labeling of datasets. A person having ordinary skill in the art, before the effective filing date of the present application, would have been motivated to modify Caine with Wei to recite generating loss data based on a negative convex conjugate of a predetermined function applied to a gradient of the predetermined function, the motivation being (Wei at [0031]): "conjugate gradient algorithm is guaranteed to converge to the optimal solution within m iteration steps of line search and search direction update, each of which involves a gradient calculation and a few objective function evaluations."

Furthermore, Murez teaches, in an analogous system:

"updating parameter data of the machine learning system based on the loss data;" (see Murez at [0071]: "The above general loss function is then optimized via the Stochastic Gradient Descent (SGD) method with adaptive learning rate, in an end-to-end manner. FIG. 4 shows the pathways for each loss function defined above") [The optimization with an adaptive learning rate corresponds to updating the parameter data based on the loss];

"performing, via the machine learning system, a task in the second domain after the parameter data has been updated; and" (see Murez at [0077]: "The annotations for the target image domain obtained by the TS.sup.2 framework can be used for detection and recognition of objects, such as vehicles, pedestrians, and traffic signs, under different weather conditions (e.g., rain, snow, fog) and lighting conditions (e.g., low light, bright light). Thus, the annotations can then be utilized to cause an automatic operation related to controlling a component of the autonomous vehicle") [The detection of objects under different weather conditions is interpreted as the task performed in the second domain];

"controlling an actuator based on the task performed in the second domain." (see Murez at [0077], quoted above) [The control of a component of the autonomous vehicle is interpreted as the actuator being controlled].

Caine, Wei and Murez are analogous arts in the field of machine learning, in the field of endeavor of labeling of datasets. A person having ordinary skill in the art, before the effective filing date of the present application, would have been motivated to modify Caine and Wei with Murez to control an actuator based on the task performed in the second domain, the motivation being (Murez at [0076]): "[t]he invention according to embodiments of the present disclosure is of particular value to fully autonomous navigation systems for vehicle manufacturers. TS2 will significantly reduce the amount of annotated real-world training data needed to train their perception and sensing algorithms.
Furthermore, thanks to its domain agnostic feature extraction capability, TS2 produces more robust results when navigating in novel or unseen conditions, such as a new city or in rare weather conditions (e.g., snow, fog, rain)."

Regarding Claim 9, Caine in view of Wei and Murez teaches "The computer-implemented method of claim 8," and Caine teaches "wherein the machine learning system is a classifier configured to perform the task of generating output data that classifies input data." (Caine at p. 2, left column: "We show pseudo-labeling is extremely effective for 3D object detection, and provide a systematic analysis of how to maximize its performance benefits". Further at p. 3, left column: "We focus on one of the open challenges for the Waymo Open Dataset 2: accurate 3D detection in a new city (Kirkland) with changing environmental conditions (rain) and limited human-labeled data").

Regarding Claim 11, Caine, Wei and Murez teach "The computer-implemented method of claim 8," and Murez further teaches "wherein the predetermined loss function relates to the task performed by the machine learning system." (see Murez at [0071]: "The above general loss function is then optimized via the Stochastic Gradient Descent (SGD) method with adaptive learning rate, in an end-to-end manner. FIG. 4 shows the pathways for each loss function defined above". Further, Murez at [0077]: "The annotations for the target image domain obtained by the TS.sup.2 framework can be used for detection and recognition of objects, such as vehicles, pedestrians, and traffic signs, under different weather conditions (e.g., rain, snow, fog) and lighting conditions (e.g., low light, bright light). Thus, the annotations can then be utilized to cause an automatic operation related to controlling a component of the autonomous vehicle") [The models optimized by Murez are directed to autonomous vehicles, and the loss function is therefore related to this task].

Caine, Wei and Murez are analogous arts in the field of machine learning, in the field of endeavor of domain adaptation in autonomous vehicles. A person having ordinary skill in the art, before the effective filing date of the present application, would have been motivated to modify Caine and Wei with Murez to control an actuator based on the task performed in the second domain, for the reasons given for claim 8 (Murez at [0076]).

Regarding Claim 12, Caine, Wei and Murez teach "The computer-implemented method of claim 11," and Murez recites "wherein the predetermined loss function is a cross-entropy loss function, a squared loss function, a hinge loss function, a tangent loss function, a polyloss function, or a logistic loss function." (Murez at [0013]: "A cross entropy loss function that is defined as a number of correct classifications of the discriminator is optimized") [The loss function comprises a cross-entropy loss function].
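As a mathematical aside on the "negative convex conjugate" language of claim 8: when the source loss is cross-entropy, written as l(h, y) = f(h) - y.h with f(h) = logsumexp(h), the Fenchel equality gives f*(grad f(h)) = grad f(h).h - f(h), so the negative conjugate evaluated at the gradient equals the Shannon entropy of softmax(h). A numerical check of this identity, for illustration only and not drawn from the cited references:

```python
import numpy as np

h = np.array([1.5, -0.2, 0.7])        # logits (prediction data)
f = np.log(np.exp(h).sum())           # f(h) = logsumexp(h)
p = np.exp(h) / np.exp(h).sum()       # grad f(h) = softmax(h)

# Convex conjugate of logsumexp on the probability simplex:
# f*(p) = sum_i p_i log p_i (the negative entropy of p).
f_star = (p * np.log(p)).sum()

# Negative conjugate applied to the gradient: the entropy of the
# model's own prediction, which serves as the test-time loss data.
neg_conj = -f_star
```

By the Fenchel equality, f_star computed directly should match p.h - f, confirming that the "negative convex conjugate ... applied to a gradient of the predetermined loss function" is the prediction's entropy in the cross-entropy case.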
A person skilled in the art, before the filing date of the present application, would have been motivated to modify Caine and Wei with Murez such that the loss function is a cross-entropy loss function, a squared loss function, a hinge loss function, a tangent loss function, a polyloss function, or a logistic loss function, with the motivation being (Murez at [0013]): “[a] cross entropy loss function that is defined as a number of correct classifications of the discriminator is optimized.”

Regarding Claim 14, Caine, Wei, and Murez teach “The computer-implemented method of claim 8,” and Caine teaches “wherein the sensor data includes digital image data or digital audio data obtained from the one or more sensors.” (Caine at section 3.1, Data Setup: “The Waymo Open Dataset [44] is organized as a collection of run segments. Each run segment is a 200 frame sequence of LiDAR and camera data collected at 10Hz”) [The LiDAR and camera data are digital image data].

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Caine in view of Wei and Murez, and further in view of Munoz Delgado.

Regarding Claim 13, Caine, Wei, and Murez teach “The computer-implemented method of claim 8,” but fail to teach “wherein the parameter data is updated using a scaled gradient of the loss data.” Munoz, in an analogous system, teaches “wherein the parameter data is updated using a scaled gradient of the loss data” (see Munoz at [0021]: “The first and second volumes may be related by scaling, e.g., a sub-volume of the first volume having a particular part of the input image having a particular receptive field, may correspond by scaling to a sub-volume of the second volume that affects largely the part of the synthetic image corresponding to that receptive field. This way, a particularly direct feedback mechanism may be achieved in which the use of the discriminator score leads to a particularly relevant update to the intermediate representation.
The gradient of the loss may be updated by scaling the discriminator scores from the first volume to the respective second volumes and updating respective partial derivatives based on respective scaled discriminator scores”). A person skilled in the art, before the filing date of the present application, would have been motivated to modify Caine, Wei, and Murez with Munoz to use a scaled gradient of the loss data, with the motivation being (Munoz at [0021]): “a particularly direct feedback mechanism may be achieved in which the use of the discriminator score leads to a particularly relevant update to the intermediate representation”, and (at [0029]): “Accordingly, the generative model, improved with the various features described herein, may be put to use to improve the training of the further machine learning model, e.g., an image classifier, and finally obtain more accurate model outputs, e.g., classifications”.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS A SITIRICHE, whose telephone number is (571) 270-1316. The examiner can normally be reached M-F, 9am-6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LUIS A SITIRICHE/
Primary Examiner, Art Unit 2126

1 For the purposes of compact prosecution, Murez also recites, at minimum, a computer-implemented method for adapting a machine learning system that is trained with training data in a first domain to operate with sensor data in a second domain (Murez at 27: “The method according to embodiments of the present disclosure includes two unique ways to improve the domain adaptation performance by re-designing the feature extractors that are learned… In short, an adversarial network (FIG. 4, module 412) is trained to distinguish the features coming from domain X from that of domain Y…” See also 26: “target domain sensor”) and “a second domain” (Murez at 5: “The feature extractor extracts features from both source domain (e.g., domain ‘A’) images and target domain images (e.g., domain ‘B’).”) [The target data, or Domain B, is functionally equivalent to a second domain.]

2 See footnote 1.
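As an illustrative aside from the editor, not part of the Office Action record: the disputed claim 13 limitation, updating parameter data "using a scaled gradient of the loss data," can be sketched as an element-wise scaling applied to the loss gradient before the parameter update. The function name, values, and fixed scale vector below are hypothetical; the cited Munoz passage describes scaling by discriminator scores rather than by a constant vector.

```python
import numpy as np

def scaled_gradient_step(params, grad, scale, lr=0.1):
    # Update parameter data using a gradient of the loss that is
    # element-wise scaled before being applied. "scale" stands in for
    # whatever per-parameter weighting the update scheme supplies
    # (in Munoz, scaled discriminator scores).
    return params - lr * (scale * grad)

params = np.array([1.0, -2.0, 0.5])   # hypothetical parameter data
grad = np.array([0.2, -0.4, 0.1])     # hypothetical d(loss)/d(params)
scale = np.array([1.0, 0.5, 2.0])     # hypothetical per-parameter scale
new_params = scaled_gradient_step(params, grad, scale, lr=0.1)
print(new_params)  # [ 0.98 -1.98  0.48]
```

Each parameter moves against its gradient by an amount modulated by its scale factor, which is the distinction the claim draws over a plain (unscaled) gradient update.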

Prosecution Timeline

Jul 19, 2022
Application Filed
Jan 21, 2025
Non-Final Rejection — §103
Apr 28, 2025
Response Filed
May 09, 2025
Final Rejection — §103
Aug 15, 2025
Request for Continued Examination
Jan 12, 2026
Response after Non-Final Action
Mar 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585947
MODIFYING COMPUTATIONAL GRAPHS
2y 5m to grant Granted Mar 24, 2026
Patent 12579476
ADAPTIVE LEARNING FOR IMAGE CLASSIFICATION
2y 5m to grant Granted Mar 17, 2026
Patent 12579445
MODELS FOR PREDICTING RESISTANCE TRENDS
2y 5m to grant Granted Mar 17, 2026
Patent 12572791
METHOD, DEVICE AND COMPUTER PROGRAM FOR PREDICTING A SUITABLE CONFIGURATION OF A MACHINE LEARNING SYSTEM FOR A TRAINING DATA SET
2y 5m to grant Granted Mar 10, 2026
Patent 12572857
Adaptive Probabilistic Latent Semantic Analysis System For Automated Document Coding And Review In Electronic Discovery
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+22.1%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 468 resolved cases by this examiner. Grant probability derived from career allow rate.
