Prosecution Insights
Last updated: April 19, 2026
Application No. 17/931,449

DISENTANGLED OUT-OF-DISTRIBUTION (OOD) CALIBRATION AND DATA DETECTION

Final Rejection: §101, §103
Filed: Sep 12, 2022
Examiner: FEITL, LEAH M
Art Unit: 2147
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 25% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 2m
Grant Probability with Interview: 32%

Examiner Intelligence

Career Allow Rate: 25% (21 granted / 84 resolved; -30.0% vs TC avg)
Interview Lift: +7.0% across resolved cases with interview (moderate lift)
Typical Timeline: 4y 2m average prosecution; 34 applications currently pending
Career History: 118 total applications across all art units

Statute-Specific Performance

§101: 30.8% (-9.2% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 7.1% (-32.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Based on career data from 84 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in response to the amendments filed 12/03/2025. Claims 1, 6, 9, 14, 17, and 20 have been amended, claims 5 and 13 have been cancelled, and claims 21-22 have been added. Claims 1-4, 6-12, and 14-22 are currently pending.

Response to Arguments

Claims 5 and 13 have been cancelled; therefore, the rejections of claims 5 and 13 no longer stand.

Applicant’s arguments regarding the 101 rejection have been fully considered but they are not persuasive. Applicant’s arguments focus on the limitations related to classifying data based on the outcome of a comparison between an out-of-distribution score and a predetermined threshold. Applicant argues that “the claimed approaches improve the functionality of AI/ML models”. Examiner notes that section 2106.05(a) of the MPEP states that “the judicial exception alone cannot provide the improvement”. Applicant has not shown which additional elements, even potentially in combination with claimed judicial exceptions, provide this alleged improvement. Therefore, these arguments are considered unpersuasive. The 101 rejections have been updated to include the amended limitations and to clarify the reasoning given for the limitations that were not amended.

Applicant’s arguments regarding the prior art rejection have been fully considered but are moot because of the new ground(s) of rejection. Applicant argues that the prior art does not teach “generating a score identifying a likelihood of input data being out-of-distribution compared to a distribution of training data used to train a machine learning model, determining whether a score exceeds a threshold, and taking different actions depending on whether the score exceeds the threshold”.
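The limitation quoted above reduces to a score-then-branch control flow. A minimal Python sketch, with all names and toy values hypothetical (nothing here is taken from the application or the cited references):

```python
import math

# Hypothetical sketch of the disputed limitation: produce a predictive
# probability distribution, generate an out-of-distribution (OOD) score,
# compare the score to a threshold, and branch on the outcome.

def softmax(logits):
    """Turn logits into a predictive probability distribution (stable form)."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_or_ask(logits, ood_score, threshold, ask_user):
    """Classify when the OOD score fails to exceed the threshold;
    otherwise notify the user and classify from the user input."""
    if ood_score <= threshold:
        probs = softmax(logits)
        return probs.index(max(probs))  # classify from the distribution
    return ask_user()                   # stand-in for notification + user input

# Toy usage: a low-scoring input is auto-classified; a high-scoring
# (likely OOD) input is escalated to the user.
auto_label = classify_or_ask([2.0, 0.5, -1.0], 0.3, 0.5, lambda: 99)
user_label = classify_or_ask([0.1, 0.2, 0.1], 0.9, 0.5, lambda: 99)
```

Here `ask_user` merely stands in for the claimed notification-and-response step; any callable returning a user-supplied label would fit the sketch.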
Examiner notes that the Hsu reference teaches a scoring function used to determine the likelihood that data is out-of-distribution, and compares this score to a predetermined threshold (see at least section 2 paragraph 4). While Mohseni does not explicitly recite a score, Mohseni does teach comparing the output of its model to a predetermined threshold to detect out-of-distribution data, as well as generating a user notification and taking suitable actions when out-of-distribution data is detected (see at least paragraph [0088]). Examiner notes that the Perera reference has been brought in to teach utilizing user input to modify a classification model. The prior art rejections have been updated to include the amended limitations and to clarify the reasoning given for the limitations that were not amended.

The amendments to independent claims 1, 9, and 17 have not changed the scope such that claims 2-4, 10-12, and 18-19 should no longer be objected to or rejected with prior art. The objections to claims 2-4, 10-12, and 18-19 are maintained as being dependent upon a rejected base claim, but these claims would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-12, and 14-22 are rejected under 35 U.S.C. 101. Claims 1-4, 6-8, and 21 are directed to a method, claims 9-12, 14-16, and 22 are directed to a system, and claims 17-20 are directed to a non-transitory computer readable medium; therefore, claims 1-4, 6-12, and 14-22 fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
However, claims 1-4, 6-12, and 14-22 fall within the judicial exception of an abstract idea, specifically the abstract ideas of “Mental Processes” (including observation, evaluation, and opinion) and “Mathematical Concepts” (including mathematical calculations and relationships).

Claim 1: Claim 1 is directed to a method; therefore, the claim does fall within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter). Claim 1 recites the following abstract ideas:

extracting features of the input data (mental step directed to observation, evaluation – a person could extract, or identify, features in observed input data in their mind);

performing a geometric transformation of the features, the geometric transformation based on first and second parametric instance-dependent scalar functions (performing a geometric transformation based on scalar functions is interpreted as a mathematical calculation in light of at least paragraphs [0051]-[0052] of Applicant’s specification);

producing a predictive probability distribution based on the transformed features (mental step directed to observation, evaluation – a person could produce a predictive probability distribution based on observed or determined transformed features in their mind. Examiner notes that the broadest reasonable interpretation of producing a predictive probability distribution also includes a mathematical calculation in light of at least paragraph [0052] of Applicant’s specification);

generating a score using a scoring function, the score identifying the likelihood of the input data being out-of-distribution compared to a distribution of training data used to train the machine learning model (mental step directed to observation, evaluation – a person could generate a score in their mind using a scoring function and identify a likelihood of observed input data being out-of-distribution compared to an observed or mentally determined distribution of training data. Examiner notes that the broadest reasonable interpretation of generating a score also includes a mathematical calculation in light of at least paragraph [0055] of Applicant’s specification);

determining whether the score exceeds a threshold (mental step directed to observation, evaluation – a person could determine whether an observed or mentally determined score exceeds an observed or mentally determined threshold in their mind);

in response to the score failing to exceed the threshold, classifying the input data based on the predictive probability distribution (mental step directed to observation, evaluation – a person could classify observed input data based on an observed or mentally determined predictive probability distribution, having determined in their mind that an observed or mentally determined score failed to exceed an observed or mentally determined threshold); and

classifying the input data based on the user input (mental step directed to observation, evaluation – a person could classify observed input data based on observed user input information).
Claim 1 recites the following additional elements: at least one processing device of an electronic device; providing input data to a machine learning model to classify contents of the input data; in response to the score exceeding the threshold: generating, by the electronic device, a user notification asking a user of the electronic device for user input; and receiving, by the electronic device, the user input.

The processing device is interpreted as a generic component merely used to implement the abstract ideas listed above. Providing input data to a machine learning model, generating a user notification asking a user for user input, and receiving user input are all interpreted as receiving and transmitting data over a network. As the claims do not recite any particular machine learning model nor technical details regarding how the machine learning model is used to classify data in this limitation, the machine learning model is interpreted as a generic computer component. Using a machine learning model to classify contents of input data is interpreted as mere instructions to apply a mental step directed to classifying contents of observed input data using a generic computer component. These additional elements do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea (see MPEP 2106.05(d)(II) and MPEP 2106.05(f)).

Claim 9 is a system claim and its limitation is included in claim 1. The only difference is that claim 9 requires a system. Therefore, claim 9 is rejected for the same reasons as claim 1. Claim 17 is a non-transitory computer readable medium claim and its limitation is included in claim 1. The only difference is that claim 17 requires a non-transitory computer readable medium. Therefore, claim 17 is rejected for the same reasons as claim 1. The independent claims are not patent eligible.
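For context on the numerical constraints recited in dependent claims 2-3 below: a sigmoid output always lies in (0, 1), and adding 1 to a softplus output always yields a value above 1, so either construction enforces the recited bounds by design. A minimal sketch, where the inner weight vectors `u` and `v` and the `1 +` shift are illustrative assumptions rather than details from the application (the claimed transformation itself appears only as an unrendered equation image):

```python
import math

def sigmoid(x):
    """Sigmoid maps any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    """Softplus log(1 + e^x) is strictly positive for all real x."""
    return math.log1p(math.exp(x))

def alpha(f, u=(0.1, -0.2, 0.3)):
    """First scalar function of the features f: sigmoid keeps it in (0, 1)."""
    return sigmoid(sum(fi * ui for fi, ui in zip(f, u)))

def beta(f, v=(0.2, 0.1, -0.1)):
    """Second scalar function of the features f: 1 + softplus keeps it > 1
    (an assumption about how the recited 'softplus constraint' is applied)."""
    return 1.0 + softplus(sum(fi * vi for fi, vi in zip(f, v)))

# Toy feature vector; any real-valued features give constrained outputs.
f = (1.0, 2.0, -0.5)
a, b = alpha(f), beta(f)
```

Because the constraints come from the activation functions themselves, they hold for every input, which is what makes them "enforced" rather than merely assumed.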
Dependent claims 2-4, 6-8, 10-12, 14-16, and 18-22, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea, as they recite further embellishment of the judicial exception.

Claim 2 recites wherein: the first parametric instance-dependent scalar function is represented by α(f) and is a function of feature norms, where a numerical constraint on α(f) is defined as 0 < α(f) < 1; the second parametric instance-dependent scalar function is represented by β(f) and is a function of feature angles, where a numerical constraint on β(f) is defined as β(f) > 1; and f represents the features. The parametric instance-dependent functions and their constraints are interpreted as abstract ideas directed to mathematical formulas and corresponding mathematical relationships.

Claim 3 recites wherein: the numerical constraint on α(f) is enforced using a sigmoid activation; and the numerical constraint on β(f) is enforced using a softplus constraint. These constraints are interpreted as further descriptions of the mathematical relationships in claim 2.

Claim 4 recites wherein the geometric transformation is defined as: [equation image not reproduced] where: li represents a logit corresponding to the ith feature; f represents the features; ||f||2 represents a norm of a feature vector formed by the features; wi represents an ith possible class into which the features may be mapped; and cos φi represents a cosine similarity associated with the feature vector and a vector associated with the ith possible class. This geometric transformation is interpreted as an abstract idea directed to a mathematical calculation.

Claim 6 recites wherein the scoring function is based on (i) a covariate shift of the input data relative to the training data and (ii) a concept shift of the input data relative to the training data.
Generating a score using a scoring function based on a covariate shift and a concept shift of input data is interpreted as a mathematical calculation in light of at least paragraph [0055] of Applicant’s specification.

Claim 7 recites wherein: the first parametric instance-dependent scalar function is represented by α(f); the second parametric instance-dependent scalar function is represented by β(f); and the parametric instance-dependent scalar functions of the geometric transformation are calibrated such that the predictive probability distribution is aligned with an accuracy of the machine learning model. Calibration of the scalar functions of the geometric transformation is interpreted as merely changing the values of the variables used in the geometric transformation in light of at least paragraph [0056] of Applicant’s specification, which does not change the interpretation of the geometric transformation, as a person could change the values of variables used in a mental or mathematical geometric transformation, potentially assisted by pen and paper (see MPEP 2106.04(a)(2)(III)).

Claim 8 recites training the machine learning model by: obtaining, using the at least one processing device, training data; and training the machine learning model using the training data, the machine learning model comprising a feature extractor and the geometric transformation; wherein training the machine learning model comprises: adjusting parameters of the feature extractor in order to extract the features of the input data; and adjusting the first and second parametric instance-dependent scalar functions of the geometric transformation in order to transform the features of the input data. The machine learning model comprising a feature extractor and a geometric transformation is interpreted as a generic computer component merely used to apply the mental steps related to feature extraction and geometric transformation (see MPEP 2106.05(f)).
The claims only recite training steps directed to adjusting parameters, which is interpreted as merely changing the values of the variables used in the geometric transformation. As a person could change the values of variables used in a mental or mathematical geometric transformation in their mind, potentially assisted by pen and paper (see MPEP 2106.04(a)(2)(III)), training the machine learning model is interpreted as a mental step. Obtaining training data is interpreted as an additional element directed to receiving data over a network (see MPEP 2106.05(d)(II)), which does not integrate the claimed abstract ideas into a practical application or amount to significantly more than the claimed abstract ideas.

Claim 9 is a system claim and its limitation is included in claim 1. The only difference is that claim 9 requires a system. Therefore, claim 9 is rejected for the same reasons as claim 1.
Claim 10 is a system claim and its limitation is included in claim 2. Claim 10 is rejected for the same reasons as claim 2.
Claim 11 is a system claim and its limitation is included in claim 3. Claim 11 is rejected for the same reasons as claim 3.
Claim 12 is a system claim and its limitation is included in claim 4. Claim 12 is rejected for the same reasons as claim 4.
Claim 14 is a system claim and its limitation is included in claim 6. Claim 14 is rejected for the same reasons as claim 6.
Claim 15 is a system claim and its limitation is included in claim 7. Claim 15 is rejected for the same reasons as claim 7.
Claim 16 is a system claim and its limitation is included in claim 8. Claim 16 is rejected for the same reasons as claim 8.
Claim 18 is a non-transitory computer readable medium claim and its limitation is included in claim 2. Claim 18 is rejected for the same reasons as claim 2.
Claim 19 is a non-transitory computer readable medium claim and its limitation is included in claim 4. Claim 19 is rejected for the same reasons as claim 4.
Claim 20 is a non-transitory computer readable medium claim and its limitation is included in claim 6. Claim 20 is rejected for the same reasons as claim 6.

Claim 21 recites wherein generating the user notification comprises at least one of displaying a user prompt on a display or playing an audio alert on a speaker. Generating a user notification by displaying a user prompt or playing an audio alert is interpreted as an additional element directed to transmitting data over a network (see MPEP 2106.05(d)(II)), which does not integrate the claimed abstract ideas into a practical application or amount to significantly more than the claimed abstract ideas. Claim 22 is a system claim and its limitation is included in claim 21. Claim 22 is rejected for the same reasons as claim 21.

Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-9, 14-17, and 20-22 are rejected under 35 U.S.C.
103 as being unpatentable over Mohseni et al. (US 20220318557 A1, herein Mohseni) in view of Hsu et al.* (“Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data”, herein Hsu), in further view of Perera et al. (US 20220067449 A1, herein Perera).

*This document was included with the 09/14/2022 IDS.

Regarding claim 1, Mohseni teaches a method comprising: providing, using at least one processing device of an electronic device, input data to a machine learning model to classify contents of the input data (para. [0072] recites “training neural network at block 304 includes training one or more neural networks to infer a plurality of characteristics about input information (e.g., image data)” (i.e., providing input data to a machine learning model to infer characteristics about, or classify, the input data)); extracting, using the at least one processing device, features of the input data (para. [0076] recites “auxiliary tasks share main feature extraction network (e.g., first portion 202 of neural network 200 of FIG. 2) with main classification task”. Para. [0543] recites “ground truth data may be synthetically produced (e.g., generated from computer models or renderings), real produced (e.g., designed and produced from real-world data), machine-automated (e.g., using feature analysis and learning to extract features from data and then generate labels), human annotated (e.g., labeler, or annotation expert, defines location of labels), and/or a combination thereof” (i.e., extracting features from input data)); performing, using the at least one processing device, a geometric transformation of the features, the geometric transformation based on first and second parametric instance-dependent scalar functions (para. [0075] recites “for self-supervised tasks, T = {(Tn, λn)}n=1N denotes N transformations and their corresponding trade-off scalar weights λn”. Para.
[0077] recites “at a block 402, technique 400 includes applying geometric transformations to input data. In at least one embodiment, applying geometric transformations at block 402 includes applying geometric transformations to image data. In at least one embodiment, applying geometric transformations includes applying geometric transformations to input data 105 to generate a portion of transformed input data 102” (i.e., applying geometric transformations to features of the input image data based on corresponding scalar functions)); determining, using the at least one processing device, whether the [score] exceeds a threshold; in response to the [score] failing to exceed the threshold, classifying, using the at least one processing device, the input data [based on the predictive probability distribution] (para. [0088] recites “detecting OOD data includes determining that input data is OOD if a classification output value generated using trained neural network 118 of FIG. 1 is below a predetermined threshold value. In at least one embodiment, one or more circuits are to identify input information as being OOD if a classification probability value (e.g., output by first output layer 208) generated by an inference operation using one or more neural networks is below a predefined threshold” (i.e., determining whether the output of the neural network exceeds a predetermined threshold and classifying the data when the output of the neural network does not exceed the threshold)); and in response to the [score] exceeding the threshold: generating, by the electronic device, a user notification asking a user of the electronic device for user input; receiving, by the electronic device, the user input (para. [0088] recites “detecting OOD data includes determining that input data is OOD if a classification output value generated using a trained neural network (e.g., trained neural network 118 of FIG. 1) is above a predetermined threshold.
In at least one embodiment, at a block 608, technique 600 includes performing other actions. In at least one embodiment, performing other actions at block 608 includes generating an OOD notification if OOD data is detected at block 606, returning to block 602 to identify subsequent input data, and/or any other suitable action”. Para. [0265] recites “In at least one embodiment, user input is received from input devices 1308 such as keyboard, mouse, touchpad, microphone, etc.” (i.e., generating a notification when the output of the neural network exceeds a threshold, wherein the notification can be sent to the user));

However, while Mohseni teaches producing a predictive probability as an output from applying a neural network model to the geometrically transformed input data (see at least para. [0071], wherein input data that was geometrically transformed is input to a neural network to output inferred, or predicted characteristics) and comparing the output of the neural network to a predetermined threshold (see at least para. [0088]), Mohseni does not explicitly teach producing a predictive probability distribution and generating a score using a scoring function, the score identifying the likelihood of the input data being out-of-distribution compared to a distribution of training data used to train the machine learning model.

Hsu teaches producing a predictive probability distribution (section 2 para. 1 recites “This work considers the OoD detection setting in classification problems. We begin with a dataset D_in = {(x_i, y_i)}_{i=1}^N, denoting in-distribution data x_i ∈ R^k and categorical label y_i ∈ {y} = {1 . . . C} for C classes. D_in is generated by sampling from a distribution p_in(x, y). We then have a discriminative model f_θ(x) with parameters θ learned with the in-domain dataset D_in, predicting the class posterior probability p(y|x)”. Section 2 para.
4 recites “The ultimate goal, then, is to find a scoring function S(x) which correlates to the domain posterior probability p(d|x), in that a higher score s from S(x) indicates a higher probability of p(d_in|x)” (i.e., producing a predictive probability distribution)); and generating a score using a scoring function, the score identifying the likelihood of the input data being out-of-distribution compared to a distribution of training data used to train the machine learning model (section 2 para. 4 recites “The ultimate goal, then, is to find a scoring function S(x) which correlates to the domain posterior probability p(d|x), in that a higher score s from S(x) indicates a higher probability of p(d_in|x)”. Section 3.1.1 para. 3 recites “At training time, the model calculates the logit f_i followed by the softmax function with cross-entropy loss on top of it. At testing time, the class prediction can be made by either calculating argmax_i f_i(x) or argmax_i h_i(x) (both will give the same predictions). For out-of-distribution detection, we use the scoring function S_DeConf(x) = max_i h_i(x) or g(x)” (i.e., generating a score using a scoring function to determine whether input data is likely to be out-of-distribution compared to training data)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these teachings by modifying the neural network from Mohseni to output a predictive probability distribution using the method from Hsu. Mohseni and Hsu are both directed to detecting out-of-distribution data using neural networks, and Hsu states in section 4.2 that their DeConf methods (described in section 3.1.1) are not sensitive to the type and depth of neural networks. As such, one of ordinary skill would understand how to modify similar methods of detecting out-of-distribution data.
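The cited Hsu decomposition can be sketched compactly: the logit for class i is the quotient f_i(x) = h_i(x) / g(x), class probabilities come from a softmax over the quotients, and the detection score is max_i h_i(x) (or g(x) for the g-based variant), with higher scores indicating a higher in-distribution probability. The head outputs `h` and `g` below are toy values, not outputs of any real network:

```python
# Sketch of the dividend/divisor structure quoted from Hsu (Generalized ODIN).

def deconf_logits(h, g):
    """f_i(x) = h_i(x) / g(x); for g > 0, argmax over the quotients
    matches argmax over h alone, as the quoted passage notes."""
    return [hi / g for hi in h]

def deconf_score(h, g, variant="h"):
    """Scoring function S(x): max_i h_i(x), or g(x) for the g-based
    variant; higher score suggests in-distribution input."""
    return max(h) if variant == "h" else g

h = [3.2, 1.1, 0.4]   # class-dependent numerator head (toy values)
g = 0.8               # input-dependent denominator head (toy value)

logits = deconf_logits(h, g)
pred = logits.index(max(logits))  # class prediction
score = deconf_score(h, g)        # OOD detection score
```

Note the sign convention: under this scoring rule a low score flags a likely out-of-distribution input, whereas the claim as quoted treats a high score as indicating OOD; the threshold comparison simply flips accordingly.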
However, while Mohseni teaches performing other actions when an input is determined to be out-of-distribution (see at least paragraph [0088]), the combination of Mohseni and Hsu does not explicitly teach and classifying, using the at least one processing device, the input data based on the user input.

Perera teaches and classifying, using the at least one processing device, the input data based on the user input (para. [0113] recites “storing the input image and/or data regarding the input image; returning an error message; returning a message that the classification for the input image is unknown; or requesting, via a graphical user interface, user input to provide a ground truth classification for the input image; and updating parameters of the classification model utilizing the ground truth classification for the input image” (i.e., updating a classification based on user input)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these teachings by modifying the model from Mohseni (as modified by Hsu) to utilize the method of updating the classification based on user input from Perera. Mohseni and Perera are both directed to methods of image classification, and Mohseni teaches in at least paragraph [0088] that “any other suitable action” can be taken after generating an out-of-distribution notification for a user. It would be obvious to one of ordinary skill in the art that the “any other suitable action” from Mohseni could include the method of updating or correcting the classification based on user input data from Perera.

Regarding claim 6, the combination of Mohseni, Hsu, and Perera teaches the method of Claim 1, wherein the scoring function is based on (i) a covariate shift of the input data relative to the training data and (ii) a concept shift of the input data relative to the training data (Hsu section 3.1 para.
4 recites “we use the dividend/divisor structure in (EQ 4) as the prior knowledge to design the structure of classifiers, providing classifiers a capacity to decompose the confidence of class probability. In the dividend/divisor structure for classifiers, we define the logit f_i(x) for class i, which is the division between two functions h_i(x) and g(x): f_i(x) = h_i(x) / g(x) (EQ 5). The quotient f_i(x) is then normalized by the exponential function (i.e. softmax) for outputting a class probability p(y = i|d_in, x)”. Hsu section 3.1.1 para. 3 recites “For out-of-distribution detection, we use the scoring function S_DeConf(x) = max_i h_i(x) or g(x). Note that when h_i(x) = h′_i(x) and g(x) = 1, this method reduces to the baseline. We call the three variants of our method DeConf-I, DeConf-E, and DeConf-C”. Hsu section 4.3 para. 2 recites “To create subsets with semantic shift, we separate the classes into two splits. Split A has class indices from 0 to 172, while split B has 173 to 344. Our experiment uses real-A for in-distribution and has the other subsets for out-of-distribution. With the definition given in Section 2, real-B has a semantic shift from real-A, while sketch-A has a non-semantic shift. Sketch-B therefore has both types of distribution shift. Figure 2 illustrates the setup” (i.e., the scoring function is based on covariate, or non-semantic shift, and concept, or semantic shift)).

Regarding claim 7, the combination of Mohseni, Hsu, and Perera teaches the method of Claim 1, wherein: the first parametric instance-dependent scalar function is represented by α(f); the second parametric instance-dependent scalar function is represented by β(f); and the parametric instance-dependent scalar functions of the geometric transformation are calibrated such that the predictive probability distribution is aligned with an accuracy of the machine learning model (Mohseni para.
[0082] recites “impact of each transformation in training objective described by equation: [equation image not reproduced] is modulated through coefficients. In at least one embodiment, OoD performance depends indirectly on λn through trained network parameters”. Mohseni para. [0097] recites “multi-task learning (Technique) benefits from controllable transformation coefficients via λn (i.e., the scalar functions corresponding to the geometric transformations). In at least one embodiment, instead of explicitly dividing transformation into positive and negative sets, technique (e.g., technique 300 of FIG. 3 and/or technique 500 of FIG. 5) modulates impact on representation learning through trainable λn” (i.e., the predictive output is aligned based on adjusting, or calibrating the scalar functions associated with the geometric transformations of the input data)).

Regarding claim 8, the combination of Mohseni, Hsu, and Perera teaches the method of Claim 1, further comprising: training the machine learning model by: obtaining, using the at least one processing device, training data (Mohseni para. [0072] recites “training neural network at block 304 includes training one or more neural networks to infer a plurality of characteristics about input information (e.g., image data)” (i.e., obtaining input data for a machine learning model)); and training the machine learning model using the training data, the machine learning model comprising a feature extractor and the geometric transformation (Mohseni para. [0072] recites “training neural network at block 304 includes training one or more neural networks to infer a plurality of characteristics about input information (e.g., image data)”. Mohseni para.
[0543] recites “ground truth data may be synthetically produced (e.g., generated from computer models or renderings), real produced (e.g., designed and produced from real-world data), machine-automated (e.g., using feature analysis and learning to extract features from data and then generate labels), human annotated (e.g., labeler, or annotation expert, defines location of labels), and/or a combination thereof” (i.e., extracting features from input data). Mohseni para. [0077] recites “at a block 402, technique 400 includes applying geometric transformations to input data. In at least one embodiment, applying geometric transformations at block 402 includes applying geometric transformations to image data. In at least one embodiment, applying geometric transformations includes applying geometric transformations to input data 105 to generate a portion of transformed input data 102” (i.e., wherein the machine learning model includes feature extraction and geometric transformation of input data)); wherein training the machine learning model comprises: adjusting parameters of the feature extractor in order to extract the features of the input data; and adjusting the first and second parametric instance-dependent scalar functions of the geometric transformation in order to transform the features of the input data (Mohseni para. [0082] recites “impact of each transformation in training objective described by equation: [equation image not reproduced] is modulated through coefficients. In at least one embodiment, OoD performance depends indirectly on λn through trained network parameters”. Mohseni para. [0097] recites “multi-task learning (Technique) benefits from controllable transformation coefficients via λn (i.e., the scalar functions corresponding to the geometric transformations). In at least one embodiment, instead of explicitly dividing transformation into positive and negative sets, technique (e.g., technique 300 of FIG.
3 and/or technique 500 of FIG. 5) modulates impact on representation learning through trainable λn” (i.e., the predictive output is aligned based on adjusting, or calibrating the scalar functions associated with the geometric transformations of the features of the input data)). Claim 9 is a system claim and its limitation is included in claim 1. The only difference is that claim 9 requires a system. Therefore, claim 9 is rejected for the same reasons as claim 1. Claim 14 is a system claim and its limitation is included in claim 6. Claim 14 is rejected for the same reasons as claim 6. Claim 15 is a system claim and its limitation is included in claim 7. Claim 15 is rejected for the same reasons as claim 7. Claim 16 is a system claim and its limitation is included in claim 8. Claim 16 is rejected for the same reasons as claim 8. Claim 17 is a non-transitory computer readable medium claim and its limitation is included in claim 1. The only difference is that claim 17 requires a non-transitory computer readable medium. Therefore, claim 17 is rejected for the same reasons as claim 1. Claim 20 is a non-transitory computer readable medium claim and its limitation is included in claim 6. Claim 20 is rejected for the same reasons as claim 6. Regarding claim 21, the combination of Mohseni, Hsu, and Perera teaches the method of claim 1, wherein generating the user notification comprises at least one of displaying a user prompt on a display or playing an audio alert on a speaker (Mohseni para. [0146] recites “one or more of controller(s) 1036 may receive inputs (e.g., represented by input data) from an instrument cluster 1032 of vehicle 1000 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface ("HMI") display 1034, an audible annunciator, a loudspeaker, and/or via other components of vehicle 1000” (i.e., displaying a notification to a user on a display)). Claim 22 is a system claim and its limitation is included in claim 21. 
Claim 22 is rejected for the same reasons as claim 21.

Claimed Subject Matter Allowable over Prior Art

Claims 2-4, 10-12, and 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 12462200 B1 (Liu et al.) teaches a method for customizing training features based on a user input to update a machine learning model. US 20230394652 A1 (Pezzotti et al.) teaches a method for determining whether an image is out-of-distribution and taking corrective actions based on user information. US 9235562 B1 (Hart et al.) teaches a method for determining whether a machine learning model misclassified a document and modifying the machine learning model to correct the identified mistake.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEAH M FEITL, whose telephone number is (571) 272-8350. The examiner can normally be reached M-F, 0900-1700 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.M.F./
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147
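For context on the dispute, the claim feature the rejection repeatedly maps to the prior art is thresholded OOD classification: generate a score estimating how far input data falls from the distribution of the training data, compare it to a predetermined threshold, and act differently depending on the outcome. The sketch below is a minimal illustration of that general pattern, assuming a Mahalanobis-style score over extracted features; the function names and the choice of scoring method are illustrative assumptions, not taken from the application or the cited Mohseni, Hsu, or Perera references.

```python
import numpy as np

def fit_training_stats(train_features):
    """Summarize the training distribution by its mean and inverse covariance."""
    mean = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    # Small ridge term keeps the covariance invertible.
    return mean, np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def ood_score(x, mean, cov_inv):
    """Mahalanobis distance of a feature vector from the training distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def classify(x, mean, cov_inv, threshold):
    """Label data OOD when its score exceeds the predetermined threshold."""
    if ood_score(x, mean, cov_inv) > threshold:
        return "out-of-distribution"
    return "in-distribution"

# Illustrative features drawn from a known training distribution.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 4))
mean, cov_inv = fit_training_stats(train)

print(classify(np.zeros(4), mean, cov_inv, threshold=3.0))       # input near the training mean
print(classify(np.full(4, 10.0), mean, cov_inv, threshold=3.0))  # input far from the training data
```

In practice the "different actions" the claims recite (e.g., generating a user notification) would branch on this same comparison; the threshold itself is a tuning choice, often set from score percentiles on held-out in-distribution data.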

Prosecution Timeline

Sep 12, 2022
Application Filed
Aug 22, 2025
Non-Final Rejection — §101, §103
Sep 18, 2025
Interview Requested
Sep 19, 2025
Interview Requested
Oct 07, 2025
Examiner Interview Summary
Oct 07, 2025
Applicant Interview (Telephonic)
Dec 03, 2025
Response Filed
Mar 06, 2026
Final Rejection — §101, §103
Apr 06, 2026
Applicant Interview (Telephonic)
Apr 06, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572720
METHODS AND APPARATUSES FOR RESOURCE-OPTIMIZED FERMIONIC LOCAL SIMULATION ON QUANTUM COMPUTER FOR QUANTUM CHEMISTRY
2y 5m to grant Granted Mar 10, 2026
Patent 12572723
METHODS AND APPARATUSES FOR RESOURCE-OPTIMIZED FERMIONIC LOCAL SIMULATION ON QUANTUM COMPUTER FOR QUANTUM CHEMISTRY
2y 5m to grant Granted Mar 10, 2026
Patent 12555023
REINFORCEMENT LEARNING EXPLORATION BY EXPLOITING PAST EXPERIENCES FOR CRITICAL EVENTS
2y 5m to grant Granted Feb 17, 2026
Patent 12530434
Classifying Data by Manipulating the Quantum States of Qubits
2y 5m to grant Granted Jan 20, 2026
Patent 12462173
QUANTUM CIRCUIT AND METHODS FOR USE THEREWITH
2y 5m to grant Granted Nov 04, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
25%
Grant Probability
32%
With Interview (+7.0%)
4y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 84 resolved cases by this examiner. Grant probability derived from career allow rate.
