Prosecution Insights
Last updated: April 19, 2026
Application No. 18/258,899

METHOD, DEVICE AND COMPUTER-READABLE MEDIUM FOR TRAINING MACHINE LEARNING MODEL PERFORMING COMPETENCY EVALUATION ON PLURALITY OF COMPETENCIES

Non-Final OA: §101, §103
Filed
Jun 22, 2023
Examiner
MAMILLAPALLI, PAVAN
Art Unit
2159
Tech Center
2100 — Computer Architecture & Software
Assignee
Genesis Lab Inc.
OA Round
1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80% (grants above average; +25.3% vs TC avg; 597 granted / 743 resolved)
Interview Lift: +17.2% (strong), on resolved cases with interview
Avg Prosecution (typical timeline): 3y 3m; 21 currently pending
Total Applications (career history): 764 across all art units

Statute-Specific Performance

§101: 24.1% (-15.9% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 743 resolved cases

Office Action

§101 §103
DETAILED ACTION

This Office Action is in response to Application No. 18/258,899, filed on June 22, 2023, in which claims 1-14 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2021-0010237, filed on January 25, 2021.

Status of Claims

Claims 1-14 are pending. Claims 1-14 are rejected under 35 U.S.C. 103, and claims 1-14 are also rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea; see Alice Corporation Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019).

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes. Claims 1-14 recite a method, a system, computer-readable media, and an apparatus, respectively. The analysis of claims 1, 13 and 14 is as follows:

Step 2A, prong one: Do claims 1, 13 and 14 recite an abstract idea, law of nature or natural phenomenon? 
Yes. The limitations of claims 1, 13 and 14, “a backbone artificial neural network module for deriving intermediate feature information from input data: and a sub-artificial neural network module for evaluating each competency from the intermediate feature information, the method comprising: a labeling learning step of training the backbone artificial neural network module and the sub-artificial neural network module for a specific competency so as to reduce an error between a first prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to the sub-artificial neural network module for the specific competency and labeling information for the learning input data”, as drafted, are algorithmic steps that can be performed by a programmed algorithm for training the neural network for a specific competency (acts of thinking and decision making). These limitations therefore fall within the mental processes grouping.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The judicial exception is not integrated into a practical application merely by being related to the technical field of computer science. Although the claim recites that the recited functionality includes an “apparatus”, these computer components are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. 
In addition, claims 1, 13 and 14 recite “a backbone artificial neural network module for deriving intermediate feature information from input data: and a sub-artificial neural network module for evaluating each competency from the intermediate feature information, the method comprising: a labeling learning step of training the backbone artificial neural network module and the sub-artificial neural network module for a specific competency so as to reduce an error between a first prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to the sub-artificial neural network module for the specific competency and labeling information for the learning input data”, which amounts to mere training of neural networks by applying machine learning (i.e., applying algorithmic logic); the computers that perform those functions and the programmable logic are recited at a high level of generality, do not impose a meaningful limitation on the judicial exception, and are insufficient to integrate the mental steps into a practical application. Although the claim recites the additional functionality “machine learning from input”, the training and normalization are also recited at a high level of generality and merely generally link to respective technological environments (e.g., training a neural network), and therefore likewise amount to no more than mere instructions to apply the exception using generic computer components and are insufficient to integrate the steps into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. The recitation in the preamble is insufficient to transform a judicial exception into a patentable invention because the preamble elements are recited at a high level of generality that simply links to a field of use; see MPEP 2106.05(h). 
The claimed extra-solution activity of using a trained machine learning model is acknowledged to be well-understood, routine, conventional activity (see, e.g., the court-recognized WURC examples in MPEP 2106.05(d)(II)). Similarly, the training, normalization and determining steps are also recited at a high level of generality and merely generally link to respective technological environments. The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. For the reasons above, claims 1, 13 and 14 are rejected as being directed to non-patentable subject matter under §101.

The analysis of claims 2-12 is as follows:

Step 2A, prong one: Do claims 2-12 recite an abstract idea, law of nature or natural phenomenon? 
Yes—the limitations of claim 2 “wherein output information of the sub-artificial neural network module includes score information for a corresponding competency and behavior index information for a behavior index in which the corresponding competency is found”, Claim 3 “wherein the input data includes text, and output information of the sub-artificial neural network module includes a corresponding competency or a position in which a behavior index related to the corresponding competency is found in the text” Claim 4 “wherein the input data includes video information or voice information with or without preprocessing, and output information of the sub-artificial neural network module includes time information or position in video information or voice information in which a corresponding competency or behavior index related to the corresponding competency is found” Claim 5 “a prediction labeling learning step of calculating a loss based on second prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to the sub- artificial neural network module for other competency different from the specific competency, and training a sub- artificial neural network module for other competency different from the specific competency or a sub-artificial neural network module and a backbone artificial neural network module for other competency different from the specific competency so as to reduce the loss” Claim 6 “wherein the prediction labeling learning step includes calculating the loss by considering uncertainty of a prediction result calculated from the second prediction information” Claim 7 “wherein the second prediction information includes probability information for a plurality of result classes, and the prediction labeling learning step includes calculating the loss by considering uncertainty in probability information of each 
of the result classes” Claim 8 “wherein the second prediction information includes a regression result value and deviation information, and the prediction labeling learning step includes calculating the loss by considering uncertainty derived from the deviation information” Claim 9 “wherein the prediction labeling learning step includes training a sub-artificial neural network module for other competency different from the specific competency or a sub-artificial neural network module and a backbone artificial neural network module for other competency different from the specific competency, so as to reduce an error between a second prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to a sub- artificial neural network module for other competency different from the specific competency and prediction labeling information generated in consideration of uncertainty of the second prediction information” Claim 10 “wherein the prediction labeling learning step includes excluding the prediction labeling learning step related to the second prediction information when uncertainty of a prediction result calculated from the second prediction information is a preset reference or more” Claim 11 “wherein the input data includes at least one of text, voice information, and video information with or without preprocessing, and the backbone artificial neural network module includes a single modal or multi-modal artificial neural network module” Claim 12 “wherein the input data includes tokenized text information and category information of a question related to the text information, and the machine learning model for performing the competency evaluation includes a model for evaluating competency based on at least one of past behaviors and attitudes“ as drafted, are algorithmic steps based on various processes can be performed by an 
algorithm for training the neural network (acts of thinking and decision making). These limitations therefore fall within the mental processes grouping.

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The judicial exception is not integrated into a practical application merely by being related to the technical field of computer science. Although the claim recites that the recited functionality includes a “device”, these computer components are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. In addition, the claims recite, in claim 2, “wherein output information of the sub-artificial neural network module includes score information for a corresponding competency and behavior index information for a behavior index in which the corresponding competency is found”, claim 3, “wherein the input data includes text, and output information of the sub-artificial neural network module includes a corresponding competency or a position in which a behavior index related to the corresponding competency is found in the text”, claim 4, “wherein the input data includes video information or voice information with or without preprocessing, and output information of the sub-artificial neural network module includes time information or position in video information or voice information in which a corresponding competency or behavior index related to the corresponding competency is found”, and claim 5, “a prediction labeling learning step of calculating a loss based on second prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to the sub-artificial neural network module for other competency different from the 
specific competency, and training a sub- artificial neural network module for other competency different from the specific competency or a sub-artificial neural network module and a backbone artificial neural network module for other competency different from the specific competency so as to reduce the loss” Claim 6 “wherein the prediction labeling learning step includes calculating the loss by considering uncertainty of a prediction result calculated from the second prediction information” Claim 7 “wherein the second prediction information includes probability information for a plurality of result classes, and the prediction labeling learning step includes calculating the loss by considering uncertainty in probability information of each of the result classes” Claim 8 “wherein the second prediction information includes a regression result value and deviation information, and the prediction labeling learning step includes calculating the loss by considering uncertainty derived from the deviation information” Claim 9 “wherein the prediction labeling learning step includes training a sub-artificial neural network module for other competency different from the specific competency or a sub-artificial neural network module and a backbone artificial neural network module for other competency different from the specific competency, so as to reduce an error between a second prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to a sub- artificial neural network module for other competency different from the specific competency and prediction labeling information generated in consideration of uncertainty of the second prediction information” Claim 10 “wherein the prediction labeling learning step includes excluding the prediction labeling learning step related to the second prediction information when uncertainty of a 
prediction result calculated from the second prediction information is a preset reference or more”, claim 11, “wherein the input data includes at least one of text, voice information, and video information with or without preprocessing, and the backbone artificial neural network module includes a single modal or multi-modal artificial neural network module”, and claim 12, “wherein the input data includes tokenized text information and category information of a question related to the text information, and the machine learning model for performing the competency evaluation includes a model for evaluating competency based on at least one of past behaviors and attitudes”, which amounts to mere training of neural networks by applying machine learning (i.e., applying algorithmic logic); the computers that perform those functions and the programmable logic are recited at a high level of generality, do not impose a meaningful limitation on the judicial exception, and are insufficient to integrate the mental steps into a practical application. Although the claim recites the additional functionality “machine learning from input”, the training and normalization are also recited at a high level of generality and merely generally link to respective technological environments (e.g., training a neural network), and therefore likewise amount to no more than mere instructions to apply the exception using generic computer components and are insufficient to integrate the steps into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. The recitation in the preamble is insufficient to transform a judicial exception into a patentable invention because the preamble elements are recited at a high level of generality that simply links to a field of use; see MPEP 2106.05(h). 
The claimed extra-solution activity of using a trained machine learning model is acknowledged to be well-understood, routine, conventional activity (see, e.g., the court-recognized WURC examples in MPEP 2106.05(d)(II)). Similarly, the training, normalization and determining steps are also recited at a high level of generality and merely generally link to respective technological environments. The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. For the reasons above, claims 2-12 are rejected as being directed to non-patentable subject matter under §101.

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. As per claim 14, a “computer-readable medium” is being cited. However, it appears that one of ordinary skill in the art could interpret the computer-readable medium as a signal medium, per se. 
As per the specification (paragraph 0190), a person of ordinary skill in the art would interpret the computer-readable medium to include program instructions embodied as an electrical pulse “signal”. Even though the claim recites that the computer-readable medium comprises at least one processor and at least one memory, this does not resolve the issue of the computer-readable medium being a signal per se.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al., US 2020/0134455 A1 (hereinafter ‘Choi’) (IDS 6/22/2023), in view of David Neale Carley, US 2022/0067181 A1 (hereinafter ‘Carley’).

As per claim 1, Choi discloses: A method for training a machine learning model (Choi: paragraph 0024: disclose an apparatus for training a deep learning model) performing competency evaluation on a plurality of competencies (Choi: paragraph 0018: disclose loss function ‘Loss functions (or cost functions) are essential for training neural networks, acting as a mathematical metric to measure the error between predicted outputs and ground truth labels’. Examiner equates competency evaluation to loss function and also examiner would discuss further in secondary art below) and performed on a computing device having at least one processor and at least one memory in which the machine learning model includes (Choi: paragraph 0008: disclose a computing device which includes one or more processors and a memory in which one or more programs to be executed by the one or more processors are stored): a backbone artificial neural network module (Choi: paragraph 0086: disclose generative model is used to train a feature block; examiner equates the backbone artificial neural network module to the generative model) for deriving intermediate feature (Choi: Fig. 
4 Element 430: disclose train domain block, which examiner equates domain block of extract first feature as deriving intermediate feature) information from input data (Choi: paragraph 0086: disclose plurality of learning data ‘input data’); and a sub-artificial neural network module for evaluating each competency from the intermediate feature information, the method comprising (Choi: paragraph 0087: Fig. 4 Element 440: disclose each learning data may be learning data regarding a different type of problem. Also, each learning data may include a plurality of learning samples on the pertinent problem and figure disclose the element 450 on training specialty block and paragraph 0088: disclose generative model may be a model that generates a sample data set by learning a probability distribution of the learning data. Examiner equates sub-artificial neural network to generative model). It is noted, however, Choi did not specifically detail the aspects of a labeling learning step of training the backbone artificial neural network module and the sub-artificial neural network module for a specific competency so as to reduce an error between a first prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to the sub-artificial neural network module for the specific competency and labeling information for the learning input data as recited in claim 1. On the other hand, Carley achieved the aforementioned limitations by providing mechanisms of a labeling learning step of training (Carley: paragraph 0005: disclose receiving a labeled dataset for use in training the machine learning model) the backbone artificial neural network module (Carley: paragraph 0034: disclose various training devices use for training data to train one or more implementations of machine learning algorithms. 
Examiner equates the backbone artificial neural network module to a deep learning algorithm, or other types of algorithms) and the sub-artificial neural network module (Carley: paragraph 0034: disclose various training devices use for training data to train one or more implementations of machine learning algorithms. Examiner equates the sub-artificial neural network module to a deep learning algorithm, or other types of algorithms) for a specific competency so as to reduce an error between a first prediction (Carley: paragraph 0044: disclose trained models be deployed for performing predictions using a new data set as input) information obtained by inputting intermediate feature information (Carley: paragraph 0040: disclose evaluation metrics can include quality metrics, such as an error rate of the machine learning model, a statistical distribution of the machine learning model, a latency of the machine learning model, a confidence level of the machine learning model), which is output by inputting learning input data for the specific competency (Carley: paragraph 0040: disclose evaluation metrics can include quality metrics) to the backbone artificial neural network module, to the sub-artificial neural network module for the specific competency and labeling information for the learning input data (Carley: paragraph 0051: disclose labeled data units and store them for future usage during, for example, training or testing of a machine learning model). Choi and Carley are analogous art because they are from the “same field of endeavor” and both from the same “problem-solving area”. Namely, they are both from the field of “Training Neural Network Systems”. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the systems of Choi and Carley because they are both directed to training neural network systems and both are from the same field of endeavor. 
The skilled person would therefore regard it as a normal option to include the restriction features of Carley with the method described by Choi in order to solve the problem posed. The motivation for doing so would have been for a deep learning model that can solve various types of problems and can be easily trained on a new problem (Carley: paragraph 0005). Therefore, it would have been obvious to combine Carley with Choi to obtain the invention as specified in instant claim 1. As per claim 2, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, Choi did not specifically detail the aspects of wherein output information of the sub-artificial neural network module includes score information for a corresponding competency and behavior index information for a behavior index in which the corresponding competency is found as recited in claim 2. On the other hand, Carley achieved the aforementioned limitations by providing mechanisms of wherein output information of the sub-artificial neural network module includes score information for a corresponding competency and behavior index information for a behavior index in which the corresponding competency is found (Carley: paragraph 0040: disclose evaluation metrics “competency” can include quality metrics ‘behavior Index’, such as an error rate of the machine learning model). As per claim 3, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, Choi did not specifically detail the aspects of wherein the input data includes text, and output information of the sub-artificial neural network module includes a corresponding competency or a position in which a behavior index related to the corresponding competency is found in the text as recited in claim 3. 
On the other hand, Carley discloses the aforementioned limitations, namely wherein the input data includes text (Carley, paragraph 0049: discloses receiving a corpus of unlabeled data; paragraph 0022: discloses that the raw data includes a text-sample data type), and output information of the sub-artificial neural network module includes a corresponding competency or a position in which a behavior index related to the corresponding competency is found in the text (Carley, paragraph 0042: discloses that machine learning metrics (e.g., training accuracy, epoch count, loss, accuracy, test loss, test accuracy, etc.) may also be sent to a user).

As per claim 4, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, that Choi does not specifically detail wherein the input data includes video information or voice information with or without preprocessing, and output information of the sub-artificial neural network module includes time information or a position in the video information or voice information in which a corresponding competency or a behavior index related to the corresponding competency is found, as recited in claim 4. On the other hand, Carley discloses these limitations, namely wherein the input data includes video information or voice information with or without preprocessing (Carley, paragraph 0059: discloses determining a safe trajectory for navigation based on sensor data such as LIDAR data, camera images ("video information"), RADAR data, or the like collected by the vehicle), and output information of the sub-artificial neural network module includes time information or a position in the video information or voice information in which a corresponding competency or a behavior index related to the corresponding competency is found (Carley, paragraph 0042: discloses that machine learning metrics (e.g., training accuracy, epoch count, loss, accuracy, test loss, test accuracy, etc.) may also be sent to a user).

As per claim 5, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, that Choi does not specifically detail a prediction labeling learning step of calculating a loss based on second prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to the sub-artificial neural network module for other competency different from the specific competency, and training a sub-artificial neural network module for other competency different from the specific competency, or a sub-artificial neural network module and a backbone artificial neural network module for other competency different from the specific competency, so as to reduce the loss, as recited in claim 5. On the other hand, Carley discloses these limitations, namely a prediction labeling learning step of calculating a loss based on second prediction information obtained by inputting intermediate feature information (Carley, paragraph 0059: discloses training a machine learning model for predicting future locations, trajectories, and/or actions of one or more objects in the environment of an autonomous vehicle to aid the vehicle in making navigation and control decisions), which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to the sub-artificial neural network module (Carley, paragraph 0034: discloses various training devices that use training data to train one or more implementations of machine learning algorithms; the Examiner equates the sub-artificial neural network module to a deep learning algorithm or other type of algorithm) for other competency different from the specific competency, and training a sub-artificial neural network module for other competency different from the specific competency or a sub-artificial neural network module and a backbone artificial neural network module (Carley, paragraph 0034: discloses various training devices that use training data to train one or more implementations of machine learning algorithms; the Examiner equates the sub-artificial neural network module to a deep learning algorithm or other type of algorithm) for other competency different from the specific competency so as to reduce the loss (Carley, paragraph 0040: discloses that evaluation metrics can include quality metrics, such as an error rate of the machine learning model, a statistical distribution of the machine learning model, a latency of the machine learning model, and a confidence level of the machine learning model).

As per claim 6, most of the limitations of this claim have been noted in the rejection of claims 1 and 5 above. It is noted, however, that Choi does not specifically detail wherein the prediction labeling learning step includes calculating the loss by considering uncertainty of a prediction result calculated from the second prediction information, as recited in claim 6.
On the other hand, Carley discloses these limitations, namely wherein the prediction labeling learning step (Carley, paragraph 0005: discloses receiving a labeled dataset for use in training the machine learning model) includes calculating the loss by considering uncertainty of a prediction result calculated from the second prediction information (Carley, paragraph 0059: discloses training a machine learning model for predicting future locations, trajectories, and/or actions of one or more objects in the environment of an autonomous vehicle to aid the vehicle in making navigation and control decisions).

As per claim 7, most of the limitations of this claim have been noted in the rejection of claims 1 and 5 above. It is noted, however, that Choi does not specifically detail wherein the second prediction information includes probability information for a plurality of result classes, and the prediction labeling learning step includes calculating the loss by considering uncertainty in the probability information of each of the result classes, as recited in claim 7. On the other hand, Carley discloses these limitations, namely wherein the second prediction information includes probability information for a plurality of result classes, and the prediction labeling learning step includes calculating the loss by considering uncertainty in the probability information of each of the result classes (Carley, paragraph 0059: discloses training a machine learning model for predicting future locations, trajectories, and/or actions of one or more objects in the environment of an autonomous vehicle to aid the vehicle in making navigation and control decisions).

As per claim 8, most of the limitations of this claim have been noted in the rejection of claims 1 and 5 above.
It is noted, however, that Choi does not specifically detail wherein the second prediction information includes a regression result value and deviation information, and the prediction labeling learning step includes calculating the loss by considering uncertainty derived from the deviation information, as recited in claim 8. On the other hand, Carley discloses these limitations, namely wherein the second prediction information includes a regression (Carley, paragraph 0034: discloses that a machine learning algorithm may include different types of algorithms, including implementations of a classification algorithm, a neural network algorithm, and a regression algorithm) result value and deviation information, and the prediction labeling learning step includes calculating the loss by considering uncertainty derived from the deviation information (Carley, paragraph 0059: discloses training a machine learning model for predicting future locations, trajectories, and/or actions of one or more objects in the environment of an autonomous vehicle to aid the vehicle in making navigation and control decisions).

As per claim 9, most of the limitations of this claim have been noted in the rejection of claims 1 and 5 above.
It is noted, however, that Choi does not specifically detail wherein the prediction labeling learning step includes training a sub-artificial neural network module for other competency different from the specific competency, or a sub-artificial neural network module and a backbone artificial neural network module for other competency different from the specific competency, so as to reduce an error between second prediction information obtained by inputting intermediate feature information, which is output by inputting learning input data for the specific competency to the backbone artificial neural network module, to a sub-artificial neural network module for other competency different from the specific competency, and prediction labeling information generated in consideration of uncertainty of the second prediction information, as recited in claim 9. On the other hand, Carley discloses these limitations, namely wherein the prediction labeling learning step includes training (Carley, paragraph 0059: discloses an autonomous vehicle where any faulty predictions because of malicious attacks could be fatal) a sub-artificial neural network module for other competency different from the specific competency or a sub-artificial neural network module and a backbone artificial neural network module for other competency different from the specific competency, so as to reduce an error between second prediction information obtained by inputting intermediate feature information (Carley, paragraph 0040: discloses that evaluation metrics can include quality metrics, such as an error rate of the machine learning model, a statistical distribution of the machine learning model, a latency of the machine learning model, and a confidence level of the machine learning model), which is output by inputting learning input data for the specific competency to the backbone artificial neural network module (Carley, paragraph 0034: discloses various training devices that use training data to train one or more implementations of machine learning algorithms; the Examiner equates the backbone artificial neural network module to a deep learning algorithm or other type of algorithm), to a sub-artificial neural network module (Carley, paragraph 0034: discloses various training devices that use training data to train one or more implementations of machine learning algorithms; the Examiner equates the sub-artificial neural network module to a deep learning algorithm or other type of algorithm) for other competency different from the specific competency, and prediction labeling information generated in consideration of uncertainty of the second prediction information (Carley, paragraph 0059: discloses that the model only allows limited access to data, such as labeled data and evaluation metrics, and provides an encrypted model repository with checkpoints).

As per claim 10, most of the limitations of this claim have been noted in the rejection of claims 1 and 5 above. It is noted, however, that Choi does not specifically detail wherein the prediction labeling learning step includes excluding the prediction labeling learning step related to the second prediction information when uncertainty of a prediction result calculated from the second prediction information is a preset reference or more, as recited in claim 10.
On the other hand, Carley discloses these limitations, namely wherein the prediction labeling learning step includes excluding the prediction labeling (Carley, paragraph 0059: discloses an autonomous vehicle where any faulty predictions because of malicious attacks could be fatal) learning step related to the second prediction information when uncertainty of a prediction result calculated from the second prediction information is a preset reference or more (Carley, paragraph 0059: discloses that the model only allows limited access to data, such as labeled data and evaluation metrics, and provides an encrypted model repository with checkpoints).

As per claim 11, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, that Choi does not specifically detail wherein the input data includes at least one of text, voice information, and video information with or without preprocessing, and the backbone artificial neural network module includes a single modal or multi-modal artificial neural network module, as recited in claim 11. On the other hand, Carley discloses these limitations, namely wherein the input data includes at least one of text, voice information, and video information with or without preprocessing (Carley, paragraph 0059: discloses determining a safe trajectory for navigation based on sensor data such as LIDAR data, camera images ("video information"), RADAR data, or the like collected by the vehicle), and the backbone artificial neural network module includes a single modal or multi-modal artificial neural network module (Carley, paragraph 0047: discloses a single computing device, or multiple distinct computing devices, such as computer servers, logically or physically grouped together to collectively operate as a server system).

As per claim 12, most of the limitations of this claim have been noted in the rejection of claim 1 above.
It is noted, however, that Choi does not specifically detail wherein the input data includes tokenized text information and category information of a question related to the text information, and the machine learning model for performing the competency evaluation includes a model for evaluating competency based on at least one of past behaviors and attitudes, as recited in claim 12. On the other hand, Carley discloses these limitations, namely wherein the input data includes tokenized text information and category information of a question related to the text information (Carley, paragraph 0049: discloses receiving a corpus of unlabeled data; paragraph 0022: discloses that the raw data includes a text-sample data type), and the machine learning model for performing the competency evaluation includes a model for evaluating competency based on at least one of past behaviors and attitudes (Carley, paragraph 0042: discloses that machine learning metrics (e.g., training accuracy, epoch count, loss, accuracy, test loss, test accuracy, etc.) may also be sent to a user).

As per claim 13, Choi discloses a device for training a machine learning model performing competency evaluation on a plurality of competencies and implemented by a computing device having at least one processor and at least one memory, wherein the machine learning model includes (Choi, paragraph 0008: discloses a computing device which includes one or more processors and a memory in which one or more programs to be executed by the one or more processors are stored). The remaining limitations of claim 13 are similar to the limitations of claim 1; therefore, the Examiner rejects these remaining limitations under the same rationale as the limitations rejected under claim 1.
As per claim 14, Choi discloses a computer-readable medium for implementing the method for training a machine learning model performing competency evaluation on a plurality of competencies and performed on a computing device having at least one processor and at least one memory, wherein the machine learning model includes (Choi, paragraph 0008: discloses that the computing device includes at least one processor and a computer-readable storage medium). The remaining limitations of claim 14 are similar to the limitations of claim 1; therefore, the Examiner rejects these remaining limitations under the same rationale as the limitations rejected under claim 1.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Pub. 2021/0192392 A1 discloses "Learning method for supervised learning executed by computer, involves labeling training data with label based on cluster generated by clustering, and executing supervised learning of decision tree by using labeled training data." US Pat. 10,921,755 B2 discloses "Method and system for competence monitoring and contiguous learning for control."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAVAN MAMILLAPALLI, whose telephone number is (571) 270-3836. The examiner can normally be reached M-F, 8am-4pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PAVAN MAMILLAPALLI/ Primary Examiner, Art Unit 2159
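To make the disputed claim language concrete, the training scheme recited in claims 5-10 (a shared backbone producing intermediate features, per-competency sub-network heads, and a prediction labeling step whose loss is weighted by prediction uncertainty and skipped when that uncertainty meets a preset reference) can be sketched in plain Python. Every function name, formula, and dimension below is an illustrative assumption for readability, not the applicant's disclosed implementation:

```python
import math
import random

random.seed(0)

def backbone(x):
    # Stand-in for the backbone artificial neural network module:
    # a fixed random projection from input data to feature space.
    return [sum(xi * w for xi, w in zip(x, row)) for row in BACKBONE_W]

def sub_network(features, head_w):
    # Stand-in for a per-competency sub-artificial neural network module:
    # a linear layer followed by softmax over the result classes.
    logits = [sum(f * w for f, w in zip(features, row)) for row in head_w]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    # One plausible "uncertainty of a prediction result" (claims 6-7):
    # Shannon entropy of the class probabilities, normalized to [0, 1].
    h = -sum(p * math.log(p + 1e-12) for p in probs)
    return h / math.log(len(probs))

def prediction_labeling_loss(probs, threshold=0.95):
    # Pseudo-label cross-entropy against the head's own argmax class,
    # down-weighted by confidence (1 - entropy).  Returns None when the
    # uncertainty is the preset reference or more, i.e. the sample is
    # excluded from the prediction labeling learning step (claim 10).
    u = entropy(probs)
    if u >= threshold:
        return None
    pseudo_class = probs.index(max(probs))
    return -(1.0 - u) * math.log(probs[pseudo_class] + 1e-12)

# Toy dimensions: 4 input features, 3-dim backbone features,
# two competencies with 3 result classes each.
BACKBONE_W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
HEADS = {c: [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
         for c in ("competency_A", "competency_B")}

x = [0.2, -0.5, 1.0, 0.3]      # learning input data for competency_A
feats = backbone(x)             # intermediate feature information
for name, head_w in HEADS.items():
    probs = sub_network(feats, head_w)  # "second prediction information"
    print(name, "loss:", prediction_labeling_loss(probs))
```

Under this reading, the examiner's cited Carley passages (autonomous-vehicle trajectory prediction and generic training metrics) arguably do not reach the uncertainty-weighted loss or the threshold-based exclusion, which is where arguments distinguishing claims 6-10 would likely focus.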

Prosecution Timeline

Jun 22, 2023
Application Filed
Feb 04, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602389
RECOMMENDATION WORD DETERMINATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12603155
METHODS FOR COMPRESSION OF MOLECULAR TAGGED NUCLEIC ACID SEQUENCE DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12601597
GENERATING, FROM DATA OF FIRST LOCATION ON SURFACE, DATA FOR ALTERNATE BUT EQUIVALENT SECOND LOCATION ON THE SURFACE
2y 5m to grant Granted Apr 14, 2026
Patent 12602503
GENERATING, FROM DATA OF FIRST LOCATION ON SURFACE, DATA FOR ALTERNATE BUT EQUIVALENT SECOND LOCATION ON THE SURFACE
2y 5m to grant Granted Apr 14, 2026
Patent 12591580
CONFIDENCE FABRIC ENHANCED PRIVACY-PRESERVING DATA AGGREGATION
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
98%
With Interview (+17.2%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 743 resolved cases by this examiner. Grant probability derived from career allow rate.
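The headline projections appear to be simple arithmetic over the examiner's career counts. Assuming (a guess at the dashboard's method, not documented behavior) that grant probability is the rounded career allow rate and the with-interview figure adds the reported +17.2-point lift before rounding:

```python
# Hypothetical reconstruction of the dashboard figures (assumed, not
# the product's actual model).
granted, resolved = 597, 743   # examiner's career grants / resolved cases
interview_lift = 17.2          # reported lift, in percentage points

allow_rate = 100 * granted / resolved          # about 80.3%
grant_probability = round(allow_rate)
with_interview = round(allow_rate + interview_lift)

print(grant_probability, with_interview)       # matches the 80% / 98% shown
```

If the lift were instead applied multiplicatively or conditioned on case mix, the numbers would differ slightly; the additive reading happens to reproduce both displayed values.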
