Prosecution Insights
Last updated: April 19, 2026
Application No. 18/346,532

TRAINING OF MACHINE-LEARNING ALGORITHM USING EXPLAINABLE ARTIFICIAL INTELLIGENCE

Status: Non-Final OA (§103)
Filed: Jul 03, 2023
Examiner: HADDAD, MAJD MAHER
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Infineon Technologies AG
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg) — grants only 0% of cases
Interview Lift: +0.0% (minimal lift among resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 21 across all art units (21 currently pending)

Statute-Specific Performance

§101: 36.1% (-3.9% vs TC avg)
§103: 44.6% (+4.6% vs TC avg)
§102: 1.2% (-38.8% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on August 17th and 25th, 2023 was filed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 6-8, and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Sivakumar (US 20230130588 A1) in view of Bach (“On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation”, 2014) and Ren (“Learning to Reweight Examples for Robust Deep Learning”, 2018).

Regarding claim 1, Sivakumar teaches [a] method of training of a machine-learning algorithm, the method comprising (Paragraph 23 of Sivakumar, “The present disclosure provides systems and methods for training a student neural network to detect one or more objects based on radar data and lidar data.”): obtaining a training dataset comprising multiple training feature vectors and associated ground-truth labels, the multiple training feature vectors representing respective radar measurement datasets (Paragraph 25, “… the radar sensor 20 includes a radar emitter system that emits millimeter waves and a radar receiver system that obtains one or more radar echoes associated with the objects in the surrounding environment and generates radar data associated with the contours and ranges of the detected objects.
”, Paragraph 35, “The radar feature extraction module 71 extracts one or more radar-based features based on the radar input … the one or more radar-based features are vectors that represent … whether a given portion of the radar input corresponds to an edge or contour of an object.”, Paragraphs 43 and 44, “the loss module 100 is configured to determine a loss value … based on … ground truth bounding boxes stored in the ground truth database 110. In one form, each of the ground truth bounding boxes correspond to known object types … the loss module 100 determines the loss value by performing a bounding box refinement loss … of the student neural network 80 based on the loss value … the bounding box refinement loss routine may output a loss value that is a function of a difference between … the proposed bounding boxes (as defined by the feature vector) output by the teacher neural network 70 and the … corresponding ground truth bounding box.” Sivakumar teaches radar feature extraction that produces feature vectors and uses ground-truth bounding boxes during training, where the feature vectors correspond to the training feature vectors based on radar data.); and training the machine-learning algorithm based on loss values that are determined based on a difference between respective classification predictions made by the machine-learning algorithm … and the ground-truth labels (Paragraphs 43 and 44, “the loss module 100 is configured to determine a loss value … based on … ground truth bounding boxes stored in the ground truth database 110.
In one form, each of the ground truth bounding boxes correspond to known object types … the loss module 100 determines the loss value by performing a bounding box refinement loss … of the student neural network 80 based on the loss value … the bounding box refinement loss routine may output a loss value that is a function of a difference between … the proposed bounding boxes (as defined by the feature vector) output by the teacher neural network 70 and the … corresponding ground truth bounding box.” Sivakumar teaches a loss module that computes a loss value based on the differences between the network output and the ground-truth bounding boxes.).

Sivakumar does not teach determining, for each one of the multiple training feature vectors, a respective weighting factor by employing an explainable artificial-intelligence analysis of the machine-learning algorithm in a current training state, … for each one of the multiple training feature vectors …, and the loss values are weighted using the respective weighting factors associated with each training feature vector.

Bach, in the same field of endeavor, teaches determining … by employing an explainable artificial-intelligence analysis of the machine-learning algorithm in a current training state (Page 3, Figure 1 Caption, “In the classification step the image is converted to a feature vector representation and a classifier is applied to assign the image to a given category, e.g., “cat” or “no cat” … Our method decomposes the classification output f(x) into sums of feature and pixel relevance scores. The final relevances visualize the contributions of single pixels to the prediction.
”, Page 2, Pixel-wise Decomposition as a General Concept, “We are interested to find out the contribution of each input pixel x(d) of an input image x to a particular prediction f(x)… One possible way is to decompose the prediction f(x) as a sum of terms of the separate input dimensions xd respectively pixels: The qualitative interpretation is that Rd < 0 contributes evidence against the presence of a structure which is to be classified while Rd > 0 contributes evidence for its presence.”, Page 5, Layer-wise Relevance Propagation section, “… relevance R is, namely, the local contribution to the prediction function f(x).” Bach teaches creating an explainable AI analysis of a trained classifier by decomposing the neural network output into relevance values for input features, where the relevance scores represent the contributions of the features to the prediction and correspond to the weighting values derived from the explainability analysis of the model.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Sivakumar’s radar-based machine learning training framework with Bach’s relevance propagation explainable AI analysis in order to improve the interpretability and understanding of the training process in black box models (Introduction of Bach).

Sivakumar and Bach do not teach for each one of the multiple training feature vectors, a respective weighting factor … in the current training state for each one of the multiple training feature vectors … wherein the loss values are weighted using the respective weighting factors associated with each training feature vector.
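As a minimal illustrative sketch of the layer-wise relevance propagation Bach describes (not Bach's or the applicant's actual implementation): the two-layer network, its weights, and the epsilon stabilizer below are hypothetical choices, used only to show how a prediction f(x) decomposes into per-feature relevance scores that sum back to f(x).

```python
import numpy as np

def lrp_epsilon(x, W1, W2, eps=1e-6):
    """Epsilon-rule layer-wise relevance propagation (sketch) for a
    two-layer network f(x) = W2 @ relu(W1 @ x). Returns the prediction
    and per-input-feature relevance scores that sum to ~f(x)."""
    a1 = np.maximum(W1 @ x, 0.0)            # hidden activations
    out = W2 @ a1                           # scalar prediction f(x)
    # Distribute the output relevance onto hidden units proportionally
    # to their contributions z_j = w_j * a_j.
    z2 = W2 * a1
    r1 = z2 * out / (z2.sum() + eps)
    # Propagate each hidden unit's relevance back onto the inputs.
    r0 = np.zeros_like(x)
    for j in range(len(a1)):
        zj = W1[j] * x                      # input contributions to unit j
        r0 += zj * r1[j] / (zj.sum() + eps)
    return out, r0
```

The conservation property (relevances sum to the prediction, up to the epsilon stabilizer) is what makes the scores interpretable as per-feature contributions.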
Ren, in the same field of endeavor, teaches determining, for each one of the multiple training feature vectors, a respective weighting factor … in the current training state for each one of the multiple training feature vectors … wherein the loss values are weighted using the respective weighting factors associated with each training feature vector (Page 3, Section 3.1, “we aim to minimize the expected loss for the training set … we aim to learn a reweighting of the inputs, where we minimize a weighted loss:”, Page 2, Introduction, “… we perform validation at every training iteration to … determine the example weights of the current batch … we propose an online reweighting method that … assigns importance weights to examples in every iteration.” Ren teaches computing example weights during training iterations. The importance weights are assigned to training examples in each batch based on the current state of the learning process. Each training example is assigned an importance weight when the minimization of the loss function occurs, and the loss values are multiplied by these weights during optimization.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Sivakumar and Bach’s teachings with Ren’s reweighting technique in order to weight the loss values associated with the training feature vectors, thereby improving the training and reducing the influence of noisy training examples (Introduction of Ren).

Regarding claim 2, Sivakumar teaches … and for a class indicated by the ground-truth label (Paragraph 43, “the loss module 100 is configured to determine a loss value … based on … ground truth bounding boxes stored in the ground truth database 110. In one form, each of the ground truth bounding boxes correspond to known object types …”). Sivakumar does not teach the remainder of the claim limitations.
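The per-example weighted loss in the spirit of Ren's reweighting can be sketched as follows. This is a simplification: the squared-error loss is an arbitrary choice, and the weights are supplied externally here, whereas Ren meta-learns them from a validation set at every iteration.

```python
import numpy as np

def weighted_loss(predictions, labels, weights):
    """Per-example squared-error losses scaled by importance weights
    (Ren-style weighted-loss sketch; weights are assumed given)."""
    per_example = (predictions - labels) ** 2   # individual loss values
    return float(np.sum(weights * per_example)) # weighted training loss

def normalize_weights(raw):
    """Clamp raw weights to non-negative and normalize to sum to 1,
    as Ren does within a batch, guarding the all-zero case."""
    w = np.maximum(raw, 0.0)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))
```

In the claimed method, the raw weights would come from the explainability analysis rather than from a validation set; the weighted sum itself is the same.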
Bach, in the same field of endeavor, teaches determining … for each one of the multiple training feature vectors comprises, for each one of the multiple training feature vectors: determining, for the respective training feature vector, an associated further feature relevance vector using the explainable artificial-intelligence analysis (Page 2, Pixel-wise Decomposition as a General Concept, “We are interested to find out the contribution of each input pixel x(d) of an input image x to a particular prediction f(x)… One possible way is to decompose the prediction f(x) as a sum of terms of the separate input dimensions xd respectively pixels:”, Page 4, Layer-wise propagation, “The idea is to find a Relevance score R_d for each dimension z_d of the vector z at the next layer l”, see Figure 1. Bach starts off by inputting the feature vector of the image into a neural network to produce a classification output f(x), and then performs an explainable AI analysis that decomposes the prediction f(x) into a set of relevance values R_d that correspond to individual feature dimensions of the input vector, as seen in the image. The prediction is represented as the sum of the feature relevance scores, as seen in Equation 1. The set of relevance values forms a further feature relevance vector indicating the contribution of each feature of the input vector to the prediction.), the respective associated further feature relevance vector comprising further feature relevance values indicative of a contribution of the features of the respective training feature vector to the classification prediction made by the machine-learning algorithm in the current training state … (Page 3, Pixel-wise Decomposition as a General Concept, “The qualitative interpretation is that Rd < 0 contributes evidence against the presence of a structure which is to be classified while Rd > 0 contributes evidence for its presence.
”, see Figure 1 Caption, “In the classification step the image is converted to a feature vector representation and a classifier is applied to assign the image to a given category, e.g., “cat” or “no cat”. Note that the computation of the feature vector usually involves the usage of several intermediate representations. Our method decomposes the classification output f(x) into sums of feature and pixel relevance scores. The final relevances visualize the contributions of single pixels to the prediction.”), wherein the weighting factors are further determined based on a combination of the feature relevance vectors with the respective further feature relevance vectors (Page 4, Layer-wise relevance propagation, “The idea is to find a Relevance score R_d for each dimension z_d of the vector z at the next layer l … The underlying Formula (2) can be interpreted as a conservation law for the relevance R in between layers of the feature processing.” Bach teaches that the relevance values are propagated across layers of the neural network, where the relevance scores for one layer are computed based on the relevance scores from the previous layer. This propagation combines relevance information from multiple relevance vectors corresponding to different feature representations.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Sivakumar’s radar-based machine learning training framework with Bach’s determination of feature importance values in order to improve the interpretability of the model predictions (Introduction of Bach).

Regarding claim 3, Sivakumar teaches and for a class indicated by the ground-truth label (Paragraph 43, “the loss module 100 is configured to determine a loss value … based on … ground truth bounding boxes stored in the ground truth database 110.
In one form, each of the ground truth bounding boxes correspond to known object types …”). Sivakumar does not teach the remaining claim limitations.

Bach, in the same field of endeavor, teaches determining … for each one of the multiple training feature vectors comprises, for each one of the multiple training feature vectors: determining, for the respective training feature vector, an associated further feature relevance vector using the explainable artificial-intelligence analysis (Page 2, Pixel-wise Decomposition as a General Concept, “We are interested to find out the contribution of each input pixel x(d) of an input image x to a particular prediction f(x)… One possible way is to decompose the prediction f(x) as a sum of terms of the separate input dimensions xd respectively pixels:”, Page 4, Layer-wise propagation, “The idea is to find a Relevance score R_d for each dimension z_d of the vector z at the next layer l”, see Figure 1.), the respective associated further feature relevance vector comprising further feature relevance values indicative of a contribution of the features of the respective training feature vector to the classification prediction made by the machine-learning algorithm in the current training state … (Page 3, Pixel-wise Decomposition as a General Concept, “The qualitative interpretation is that Rd < 0 contributes evidence against the presence of a structure which is to be classified while Rd > 0 contributes evidence for its presence.”, see Figure 1 Caption, “In the classification step the image is converted to a feature vector representation and a classifier is applied to assign the image to a given category, e.g., “cat” or “no cat”. Note that the computation of the feature vector usually involves the usage of several intermediate representations. Our method decomposes the classification output f(x) into sums of feature and pixel relevance scores. The final relevances visualize the contributions of single pixels to the prediction.
”), wherein the weighting factors are further determined based on a combination of the feature relevance vectors with the respective further feature relevance vectors (Page 4, Layer-wise relevance propagation, “The idea is to find a Relevance score R_d for each dimension z_d of the vector z at the next layer l … The underlying Formula (2) can be interpreted as a conservation law for the relevance R in between layers of the feature processing.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Sivakumar’s radar-based machine learning training framework with Bach’s determination of feature importance values in order to improve the interpretability of the model predictions (Introduction of Bach).

Regarding claim 6, Sivakumar teaches based on the multiple training feature vectors (Paragraph 35, “The radar feature extraction module 71 extracts one or more radar-based features based on the radar input. In one form, the one or more radar-based features are vectors” Sivakumar teaches generating radar-based feature vectors from radar inputs, which are then used for training a neural network.), determining an augmented training dataset comprising one or more augmented training feature vectors by applying, to the training feature vectors, at least one data transformation associated with a physical observable (Paragraph 7, “In one form, the first augmentation routine is a translation routine, a rotation routine, a scaling routine, a flipping routine, or a combination thereof.”, Paragraph 30, “the augmentation module 60 performs one or more augmentation routines on the radar-based intensity map and the lidar-based intensity map to generate a radar input and a lidar input.” Sivakumar teaches an augmentation module that applies augmentation routines such as translation, rotation, scaling, and flipping to radar/lidar data, which represents the sensor measurements of the physical environment.
The augmentation routine is performed on radar data to produce refined/altered data, from which the extraction module extracts features to produce augmented training feature vectors.), wherein the training of the machine-learning algorithm is further performed based on the augmented training dataset (Paragraph 48, “… the loss module 100 determines a loss value of the student-based bounding boxes and updates the weights of the student neural network 80 based on the loss values.” Sivakumar teaches determining loss values and updating the neural network weights based on the processed radar inputs generated after augmentation. After the radar data is transformed and extracted into augmented feature vectors, the result is used as input to the neural network, which is consequently trained using the loss module.).

Regarding claim 7, Sivakumar teaches the ground-truth label is invariant with respect to the at least one data transformation (Paragraph 43, “the loss module 100 is configured to determine a loss value of the plurality of student-based bounding boxes based on the plurality of teacher-based bounding boxes and a plurality of ground truth bounding boxes stored in the ground truth database 110.” The loss calculation of Sivakumar uses stored ground truth bounding boxes in the process of calculating the loss function to update the neural network.)
; and further ground-truth labels of the augmented training dataset correspond to the respective ground-truth labels of the training dataset (Paragraph 8, “using the teacher neural network, the plurality of teacher-based bounding boxes based on the radar input and the lidar input further comprises extracting one or more radar-based features based on the radar input … generating a plurality of radar-based proposed bounding boxes”, Paragraph 44, “the bounding box refinement loss routine may output a loss value that is a function of a difference between … proposed bounding boxes (as defined by the feature vector) output by the teacher neural network 70 and the … corresponding ground truth bounding box.” Sivakumar teaches generating proposed bounding boxes from radar feature vectors and, as a result, comparing them to the corresponding stored ground truth bounding boxes.).

Regarding claim 8, Sivakumar teaches the at least one data transformation comprises a shift of a range observable (Paragraph 31, “… the “translation routine” refers to shifting at least one of an X-coordinate, a Y-coordinate, and a Z-coordinate of the radar data points of the radar-based intensity map and/or the lidar data points of the lidar-based intensity map by a respective translation value.” The translation routine shifts the spatial coordinates of the radar data points that represent objects detected by the radar sensor.).

Regarding claim 10, Sivakumar teaches the at least one data transformation comprises addition of Gaussian measurement noise (Paragraph 7, “In one form, the first augmentation routine is a translation routine, a rotation routine, a scaling routine, a flipping routine, or a combination thereof.”, Paragraph 32, “the second augmentation routine is a noise augmentation routine, which may include a Gaussian noise function or other noise function configured to add noise to the lidar data points.
” Sivakumar teaches applying one or more augmentation routines, meaning that the first augmentation routine can be performed (producing the transformed data) and the second one (adding Gaussian noise) performed on the result of the first.).

Regarding claim 11, Sivakumar teaches the training is performed jointly based on the training dataset and the augmented training dataset (Paragraph 30, “the augmentation module 60 performs one or more augmentation routines … to generate a radar input and a lidar input.”, Paragraph 47, “the teacher neural network 70 generates … student-based bounding boxes based on the radar input and the lidar input.” The augmented radar and lidar inputs are generated using augmentation routines, and those inputs are used to generate predictions and compute loss values during the network training.).

Regarding claim 13, Sivakumar teaches [a] processing device configured to train a machine-learning algorithm, the processing device comprising at least one processor configured to… (Paragraph 24, “It should be readily understood that any one of the components of the training system 10 can be provided at the same location or distributed at different locations (e.g., via one or more edge computing devices) and communicably coupled accordingly.”) The remainder of claim 13 recites identical limitations to claim 1. Therefore, claim 13 is rejected using the same rationale as claim 1.

Claims 14-17 recite identical limitations to claims 2-5. Therefore, claims 14-17 are rejected using the same rationale as claims 2-5.
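The augmentation routines cited for claims 6-11 (coordinate translation and Gaussian noise, applied singly or chained) can be sketched as follows. Function names, the point-cloud representation, and the sigma value are illustrative assumptions, not Sivakumar's actual code.

```python
import numpy as np

def translate(points, dx=0.0, dy=0.0, dz=0.0):
    """Translation routine: shift the X/Y/Z coordinates of radar data
    points (an N x 3 array) by fixed offsets."""
    return points + np.array([dx, dy, dz])

def add_gaussian_noise(points, sigma=0.05, rng=None):
    """Noise augmentation routine: add zero-mean Gaussian measurement
    noise (sigma is a hypothetical parameter choice)."""
    if rng is None:
        rng = np.random.default_rng(0)
    return points + rng.normal(0.0, sigma, size=points.shape)
```

Chaining mirrors the "one or more augmentation routines" language, e.g. `add_gaussian_noise(translate(points, dx=1.0))`; the ground-truth label is left unchanged, matching claim 7's invariance limitation.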
Regarding claim 18, Sivakumar teaches [a] non-transitory computer readable medium with instructions stored thereon, wherein the instructions, when executed by at least one processor, enable the at least one processor to perform the steps of… (Paragraph 10, “The system includes one or more processors and one or more nontransitory computer-readable mediums storing instructions that are executable by the one or more processors”) The remainder of claim 18 recites identical limitations to claim 1. Therefore, claim 18 is rejected using the same rationale as claim 1.

Claims 4 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Sivakumar (US 20230130588 A1) in view of Bach (“On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation”, 2014), Ren (“Learning to Reweight Examples for Robust Deep Learning”, 2018), and Avik (EP 4134924 A1).

Regarding claim 4, Sivakumar teaches the combination of the feature relevance vectors with the respective further feature relevance vectors comprises (Paragraphs 43 and 44, “the loss module 100 is configured to determine a loss value … based on … ground truth bounding boxes stored in the ground truth database 110. In one form, each of the ground truth bounding boxes correspond to known object types … the loss module 100 determines the loss value by performing a bounding box refinement loss … of the student neural network 80 based on the loss value … the bounding box refinement loss routine may output a loss value that is a function of a difference between … the proposed bounding boxes (as defined by the feature vector) output by the teacher neural network 70 and the … corresponding ground truth bounding box.”): Sivakumar does not teach an absolute value of a mean subtraction of the further feature… vector from the feature… vector; or an absolute value of a mean subtraction of the feature… vector from the further feature… vector.
Avik, in the same field of endeavor, teaches an absolute value of a mean subtraction of the further feature … vector from the feature … vector; or an absolute value of a mean subtraction of the feature … vector from the further feature … vector (Page 17, Paragraph 4, “The reconstruction loss 191 aims to minimize the difference between the reconstructed images and the label images … As a metric the mean squared error defines as LMSE … the feature embedding 149 of an input sample is modeled as a multivariate Gaussian distributed random variable X” Avik teaches measuring a difference between a predicted output vector and the ground truth using the mean squared error, which computes the mean of the squared differences between the predicted output and the ground truth label. Squaring the differences measures the magnitude of the difference between vectors and, like taking an absolute value, always makes the result positive.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Sivakumar’s radar-based machine learning training framework with Avik’s metric for calculating differences between the predicted result and the ground truth label in order to effectively train the model to make better predictions (Page 3, Paragraph 3 of Summary of Avik).

Regarding claim 9, Sivakumar teaches the at least one data transformation comprises a frequency-flip of a … observable (Paragraph 31, “… the “flipping routine” refers to adjusting a sign of at least one of an X-coordinate, a Y-coordinate, and a Z-coordinate of the radar data points of the radar-based intensity map”). Sivakumar does not teach a Doppler observable.

Avik, in the same field of endeavor, teaches a Doppler observable (Page 7, second to last paragraph, “… which associated two positional observables, e.g., range and Doppler, or a positional observable and time, e.g., range and time.
”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Sivakumar’s radar-based machine learning training framework with Avik’s Doppler observable radar processing framework in order to improve the robustness of radar-based feature extraction and training (Page 2, Background of Avik).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sivakumar (US 20230130588 A1) in view of Bach (“On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation”, 2014), Ren (“Learning to Reweight Examples for Robust Deep Learning”, 2018), and Mankodiya (“OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles”, May 2022).

Regarding claim 5, Sivakumar does not teach the combination of the feature relevance vectors with the respective further feature relevance vectors comprises an absolute value of an IoU combination of the feature relevance vector and the further feature relevance vector. Mankodiya, in the same field of endeavor, teaches the combination of the feature relevance vectors with the respective further feature relevance vectors comprises an absolute value of an IoU combination of the feature relevance vector and the further feature relevance vector (Section 6.1.1, “IOU, also known as the Jaccard index, is the most commonly used metric for measuring the similarity between two arbitrary shapes [44]. IOU can also be used as a loss function so it can be backpropagated … pixel-wise mapping was done between a ground truth segmentation map (G) and a model-predicted segmentation map (P). Note: the semantic maps ‘G’ and ‘P’ are image matrices of the same shape.” Mankodiya teaches computing an intersection-over-union (IoU) similarity value between a predicted feature map and a ground truth feature map, where the similarity measures the overlap between the two representations.).
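The IoU (Jaccard index) similarity that Mankodiya computes between a ground-truth map G and a predicted map P can be sketched as follows; representing the maps as boolean masks and the handling of the empty-union case are assumptions of this sketch.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union (Jaccard index) of two same-shape
    boolean maps, e.g. a ground-truth and a predicted segmentation."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    # Two empty maps overlap trivially; treat that as full similarity.
    return inter / union if union else 1.0
```

Because IoU is already in [0, 1], the "absolute value of an IoU combination" in claim 5 does not change its magnitude; it only fixes the sign convention.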
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Sivakumar’s radar-based model training framework with Mankodiya’s IoU similarity metric in order to update the model and improve performance (Introduction of Mankodiya).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Sivakumar (US 20230130588 A1) in view of Bach (“On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation”, 2014), Ren (“Learning to Reweight Examples for Robust Deep Learning”, 2018), and Sharma (US 20210042645 A1).

Regarding claim 12, Sivakumar does not teach after training of the machine-learning algorithm: obtaining a federated training dataset comprising multiple federated training feature vectors and associated ground-truth labels; and retraining the machine-learning algorithm jointly based on the federated training dataset and at least one of the training dataset or the augmented training dataset.

Sharma, in the same field of endeavor, teaches after training of the machine-learning algorithm: obtaining a federated training dataset comprising multiple federated training feature vectors and associated ground-truth labels (Paragraph 53, “A federated training … comprises a plurality of models … a plurality of training datasets (e.g., training datasets 552) … Training datasets in the plurality of training datasets are annotated with ground truth labels to train the models.
”); and retraining the machine-learning algorithm jointly based on the federated training dataset and at least one of the training dataset or the augmented training dataset (Paragraph 53, “train the models on the matched training datasets … the gradients generated based on computing error between predictions by the models on the training datasets and the ground truth labels”, Paragraph 55, “The runtime intermediary creates a secure tunnel to receive the models, and the secure tunnel prevents the model servers from accessing the training datasets.” The models of Sharma are trained using multiple matched training datasets within the federated system. The gradients are used to update the model parameters after the models make their predictions.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Sivakumar’s radar-based model training framework with Sharma’s federated training framework in order to improve the robustness and generalization of the trained model (Paragraph 4 of Sivakumar).

Claims 19-20 recite identical limitations to claims 6-7. Therefore, claims 19-20 are rejected using the same rationale as claims 6-7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAJD MAHER HADDAD, whose telephone number is (571) 272-2265. The examiner can normally be reached Mon-Friday 8-5 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.M.H./ Examiner, Art Unit 2125
/KAMRAN AFSHAR/ Supervisory Patent Examiner, Art Unit 2125

Prosecution Timeline

Jul 03, 2023: Application Filed
Mar 17, 2026: Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
