DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed 12/23/2025 have been entered.
Claims 1-11 and 13-19 remain pending in the application.
The amendments filed 12/23/2025 are sufficient to overcome each and every objection previously set forth in the Non-Final Office Action mailed 09/23/2025. The objections have been withdrawn.
The amendments filed 12/23/2025 are sufficient to overcome the 35 U.S.C. 112(b) rejections previously set forth in the Non-Final Office Action mailed 09/23/2025. The rejections have been withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-9, 11, 13, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Baker et al. (“ACCELERATING NEURAL ARCHITECTURE SEARCH USING PERFORMANCE PREDICTION”), hereafter Baker, in view of Dong et al. (“Network Pruning via Transformable Architecture Search”), hereafter Dong.
Regarding claim 1, Baker discloses:
A learning apparatus comprising a processing circuit, wherein the processing circuit: (Baker, page 5, penultimate paragraph, line 3 “On 1 core of a Intel 6700k CPU”, and page 3, section 3.2, paragraph 1, lines 1-3 “We experiment with small and very deep CNNs (e.g., ResNet, Cuda-Convnet) trained on image classification datasets and with LSTMs trained with Penn Treebank (PTB), a language modeling dataset.”),
acquires first sequence data representing transition of inference performance according to a training progress of a first machine learning model trained in accordance with a first training parameter value concerning a specific training condition (Baker, page 3, third paragraph, lines 1-2 “We use features based on time-series (TS) validation accuracies, architecture parameters (AP), and hyperparameters (HP).” teaches acquiring time-series validation accuracies as first sequence data representing transition of inference performance according to a training progress of a first machine learning model trained with architecture parameters and hyperparameters used to train the neural network),
performs iterative learning of a second machine learning model in accordance with a second training parameter value concerning the specific training condition and changes the second training parameter value based on the inference performance of the second machine learning model and the first sequence data in a training process of the second machine learning model (Baker, page 3, second paragraph, lines 1-5, “
[quotation reproduced as image: media_image1.png]
” teaches performing iterative learning of a regression model in accordance with a second training parameter value concerning the specific training condition and changing the second training parameter value based on the inference performance of the second machine learning model and the first sequence data in a training process of the second machine learning model),
wherein the second machine learning model is … configured with a plurality of model architectures corresponding to a plurality of calculation costs, respectively (Baker, page 3, second paragraph, lines 4-5 “we train T -1 regression models, where each successive model uses one more point of the time-series validation data.”, second paragraph, lines 1-4 “model the validation accuracy yT of a neural network configuration …We train a population of n configurations”, and page 3, third paragraph, last 3 lines “ (3) HP: These include all hyperparameters used for training the neural networks, e.g., initial learning rate and learning rate decay” teaches the regression model to be configured with a plurality of different neural network architectures having calculation costs such as learning rate and learning rate decay),
the first machine learning model has a specific model architecture corresponding to a specific calculation cost in the plurality of model architectures (Baker, Table 2, algorithm 1, and page 3, section 3.1, third paragraph, last 3 lines “AP: These include total number of weights and number of layers. (3) HP: These include all hyperparameters used for training the neural networks, e.g., initial learning rate and learning rate decay” teaches the first machine learning model to have a specific model architecture corresponding to a specific calculation cost in the plurality of model architectures),
a loss function used for training of the second machine learning model … (Baker, Algorithm 1 and Algorithm 2 teaches validation loss used for training the second machine learning model),
the specific training condition is a … parameter value used to adjust the relative weightings … (Baker, Table 2 and page 3, paragraph “Features” teaches the various optimization parameters to be the parameter values used to adjust the relative weightings during training).
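Examiner notes: purely as a non-limiting illustration of the sequential regression modeling quoted above from Baker, and not as a characterization of the record evidence, the following sketch trains T-1 regression models, each successive model using one more point of the time-series validation accuracies to predict the final accuracy y_T. The choice of a linear regressor and all identifiers are the examiner's illustrative assumptions; Baker's actual regression models may differ.

```python
# Illustrative sketch only: T-1 regression models in the manner of
# Baker (page 3, second paragraph). Model t predicts the final
# validation accuracy y_T from the first t points of the validation
# accuracy time series. The regressor choice is an assumption.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_sequential_models(curves):
    """curves: (n_configs, T) array of per-epoch validation accuracies."""
    T = curves.shape[1]
    models = []
    for t in range(1, T):                  # T-1 models in total
        X = curves[:, :t]                  # first t accuracies as features
        y = curves[:, -1]                  # final accuracy y_T as target
        models.append(LinearRegression().fit(X, y))
    return models
```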
While Baker discloses wherein the second machine learning model is … configured with a plurality of model architectures corresponding to a plurality of calculation costs, respectively, they do not disclose the second machine learning model to be a slimmable neural network configured with a plurality of model architectures.
Dong discloses:
a slimmable neural network configured with a plurality of model architectures corresponding to a plurality of calculation costs (Dong, Fig. 1, Fig. 2, equations (6)-(8) and page 5, first paragraph, lines 3-4 “The goal of our TAS is to find an architecture with the minimum validation loss after trained by minimizing the training loss…” and the computation cost of equations 6-8 teaches network pruned neural networks, i.e. slimmable neural networks, that are configured with a plurality of architectures corresponding to the plurality of calculation costs in equations (6)-(8)).
Baker and Dong are analogous art because they are from the same field of endeavor, neural networks and learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker to include a slimmable neural network configured with a plurality of model architectures, based on the teachings of Dong. One of ordinary skill in the art would have been motivated to make this modification for consistent improvements on different architectures, as suggested by Dong (page 2, fifth paragraph, last line).
While Baker discloses a loss function used for training of the second machine learning model they do not disclose this loss function to contain a weighted average of a plurality of learning costs corresponding to the plurality of model architectures, respectively.
Dong discloses:
a loss function used for training … contains a weighted average of a plurality of learning costs corresponding to the plurality of model architectures, respectively (Dong, equations (6)-(8) teach a loss function used for training containing a weighted average of a plurality of learning costs corresponding to the plurality of model architectures, respectively).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker to include the limitation that a loss function used for training … contains a weighted average of a plurality of learning costs corresponding to the plurality of model architectures, respectively, based on the teachings of Dong. One of ordinary skill in the art would have been motivated to make this modification for consistent improvements on different architectures, as suggested by Dong (page 2, fifth paragraph, last line).
While Baker discloses the specific training condition is a … parameter value used to adjust the relative weightings …, they do not disclose the specific training condition is a balancing parameter value used to adjust the relative weightings of penalties applied to the plurality of learning costs in the loss function.
Dong discloses:
the specific training condition is a balancing parameter value used to adjust the relative weightings of penalties applied to the plurality of learning costs in the loss function (Dong, page 5, equation (7) and 2 lines above equation 7 “As a result, the validation loss in our search procedure includes not only the classification validation loss but also the penalty for the computation cost” teaches a specific training condition where a balancing parameter value is used to adjust the relative weightings of penalties applied to the plurality of learning costs in the loss function).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker to include that the specific training condition is a balancing parameter value used to adjust the relative weightings of penalties applied to the plurality of learning costs in the loss function, based on the teachings of Dong. One of ordinary skill in the art would have been motivated to make this modification for consistent improvements on different architectures, as suggested by Dong (page 2, fifth paragraph, last line).
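Examiner notes: solely as a non-limiting illustration of the limitations mapped above (a weighted average of per-architecture learning costs, with a balancing parameter adjusting a computation-cost penalty in the manner of Dong's equations (6)-(8)), the following sketch is offered. All names and values in it are the examiner's assumptions, not Dong's notation.

```python
# Illustrative sketch only: weighted average of per-architecture
# learning costs plus a balancing parameter (lam) weighting a
# computation-cost penalty, in the manner of Dong's Eqs. (6)-(8).
import torch

def combined_loss(arch_losses, arch_weights, flop_cost, lam):
    """arch_losses:  one scalar loss per candidate architecture.
    arch_weights: matching relative weights for those losses.
    flop_cost:    differentiable estimate of the computation cost.
    lam:          balancing parameter for the cost penalty."""
    weighted = sum(w * l for w, l in zip(arch_weights, arch_losses))
    return weighted + lam * flop_cost

# Example with two candidate widths weighted equally.
losses = [torch.tensor(0.7), torch.tensor(0.9)]
print(combined_loss(losses, [0.5, 0.5], torch.tensor(0.1), lam=2.0))
```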
Regarding claim 2, Baker, in view of Dong, discloses the apparatus according to claim 1 (and thus the rejection of claim 1 is incorporated) comprising a processing circuit. Baker further discloses:
generates, based on the first sequence data and second sequence data representing the transition of the inference performance from a training start stage to a current progress stage of the second machine learning model, predicted sequence data representing the transition of the inference performance from the current progress stage to a training end stage of the second machine learning model, and changes the second training parameter value in accordance with the predicted sequence data (Baker, page 3, second paragraph, lines 3-4 “we train T - 1 regression models, where each successive model uses one more point of the time-series validation data” teaches generating predicted time series accuracy as predicted sequence data representing transition of inference performance from a current stage to an end stage of the regression model, and changes the model feature set according to the predicted sequence data).
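Examiner notes: as a non-limiting illustration of the claim 2 mapping, the sketch below predicts the end of a partial learning curve with one of the regression models from the earlier sketch and then changes the training parameter value based on the prediction. The halving rule is an arbitrary placeholder, not a teaching of Baker.

```python
# Illustrative sketch only: predict final accuracy from the partial
# (second) sequence using the models from the earlier sketch, then
# adjust the training parameter. The adjustment rule is arbitrary.
def predict_and_adjust(models, partial_curve, param, target):
    t = len(partial_curve)
    y_hat = models[t - 1].predict([partial_curve])[0]  # predicted y_T
    if y_hat < target:   # predicted final accuracy falls short
        param *= 0.5     # e.g., shrink the parameter value
    return y_hat, param
```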
Regarding claim 3, Baker, in view of Dong, discloses the apparatus according to claim 2 (and thus the rejection of claim 2 is incorporated) comprising a processing circuit. Baker further discloses:
changes the second training parameter value in accordance with a difference between a recognition ratio represented by the predicted sequence data and an allowable value in a predetermined training stage (Examiner notes: for prior art purposes, the examiner interprets “recognition ratio” to be a correct answer ratio, as per specification page 12, line 11) (Baker, page 3, third paragraph, lines 3-4 “the first-order differences of validation accuracies (i.e., y′_t = y_t - y_(t-1)), and the second-order differences of validation accuracies (i.e., y″_t = y′_t - y′_(t-1)),”, and page 5, final paragraph, last 3 lines “given the current best performance observed y_BEST, we would like to terminate training a new configuration x’ given its partial learning curve
[quotation reproduced as image: media_image2.png]
” teaches changing the second training parameter value in accordance with a difference between a ratio represented by the predicted sequence data and an allowable value in a predetermined training stage, the allowable value being a best value).
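Examiner notes: as a non-limiting illustration of the quoted difference features and the termination criterion against the best observed performance y_BEST, the following sketch is offered; the margin delta stands in for the allowable value and is the examiner's assumption.

```python
# Illustrative sketch only: first/second-order differences of the
# validation accuracies, and a termination test against the best
# observed accuracy y_best with an allowable margin delta.
import numpy as np

def difference_features(y):
    y = np.asarray(y, dtype=float)
    d1 = np.diff(y)        # y'_t  = y_t  - y_(t-1)
    d2 = np.diff(d1)       # y''_t = y'_t - y'_(t-1)
    return d1, d2

def should_terminate(y_pred_final, y_best, delta=0.0):
    # Stop training if the prediction cannot reach y_best within delta.
    return y_pred_final + delta < y_best
```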
Regarding claim 5, Baker, in view of Dong, discloses the apparatus according to claim 2 (and thus the rejection of claim 2 is incorporated) comprising a processing circuit. Baker further discloses:
calculates the predicted sequence data by multiplying the second sequence data from the current progress stage to the training end stage by a ratio of the difference between the first sequence data and the second sequence data (Baker, algorithm 1 and algorithm 2, page 8, section 4.2.1, paragraph 1, lines 2-7 “In f-Hyperband, we train an SRM to predict yri… We also introduce a parameter k which denotes the proportion of the ni models” teaches calculating the predicted sequence data yri by multiplying the second sequence data from the current progress stage to the training end stage by a ratio, i.e. proportion, of the difference between the first sequence data and the second sequence data).
Regarding claim 6, Baker, in view of Dong, discloses the apparatus according to claim 2 (and thus the rejection of claim 2 is incorporated) comprising a processing circuit. Baker further discloses:
changes the second training parameter value based on the difference between the inference performance represented by the predicted sequence data and the inference performance represented by the first sequence data in the training end stage and an allowable value for the difference (Baker, Algorithm 1 and 2, algorithm 2 lines 7-8, page 8, section 4.2.1, paragraph 1, lines 2-3 “In f-Hyperband, we train an SRM to predict yri”, and page 13, penultimate paragraph, lines 7-8 “In addition to the standard Hyperband hyperparameters R and η, we include Δ and δ described in Section 4 and k” teaches changing the second training parameter value based on the difference between the inference performance represented by the predicted sequence data and the inference performance represented by the first sequence data in the training end stage and an allowable value for the difference represented by Δ).
Regarding claim 7, Baker, in view of Dong, discloses the apparatus according to claim 1 (and thus the rejection of claim 1 is incorporated) comprising a processing circuit. Baker further discloses:
changes the second training parameter value in accordance with a difference between the inference performance represented by the first sequence data and the inference performance of the second machine learning model in a predetermined training progress stage, or a difference between the first sequence data and second sequence data representing transition of the inference performance according to the training progress of the second machine learning model (Baker, page 3, third paragraph, lines 3-4 “the first-order differences of validation accuracies (i.e., y′_t = y_t - y_(t-1)), and the second-order differences of validation accuracies (i.e., y″_t = y′_t - y′_(t-1)),”, and page 5, final paragraph, last 3 lines “given the current best performance observed y_BEST, we would like to terminate training a new configuration x’ given its partial learning curve
[quotation reproduced as image: media_image2.png]
” teaches changing the second training parameter value in accordance with a difference between the inference performance represented by the first sequence data and the inference performance of the second machine learning model in a predetermined training progress stage, or a difference between the first sequence data and second sequence data).
Regarding claim 8, Baker, in view of Dong, discloses the apparatus according to claim 7 (and thus the rejection of claim 7 is incorporated) comprising a processing circuit. Baker further discloses:
changes the second training parameter value based on the difference and an allowable value for the difference (Baker, Algorithm 1, algorithm 2 teaches changing the second training parameter value based on the difference and an allowable value for the difference represented by Δ).
Regarding claim 9, Baker, in view of Dong, discloses the apparatus according to claim 1 (and thus the rejection of claim 1 is incorporated) comprising a processing circuit. Baker further discloses:
changes the second training parameter value such that if the difference is larger than the allowable value for the difference, the second training parameter value becomes close to the first training parameter value (Baker, algorithm 1 lines 8-9 and algorithm 2, lines 7-9 teaches that if the difference is larger than the allowable value for the difference, the prediction is appended such that the second training parameter value becomes close to the first training parameter value),
if the difference is smaller than the allowable value, the second training parameter value is separated from the first training parameter value (Baker, algorithm 1 lines 8-9 and algorithm 2, lines 7-9 teaches separating the two training parameter values by not appending).
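Examiner notes: as a non-limiting illustration of the claim 9 mapping, the sketch below moves the second training parameter value toward the first when the difference exceeds the allowable value, and away from it otherwise; the step size is the examiner's assumption.

```python
# Illustrative sketch only: bring the second parameter value closer
# to the first when the difference exceeds the allowable value,
# otherwise separate it. Step size is arbitrary.
def update_parameter(p2, p1, diff, allowable, step=0.1):
    if diff > allowable:
        p2 += step * (p1 - p2)   # move p2 toward p1
    else:
        p2 -= step * (p1 - p2)   # separate p2 from p1
    return p2
```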
Regarding claim 11, Baker discloses the apparatus according to claim 1 (and thus the rejection of claim 1 is incorporated) comprising a processing circuit. Baker further discloses:
wherein the specific training condition is a … parameter used to adjust …(Baker, Table 2 and page 3, paragraph “Features” teaches the various optimization parameters to be the parameter values used to adjust the relative weightings during training).
While Baker discloses wherein the specific training condition is a … parameter used to adjust …, they do not disclose the specific training condition to be a balancing parameter used to adjust a penalty to a learning cost included in a loss function.
Dong discloses:
specific training condition is a balancing parameter used to adjust a penalty to a learning cost included in a loss function (Dong, page 5, equation (7) and 2 lines above equation 7 “As a result, the validation loss in our search procedure includes not only the classification validation loss but also the penalty for the computation cost” teaches a specific training condition where a balancing parameter value is used to adjust penalties applied to the plurality of learning costs in the loss function).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker to include that the specific training condition is a balancing parameter used to adjust a penalty to a learning cost included in a loss function, based on the teachings of Dong. One of ordinary skill in the art would have been motivated to make this modification for consistent improvements on different architectures, as suggested by Dong (page 2, fifth paragraph, last line).
Regarding claim 13, Baker discloses the apparatus according to claim 11 (and thus the rejection of claim 11 is incorporated) comprising a processing circuit. Dong further discloses:
wherein the balancing parameter value is used to adjust a balance of penalties to the learning cost and a regularization term (Dong, page 2, “regularized by minimization of the computation cost, e.g., floating point operations (FLOPs)” and “The cost loss encourages the computation cost of the network (e.g., FLOP) to converge to a target R so that the cost can be dynamically adjusted by setting different R” teaches a balancing parameter value used to adjust a balance of penalties to the learning cost and a regularization term R).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker to include that the balancing parameter value is used to adjust a balance of penalties to the learning cost and a regularization term, based on the teachings of Dong. One of ordinary skill in the art would have been motivated to make this modification for consistent improvements on different architectures, as suggested by Dong (page 2, fifth paragraph, last line).
Regarding claim 16, Baker discloses the apparatus according to claim 11 (and thus the rejection of claim 11 is incorporated) comprising a processing circuit. Baker further discloses:
… parameter value is used to adjust … concerning multitask training (Baker, page 1 final 2 lines – page 2 line 1 “from both image classification and language modeling domains. We use these predictions and uncertainty estimates obtained from small model ensembles” and Table 2 and page 3, paragraph “Features” teaches the various optimization parameters to be the parameter values used to adjust the relative weightings concerning multitask training for image classification and language modeling),
Baker discloses … parameter value used to adjust … concerning multitask training, but does not disclose the balancing parameter value is used to adjust a balance of penalties to a learning cost of a first task and a learning cost of a second task….
Dong discloses:
the balancing parameter value is used to adjust a balance of penalties to a learning cost of a first task and a learning cost of a second task … (Dong, equation (10) and page 6, first paragraph, lines 1-2 “λ is the weight of loss to balance the standard classification loss and soft matching loss.” teaches a balancing parameter value used to adjust a balance of penalties to learning costs of classification loss and soft matching loss as costs of two tasks).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker to include that the balancing parameter value is used to adjust a balance of penalties to a learning cost of a first task and a learning cost of a second task, based on the teachings of Dong. One of ordinary skill in the art would have been motivated to make this modification for consistent improvements on different architectures, as suggested by Dong (page 2, fifth paragraph, last line).
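Examiner notes: as a non-limiting illustration of the claim 16 mapping, the following sketch shows a balancing parameter lam adjusting the relative penalties on two task losses, in the manner of Dong's equation (10); the loss names are the examiner's assumptions.

```python
# Illustrative sketch only: lam balances the penalties on two task
# losses (e.g., classification loss vs. soft matching loss).
def multitask_loss(loss_task1, loss_task2, lam):
    return loss_task1 + lam * loss_task2

print(multitask_loss(0.8, 0.3, lam=0.5))  # 0.95
```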
Regarding claim 17, Baker discloses the apparatus according to claim 11 (and thus the rejection of claim 11 is incorporated) comprising a processing circuit. Dong further discloses:
wherein the balancing parameter value is used to adjust a balance of penalties to a learning cost and a calculation cost concerning a neural architecture search (Dong, equations (6)-(8) and Figure 2).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker to include that the balancing parameter value is used to adjust a balance of penalties to a learning cost and a calculation cost concerning a neural architecture search, based on the teachings of Dong. One of ordinary skill in the art would have been motivated to make this modification for consistent improvements on different architectures, as suggested by Dong (page 2, fifth paragraph, last line).
Claims 18 and 19 are substantially similar to claim 1, and thus are rejected on the same basis as claim 1.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Baker et al. (“ACCELERATING NEURAL ARCHITECTURE SEARCH USING PERFORMANCE PREDICTION”), hereafter Baker, in view of Dong et al. (“Network Pruning via Transformable Architecture Search”), hereafter Dong, in further view of Yoshiyama et al. (US-10664752-B2), hereafter Yoshiyama.
Regarding claim 4, Baker, in view of Dong, discloses the apparatus according to claim 3 (and thus the rejection of claim 3 is incorporated) comprising a processing circuit. Baker further discloses:
displays a curve corresponding to the predicted sequence data, a curve corresponding to the first sequence data, a curve corresponding to the second sequence data, a curve corresponding to transition of a difference between the first sequence data and the second sequence data, a curve corresponding to transition of the second training parameter value after changing by the processing circuit, and/or the allowable value on a display (Examiner notes: for prior art purposes, the examiner interprets “and/or” to be “or”) (Baker, Figures 1-8 teaches curves that display learning data).
While Baker discloses displays a curve corresponding to the predicted sequence data, a curve corresponding to the first sequence data, a curve corresponding to the second sequence data, a curve corresponding to transition of a difference between the first sequence data and the second sequence data, a curve corresponding to transition of the second training parameter value after changing by the processing circuit, and/or the allowable value …, they do not disclose doing so on a display.
Yoshiyama discloses:
displaying curves…on a display (Yoshiyama, Fig. 9 and col. 9, lines 46-62 teaches displaying learning curve graphs on a display).
Baker, Dong, and Yoshiyama are analogous art because they are from the same field of endeavor, neural networks and learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker, in view of Dong, to include displaying curves…on a display, based on the teachings of Yoshiyama. One of ordinary skill in the art would have been motivated to make this modification in order to improve development efficiency of the neural network, as suggested by Yoshiyama (col. 1, lines 49-50).
Claims 10 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Baker et al. (“ACCELERATING NEURAL ARCHITECTURE SEARCH USING PERFORMANCE PREDICTION”), hereafter Baker, in view of Dong et al. (“Network Pruning via Transformable Architecture Search”), hereafter Dong, in further view of Tan et al. (US-20210383223-A1), hereafter Tan.
Regarding claim 10, Baker, in view of Dong, discloses the apparatus according to claim 1 (and thus the rejection of claim 1 is incorporated) comprising a processing circuit.
Baker discloses a difference between inference performance represented by the first sequence data and the inference performance of the second machine learning model and iterative learning (Baker, page 3, third paragraph, lines 3-4 “the first-order differences of validation accuracies (i.e., y′_t = y_t - y_(t-1)), and the second-order differences of validation accuracies (i.e., y″_t = y′_t - y′_(t-1)),”, and page 5, final paragraph, last 3 lines “given the current best performance observed y_BEST, we would like to terminate training a new configuration x’ given its partial learning curve
[quotation reproduced as image: media_image2.png]
”, and page 3, second paragraph, lines 1-5, “
[quotation reproduced as image: media_image1.png]
”), but does not teach redoing the iterative learning, from a training progress stage to which the training has gone back, in response to a difference being larger than a reference error.
Tan teaches:
if a difference … is larger than a reference error, … redoes the iterative learning from a training progress stage to which the training has gone back (Tan, ¶[0035] and ¶[0057] teaches iterative retraining based on a difference being larger than an error).
Baker, Dong, and Tan are analogous art because they are from the same field of endeavor, neural networks and learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker, in view of Dong, to include if a difference … is larger than a reference error, … redoes the iterative learning from the training progress stage to which the training has gone back, based on the teachings of Tan. One of ordinary skill in the art would have been motivated to make this modification in order to improve model performance, as suggested by Tan (¶[0015]).
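Examiner notes: as a non-limiting illustration of the claim 10 mapping as modified by Tan, the sketch below saves periodic checkpoints and, when the evaluated difference exceeds the reference error, restores the checkpoint and redoes the iterative learning from that earlier stage. The checkpointing scheme, the rollback cap, and all identifiers are the examiner's assumptions.

```python
# Illustrative sketch only: redo iterative learning from an earlier
# training stage when the difference exceeds a reference error.
# Assumes stochastic training; max_rollbacks caps repeated rollbacks.
import copy

def train_with_rollback(model, train_step, evaluate, reference_error,
                        checkpoint_every, n_steps, max_rollbacks=3):
    checkpoint, ckpt_step = copy.deepcopy(model), 0
    step, rollbacks = 0, 0
    while step < n_steps:
        train_step(model)                      # one iteration of learning
        step += 1
        if step % checkpoint_every == 0:
            if evaluate(model) > reference_error and rollbacks < max_rollbacks:
                model = copy.deepcopy(checkpoint)  # go back to saved stage
                step, rollbacks = ckpt_step, rollbacks + 1
            else:
                checkpoint, ckpt_step = copy.deepcopy(model), step
                rollbacks = 0
    return model
```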
Regarding claim 14, Baker, in view of Dong, discloses the apparatus according to claim 11 (and thus the rejection of claim 11 is incorporated) comprising a processing circuit. Dong further discloses:
wherein the balancing parameter value is used to adjust a balance of penalties to a plurality of learning costs corresponding to a plurality of classes concerning … image classification (Dong, equations (6)-(8), Figure 2, and page 3, final paragraph, lines 1-2 “Given an input image, a network takes it as input and produces the probability over each target class.”).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker to include that the balancing parameter value is used to adjust a balance of penalties to a plurality of learning costs corresponding to a plurality of classes concerning … image classification, based on the teachings of Dong. One of ordinary skill in the art would have been motivated to make this modification for consistent improvements on different architectures, as suggested by Dong (page 2, fifth paragraph, last line).
Baker, in view of Dong, discloses wherein the specific training condition is a balancing parameter value used to adjust a balance of penalties to a plurality of learning costs corresponding to a plurality of classes concerning … image classification, but does not disclose the plurality of classes to concern one of segmentation.
Tan discloses:
learning costs corresponding to a plurality of classes concerning one of segmentation (Tan, ¶[0077] and ¶[0034] teaches validation loss as learning costs for a plurality of classes concerning image segmentation).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker, in view of Dong, to include learning costs corresponding to a plurality of classes concerning one of segmentation, based on the teachings of Tan. One of ordinary skill in the art would have been motivated to make this modification in order to improve model performance, as suggested by Tan (¶[0015]).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Baker et al. (“ACCELERATING NEURAL ARCHITECTURE SEARCH USING PERFORMANCE PREDICTION”), hereafter Baker, in view of Dong et al. (“Network Pruning via Transformable Architecture Search”), hereafter Dong, in further view of Pfitzmann et al. (US-20210350274-A1), hereafter Pfitzmann.
Regarding claim 15, Baker, in view of Dong, discloses the apparatus according to claim 11 (and thus the rejection of claim 11 is incorporated) comprising a processing circuit.
Baker, in view of Dong, teaches wherein the balancing parameter value is used to adjust a balance of penalties to a plurality of learning costs in claim 11, but does not disclose adjusting … corresponding to class classification or a ROI size concerning object detection.
Pfitzmann discloses:
adjust … corresponding to class classification or a ROI size concerning object detection (Pfitzmann, ¶[0070] and Figure 3 teaches adjusting parameters corresponding to class classification).
Baker, Dong, and Pfitzmann are analogous art because they are from the same field of endeavor, learning model parameters.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Baker, in view of Dong, to include adjust … corresponding to class classification or a ROI size concerning object detection, based on the teachings of Pfitzmann. One of ordinary skill in the art would have been motivated to make this modification in order to improve model performance, as suggested by Pfitzmann (¶[0018]).
Response to Arguments
Applicant's arguments filed 12/23/2025 have been fully considered with regards to the 35 U.S.C. 101 rejection, and they are persuasive. The rejection has been withdrawn.
Applicant's arguments filed 12/23/2025 have been fully considered with regards to the 35 U.S.C. 102/103 rejection.
Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATT ELL can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141