Prosecution Insights
Last updated: April 19, 2026
Application No. 18/017,289

MACHINE LEARNING TRAINING APPARATUS AND OPERATING METHOD THEREOF

Final Rejection: §101, §103, §112

Filed: Jan 20, 2023
Examiner: LE, UYEN T
Art Unit: 2156
Tech Center: 2100 (Computer Architecture & Software)
Assignee: LG Energy Solution, Ltd.
OA Round: 2 (Final)

Grant Probability: 84% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 11m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 84%, above average (669 granted / 797 resolved; +28.9% vs Tech Center average)
Interview Lift: +9.7% (a moderate, roughly +10% lift) across resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 24 applications currently pending
Career History: 821 total applications across all art units

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 27.6% (-12.4% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Tech Center averages are estimates. Figures are based on career data from 797 resolved cases.

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Claims 15-17 are new. Claims 1-17 are pending. Applicant's amendment filed 9 February 2026 is not sufficient to overcome the rejection under 35 U.S.C. 101 discussed below. Applicant stated that no 112(f) interpretation was intended; however, the amendment introduces new issues under 35 U.S.C. 112, discussed below.

Response to Arguments

Applicant's arguments with respect to claims 1-17 have been considered but are moot in view of the new ground of rejection presented in this final Office action. Note that applicant argues the claims as amended.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The specification as originally filed does not support the now-claimed "data manager", "data analyzer", "determinator", or "wherein when the new data is applied to the machine learning module, the machine learning module adjusts to use the new data". Note that the specification does not support any "module" recited in claims 1, 10, and 15. Furthermore, the specification does not support the now-claimed "adjusts to use the new data".

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The specification as originally filed does not support the now-claimed "data manager", "data analyzer", "determinator", or "wherein when the new data is applied to the machine learning module, the machine learning module adjusts to use the new data". Note that the specification does not support any "module" recited in claims 1, 10, and 15.
Furthermore, the specification does not support the now-claimed "adjusts to use the new data". For examination purposes, "module" is interpreted as a typographical error for "model".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Analysis of patentability of claim 1

Step 1: Claim 1 recites an apparatus in the preamble and thus appears to be directed to a statutory class.

Step 2A, Prong 1: Claim 1 recites "determine whether a number of pieces of the new data is less than..." and "determine whether to apply the new data". This limitation is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of "a determinator". That is, other than reciting "a determinator", nothing in the claim element precludes the step from practically being performed by a human with the aid of pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion). Furthermore, the specification as originally filed does not describe any "determinator". The mere nominal recitation of a generic machine learning training apparatus in the preamble does not take the claim limitation out of the mental processes grouping. Thus, the claim recites a mental process.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application. The claim recites the additional elements "collect new data from a battery" and "extract a feature including capacity of the battery in relation to time series data…". The recited limitations amount to mere data gathering and analysis, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation "using different methods according to the number of pieces of the new data" is mere generic data analysis recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner. The newly added clause "wherein when the new data is applied to the machine learning module, the machine learning module adjusts to use the new data" merely recites a potential scenario and does not describe how the machine learning module "adjusts"; it is thus considered to be insignificant extra-solution activity (see MPEP 2106.05(g)).

Claim 10 recites limitations similar to claim 1 in the form of a method and thus is not patent eligible for the same reasons discussed for claim 1 above.

Claims 2 and 11 merely recite performing bounds checking and a trend test, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Claim 3 merely further describes the trend test, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Claims 4 and 8 merely further describe scenarios of new data being applied to train the model, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Claims 5 and 12 merely recite performing tests on the data used for generation of the machine learning model and the new data, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Claims 6 and 13 merely recite excluding the new data based on test results and a threshold value, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)).
Claims 7 and 14 merely recite applying the new data based on test results and a threshold value, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Claim 9 merely recites applying the new data based on a determination result, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Claim 15 merely recites that the data manager stores usage data configured to be used to generate the machine learning module, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Claim 16 merely further describes the usage data, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Claim 17 merely further describes the usage data and the new data, considered to be insignificant extra-solution activity (see MPEP 2106.05(g)).

As shown above, although the dependent claims appear to be more detailed than their parent claims, none amounts to significantly more than the abstract idea. No claim is patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 9-11, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Walters et al. (US 2020/302234 A1), of record, provided by the applicant, further in view of Hooshmand et al. (US 20200011932 A1).

Regarding claim 1, Walters substantially discloses a machine learning training apparatus (see [0007]), comprising: a data manager configured to collect new data (see [0005], [0025]); a data analyzer configured to extract a feature of data used for generation of a machine learning model (see [0061]) and a feature of the new data (see [0025]); and a determinator configured to: determine whether a number of pieces of the new data is less than a reference number (see [0025], [0026]), and determine whether to apply the new data to the machine learning model using different methods according to the number of pieces of the new data (see [0025], [0026]).

The difference is that Walters does not specifically show that the new data is collected from a battery and that the extracted feature includes capacity of the battery in relation to time series data of the data used for the generation of a machine learning model and a feature including the capacity of the battery in relation to the time series data of the new data. However, it is customary in the art to do so, as shown by Hooshmand (see [0004]: "According to an aspect of the present invention, a battery management system is provided. The battery management system includes a memory for storing program code. The battery management system further includes a processor for running the program code to extract features from battery operation data. The processor further runs the program code to train a deep learning model to model a battery degradation process of a battery using the extracted features. The processor also runs the program code to generate, using the deep learning model, a prediction of a battery capacity degradation based on the battery operation data and a current battery capacity of the battery. The processor additionally runs the program code to control an operation of the battery responsive to the prediction of the battery capacity degradation.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the claimed features while implementing the machine learning apparatus of Walters in order to model a battery degradation process of a battery using the extracted features and to generate, using the deep learning model, a prediction of a battery capacity degradation based on the battery operation data and a current battery capacity of the battery, as taught by Hooshmand.
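The claim 1 determinator mapped above can be sketched as a simple routing function. This is purely illustrative: the function and variable names are hypothetical and appear in neither the application nor the cited references; it only shows the count-based method selection recited in the claim.

```python
# Hypothetical sketch of the claim 1 determinator (names are illustrative,
# not from the application): choose a validation method for new battery
# data based on how many new samples were collected.

def select_validation_method(new_data, reference_number):
    """Return which validation path the determinator would take.

    Per the claim language, fewer new samples than the reference number
    leads to one method (claims 2-4: bounds checking plus a trend test),
    while at least the reference number leads to another (claims 5-8:
    an F-test plus a T-test).
    """
    if len(new_data) < reference_number:
        return "bounds_check_and_trend_test"
    return "f_test_and_t_test"
```

Under this reading, the "different methods according to the number of pieces of the new data" limitation is a single branch on the sample count; the substance of the claim lives in the tests each branch performs.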
Regarding claim 2, Walters/Hooshmand further teaches or suggests the machine learning training apparatus of claim 1, wherein when the number of pieces of the new data is less than the reference number, the determinator determines whether to apply the new data to the machine learning model by performing bounds checking and a trend test with respect to the feature including the capacity of the battery in relation to the time series data of the new data and the feature including the capacity of the battery in relation to the time series data of the data used for generation of the machine learning model (see Walters [0089]: "Data profiles in databases 180 may be populated by database processors 504 to generate a plurality of sample datasets in database data 514. For example, database processor 504 may receive a dataset from a user and communicate with data profiler 110 to identify a data schema of the dataset. For this identification database processor may generate a sample vector that represents statistical metrics of the dataset. Database processor 504 may also generate a data index including a plurality of stored vectors corresponding to multiple reference datasets, the stored vectors including statistical metrics of the reference datasets and information based on corresponding data schema of the reference datasets. With this information, database processor 504 may generate similarity metrics of the sample dataset to references in the user datasets. These similarity metrics may later be used by optimization system 105 to determine minimum data requirements to achieve a threshold accuracy"; Hooshmand [0004], quoted above for claim 1).

Regarding claim 3, Walters/Hooshmand further teaches or suggests the machine learning training apparatus of claim 2, wherein the trend test determines whether a coefficient of an equation obtained from a graph showing a change in the feature including the capacity of the battery in relation to the time series data of the new data over the time series data falls within a range of a coefficient of an equation obtained from a graph showing a change in the feature including the capacity of the battery in relation to the time series data of the data used for generation of the machine learning model (see Walters [0005]: "Moreover, after training a model, users frequently realize computer resources were wasted because the model did not achieve the target accuracy or the model turned out to be 'overfitted.' A model is overfitted when the trained model is excessively complex, such as having too many parameters relative to the number of observations, and has a low predictive accuracy. The overfitted models result from training with an excessively large training sample or performing too many adjustment iterations. In these cases, the parameters of the model end up being too tightly correlated with the training dataset, resulting in a model that is incapable of working with new independent data. The overfitted model begins to simply memorize the training dataset without evaluating the general trend. Thus, significant computational resources are frequently wasted when the training process results in an overfitted model. An overfitted model not only consumes more resources than necessary, because it requires processing an excessively large training dataset and additional tuning iterations, but also frequently has a lower accuracy that forces users to restart the training process with alternative configurations"; Hooshmand [0004], quoted above for claim 1).
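One plausible reading of the claim 2/claim 3 bounds checking and trend test is sketched below. The implementation details (a least-squares slope standing in for the "coefficient of an equation obtained from a graph", and a fixed tolerance band as the coefficient "range") are assumptions for illustration only; the claims do not fix them.

```python
# Illustrative bounds check and trend test for capacity-vs-time data
# (assumed reading of claims 2-3, not the applicant's actual code).

def bounds_check(train_capacity, new_capacity):
    """Claim 2 bounds checking: every new capacity value must lie within
    the min/max boundary range of the training capacities."""
    lo, hi = min(train_capacity), max(train_capacity)
    return all(lo <= c <= hi for c in new_capacity)

def slope(times, capacities):
    """Least-squares slope of capacity over time, read here as the
    'coefficient' of a linear fit to the graph."""
    n = len(times)
    mt = sum(times) / n
    mc = sum(capacities) / n
    num = sum((t - mt) * (c - mc) for t, c in zip(times, capacities))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def trend_test(train_t, train_cap, new_t, new_cap, tolerance=0.2):
    """Claim 3 trend test: the new data's fitted slope must fall within a
    tolerance band around the training data's fitted slope."""
    ref = slope(train_t, train_cap)
    return abs(slope(new_t, new_cap) - ref) <= abs(ref) * tolerance
```

With a typical fade curve (capacity decreasing roughly linearly over time), new data whose fitted slope diverges from the training slope by more than the band fails the trend test even if every point passes the bounds check.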
Regarding claim 4, Walters/Hooshmand further teaches or suggests the machine learning training apparatus of claim 2, wherein the determinator applies the new data to training of the machine learning model, when determining that the feature including the capacity of the battery in relation to the time series data of the new data falls within a trend range of the feature including the capacity of the battery in relation to the time series data of the data used for the generation of the machine learning model as a result of performing the trend test and determining that the feature including the capacity of the battery in relation to the time series data of the new data falls within a boundary range of the feature including the capacity of the battery in relation to the time series data of the data used for generation of the machine learning model as a result of performing the bounds checking (see Walters [0045]: "Hyper-parameter optimizer 130 may include one or more computing systems that performs iterations in models to tune hyper-parameters. For example, hyper-parameter optimizer 130 may be implemented with a computer having a plurality of processing nodes that search optimized hyper-parameters and model parameter configurations for machine-learning models. In some embodiments, hyper-parameter optimizer 130 may be in communication with computer clusters 160 and generate parallel processing requests to adjust and search hyper-parameters to allow multiple hyper-parameter configurations to be evaluated concurrently. Alternatively or additionally, hyper-parameter optimizer 130 may have the ability to distribute the training process and include early stopping configurations based on default ranges, user overrides, and/or validation schemes. In some embodiments, hyper-parameter optimizer 130 may receive requests from model generator 120 and tune hyper-parameters of a model by applying one or more of Bayesian optimization, gradient-based optimization, or random search optimization"; Hooshmand [0004]).

Regarding claim 9, Walters/Hooshmand further teaches or suggests the machine learning training apparatus of claim 1, further comprising a machine learning model training unit configured to train the machine learning model by applying the new data to the machine learning model based on a determination result of the determinator (see Walters [0029]: "The disclosed systems and methods address the technical problems of reducing computing resource consumption during the generation of machine-learning models and estimating data requirements for new models. Further, the disclosed systems and methods achieve improvements in computer functionality by avoiding expensive guess-and-check optimizations that result in over-tuned or under-tuned machine-learning models. Further, the disclosed methods and systems improve the resource allocation during parallelized processes for predictive model generation. In addition, the disclosed systems and methods may avoid poor performing systems caused by unexpected variance in the dataset, by comparing data profiles to identify comparable sample models that reduce the complexity of hyper-parameter tuning processes. These features result in systems and methods that improve the efficiency of the computerized systems for generating the machine-learning algorithms.").

Claims 10 and 11 essentially recite limitations similar to claims 1 and 2 in the form of a method and thus are rejected for the same reasons discussed for claims 1 and 2 above.

Regarding claim 15, Walters/Hooshmand further teaches or suggests the machine learning training apparatus of claim 1, wherein the data manager stores usage data configured to be used to generate the machine learning module (see Hooshmand [0047]: "Battery capacity fading is a nonlinear and also complicated phenomenon created by two aging means, e.g. cyclic aging and calendar aging. The cyclic aging is caused by charge and discharge actions over the course of the battery's lifetime. It has been observed that the battery capacity is directly affected by charge/discharge characteristics such as state-of-charge (SOC), charge and discharge rates, and energy throughput. The calendar aging happens when battery remains idle. Calendar aging is a function of storage SOC. Batteries encounter both cyclic and calendar aging on a daily basis, so a capacity degradation model combining both effects can improve the accuracy of the assessment.").

Regarding claim 16, Walters/Hooshmand further teaches or suggests the machine learning training apparatus of claim 15, wherein the usage data stored in the data manager includes a change in an amount of electricity from a discharge state of a battery cell among a plurality of battery cells to a full charge state of the battery cell (see Hooshmand [0047], quoted above for claim 15).
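The "usage data" of claims 15-17 can be pictured as a small per-cell record kept by the data manager. The field names below are hypothetical, chosen only to mirror the claim language; the specification's actual data layout is not given in the Office Action.

```python
# Hypothetical per-cell usage record for claims 15-17 (field names are
# illustrative, not from the specification).
from dataclasses import dataclass, field

@dataclass
class CellUsageRecord:
    cell_id: int
    # Claim 16: change in amount of electricity from the discharge state
    # to the full charge state of the cell (here in ampere-hours).
    charge_delta_ah: float
    # Claim 17: voltage and/or current changes of the cell over time.
    voltage_v: list = field(default_factory=list)
    current_a: list = field(default_factory=list)

class DataManager:
    """Claim 15: stores usage data used to generate the machine learning model."""

    def __init__(self):
        self.usage_data = []

    def store(self, record):
        self.usage_data.append(record)
```

In this picture, claim 17's distinction between a "first" and "second" battery cell is just two records with different `cell_id` values, one held as usage data and one arriving as new data.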
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Walters et al. (US 2020/302234 A1), of record, provided by the applicant, in view of Hooshmand et al. (US 20200011932 A1), further in view of Kuriki et al. (US 20200076223 A1).

Regarding claim 17, Walters/Hooshmand does not specifically show the machine learning training apparatus of claim 15, wherein the usage data stored in the data manager includes a change in a voltage and/or a current of a first battery cell among a plurality of battery cells, and wherein the new data includes a change in a voltage and/or a current of a second battery cell among the plurality of battery cells. However, it is customary in the art to store battery information such as voltage or current for an artificial intelligence to estimate future deterioration and select the most suitable charging method, as shown by Kuriki (see [0013]: "On the basis of battery information (remaining capacity, deterioration information, or the like) of the secondary battery incorporated in an electronic device and schedule information about the use of the electronic device by a user, artificial intelligence (AI) incorporated in the electronic device estimates the degree of future deterioration and selects and executes the most suitable charging method (including conditions of the timing, the voltage value, the current value, and the like of charging)"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the claimed features while implementing the apparatus of Walters/Hooshmand, the motivation being for the machine learning apparatus to document battery cell states for a plurality of cells to estimate future battery deterioration and select the most suitable charging method, as shown by Kuriki.

Claims 5-8 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Walters et al. (US 2020/302234 A1), of record, provided by the applicant, in view of Hooshmand et al. (US 20200011932 A1), further in view of Schmitz, Gregor PJ, Chris Aldrich, and Francois S. Gouws, "ANN-DT: an algorithm for extraction of decision trees from artificial neural networks," IEEE Transactions on Neural Networks 10, no. 6 (1999): 1392-1401, of record.

Regarding claim 5, Walters/Hooshmand does not specifically show the machine learning training apparatus of claim 1, wherein when the number of pieces of the new data is greater than or equal to the reference number, the determinator performs an F-test and a T-test on the data used for generation of the machine learning model and the new data. However, it is customary in the art, as shown by Schmitz, to use an F-test to determine whether continued recursion would be meaningful and a T-test to determine whether significant improvement is achieved (see page 1395, right col., 2nd paragraph; page 1399, left col., 2nd paragraph). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include such features while implementing the apparatus of Walters/Hooshmand in order to benefit from standardized test techniques in machine learning training.

Regarding claim 6, Walters/Hooshmand/Schmitz further teaches or suggests the machine learning training apparatus of claim 5, wherein the determinator excludes the new data from training of the machine learning model, when a result value obtained by performing the F-test and a result value obtained by performing the T-test are greater than or equal to a threshold value (see Walters [0122]: "In step 810, optimization system 105 may determine if the estimated model accuracy is below or above an accuracy threshold. The accuracy threshold may be defined by a user. For example, a user may determine that models generated should have an accuracy of at least 98%. Alternatively, the accuracy threshold may be defined by an organization. If the model accuracy is not below the threshold (step 810: No), the trained model is still a viable model, even though it was generated with a limited training dataset.").

Regarding claim 7, Walters/Hooshmand/Schmitz further teaches or suggests the machine learning training apparatus of claim 5, wherein the determinator determines whether to apply the new data to the machine learning model by performing bounds checking and a trend test with respect to the feature including the capacity of the battery in relation to the time series data of the new data and the feature including the capacity of the battery in relation to the time series data of the data used for the generation of the machine learning model, when a result value obtained by performing the F-test and the T-test is less than a threshold value (see Walters [0045], quoted above for claim 4).
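The F-test/T-test gate of claims 5-7 can be sketched as follows. The particular statistics below (a variance-ratio F statistic and Welch's t statistic) are standard textbook forms chosen for illustration; the Office Action does not specify which variants the application uses, and a real implementation would compare against critical values from the F and t distributions rather than a single shared threshold.

```python
# Illustrative F-test / T-test gate for claims 5-7 (assumed reading, not
# the applicant's actual method): compare the training data against the
# new data, and exclude the new data when both statistics reach a threshold.

def _mean_var(xs):
    """Sample mean and unbiased sample variance."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

def f_statistic(old, new):
    """Variance-ratio F statistic, larger variance over smaller."""
    (_, vo), (_, vn) = _mean_var(old), _mean_var(new)
    return max(vo, vn) / min(vo, vn)

def t_statistic(old, new):
    """Welch's t statistic (absolute value) for the difference of means."""
    (mo, vo), (mn, vn) = _mean_var(old), _mean_var(new)
    return abs(mo - mn) / (vo / len(old) + vn / len(new)) ** 0.5

def exclude_new_data(old, new, threshold):
    """Claim 6: exclude the new data when both the F-test and T-test
    results are greater than or equal to the threshold; otherwise (claim 7)
    fall through to bounds checking and the trend test."""
    return f_statistic(old, new) >= threshold and t_statistic(old, new) >= threshold
```

Data drawn from the same distribution as the training set yields statistics near 1 and 0 respectively and passes through to the claim 7 path; strongly shifted data trips both thresholds and is excluded under claim 6.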
Regarding claim 8, Walters/Hooshmand/Schmitz further teaches or suggests the machine learning training apparatus of claim 7, wherein the determinator applies the new data to training of the machine learning model, when determining that the feature including the capacity of the battery in relation to the time series data of the new data falls within a trend range of the feature including the capacity of the battery in relation to the time series data of the data used for the generation of the machine learning model as a result of performing the trend test and determining that the feature including the capacity of the battery in relation to the time series data of the new data falls within a boundary range of the feature of the data used for the generation of the machine learning model as a result of performing the bounds checking (see Hooshmand [0004]-[0006]; Walters [0045], quoted above for claim 4).
Claims 12, 13, 14 essentially recite limitations similar to claims 5, 6, 7 in form of method, thus are rejected for the same reasons discussed in claims 5, 6, 7 above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sung Jae Mo (KR 20160101506 A) teach a battery state estimating method and a device thereof. According to an embodiment, the battery state estimating method can receive a battery signal, divide the received battery signal into segment data of a predetermined time interval, and estimate a battery state by using a battery state probability estimation value of the segment data. The feature extraction unit 870 can selectively extract only necessary feature data from the D-dimensional segment data extracted from the target battery module 810. With the low dimensional mapping model, it is possible to convert the D-dimensional segment data into a K-dimensional space that can distinguish the difference between the normal or abnormal patterns with minimum loss while minimizing the loss of the amount of information present in the D-dimensional segment space. Thereafter, it has an effect of enabling more effective data processing with a low calculation amount in the data processing step. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UYEN T LE, whose telephone number is (571) 272-4021. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ajay M Bhatia, can be reached at (571) 272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UYEN T LE/Primary Examiner, Art Unit 2156 7 March 2026

Prosecution Timeline

Jan 20, 2023
Application Filed
Nov 06, 2025
Non-Final Rejection — §101, §103, §112
Feb 09, 2026
Response Filed
Mar 07, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591550
SHARE REPLICATION BETWEEN REMOTE DEPLOYMENTS
2y 5m to grant Granted Mar 31, 2026
Patent 12591540
DATA MIGRATION IN A DISTRIBUTIVE FILE SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12581301
MEDIA AGNOSTIC CONTENT ACCESS MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12579189
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR GENERATING OBJECT IDENTIFIER
2y 5m to grant Granted Mar 17, 2026
Patent 12561371
GRAPH OPERATIONS ENGINE FOR TENANT MANAGEMENT IN A MULTI-TENANT SYSTEM
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 94% (+9.7%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 797 resolved cases by this examiner. Grant probability derived from career allow rate.
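The projected figures are internally consistent if the interview lift is read as additive percentage points on top of the career allow rate. That reading is an assumption about the tool's formula, not something the page documents:

```python
# Figures from the Examiner Intelligence panel above
granted, resolved = 669, 797
base_rate = granted / resolved * 100  # career allow rate, in percent
interview_lift = 9.7                  # reported lift, read as percentage points

print(round(base_rate))                   # 84  (the quoted grant probability)
print(round(base_rate + interview_lift))  # 94  (the quoted with-interview figure)
```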
