Prosecution Insights
Last updated: April 18, 2026
Application No. 17/673,793

ENTERPRISE MANAGEMENT SYSTEM AND EXECUTION METHOD THEREOF

Non-Final OA (§101, §103)
Filed: Feb 17, 2022
Examiner: XIE, THEODORE L
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Data Systems Consulting Co. Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 1y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (2 granted / 4 resolved; -2.0% vs TC avg)
Interview Lift: +100.0% (strong; based on resolved cases with interview)
Avg Prosecution: 1y 7m (fast prosecutor)
Total Applications: 42 (career history across all art units; 38 currently pending)
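For readers unfamiliar with the metric, the interview lift above reduces to simple arithmetic. A minimal sketch, assuming lift is the relative change in allowance rate between cases resolved with and without an examiner interview (the formula and the 50%/100% figures are assumptions chosen to match the "+100.0%" shown, not data from the report):

```python
def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative change in allowance rate attributable to an interview, in percent."""
    return (rate_with - rate_without) / rate_without * 100.0

# Assumed illustrative figures: a 50% allowance rate without an interview
# and 100% with one reproduce the lift reported above.
print(f"{interview_lift(1.00, 0.50):+.1f}%")  # → +100.0%
```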

Statute-Specific Performance

§101: 36.6% (-3.4% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Baseline = Tech Center average estimate • Based on career data from 4 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Application

The following is a Non-Final Office Action. In response to the Examiner's communication of 11/12/2025, Applicant, on 02/01/2026, amended Claims 1 and 11 and cancelled Claims 5 and 15. Claims 1-4, 10-14, and 20 are now pending in this application and have been rejected below.

Response to Amendment

Applicant's amendments are insufficient to overcome the 35 USC 101 rejections set forth in the previous action. The rejections are maintained below. Applicant's amendments render moot the 35 USC 102 rejections set forth in the previous action in view of new and updated grounds for rejection necessitated by the amendments; therefore, those rejections are withdrawn in view of the new grounds for rejection under 35 USC 103 as set forth below. Applicant's amendments likewise render moot the 35 USC 103 rejections set forth in the previous action; those rejections are withdrawn in view of the new grounds for rejection, necessitated by the amendments, as set forth below.

Response to Arguments – 35 USC § 101

Applicant's arguments with respect to the 35 USC 101 rejections have been fully considered but are not persuasive. Applicant argues that the dynamic selection of an inference algorithm and evaluation index improves the accuracy and efficiency of the model training and inference process. Examiner respectfully disagrees. While Applicant's improvements may demonstrate an improvement to the claimed method, Applicant's assertion that the additional elements are sufficient to integrate the abstract idea into a practical application requires resolving what the subject of Applicant's improvement is.
Applicant's arguments address MPEP 2106.05(a)(II), Improvements to Any Other Technology or Technical Field. However, it is not the actual process of "model training and inference" that is subject to a specific technical improvement. Selecting a model type is fundamentally a mental process, analogous to a human operator evaluating available tools before performing some action. The benefit would be present even if the tools employed were generic statistical models rather than "machine learning algorithms", and so it cannot be said that this method represents an improvement to "[machine learning] models and inference". What is claimed is instead the mentally performable step of evaluating generic computing components before implementation. If the assertion is that these claims integrate into a practical application by virtue of representing an improvement to technology, the benefit described must be specific to the training and operation of machine learning models. Similar logic applies to Applicant's arguments regarding the evaluation index: the application of metrics akin to "an index of classification accuracy, regression analysis mean square error, or area under the curve…" amounts to the usage of generic mathematical operations that aid the mental process step of screening models before usage. These serve as extensions of the step of evaluating different tools before deployment and do not reflect an improvement to "models and inference". Accordingly, the rejections under 35 USC 101 have been updated to address the amendments and are maintained below.

Response to Arguments – 35 USC § 102 and 35 USC § 103

Applicant's arguments with respect to the rejections under 35 USC 103 have been considered but are moot in light of the new grounds of rejection necessitated by Applicant's amendments. Examiner respectfully notes the new grounds of rejection below in view of Jadon (US 20220004897 A1).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 10-14, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

The claims are directed to a method and apparatus. Therefore, the claims are directed to at least one of the four statutory categories.

101 Analysis – Step 2A

Regarding Prong One of the Step 2A analysis in the MPEP, the claims are analyzed to determine whether they recite subject matter directed to a judicial exception, namely a law of nature, a natural phenomenon, or one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent Claim 1 includes limitations that recite an abstract idea and will henceforth be used as a representative claim for the 101 rejection until otherwise noted.
Claim 1 recites: An enterprise management system, comprising: a storage device, storing a plurality of modules; and a processor, coupled to the storage device, used to execute the modules; wherein the processor obtains user operation behavior data, and executes a data collection module according to the user operation behavior data to obtain user organization information, a user operation behavior record, and a user operation time record from an enterprise resources planning database, wherein the data collection module generates inference data according to the user organization information, the user operation behavior record, and the user operation time record; and the processor executes a model inference module, and inputs the inference data to a task inference model in the model inference module, so that the task inference model generates inference result data, wherein the task inference model comprises artificial intelligence machine learning algorithm, wherein the processor executes a model training module according to an automatic scheduling setting to train the task inference model according to the inference result data and user operation result data corresponding to the inference result data, wherein the user operation result data generated through an actual operation executed by the user according to the inference result data, wherein the data collection module comprises a training data collecting unit, the training data collecting unit obtains training data according to the user organization information, the user operation behavior record, and the user operation time record from the enterprise resources planning database, and the processor executes a data training module according to the training data to train the task inference model, wherein the processor stores a characteristic engineering parameter of the task inference model after training in a model parameter module, wherein the data training module comprises a training characteristic engineering 
unit, a model construction engineering unit, and a model training unit, the training characteristic engineering unit performs data exploration on the training data, and the model construction engineering unit selectively constructs the task inference model by selecting a machine learning algorithm from a plurality of machine learning algorithms according to the training data, wherein the training characteristic engineering unit generates a characteristic parameter according to an input requirement of the task inference model, and the model training unit trains the task inference model according to the characteristic parameter, wherein the data training module further comprises a model test unit, the model test unit iteratively executes the training characteristic engineering unit, the model construction engineering unit, and the model training unit model to iteratively train the task inference model, wherein the model test unit determines whether the task inference model has completed training according to an evaluation index of the task inference model on a test set, wherein the evaluation index is dynamically selected from a plurality of evaluation index types based on different task inference models, and the evaluation index is an index of classification accuracy, regression analysis mean square error, or area under the curve of receiver operating characteristic curve. The examiner submits that the foregoing bolded limitation(s) constitute an abstract idea because, under its broadest reasonable interpretation, the claim covers a mental process and a mathematical concept. "obtains user operation behavior data", "to obtain user organization information…", "generates inference data…", and "inputs the inference data…" recite abstract ideas, namely mental processes that could be performed by a human with a pen and paper; per the MPEP, merely adapting them into the context of a technological environment with computing parts does not preclude them from being abstract.
Further, the "evaluation index…from a plurality of evaluation index types based on different…models", wherein the evaluation indexes represent different mathematical calculations and formulas such as "an index of…accuracy, regression analysis mean square error, or area under the curve…", is clearly a mathematical concept. Accordingly, the claim recites at least one abstract idea.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the MPEP, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the MPEP, it must be determined whether any additional elements in the claim beyond the judicial exception integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”): An enterprise management system, comprising: a storage device, storing a plurality of modules; and a processor, coupled to the storage device, used to execute the modules; wherein the processor obtains user operation behavior data, and executes a data collection module according to the user operation behavior data to obtain user organization information, a user operation behavior record, and a user operation time record from an enterprise resources planning database, wherein the data collection module generates inference data according to the user organization information, the user operation behavior record, and the user operation time record; and the processor executes a model inference module, and inputs the inference data to a task inference model in the model inference module, so that the task inference model generates inference result data, wherein the task inference model comprises artificial intelligence machine learning algorithm, wherein the processor executes a model training module according to an automatic scheduling setting to train the task inference model according to the inference result data and user operation result data corresponding to the inference result data, wherein the user operation result data generated through an actual operation executed by the user according to the inference result data, wherein the data collection module comprises a training data collecting unit, the training data collecting unit obtains training data according to the user organization information, the user operation behavior record, and the user operation time record from the enterprise resources planning database, and the processor executes a data training module according to the training data to train the task inference model, wherein 
the processor stores a characteristic engineering parameter of the task inference model after training in a model parameter module, wherein the data training module comprises a training characteristic engineering unit, a model construction engineering unit, and a model training unit, the training characteristic engineering unit performs data exploration on the training data, and the model construction engineering unit selectively constructs the task inference model by selecting a machine learning algorithm from a plurality of machine learning algorithms according to the training data, wherein the training characteristic engineering unit generates a characteristic parameter according to an input requirement of the task inference model, and the model training unit trains the task inference model according to the characteristic parameter, wherein the data training module further comprises a model test unit, the model test unit iteratively executes the training characteristic engineering unit, the model construction engineering unit, and the model training unit model to iteratively train the task inference model, wherein the model test unit determines whether the task inference model has completed training according to an evaluation index of the task inference model on a test set, wherein the evaluation index is dynamically selected from a plurality of evaluation index types based on different task inference models, and the evaluation index is an index of classification accuracy, regression analysis mean square error, or area under the curve of receiver operating characteristic curve. For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
As it pertains to Claim 1, the additional elements in the claims include “An enterprise management system, comprising: a storage device, storing a plurality of modules; and a processor, coupled to the storage device, used to execute the modules; wherein the processor”, “and executes a data collection module according to the user operation behavior data”, “wherein the data collection module”, “and the processor executes a model inference module”, “to a task inference model in the model inference module, so that the task inference model generates inference result data”, as well as the various mentions of units, and the training of a machine learning algorithm. When considered in view of the claim as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional elements are generic computing components that are merely used as a tool to perform the recited abstract idea and/or do no more than generally link the use of the recited abstract idea to a particular technological environment or field of use under Step 2A Prong Two. Generically reciting, as a non-limiting example of one such additional element, a “data collection module” does not serve to integrate the abstract idea into a practical application. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. 
For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claim 11 does not integrate the recited abstract ideas into a practical application by analogous reasoning. Claims 2 and 12 additionally recite "an inference data extracting unit", "a user behavior recording unit", "platform data management unit", "user behavior recording unit", "transmit…user current behavior attribute data…inference data extracting unit…platform data management unit and the enterprise resources planning database…to the model inference module". Claims 3 and 13 additionally recite "an inference characteristic engineering unit", "a model selection unit", "model prediction unit", "task inference model generates the inference result data". Claims 10 and 20 additionally recite "a data management module". These additional limitations do not integrate the recited abstract ideas into a practical application by analogous reasoning as above.
Claims 4 and 14 do not recite additional limitations beyond those found in the Claims from which they depend, and therefore do not integrate the recited abstract ideas into a practical application.

101 Analysis – Step 2B

Regarding Step 2B of the MPEP, representative independent Claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, the additional elements amount to generic computing components that are merely used as a tool to perform the recited abstract idea and/or do no more than generally link the use of the recited abstract idea to a particular technological environment or field of use. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. Claim 11 does not amount to significantly more by analogous reasoning. Claims 2 and 12 additionally recite "an inference data extracting unit", "a user behavior recording unit", "platform data management unit", "user behavior recording unit", "transmit…user current behavior attribute data…inference data extracting unit…platform data management unit and the enterprise resources planning database…to the model inference module". Claims 3 and 13 additionally recite "an inference characteristic engineering unit", "a model selection unit", "model prediction unit", "task inference model generates the inference result data". Claims 10 and 20 additionally recite "a data management module".
These additional limitations do not integrate the recited abstract ideas into a practical application or amount to significantly more, by analogous reasoning as above. Claims 4 and 14 do not recite additional limitations beyond those found in the Claims from which they depend, and therefore do not integrate the recited abstract ideas into a practical application or amount to significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 10, 11-12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (CN 109583659 A) in view of Jadon (US 20220004897 A1).
Claims 1, 11

As to Claim 1, Yang teaches: An enterprise management system, In [0004], "If we can use deep learning technology to learn the rules based on the user's previous operating behaviors and combined with the external factors that affect the operating behaviors, and thus recommend the next module that is most likely to be operated to the user, we can save the user's operating time, be simple and convenient, and greatly improve the user experience". comprising: a storage device, storing a plurality of modules; and a processor, coupled to the storage device, used to execute the modules; In [0045], "Based on the above method, the present invention also mentions a user operation behavior prediction system based on deep learning, which includes the following modules". [0046-0052] list various examples of said modules. While the reference does not explicitly disclose a storage device and processor, it would be apparent to one of ordinary skill in the art that the usage of such computing parts is implicit; in [0047], we have a "module for preprocessing user data", implying the existence of a processor. In [0050], "It is used to classify the training set and the verification set and import them into the user operation behavior prediction system", meaning we have the existence of means for storage.
wherein the processor obtains user operation behavior data, and executes a data collection module according to the user operation behavior data to obtain user organization information, a user operation behavior record, and a user operation time record from an enterprise resources planning database, In [0015-0016], "S1: extracting user behavior data from historical operation logs, wherein the user behavior data at least includes environmental data and continuous behavior data, wherein the environmental data includes multiple external environmental factors when the data occurs, and the continuous behavior data at least includes the user's operation behavior sequence and the occurrence time of the operation behavior. Furthermore, the external environmental factors include some or all of the company ID, department ID, user ID, and time period when the behavior occurs". Given that this database is prescribed to correspond to a company with company resources, we understand this to be an enterprise resources planning database. and inputs the inference data to a task inference model in the model inference module, so that the task inference model generates inference result data, wherein the task inference model comprises artificial intelligence machine learning algorithm In [0049-0050], "4) A module for creating a user operation behavior prediction system, wherein the user operation behavior prediction system includes a deep LSTM subsystem with operation behavior sequence and operation behavior occurrence time as parameters and a fully connected neural network subsystem with external environmental factors as parameters. It is used to classify the training set and the verification set and import them into the user operation behavior prediction system, and use the window sliding technology to train and verify the user operation behavior prediction system to obtain the module of the optimized user operation behavior prediction system". 
wherein the processor executes a model training module according to an automatic scheduling setting to train the task inference model according to the inference result data and user operation result data corresponding to the inference result data, wherein the user operation result data generated through an actual operation executed by the user according to the inference result data In [0078], "Step 7: System training and optimization. After the system is defined, the next step is to feed the system with data. However, due to the large amount of data, if all the data is fed at once, it will take up a lot of computer memory resources and sometimes even cause the computer to crash. In order to solve this problem, the method of feeding data in batches is adopted, and multiple rounds of training are performed, and the system is continuously optimized during training. The optimization algorithm used is the "adam optimization algorithm", and the loss function is the multi-classification cross entropy loss function (categorical_crossentropy)." Note the iterative usage of multiple rounds of input data, namely user operations as outlined in [0072-0073] above, and intermediate validation on inference output, as given in [0080-0083]. wherein the data collection module comprises a training data collecting unit, the training data collecting unit obtains training data according to the user organization information, the user operation behavior record, and the user operation time record from the enterprise resources planning database, In [0015-0016], "S1: extracting user behavior data from historical operation logs, wherein the user behavior data at least includes environmental data and continuous behavior data, wherein the environmental data includes multiple external environmental factors when the data occurs, and the continuous behavior data at least includes the user's operation behavior sequence and the occurrence time of the operation behavior.
Furthermore, the external environmental factors include some or all of the company ID, department ID, user ID, and time period when the behavior occurs". and the processor executes a data training module according to the training data to train the task inference model, wherein the processor stores a characteristic engineering parameter of the task inference model after training in a model parameter module, In [0049], "4) A module for creating a user operation behavior prediction system, wherein the user operation behavior prediction system includes a deep LSTM subsystem with operation behavior sequence and operation behavior occurrence time as parameters and a fully connected neural network subsystem with external environmental factors as parameters". The mechanics of training are outlined in [0078], "In order to solve this problem, the method of feeding data in batches is adopted, and multiple rounds of training are performed, and the system is continuously optimized during training. The optimization algorithm used is the "adam optimization algorithm", and the loss function is the multi-classification cross entropy loss function (categorical_crossentropy)". wherein the data training module comprises a training characteristic engineering unit, a model construction engineering unit, and a model training unit, the training characteristic engineering unit performs data exploration on the training data, and the model construction engineering unit selectively constructs the task inference model … according to the training data, wherein the training characteristic engineering unit generates a characteristic parameter according to an input requirement of the task inference model, and the model training unit trains the task inference model according to the characteristic parameter, The user operation behavior prediction system encapsulates the behavior of a model construction engineering unit and model training unit. 
With respect to model construction, in [0027], "S4: Create a user operation behavior prediction system, which includes a deep LSTM subsystem with operation behavior sequence and operation behavior occurrence time as parameters and a fully connected neural network subsystem with external environmental factors as parameters". The mechanics of training are outlined in [0078], "In order to solve this problem, the method of feeding data in batches is adopted, and multiple rounds of training are performed, and the system is continuously optimized during training. The optimization algorithm used is the "adam optimization algorithm", and the loss function is the multi-classification cross entropy loss function (categorical_crossentropy)". We understand the training characteristic unit to be fulfilled by the extraction module that is later used as a parameter as outlined above in [0027]. The extraction module can be found in [0046], "A module for extracting user behavior data from historical operation logs, wherein the user behavior data includes at least environmental data and continuous behavior data". wherein the data training module further comprises a model test unit, the model test unit iteratively executes the training characteristic engineering unit, the model construction engineering unit, and the model training unit model to iteratively train the task inference model, The user operation behavior prediction system encapsulates the behavior of a model test unit, model construction engineering unit, and model training unit. With respect to testing, in [0079], "Step 8: Performance evaluation. After the user operation behavior system is trained, it is time to evaluate the pros and cons of the system. Since this system is a classification system, the classification accuracy is used to intuitively evaluate the quality of the system".
With respect to model construction, in [0027], "S4: Create a user operation behavior prediction system, which includes a deep LSTM subsystem with operation behavior sequence and operation behavior occurrence time as parameters and a fully connected neural network subsystem with external environmental factors as parameters". The mechanics of training are outlined in [0078], "In order to solve this problem, the method of feeding data in batches is adopted, and multiple rounds of training are performed, and the system is continuously optimized during training. The optimization algorithm used is the "adam optimization algorithm", and the loss function is the multi-classification cross entropy loss function (categorical_crossentropy)". wherein the model test unit determines whether the task inference model has completed training according to an evaluation index of the task inference model on a test set, In [0036], "S6: Import the test set into the optimized user operation behavior prediction system. If the output prediction accuracy of the user behavior is greater than or equal to the set accuracy threshold, proceed to step S7; otherwise, return to step S5. The calculation formula for the prediction accuracy is as follows". and the evaluation index is an index of classification accuracy, regression analysis mean square error, or area under the curve of receiver operating characteristic curve. In [0079], "Step 8: Performance evaluation. Once the user behavior system is trained, the next step is to evaluate its performance. Since this system is a classification system, the classification accuracy is used to intuitively evaluate the system's quality." Yang does not expressly disclose the remaining limitations. However, Jadon teaches: by selecting a machine learning algorithm from a plurality of machine learning algorithms In [0111], "Training workflow unit 302 may perform a model evaluation process and a model selection process.
During the model evaluation process, training workflow unit 302 may train ML models and generate evaluation metrics for the ML models. During the model selection process, training workflow unit 302 may use the evaluation metrics for the ML models to determine a selected ML model. This disclosure provides example details regarding the model evaluation process and model selection process with respect to FIG. 4”. We consider distinct models to encompass different algorithms as supported by [0050], “The techniques of this disclosure may use multiple major widely known algorithms, trains, and selects an accurate ML model for prediction, which may save hours of time and effort for any unknown metric prediction”. wherein the evaluation index is dynamically selected from a plurality of evaluation index types based on different task inference models, Jadon implicitly applies different evaluation metrics based on the type of model being evaluated, in [0125], “Model selection unit 408 may automatically determine a selected ML model in the predetermined plurality of ML models based on evaluation metrics for the ML models trained for a request. Model selection unit 408 may use various evaluation metrics for the ML models to determine the selected ML model. For example, for regression-based ML models…For classification-based ML models… For ML models trained using unsupervised learning…” Jadon discloses a system for selecting applicable machine learning models out of a plurality of options for data center administration. Yang discloses a system meant to synthesize different machine learning approaches to provide enterprise resource analysis. Each reference discloses means for optimally leveraging machine learning models for some output task. Extending the comparative model approach as recorded in Jadon to the system of Yang is applicable as they are directed to the shared problem of optimally deploying machine learning models for commercial resource analysis.
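The disputed limitation, dynamically selecting the evaluation index according to the type of task inference model, together with Yang's [0036] accuracy-threshold check, can be illustrated with a short sketch. The task-type keys and the threshold value are hypothetical assumptions for illustration, not drawn verbatim from either reference:

```python
import numpy as np

# Hypothetical mapping from model task type to evaluation index
# (classification -> accuracy, regression -> mean squared error; AUC omitted).
EVALUATION_INDEX = {
    "classification": lambda y, p: float(np.mean(y == p)),        # accuracy
    "regression":     lambda y, p: float(np.mean((y - p) ** 2)),  # MSE
}

def evaluate(task_type, y_true, y_pred):
    """Dynamically select the evaluation index from the model's task type."""
    return EVALUATION_INDEX[task_type](np.asarray(y_true), np.asarray(y_pred))

def training_complete(accuracy, threshold=0.9):
    """Yang [0036]-style check: proceed if test-set accuracy meets the set
    threshold, otherwise return to the optimization step (S5)."""
    return accuracy >= threshold
```

The dispatch table stands in for Jadon's [0125] behavior of applying regression, classification, or unsupervised metrics depending on the model under evaluation.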
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to take the comparative model approach as taught in Jadon and apply it to the system of Yang. Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit of adopting the comparative model approach of Jadon, as outlined in [0019-0020]. Claim 11 is rejected as disclosing substantially similar limitations as Claim 1.

Claims 2, 12

As to Claim 2, Yang combined with Jadon teaches all the limitations of Claim 1 as discussed above. Yang teaches: The enterprise management system according to claim 1, wherein the data collection module comprises a user behavior recording unit, a platform data management unit, and an inference data extracting unit, and the user behavior recording unit transmits user current behavior attribute data to the inference data extracting unit according to the user operation behavior data, so that the inference data extracting unit extracts the user organization information, the user operation behavior record, and the user operation time record from the platform data management unit and the enterprise resources planning database, The functionality of the data collection module is encapsulated in [0015-0016], "S1: extracting user behavior data from historical operation logs, wherein the user behavior data at least includes environmental data and continuous behavior data, wherein the environmental data includes multiple external environmental factors when the data occurs, and the continuous behavior data at least includes the user's operation behavior sequence and the occurrence time of the operation behavior. Furthermore, the external environmental factors include some or all of the company ID, department ID, user ID, and time period when the behavior occurs".
Further, since information associated with operations includes company ID, department ID, and user ID, we understand the disclosed collection of historical operation logs to anticipate an enterprise resource planning database; the capture of such data is indicated to be in the context of an enterprise. Claim 12 is rejected as disclosing substantially similar limitations as Claim 2.

Claims 10, 20

As to Claim 10, Yang combined with Jadon teaches all the limitations of Claim 7 as discussed above. Yang teaches: The enterprise management system according to claim 7, wherein the processor executes a data management module to perform data cleaning and regularization on the training data, and provides the training data after the data cleaning and regularization to the training characteristic engineering unit. In [0018-0021], "S2: Preprocess user behavior data. In a further embodiment, in step S2, the method for preprocessing the user behavior data includes: S21: Eliminate noise data that is not relevant to the business. S22: Perform normalization on the remaining environmental data and pre-train the remaining continuous behavior data to achieve vectorization". Claim 20 is rejected as disclosing substantially similar limitations as Claim 10.

Claims 3-4, 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (CN109583659A) in view of Jadon (US 20220004897 A1) in further view of Chang (WO 2016164680 A2).

Claims 3, 13

As to Claim 3, Yang combined with Jadon teaches all the limitations of Claim 1 as discussed above. Yang teaches: The enterprise management system according to claim 2, wherein the model inference module comprises an inference characteristic engineering unit, We construe the functionality of inference characteristic engineering to be under the umbrella of data extraction, as we understand inferences to be an intermediate step between data collection and model input.
In [0046], "1) A module for extracting user behavior data from historical operation logs, wherein the user behavior data includes at least environmental data and continuous behavior data, wherein the environmental data includes multiple external environmental factors when the data occurs, and the continuous behavior data includes at least the user's operation behavior sequence and the time when the operation behavior occurs". and a model prediction unit, and the inference characteristic engineering unit obtains a corresponding characteristic engineering parameter from the model parameter module according to the user organization information, the user operation behavior record and the user operation time record, and performs characteristic extraction on the user organization information, the user operation behavior record, and the user operation time record according to the characteristic engineering parameter to generate the inference data, The functionality of the data collection module is encapsulated in [0015-0016], "S1: extracting user behavior data from historical operation logs, wherein the user behavior data at least includes environmental data and continuous behavior data, wherein the environmental data includes multiple external environmental factors when the data occurs, and the continuous behavior data at least includes the user's operation behavior sequence and the occurrence time of the operation behavior. Furthermore, the external environmental factors include some or all of the company ID, department ID, user ID, and time period when the behavior occurs". 
Downstream, these details are used as parameters in the final output, in [0049], "4) A module for creating a user operation behavior prediction system, wherein the user operation behavior prediction system includes a deep LSTM subsystem with operation behavior sequence and operation behavior occurrence time as parameters and a fully connected neural network subsystem with external environmental factors as parameters". and the model prediction unit inputs the inference data to the task inference model, so that the task inference model generates the inference result data. In [0049], "4) A module for creating a user operation behavior prediction system, wherein the user operation behavior prediction system includes a deep LSTM subsystem with operation behavior sequence and operation behavior occurrence time as parameters and a fully connected neural network subsystem with external environmental factors as parameters". Yang does not expressly disclose the remaining limitations. However, Jadon teaches: a model selection unit, In [0111], “Training workflow unit 302 may perform a model evaluation process and a model selection process. During the model evaluation process, training workflow unit 302 may train ML models and generate evaluation metrics for the ML models. During the model selection process, training workflow unit 302 may use the evaluation metrics for the ML models to determine a selected ML model. This disclosure provides example details regarding the model evaluation process and model selection process with respect to FIG. 4”. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to take the comparative model approach as taught in Jadon and apply it to the system of Yang. Motivation to do so comes from the same rationale as outlined above with respect to Claim 1. Yang combined with Jadon does not expressly disclose the remaining limitations.
However, Chang teaches: wherein the model selection unit selects one of a plurality of models as the task inference model according to the inference data, Understanding inference data as encompassing the predictor and output variables in [00137], "In block 1502, a population is initialized. In some aspects, initializing the population can include randomly selecting multiple predictive models". In [00138-00139], " In block 1504, each selected model is evaluated. In some aspects, the automated model development tool can determine a Kolmogorov-Smirnov ("KS") test value for each selected model using the respective set of dependent variables (e.g., output variables). The KS value for a model with a given set of predictor variables can indicate the degree to which the model with the given set of predictor variables accurately predicts the output variables in the sample data set. In block 1506, a model is selected. In some aspects, the automated model development tool can select the model. For example, the automated model development tool can select model-variable subset combinations for a "crossover" stage after ranking all the models by KS test value". In [00151], “Any suitable device or set of computing devices can be used to execute the automated model development tool described herein…Although FIG. 18 depicts a single computing system for illustrative purposes, any number of servers or other computing devices can be included in a computing system that executes an automated model development tool 102. For example, a computing system may include multiple computing devices configured in a grid, cloud, or other distributed computing system that executes then automated model development tool 102”. Yang combined with Jadon discloses a system for predicting user operation behavior by leveraging statistical models. Chang discloses a system meant to streamline the development of statistical models. 
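Chang's Kolmogorov-Smirnov-based evaluation and selection of candidate models (blocks 1502-1506) can be sketched roughly as follows. The function names and the split of scores into two outcome classes are illustrative assumptions, and the "crossover" ranking stage is omitted:

```python
import numpy as np

def ks_statistic(scores_pos, scores_neg):
    """Two-sample KS statistic: the maximum gap between the empirical CDFs
    of a model's scores for the two outcome classes."""
    grid = np.sort(np.concatenate([scores_pos, scores_neg]))
    cdf_pos = np.searchsorted(np.sort(scores_pos), grid, side="right") / len(scores_pos)
    cdf_neg = np.searchsorted(np.sort(scores_neg), grid, side="right") / len(scores_neg)
    return float(np.max(np.abs(cdf_pos - cdf_neg)))

def select_model(candidates, scores_pos_by_model, scores_neg_by_model):
    """Evaluate each candidate model by its KS value and select the best,
    in the spirit of Chang [00138-00139]."""
    return max(candidates,
               key=lambda m: ks_statistic(scores_pos_by_model[m],
                                          scores_neg_by_model[m]))
```

A higher KS value indicates that a model's scores separate the two outcome classes more sharply, which is why Chang uses it to rank model-variable combinations.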
Each reference discloses means for leveraging and implementing statistical models. Extending the data analysis and multi-model approach as recorded in Chang is applicable to Yang combined with Jadon as they are both concerned with the task of developing and implementing statistical models. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to use the data analysis and multi-model approach as taught in Chang and apply that to the system as taught in Yang combined with Jadon. Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit that the particular data analysis methodologies of Chang would enable users to utilize Yang combined with Jadon with more flexible means to calibrate multi-model selection, as well as provide users with insights on recommendations. Claim 13 is rejected as presenting substantially similar limitations as Claim 3.

Claims 4, 14

As to Claim 4, Yang combined with Jadon teaches all the limitations of Claim 1 as discussed above. Yang does not expressly disclose the remaining limitations. However, Chang teaches: The enterprise management system according to claim 1, wherein the processor performs engineering package transfer on the inference result data to output a recommendation result list. It is specified that inference data is an intermediate step between data collection and feeding the data to the model to be used. In line with this interpretation, in [00113], "In some aspects, in block 1202, the automated model development tool may output data associated with analyzing or classifying the predictor variables. As an example, the automated model development tool may output a report, list, chart, etc., that indicates predictor variables that are classified as numeric predictor variables or a predictor variables that are classified as character predictor variables".
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to adopt the data analysis and multi-model approach as taught in Chang and apply that to the system as taught in Yang combined with Jadon. Motivation to do so comes from the same rationale as outlined above with respect to Claim 3. Claim 14 is rejected as presenting substantially similar limitations as Claim 4.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THEODORE L XIE whose telephone number is (571)272-7102. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THEODORE XIE/
Examiner, Art Unit 3623

/WILLIAM S BROCKINGTON III/
Primary Examiner, Art Unit 3623

Prosecution Timeline

Feb 17, 2022
Application Filed
Jul 22, 2025
Non-Final Rejection — §101, §103
Oct 20, 2025
Response Filed
Nov 06, 2025
Final Rejection — §101, §103
Feb 01, 2026
Request for Continued Examination
Feb 24, 2026
Response after Non-Final Action
Mar 31, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591576
DRILLING PERFORMANCE ASSISTED WITH AN ARTIFICIAL INTELLIGENCE ENGINE
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
1y 7m
Median Time to Grant
High
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
