Prosecution Insights
Last updated: April 19, 2026
Application No. 18/302,397

MACHINE LEARNING OPTIMIZATION OF EXPERT SYSTEMS

Non-Final OA: §101, §103, §112

Filed: Apr 18, 2023
Examiner: BROWN, SARA GRACE
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Coachem Inc.
OA Round: 3 (Non-Final)

Grant Probability: 26% (At Risk)
OA Rounds: 3-4
To Grant: 4y 4m
With Interview: 56%
Examiner Intelligence

Career Allow Rate: 26% (grants only 26% of cases; 40 granted / 151 resolved; -25.5% vs TC avg)
Interview Lift: +29.3% (strong lift for resolved cases with interview)
Avg Prosecution: 4y 4m (typical timeline; 33 currently pending)
Total Applications: 184 (career history, across all art units)

Statute-Specific Performance

§101: 35.2% (-4.8% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 151 resolved cases
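As a sanity check, the headline examiner figures above can be reproduced from the stated counts. The variable names below are illustrative, and the with-interview figure is treated as an assumption: the career rate plus the stated interview lift, both read as percentage points.

```python
# Reproduce the headline examiner statistics from the stated counts
# (40 granted out of 151 resolved career cases). Names are illustrative.

granted = 40
resolved = 151

career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 26.5%, shown above as 26%

# Assumption: the "with interview" figure is approximately the career
# rate plus the stated +29.3 percentage-point interview lift.
interview_lift = 29.3
with_interview = career_allow_rate + interview_lift
print(f"With interview: {with_interview:.0f}%")  # 56%
```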

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 24 November 2025 has been entered.

Response to Arguments

Regarding the 35 USC 101 rejection, Examiner has fully considered Applicant's arguments and amendments. Regarding Applicant's assertion of "In regard to the rejection under 35 USC 101, Applicant submits that the claims include 'adjusting the set of decision branches of the expert system' where the Office notes that the expert system is a non-abstract element ('but for the language of "by the expert system," covers an abstract idea', Action, page 5). Thus, the claims recite a practical application at least because there is a non-abstract element (e.g., the expert system) and (ii) the combination of that non-abstract element and any abstract elements improve the relevant technology (see paragraph 5 and 9 of the specification as filed; and 2106.04(d)(1), 2106.05). Moreover, there is an additional application of the expert system in the claimed 'generating ... a second recommended intervention different than the provided recommended intervention,'" Examiner respectfully disagrees. While the limitation of "adjusting the set of decision branches of the expert system" is an additional element for consideration under Step 2A, Prong 2, the present claims recite several other limitations that are abstract limitations for consideration under Step 2A, Prong 1.
Additionally, the present claims do not integrate the judicial exception into a practical application in view of the above limitation. Applicant's purported improvement of generating a recommendation, as drafted, is not an improvement to the additional elements of the claims. Rather, this type of improvement is an improvement to the abstract limitations for consideration under Step 2A, Prong 1. MPEP 2106.05(a): "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements..." Additionally, as discussed in MPEP 2106.05(a)(II) (improvements to technology or technical fields), "an improvement in the abstract idea itself ... is not an improvement in technology."

Regarding Applicant's assertion of "Furthermore, the specific adjusting of 'the set of decision branches of the expert system' further refine the claim such that it is not merely linking to the field of machine learning. Newly added claims 22 through 26 include yet further definitions of such adjusting which yet further remove the claims from any mere linking to the field of machine learning," Examiner respectfully disagrees. Examiner respectfully asserts that these limitations are nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
Therefore, this additional element is not sufficient to prove integration into a practical application or anything significantly more. Accordingly, the present claims are rejected under 35 USC 101.

Regarding the 35 USC 103 rejection, Examiner has fully considered Applicant's arguments and amendments. Regarding Applicant's assertion of "In response to the Office Action, the applicant has amended each of independent claims 1, 19, and 20 to recite the feature of 'adjusting, using the output from the machine learning model, the set of decision branches of the expert system.' The Applicant argues that the applied references do not disclose, teach or suggest at least this new feature, and submits that it is self-evident that the introduction of this feature merits further search and/or consideration," Examiner has introduced St. Clair in order to cure the deficiencies of the prior art combination of record. See the detailed 35 USC 103 rejection below. Accordingly, the present claims are rejected under 35 USC 103.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 22 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 22, the claim introduces subject matter not supported by the original disclosure. The added material which is not supported by the original disclosure is as follows: dependent claim 22 introduces new matter by incorporating the limitations of "wherein adjusting the set of decision branches of the expert system using the output from the machine learning model comprises: adjusting (i) at least one a parameter module or a threshold module of the expert system using the first output from the first model, (ii) a prediction module of the expert system using the second output from the second model, and (iii) an action module of the expert system using the third output from the third model." With respect to the claimed first, second, and third models, paragraph [0048] teaches the machine learning models include three models. Paragraph [0048] explicitly states "In some implementations, the first model 142 can perform adjustments for one or more of the parameter module 108 or the threshold module 110. The second model 144 can perform adjustments for the prediction module 114. The third model 146 can perform adjustments for the action module 118." Additionally, paragraph [0071] teaches the system can be used to continuously improve the expert system including the three claimed modules.

The original disclosure does not disclose adjusting the set of decision branches of the expert system by adjusting the three modules associated with their respective three models. The original disclosure only describes decision branches in [0005], which merely discloses an expert system having decision branches that can be updated periodically based on a machine learning model. However, the present disclosure does not disclose, whether expressly, implicitly, or inherently, "wherein adjusting the set of decision branches of the expert system using the output from the machine learning model comprises: adjusting (i) at least one a parameter module or a threshold module of the expert system using the first output from the first model, (ii) a prediction module of the expert system using the second output from the second model, and (iii) an action module of the expert system using the third output from the third model." The original disclosure does not support the claimed invention. Therefore, claim 22 introduces new matter into the disclosure by incorporating claim amendments that are not supported by the original disclosure. Therefore, claim 22 is rejected under 35 USC 112(a).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 9-16, and 19-26 are rejected under 35 USC 101 because the claimed invention is directed to a judicial exception (i.e., abstract idea) without anything significantly more.
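For orientation only, the model-to-module mapping that the written-description discussion quotes from paragraph [0048] (first model 142 adjusting the parameter module 108 or threshold module 110, second model 144 the prediction module 114, third model 146 the action module 118) amounts to a simple routing of each model's output to a designated module. The sketch below illustrates that routing; the function and key names are hypothetical and not taken from the application.

```python
# Hypothetical sketch of the model-to-module routing described in the
# quoted paragraph [0048]: each ML model's output adjusts a specific
# module of the expert system. All names are illustrative only.

MODEL_TO_MODULES = {
    "first_model":  ["parameter_module", "threshold_module"],
    "second_model": ["prediction_module"],
    "third_model":  ["action_module"],
}

def route_adjustments(model_outputs: dict) -> dict:
    """Map each model's output onto the module(s) it is said to adjust."""
    adjustments = {}
    for model, output in model_outputs.items():
        for module in MODEL_TO_MODULES[model]:
            adjustments[module] = output
    return adjustments

adjusted = route_adjustments({
    "first_model": "new thresholds",
    "second_model": "new prediction rules",
    "third_model": "new action set",
})
print(adjusted["prediction_module"])  # prints: new prediction rules
```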
Step 1: Claims 1-4, 9-16, and 21-26 are directed to a method, claim 19 is directed to a non-transitory computer-readable medium, and claim 20 is directed to a system. Therefore, the claims are directed to patent-eligible categories of invention.

Step 2A, Prong 1: Claims 1, 19, and 20 recite generating a recommended intervention pertaining to job-performance of a user, constituting an abstract idea based on "Certain Methods of Organizing Human Activity" related to managing personal behavior or interactions between individuals. Claim 1 recites limitations, similarly recited in claims 19 and 20, of "providing a recommended intervention pertaining to job-performance of a first user based on a set of decision branches; obtaining, subsequent to providing the recommended intervention, a second set of user data; and generating, a second recommended intervention different than the provided recommended intervention pertaining to the performance of the first user," that, as drafted, is a process that, under its broadest reasonable interpretation, but for the language of "by the expert system," covers an abstract idea but for the recitation of generic computer components. That is, other than reciting "by the expert system," nothing in the claim elements precludes the steps from being interpreted as an abstract idea. For example, with the exception of the "by the expert system" language, the claim steps in the context of the claim encompass an abstract idea directed to "Certain Methods of Organizing Human Activity." Dependent claims 2-3 and 13 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration. Dependent claims 4, 9-12, 14-16, and 21-26 will be evaluated under Step 2A, Prong 2 below.

Step 2A, Prong 2: Claims 1, 19, and 20 do not integrate the use of the judicial exception into a practical application.
Claim 1 is a method performed "by an expert system." Claim 19 is directed to a "non-transitory computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising," which is recited in the preamble of the claim. Claim 20 is a system comprising "one or more processors; and machine-readable media interoperably coupled with the one or more processors and storing one or more instructions that, when executed by the one or more processors, perform operations comprising," which is recited in the preamble of the claim.

Claim 1 recites the additional elements, similarly recited in claims 19 and 20, of "providing by an expert system over a user-interface a recommended intervention pertaining to job-performance of a first user based on a set of decision branches of the expert system processing a first set of user data obtained from one or more computing devices," "obtaining, subsequent to providing the recommended intervention, a second set of user data from the one or more computing devices," "providing the first set of user data to a machine learning model that is different than the expert system," and "generating, using the adjusted expert system, a second recommended intervention different than the provided recommended intervention pertaining to the performance of the first user based on the adjusted set of decision branches of the expert system processing the second set of user data from the one or more computing devices, wherein the second recommended intervention is presented on the interface of the one or more computing devices." These additional elements are mere instructions to implement an abstract idea using a computer in its ordinary capacity, or merely use the computer as a tool to perform the identified abstract idea.
Use of a computer or other machinery in its ordinary capacity for tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).

Claim 1 recites the additional elements, similarly recited in claims 19 and 20, of "adjusting, using the output from the machine learning model, the set of decision branches of the expert system" and "obtaining, in response to providing the first set of user data, an output from the machine learning model indicating an adjustment to the set of decision branches of the expert system." These limitations are nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Therefore, these additional elements are not sufficient to prove integration into a practical application.

Therefore, the additional elements of the independent claims, when considered both individually and in combination, are not sufficient to prove integration into a practical application.
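The adjust-and-regenerate loop recited in the independent claims (provide an intervention from the decision branches, obtain an ML-indicated adjustment, adjust the branches, then generate a different intervention from the second data set) can be sketched as follows. This is a minimal illustration under assumed data shapes and made-up thresholds, not the application's implementation.

```python
# Minimal sketch of the claimed loop: an expert system's decision
# branches produce a recommendation; output from a separate ML model
# adjusts those branches; the adjusted system then produces a second,
# different recommendation. All rules and names are hypothetical.

def recommend(branches, user_data):
    """Walk the decision branches: the first rule whose condition holds wins."""
    for condition, intervention in branches:
        if condition(user_data):
            return intervention
    return "no intervention"

# Initial decision branches (if-then rules); thresholds are made up.
branches = [
    (lambda d: d["missed_deadlines"] > 3, "time-management coaching"),
]

first = recommend(branches, {"missed_deadlines": 5})

# Output from the ML model indicating an adjustment to the branches
# (here: a new rule inserted ahead of the existing one).
ml_adjustment = (lambda d: d["meeting_hours"] > 20, "reduce meeting load")
branches.insert(0, ml_adjustment)

# Second set of user data, processed by the adjusted expert system.
second = recommend(branches, {"missed_deadlines": 5, "meeting_hours": 25})

print(first, "->", second)  # time-management coaching -> reduce meeting load
```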
Dependent claims 2-3 and 13 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration, which does not integrate the judicial exception into a practical application.

Dependent claim 4 recites the additional element of "comprising: storing in memory (i) the recommended intervention and (ii) the second set of user data from the one or more computing devices with a first identifier that identifies the first user." Use of a computer or other machinery in its ordinary capacity for tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).

Dependent claim 9 recites the additional element of "wherein the output from the machine learning model indicating the adjustment to the expert system comprises data indicating a new cause affecting job-performance of the first user not included in a previous set of causes accessible by the expert system." Dependent claim 10 recites the additional element of "wherein the output from the machine learning model indicating the adjustment to the expert system comprises data indicating a new intervention that affects job-performance of the first user not included in a previous set of interventions accessible by the expert system." Dependent claim 11 recites the additional element of "wherein the output from the machine learning model indicating the adjustment to the expert system comprises data indicating a new metric that represents an aspect of job-performance of the first user not included in a previous set of metric accessible by the expert system." Dependent claim 12 recites the additional element of "comprising: generating the adjusted expert system by adjusting the expert system using the output from the machine learning model indicating the adjustment to the expert system." Dependent claim 14 recites the additional element of "wherein the output from the machine learning model includes a set of weights for the expert system to prioritize one or more rules where multiple rules are applicable to select an intervention in response to a given set of conditions." Dependent claim 15 recites the additional element of "wherein the machine learning model is trained to generate adjustments for the expert system." Dependent claim 22 recites the additional element of "wherein the machine learning model comprises a first model, a second model, and a third model, and wherein obtaining the output from the machine learning model comprises: obtaining (i) first output from the first model, (ii) second output from the second model, and (iii) third output from the third model, wherein adjusting the set of decision branches of the expert system using the output from the machine learning model comprises: adjusting (i) at least one a parameter module or a threshold module of the expert system using the first output from the first model, (ii) a prediction module of the expert system using the second output from the second model, and (iii) an action module of the expert system using the third output from the third model." Dependent claim 23 recites the additional element of "wherein the set of decision branches includes one or more if-then rules, and wherein adjusting the set of decision branches of the expert system using the output from the machine learning model comprises: adjusting the one or more if-then rules included in the set of decision branches of the expert system." Dependent claim 24 recites the additional element of "wherein adjusting the set of decision branches of the expert system using the output from the machine learning model comprises: trimming one or more logical pathways included in the set of decision branches of the expert system." Dependent claim 25 recites the additional element of "and providing the combined user data to the machine learning model with the data indicating the recommended intervention." These limitations are nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Therefore, these additional elements are not sufficient to prove integration into a practical application.

Dependent claim 16 recites the additional element of "wherein obtaining, subsequent to providing the recommended intervention, the second set of user data from the one or more computing devices comprises: obtaining, subsequent to providing the recommended intervention, the second set of user data from the one or more computing devices over a period of time different from a period of time within which the first set of user data is obtained." Dependent claim 26 recites the additional element of "wherein the second set of user data includes at least one of recognized words spoken by the first user or recognized words included by the first user in an electronic message or electronic mail." Use of a computer or other machinery in its ordinary capacity for tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).

Dependent claim 21 recites the additional element of "wherein the expert system includes (i) a parameter module, (ii) a threshold module, (iii) a prediction module, and (iv) an action module configured to perform processing of the set of decision branches of the expert system, and wherein the output from the machine learning model indicating the adjustment to the set of decision branches of the expert system comprises an adjustment to at least one of (i) the parameter module, (ii) the threshold module, (iii) the prediction module, or (iv) the action module." Use of a computer or other machinery in its ordinary capacity for tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) does not integrate a judicial exception into a practical application. See MPEP 2106.05(f).

Therefore, the additional elements of the dependent claims, when considered both individually and in the context of the independent claims, are not sufficient to prove integration into a practical application.

Step 2B: Claims 1, 19, and 20 do not recite anything significantly more than the judicial exception.

Claim 1 is a method performed "by an expert system." Claim 19 is directed to a "non-transitory computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising," which is recited in the preamble of the claim. Claim 20 is a system comprising "one or more processors; and machine-readable media interoperably coupled with the one or more processors and storing one or more instructions that, when executed by the one or more processors, perform operations comprising," which is recited in the preamble of the claim.
Claim 1 recites the additional elements, similarly recited in claims 19 and 20, of "providing by an expert system over a user-interface a recommended intervention pertaining to job-performance of a first user based on a set of decision branches of the expert system processing a first set of user data obtained from one or more computing devices," "obtaining, subsequent to providing the recommended intervention, a second set of user data from the one or more computing devices," "providing the first set of user data to a machine learning model that is different than the expert system," and "generating, using the adjusted expert system, a second recommended intervention different than the provided recommended intervention pertaining to the performance of the first user based on the adjusted set of decision branches of the expert system processing the second set of user data from the one or more computing devices, wherein the second recommended intervention is presented on the interface of the one or more computing devices." These additional elements are mere instructions to implement an abstract idea using a computer in its ordinary capacity, or merely use the computer as a tool to perform the identified abstract idea. Use of a computer or other machinery in its ordinary capacity for tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) is not anything significantly more than the judicial exception. See MPEP 2106.05(f).
Claim 1 recites the additional elements, similarly recited in claims 19 and 20, of "adjusting, using the output from the machine learning model, the set of decision branches of the expert system" and "obtaining, in response to providing the first set of user data, an output from the machine learning model indicating an adjustment to the set of decision branches of the expert system." These limitations are nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Therefore, these additional elements are not anything significantly more than the judicial exception.

Therefore, the additional elements of the independent claims, when considered both individually and in combination, are not anything significantly more than the judicial exception.

Dependent claims 2-3 and 13 further narrow the abstract idea identified in the independent claims and do not introduce further additional elements for consideration, which is not anything significantly more.
Dependent claim 4 recites the additional element of "comprising: storing in memory (i) the recommended intervention and (ii) the second set of user data from the one or more computing devices with a first identifier that identifies the first user." Use of a computer or other machinery in its ordinary capacity for tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) is not anything significantly more than the judicial exception. See MPEP 2106.05(f).

Dependent claim 9 recites the additional element of "wherein the output from the machine learning model indicating the adjustment to the expert system comprises data indicating a new cause affecting job-performance of the first user not included in a previous set of causes accessible by the expert system." Dependent claim 10 recites the additional element of "wherein the output from the machine learning model indicating the adjustment to the expert system comprises data indicating a new intervention that affects job-performance of the first user not included in a previous set of interventions accessible by the expert system." Dependent claim 11 recites the additional element of "wherein the output from the machine learning model indicating the adjustment to the expert system comprises data indicating a new metric that represents an aspect of job-performance of the first user not included in a previous set of metric accessible by the expert system." Dependent claim 12 recites the additional element of "comprising: generating the adjusted expert system by adjusting the expert system using the output from the machine learning model indicating the adjustment to the expert system." Dependent claim 14 recites the additional element of "wherein the output from the machine learning model includes a set of weights for the expert system to prioritize one or more rules where multiple rules are applicable to select an intervention in response to a given set of conditions." Dependent claim 15 recites the additional element of "wherein the machine learning model is trained to generate adjustments for the expert system." Dependent claim 22 recites the additional element of "wherein the machine learning model comprises a first model, a second model, and a third model, and wherein obtaining the output from the machine learning model comprises: obtaining (i) first output from the first model, (ii) second output from the second model, and (iii) third output from the third model, wherein adjusting the set of decision branches of the expert system using the output from the machine learning model comprises: adjusting (i) at least one a parameter module or a threshold module of the expert system using the first output from the first model, (ii) a prediction module of the expert system using the second output from the second model, and (iii) an action module of the expert system using the third output from the third model." Dependent claim 23 recites the additional element of "wherein the set of decision branches includes one or more if-then rules, and wherein adjusting the set of decision branches of the expert system using the output from the machine learning model comprises: adjusting the one or more if-then rules included in the set of decision branches of the expert system." Dependent claim 24 recites the additional element of "wherein adjusting the set of decision branches of the expert system using the output from the machine learning model comprises: trimming one or more logical pathways included in the set of decision branches of the expert system." Dependent claim 25 recites the additional element of "and providing the combined user data to the machine learning model with the data indicating the recommended intervention." These limitations are nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Therefore, these additional elements are not anything significantly more than the judicial exception.

Dependent claim 16 recites the additional element of "wherein obtaining, subsequent to providing the recommended intervention, the second set of user data from the one or more computing devices comprises: obtaining, subsequent to providing the recommended intervention, the second set of user data from the one or more computing devices over a period of time different from a period of time within which the first set of user data is obtained." Dependent claim 26 recites the additional element of "wherein the second set of user data includes at least one of recognized words spoken by the first user or recognized words included by the first user in an electronic message or electronic mail." Use of a computer or other machinery in its ordinary capacity for tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) is not anything significantly more than the judicial exception. See MPEP 2106.05(f).
Dependent claim 21 recites the additional element of “wherein the expert system includes (i) a parameter module, (ii) a threshold module, (iii) a prediction module, and (iv) an action module configured to perform processing of the set of decision branches of the expert system, and wherein the output from the machine learning model indicating the adjustment to the set of decision branches of the expert system comprises an adjustment to at least one of (i) the parameter module, (ii) the threshold module, (iii) the prediction module, or (iv) the action module.” Use of a computer or other machinery in its ordinary capacity for tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., certain methods of organizing human activity) is not anything significantly more than the judicial exception. See MPEP 2106.05(f). Therefore, the additional elements of the dependent claims, when considered both individually and in the context of the independent claims, are not anything significantly more than the judicial exception. Accordingly, claims 1-4, 9-16, and 19-26 are rejected under 35 USC 101. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 1-4, 9-13, 15-16, 19-21, and 23-26 are rejected under 35 U.S.C. 103 as being unpatentable over Minter (US 20220245557 A1) in view of De Oliveira et al. (US 20230260018 A1) in view of St. Clair et al. (“Strategies for Adding Adaptive Learning Mechanisms to Rule-Based Diagnostic Expert Systems,” 1988).
Regarding claim 1, Minter teaches a method (Fig. 6), comprising: providing by an expert system over a user-interface a recommended intervention pertaining to job-performance of a first user based on a set of decision branches of the expert system processing a first set of user data obtained from one or more computing devices (Fig. 5 and [0104] teaches the actions delivery component may instruct the content delivery network to deliver content to a plurality of user terminals, such as computer displays, that allow the actions delivered to the user terminals to display in one or more GUIs of the user terminals, wherein Fig. 6 and [0107] teach the system may receive data from a data link and determine KPIs and models, wherein [0108-0109] teach the system can generate actions directed at improving variables such that metrics are improved, wherein the system may deliver actions via the action delivery component based on the generated recommendations, wherein the action may be displayed on a dashboard of a GUI that may be hosted on the content delivery network, wherein [0106] teaches determining recommendations and delivering actions by monitoring employee activities for a predetermined period of time, wherein from these monitored metrics, machine learning logic may identify employees that can be improved and determine areas that need to be prioritized, wherein [0083] teaches the recommendation component comprises a decision tree classifier, wherein [0066] teaches the data model database may store time series data of one or more periods of agent activity and action delivery records, as well as in [0036] teaches the system monitors employees for a predetermined period of time in order to identify areas for the employee performance that need improvement, wherein the system can provide training that is deemed to have the highest effectiveness, wherein [0045] teaches the recommendations identified by the machine learning may focus on driving improvements including training
for employees, wherein the customized actions can be provided to each employee in order to shift the employee performance distribution upwards; see also: [0026-0027, 0057, 0104, 0113]); obtaining, subsequent to providing the recommended intervention, a second set of user data from the one or more computing devices (Fig. 6 and [0106] teach delivering a selected technique for improving employee performance to the agent, wherein in subsequent monitoring periods the process may measure the results of the action delivered to the agent, as well as in [0036] teaches the system monitors employees for a predetermined period of time in order to identify areas for the employee performance that need improvement, wherein the system can provide training that is deemed to have the highest effectiveness, wherein in subsequent periods, the system may measure the results of the training for the employees in the improvement areas, as well as in [0031-0032] teach the system can determine that the particular employee has performed the training module and gathers second data associated with the particular employee in a second time period that is subsequent to the particular employee completing the particular training module, wherein the system can determine that the employee has or has not improved subsequent to completing the training module, wherein [0033] teaches the system can generate distributions based on metrics data in a first period and second distributions based on new metrics data, the new metrics data received via the link in the second period, the first period being different from the second period; see also: [0027, 0069-0070, 0101-0105, 0109]); providing the first set of user data to a machine learning model that is different than the expert system ([0102] teaches the action effectiveness component may evaluate the effectiveness of various actions through the subsequent success of agents and may transmit instructions or updates to the recommendations component to conduct 
testing within the generated recommendations in order to ascertain the results of the test and update the weighting or decision logic of the recommendations component so that more successful action may be preferred, as well as in [0103] teaches the action effectiveness component may provide training data to machine learning models of the recommendation component to update them via re-training, wherein the action effectiveness component may also add or remove one or more result effective variables from the machine learning logic of the recommendations component based on their effectiveness, wherein the action effectiveness component is a sub-component of the update ML component, as well as in Fig. 6 and [0109] teach the system can generate updates based on machine learning re-evaluation or re-training, wherein the updates may be provided to the processes of 610 and 612 of Fig. 6, wherein [0066] teaches the data model database may store time series data of one or more periods of agent activity and action delivery records, wherein the data model database may regularly exchange data with the recommendation component such that the data model database receives updated data models and may provide updated data to the machine learning logic of the recommendations component, as well as in [0069] teaches utilizing the prior period and other periods of data available to re-train the machine learning logic in multiple rounds, as well as in [0070] teaches the machine learning logic may be re-trained in order to improve the machine learning logic based on the predicted distributions or metrics and associated recommendation or actions taken by the system for the previous periods; see also: Fig. 
4, [0027, 0031-0033, 0036, 0044-0045, 0101, 0113]; Examiner’s Note: See the 35 USC 103 combination below for teachings pertaining to the unbolded claim language.); obtaining, in response to providing the first set of user data, an output from the machine learning model indicating an adjustment to the set of decision branches of the expert system (Fig. 6 and [0106] teach delivering a selected technique for improving employee performance to the agent, wherein in subsequent monitoring periods the process may measure the results of the action delivered to the agent, wherein if the desired or predicted result is not achieved, then further analysis may be performed to update and optimize the machine learning logic of the components being executed, as well as in [0036] teaches the system monitors employees for a predetermined period of time in order to identify areas for the employee performance that need improvement, wherein the system can provide training that is deemed to have the highest effectiveness, wherein in subsequent periods, the system may measure the results of the training for the employees in the improvement areas, wherein if the desired result was not achieved, the machine learning may alter its logic and perform further analysis, wherein the system may repeat the process such that additional training is selected to meet goals, wherein [0103] teaches the action effectiveness component may provide training data to machine learning models of the recommendation component to update them via re-training, wherein the action effectiveness component may also add or remove one or more result effective variables from the machine learning logic of the recommendations component based on their effectiveness, wherein the action effectiveness component is a sub-component of the update ML component, as well as in Fig.
6 and [0109] teach the system can generate updates based on machine learning re-evaluation or re-training, wherein the updates may be provided to the processes of 610 and 612 of Fig. 6, wherein [0083] teaches the recommendation component comprises a decision tree classifier, as well as in [0069] teaches utilizing the prior period and other periods of data available to re-train the machine learning logic in multiple rounds, as well as in [0070] teaches the machine learning logic may be re-trained in order to improve the machine learning logic based on the predicted distributions or metrics and associated recommendation or actions taken by the system for the previous periods; see also: Fig. 4, [0027, 0031-0033, 0044-0045, 0101-0102, 0113]); adjusting, using the output from the machine learning model, the set of decision branches of the expert system (Fig. 6 and [0109] teach the action may be displayed on a dashboard being a GUI that may be hosted on the content delivery network, wherein the system may generate updates based on machine learning re-evaluation or re-training, wherein the updates may be provided to steps 610 and 612 of Fig. 
6, wherein the process 616 may receive data from the data link to drive the machine learning updating as part of process 602 in a subsequent period, with newer data, wherein the machine learning accuracy can be updated and improved with iterations through separate data sets, wherein the data received at process 616 may be post-action data for evaluation of one or more actions delivered at step 612, wherein [0083] teaches the recommendation component comprises a decision tree classifier, as well as in [0036] teaches the system monitors employees for a predetermined period of time in order to identify areas for the employee performance that need improvement, wherein the system can provide training that is deemed to have the highest effectiveness, wherein in subsequent periods, the system may measure the results of the training for the employees in the improvement areas, wherein if the desired result was not achieved, the machine learning may alter its logic and perform further analysis, wherein the system may repeat the process such that additional training is selected to meet goals, as well as in [0031-0032] teach the system can determine that the particular employee has performed the training module and gathers second data associated with the particular employee in a second time period that is subsequent to the particular employee completing the particular training module, wherein the system can determine that the employee has or has not improved subsequent to completing the training module, wherein [0033] teaches the system can generate distributions based on metrics data in a first period and second distributions based on new metrics data, the new metrics data received via the link in the second period, the first period being different from the second period, wherein recommended actions are then generated to move employees to improve the employees relative to the distribution; see also: [0044-0045, 0066-0069]); and generating, using the adjusted expert system, a 
second recommended intervention different than the provided recommended intervention pertaining to the performance of the first user based on the adjusted set of decision branches of the expert system processing the second set of user data from the one or more computing devices (Fig. 6 and [0109] teach the action may be displayed on a dashboard being a GUI that may be hosted on the content delivery network, wherein the system may generate updates based on machine learning re-evaluation or re-training, wherein the updates may be provided to steps 610 and 612 of Fig. 6, wherein the process 616 may receive data from the data link to drive the machine learning updating as part of process 602 in a subsequent period, with newer data, wherein the machine learning accuracy can be updated and improved with iterations through separate data sets, wherein the data received at process 616 may be post-action data for evaluation of one or more actions delivered at step 612, wherein [0083] teaches the recommendation component comprises a decision tree classifier, as well as in [0036] teaches the system monitors employees for a predetermined period of time in order to identify areas for the employee performance that need improvement, wherein the system can provide training that is deemed to have the highest effectiveness, wherein in subsequent periods, the system may measure the results of the training for the employees in the improvement areas, wherein if the desired result was not achieved, the machine learning may alter its logic and perform further analysis, wherein the system may repeat the process such that additional training is selected to meet goals, as well as in [0031-0032] teach the system can determine that the particular employee has performed the training module and gathers second data associated with the particular employee in a second time period that is subsequent to the particular employee completing the particular training module, wherein the system can 
determine that the employee has or has not improved subsequent to completing the training module, wherein [0033] teaches the system can generate distributions based on metrics data in a first period and second distributions based on new metrics data, the new metrics data received via the link in the second period, the first period being different from the second period, wherein recommended actions are then generated to move employees to improve the employees relative to the distribution; see also: [0044-0045, 0066-0069]), wherein the second recommended intervention is presented on the interface of the one or more computing devices (Fig. 6 and [0109] teach the action may be displayed on a dashboard being a GUI that may be hosted on the content delivery network, wherein the system may generate updates based on machine learning re-evaluation or re-training, wherein the updates may be provided to steps 610 and 612 of Fig. 6, wherein the process 616 may receive data from the data link to drive the machine learning updating as part of process 602 in a subsequent period, with newer data, wherein the machine learning accuracy can be updated and improved with iterations through separate data sets, wherein the data received at process 616 may be post-action data for evaluation of one or more actions delivered at step 612, as well as in [0036] teaches the system monitors employees for a predetermined period of time in order to identify areas for the employee performance that need improvement, wherein the system can provide training that is deemed to have the highest effectiveness, wherein in subsequent periods, the system may measure the results of the training for the employees in the improvement areas, wherein if the desired result was not achieved, the machine learning may alter its logic and perform further analysis, wherein the system may repeat the process such that additional training or other improvement
Read full office action
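Stripped of the citation apparatus, the claim 1 method mapped above is a feedback loop: the expert system issues a recommendation from its decision branches, a separate machine learning model consumes the user data and emits an adjustment, and the adjusted branches produce a different second recommendation. A minimal sketch, with hypothetical data shapes, KPI values, and adjustment logic (none of it drawn from Minter or the application):

```python
# Hypothetical sketch of the claim 1 loop: recommend, adjust the decision
# branches via a separate ML model, then re-recommend. Illustrative only.

def expert_recommend(branches, user_data):
    # Decision branches as ordered (threshold, intervention) pairs.
    for threshold, intervention in branches:
        if user_data["kpi"] < threshold:
            return intervention
    return "no_intervention"

def ml_adjust(branches, user_data):
    # Stand-in for the separate ML model: relax every cutoff when the
    # KPI is already above the midline, tighten otherwise.
    shift = -0.1 if user_data["kpi"] > 0.5 else 0.1
    return [(t + shift, i) for t, i in branches]

branches = [(0.4, "coaching_session"), (0.7, "practice_module")]
first_data = {"kpi": 0.65}
first = expert_recommend(branches, first_data)    # provided intervention
branches = ml_adjust(branches, first_data)        # adjust decision branches
second_data = {"kpi": 0.65}                       # second set of user data
second = expert_recommend(branches, second_data)  # a different intervention
print(first, second)  # -> practice_module no_intervention
```

The eligibility dispute summarized above turns on whether this loop is an improvement to the expert-system technology itself or merely a generic computer applying an abstract idea.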

Prosecution Timeline

Apr 18, 2023
Application Filed
Dec 14, 2024
Non-Final Rejection — §101, §103, §112
Mar 07, 2025
Interview Requested
Mar 13, 2025
Applicant Interview (Telephonic)
Mar 13, 2025
Examiner Interview Summary
Mar 19, 2025
Response Filed
Jun 21, 2025
Final Rejection — §101, §103, §112
Sep 11, 2025
Applicant Interview (Telephonic)
Sep 11, 2025
Examiner Interview Summary
Nov 24, 2025
Request for Continued Examination
Dec 05, 2025
Response after Non-Final Action
Dec 09, 2025
Non-Final Rejection — §101, §103, §112
Mar 02, 2026
Interview Requested
Mar 10, 2026
Examiner Interview Summary
Mar 10, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602620
APPARATUS AND A METHOD FOR THE IDENTIFICATION OF A BREAKAWAY POINT
2y 5m to grant Granted Apr 14, 2026
Patent 12591811
Machine Learning Request Fulfillment Platform
2y 5m to grant Granted Mar 31, 2026
Patent 12552035
Robotic Fleet Resource Provisioning System
2y 5m to grant Granted Feb 17, 2026
Patent 12541732
System and Method of Machine Vision Assisted Task Optimization
2y 5m to grant Granted Feb 03, 2026
Patent 12505394
SYSTEMS AND METHODS FOR MODIFYING ONLINE STORES
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
26%
Grant Probability
56%
With Interview (+29.3%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 151 resolved cases by this examiner. Grant probability derived from career allow rate.
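The headline projections reduce to ratio arithmetic over the examiner's resolved cases. Assuming the interview lift is additive in percentage points (an assumption, though it matches the displayed figures), the numbers reconcile as follows:

```python
# Reconcile the dashboard figures from the counts shown above.
granted, resolved = 40, 151               # career record: 40 granted of 151
allow_rate = granted / resolved           # 0.2649... -> shown as 26%
interview_lift = 0.293                    # +29.3 points with an interview
with_interview = allow_rate + interview_lift
print(round(allow_rate * 100), round(with_interview * 100))  # -> 26 56
```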
