Prosecution Insights
Last updated: April 19, 2026
Application No. 17/338,056

SYSTEM FOR HARNESSING KNOWLEDGE AND EXPERTISE TO IMPROVE MACHINE LEARNING

Non-Final OA (§103, §112)
Filed
Jun 03, 2021
Examiner
HWANG, MEGAN ELIZABETH
Art Unit
2143
Tech Center
2100 — Computer Architecture & Software
Assignee
Truata Limited
OA Round
3 (Non-Final)
Grant Probability: 47% (Moderate)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 47% (grants 9 of 19 resolved cases; -7.6% vs TC avg)
Interview Lift: +60.2% (strong; measured across resolved cases with interview)
Typical Timeline: 3y 0m average prosecution; 25 applications currently pending
Career History: 44 total applications across all art units

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 19 resolved cases

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-14 and 17-18 are pending. Claims 19-20 have been withdrawn from consideration. Applicant is reminded to cancel Claims 19-20. This Office Action is responsive to the amendment filed on 11/20/2025, which has been entered into the above identified application.

Drawings

The drawings were received on 11/20/2025. The replacement drawing for Fig. 5 is acceptable.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 6-7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C.
112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding Claim 6, the specification fails to recite or suggest “accessing reputation data for each previously generated machine learning model, wherein the reputation data represents a level of expertise by previous users who generated the previously generated machine learning model”. At best, the specification recites these limitations as pertaining to machine learning pipelines/workflows, which are the processes used “to train, test, validate, evaluate and deploy machine learning models” (Paragraph [0004]), but are not themselves machine learning models. For the purpose of examination, this limitation will be interpreted as “accessing reputation data for each previously generated machine learning pipeline, wherein the reputation data represents a level of expertise by previous users who generated the previously generated machine learning pipeline”.

Regarding Claim 7, it is rejected for its dependency on an unallowable claim.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 8-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 8-9 recite the limitation "the implicit inputs". There is insufficient antecedent basis for this limitation in the claims. Claims 10-12 recite the limitation "the explicit inputs". There is insufficient antecedent basis for this limitation in the claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-9, 13-14 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Jannach et al. (“Supporting the Design of Machine Learning Workflows with a Recommendation System”, published February 2016), hereinafter Jannach; in view of Kalluri et al. (US 20210264251 A1, filed 02/25/2020), hereinafter Kalluri; in further view of Ramarajan et al. (US 20120016690 A1, filed 07/15/2011), hereinafter Ramarajan. Jannach, Kalluri and Ramarajan were cited in previous Office Actions.
Regarding Claim 1, Jannach teaches a method of a recommendations engine for harnessing knowledge, expertise and previous activity to improve machine learning performance by building an improved machine learning pipeline (Jannach: “Machine learning and data analytics tasks in practice require several consecutive processing steps. RapidMiner is a widely used software tool for the development and execution of such analytics workflows.” [Abstract]), the method comprising: capturing, by the recommendations engine, input data generated by a user during a data science workflow process for generating a first machine learning pipeline, wherein the data science workflow process comprises a plurality of stages performed by the user to generate the first machine learning pipeline (Jannach: “The application of modern machine learning algorithms in the context of data mining or predictive analytics projects typically requires the implementation of a nontrivial processing workflow. Such workflows, for example, include the retrieval of the raw data, the application of certain preprocessing steps, or the training or evaluation of the actual machine learning model.” [Section 1. Introduction]; “Once activated, the plug-in monitors the user actions related to the process definition, in particular the insertion or removal of operators. After such a relevant user action is observed, the UI component collects the information about the currently modeled process and forwards it to a remote recommendation service to receive a new set of recommendations.” [Section 4.1. Implementation Architecture]); accessing, by the recommendations engine, previously built machine learning pipelines (Jannach: “The prediction approaches in Jannach and Fischer [2014] were mostly based on operator co-occurrence patterns within a larger pool of historical data analysis workflows.” [Section 2.1. 
Existing Approaches]; “we propose different recommendation techniques and evaluate them in an offline setting using a pool of several thousand existing workflows.” [Abstract]); computing, by the recommendations engine, a plurality of similarity metrics (Jannach: “The operators are used to model data processing tasks and the control flow” [Section 1. Introduction]; “Each historical process p was represented as a vector containing Boolean values. Each vector element corresponds to one of the operators in the process and is set to “1” if the operator is contained at least once in p. The similarity of two processes was then determined by calculating the cosine of the angle between the vectors.” [Section 2.1.1. A K-Nearest-Neighbors Method (kNN)]); retrieving one or more past machine learning pipelines among the accessed previously built machine learning pipelines that is the most similar to the first machine learning pipeline (Jannach: “(2) Scoring function: To determine the prediction score for an operator op for a given partial process p, the k most similar processes to p were considered and the similiarity values for those neighbors that contained the operator op were summed up. More formally, given a target process p, its k nearest neighbors Nk(p), and an operator op, [Equation 1], where sim(p, pi) is zero if pi does not contain op, and the cosine similarity of p and pi otherwise. (3) The operators are finally ranked and recommended based on their score as computed in Equation (1) in decreasing order.” [Section 2.1.1. 
A K-Nearest-Neighbors Method (kNN)]); weighting, by the recommendations engine, each of the one or more past machine learning pipelines respectively (Jannach: “The general idea of the proposed KNN-CTX method is to increase the weight (importance) of neighbors that contain an operator that is also part of the context of the currently constructed process, that is, we assume that these neighbors are better predictors for the current situation than others that are similar as well but have no overlap with the context.” [Section 2.2.2. A Context-Aware kNN Method (kNN-CTX)]); providing at least one recommendation, via the recommendations engine, that is based on the weighted one or more past machine learning pipelines, to include additional processing steps, settings and/or configurations into at least one or the plurality of stages in the data science workflow (Jannach: “The operators are finally ranked and recommended based on their score as computed in Equation (1) in decreasing order. The main idea of the kNN method is therefore that operators appearing in processes that are similar to the currently developed one are likely to appear in the current process as well.” [Section 2.1.1. A K-Nearest-Neighbors Method (kNN)]; “The elements of the processes are (1) a set of process steps called “operators” and (2) a set of edges connecting the operators.” [Section 1. Introduction]); and updating the first machine learning pipeline using the at least one recommendation (Jannach: “The server then creates the recommendations and also updates the new partial process received by the client in the pool of known processes.” [Section 4.1.2. An Adaptive Recommendation Service]). 
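The kNN scoring that the Office Action quotes from Jannach (Boolean operator-presence vectors, cosine similarity, and Equation (1) summing neighbor similarities) can be sketched roughly as follows. This is an illustrative reconstruction from the quoted passages only, not code from any cited reference; all names are invented.

```python
import math

def cosine(a, b):
    # cosine of the angle between two Boolean operator-presence vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def score_operator(target, pool, op_index, k=3):
    # Equation (1) as described: take the k historical processes most
    # similar to the partial target process, and sum the similarities of
    # those neighbors that contain the candidate operator (sim is zero
    # for neighbors that do not contain it)
    neighbors = sorted(pool, key=lambda p: cosine(target, p), reverse=True)[:k]
    return sum(cosine(target, p) for p in neighbors if p[op_index])
```

Ranking all candidate operators by this score in decreasing order yields the recommendation list described in the quoted passage.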
However, Jannach fails to expressly disclose accessing, by the recommendations engine, a user reputation system comprising reputation scores of users that built the previously built machine learning pipelines; computing, by the recommendations engine, a plurality of distance metrics using a set of distance functions; identifying, by the recommendations engine, the distance metric among the plurality of distance metrics that is the shortest; retrieving the most similar pipeline based on the distance metric that is the shortest; and weighting, by the recommendations engine, the pipelines by a reputation score, from the user reputation system, of a user who generated each of the one or more past machine learning pipelines. In the same field of endeavor, Kalluri teaches computing, by the recommendations engine, a plurality of distance metrics using a set of distance functions (Kalluri: “calculating, for each previously-executed partial workflow, a distance between the composite feature vector of the previously-executed partial workflow and the composite feature vector of the partial workflow in a domain space” [0014]; “Multi-dimensional machine-learning feature vectors may be generated to create the composite feature vector. Each task of the ordered sequence of tasks in a communication workflow may be represented by a task vector (e.g., a feature vector). The group of target user devices that are targeted to receive communications as a result of executing a task may be represented by a group vector. 
In some examples, the structure of a communication workflow and the component task vectors may be combined to generate the composite feature vector that represents the communication workflow.” [0007]; In light of paragraph [0078] of the specification, which states “At 740, the case from step 730 is submitted to the case base where distance metrics are calculated to find close matches using the dataset characteristics, target variable, evaluation metric, executed steps plus the addition of the user reputation and experience and evaluating the outcome”, BRI of “set of distance functions” involves compiling a distance for each of a plurality of features as opposed to utilizing multiple types of distance measurements for a singular feature); identifying, by the recommendations engine, the distance metric among the plurality of distance metrics that is the shortest (Kalluri: “comparing the distance with a threshold value, where when the distance is equal to or less than the threshold value, then the previously-executed partial workflow is determined to be similar to the partial workflow” [0014]); and retrieving the most similar pipeline based on the distance metric that is the shortest (Kalluri: “The computer-implemented method where comparing the composite feature vector of each previously-executed partial workflow, from the subset of previous-executed partial workflows sharing a same structure with the partial workflow, with the composite feature vector of the partial workflow includes: calculating, for each previously-executed partial workflow, a distance between the composite feature vector of the previously-executed partial workflow and the composite feature vector of the partial workflow in a domain space; and comparing the distance with a threshold value, where when the distance is equal to or less than the threshold value, then the previously-executed partial workflow is determined to be similar to the partial workflow, and where when the distance is larger than 
the threshold value, then the previously-executed partial workflow is determined to not be similar to the partial workflow.” [0014]; “Partial workflow predictor 250 may then determine one or more partial portions of previously-executed communication workflows that are similar to the composite feature vector of the new partial workflow 410 using the similarity detection techniques described above... For example, the top performing partial portion of a previously-executed communication workflow may be selected. The remaining tasks of that selected partial portion of a previously-executed communication workflow may be recommended for completing the new partial workflow 410.” [0058]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated computing, by the recommendations engine, a plurality of distance metrics using a set of distance functions; identifying, by the recommendations engine, the distance metric among the plurality of distance metrics that is the shortest; and retrieving the most similar pipeline based on the distance metric that is the shortest, as taught by Kalluri to the method of Jannach because both of these methods are directed towards generating recommendations for workflow improvement. In making this combination and identifying a most similar workflow to be used for a recommendation using a distance metric, it would allow the method of Jannach another means besides cosine similarity to measure similarity “in the multi-dimensional space of the composite feature vectors of the previously-executed communication workflows” (Kalluri: [0045]). 
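The Kalluri-style retrieval the rejection relies on (compute a distance per previously-executed workflow, treat those within a threshold as similar, and select the closest) can be sketched as below. Euclidean distance is used as one plausible distance in the feature-vector domain space; the function names and data shapes are assumptions for illustration, not from the cited reference.

```python
import math

def euclidean(a, b):
    # distance between two composite feature vectors in the domain space
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_most_similar(target, candidates, threshold):
    # a candidate is deemed similar when its distance is at or below the
    # threshold; among similar candidates, return the index of the one
    # with the shortest distance (None when nothing qualifies)
    scored = [(euclidean(target, c), i) for i, c in enumerate(candidates)]
    within = [s for s in scored if s[0] <= threshold]
    return min(within)[1] if within else None
```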
Jannach and Kalluri still fail to teach accessing, by the recommendations engine, a user reputation system comprising reputation scores of users that built the previously built machine learning pipelines; and weighting, by the recommendations engine, the pipelines by a reputation score, from the user reputation system, of a user who generated each of the one or more past machine learning pipelines. In the same field of endeavor, Ramarajan teaches accessing, by the recommendations engine, a user reputation system comprising reputation scores of users that built the previously built machine learning pipelines (Ramarajan: “Then, these other embodiments may convert, at least in part in a computer process, the treatment recommendations into an expert score for each treatment option, and for each treatment option, determine the treatment score (also) as a function of the expert score. The method and system also may apply an expert weight to the expert score. Among other ways, the expert weight may be calculated using at least one of a ranking of the expert's academic institution, a ranking of the expert's employing institution, the expert's previous success in recommending, the expert's degree of experience, and the relatedness of the expert's qualifications or experiences to treating or working or processing conditions of the patient.” [0011]); and weighting, by the recommendations engine, the pipelines by a reputation score, from the user reputation system, of a user who generated each of the one or more past machine learning pipelines (Ramarajan: “In various embodiments, the expert recommendations are weighted by a factor derived from information correlated with the credibility and characteristics of each expert. Such indicia may include, among other things, … past performance, and the like…” [0080]; “In addition, illustrative embodiments can extend beyond people directly involved in a treatment decision. 
For example, such embodiments can be used in case studies for education purposes, machine learning, for database management/development, and other uses. Discussion of use by a patient and those relating to the decision-making processes thus is for example only and not intended to limit various embodiments.” [0022]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated accessing, by the recommendations engine, a user reputation system comprising reputation scores of users that built the previously built machine learning pipelines; and weighting, by the recommendations engine, the pipelines by a reputation score, from the user reputation system, of a user who generated each of the one or more past machine learning pipelines, as taught by Ramarajan to the method of Jannach and Kalluri because both of these systems are directed towards generating recommendations most applicable for a given task based on historical user data. In making this combination and scoring recommendations based on reputation score, it would allow the system of Jannach and Kalluri to provide users with expertise from the most reputable sources (Ramarajan: [0080]).

Regarding Claim 2, Jannach, Kalluri and Ramarajan teach the method of Claim 1, wherein the recommendations engine includes a plurality of software-based recommenders that make recommendations about which techniques and configurations to use at each stage of the workflow (Jannach: “we propose algorithms for operator recommendation that take the user’s current modeling context into account and benchmark these methods with the algorithms presented in Jannach and Fischer [2014].” [Section 1. Introduction]; “RapidMiner includes a visual modeling environment (Figure 1), which supports the definition of complete data analysis workflows (processes).
The elements of the processes are (1) a set of process steps called “operators” and (2) a set of edges connecting the operators.” [Section 1. Introduction]; BRI in light of Paragraph [0045] of the specification, which states “The recommender of the recommendation engine 470 refers to a software-based module that recommends the next step to take in building a workflow” would support that a “software-based recommender” is effectively an algorithm).

Regarding Claims 17-18, they are system claims that correspond to the method of Claims 1-2. Therefore, they are rejected for the same reasons as Claims 1-2 above.

Regarding Claim 5, Jannach, Kalluri and Ramarajan teach the method of Claim 1, wherein the recommendations engine includes at least one of a data enhancement recommender, a problem definition recommender, a modeling practices recommender, and a visualization recommender (Jannach: “If the input data that is retrieved and preprocessed in the example in Figure 1 is not binary, then the automated recommendation of the “numerical-to-binomial” conversion operator might therefore prevent the user from creating a faulty processes definition.” [Section 1. Introduction]).

Regarding Claim 6, Jannach, Kalluri and Ramarajan teach the method of Claim 1, further comprising accessing reputation data for each previously generated machine learning model, wherein the reputation data represents a level of expertise by previous users who generated the previously generated machine learning model (Ramarajan: “Then, these other embodiments may convert, at least in part in a computer process, the treatment recommendations into an expert score for each treatment option, and for each treatment option, determine the treatment score (also) as a function of the expert score. The method and system also may apply an expert weight to the expert score.
Among other ways, the expert weight may be calculated using at least one of a ranking of the expert's academic institution, a ranking of the expert's employing institution, the expert's previous success in recommending, the expert's degree of experience, and the relatedness of the expert's qualifications or experiences to treating or working or processing conditions of the patient.” [0011]).

Regarding Claim 7, Jannach, Kalluri and Ramarajan teach the method of Claim 6, further comprising weighting the at least one recommendation on the reputation data (Ramarajan: “the expert recommendations are weighted by a factor derived from information correlated with the credibility and characteristics of each expert.” [0080]).

Regarding Claim 8, Jannach, Kalluri and Ramarajan teach the method of Claim 1, wherein the implicit inputs to the recommendations engine include decisions or actions made by previous users (Jannach: “A different strategy for web service discovery was proposed in Chan et al. [2012]. Instead of relying on text-based (content-based) matching as done, for example, in Dong et al. [2004], they try to apply historical usage data and classic collaborative and content-based filtering techniques to determine suitable recommendations, for example, by comparing user profiles.” [Section 6. Related Work]).

Regarding Claim 9, Jannach, Kalluri and Ramarajan teach the method of Claim 1, wherein the implicit inputs to the recommendations engine are extracted from existing knowledge stores including at least machine learning communities and websites (Jannach: “The process definitions contained processes that were publicly shared by users on the “myexperiment” website, the example process sets from the RapidMiner framework, and data from other sources.” [Section 3.2. Dataset]).
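The reputation weighting the rejection maps onto Claims 1, 6 and 7 (scaling each retrieved pipeline's contribution by the reputation score of the user who built it) could look roughly like this. The data shapes, field names, and the neutral default score are assumptions for illustration, not taken from the application or the cited references.

```python
def rank_recommendations(retrieved, reputation):
    # retrieved: (author, similarity, suggested_next_step) triples for the
    # most similar past pipelines; reputation: author -> reputation score.
    # Each pipeline's vote for its suggested step is scaled by the
    # reputation of the user who generated it (assumed neutral 1.0 when
    # the author has no score), then votes are aggregated per step.
    scores = {}
    for author, similarity, step in retrieved:
        weight = similarity * reputation.get(author, 1.0)
        scores[step] = scores.get(step, 0.0) + weight
    # recommendations ordered by reputation-weighted score, best first
    return sorted(scores, key=scores.get, reverse=True)
```

Under this sketch, a slightly less similar pipeline from a highly reputed author can outrank a closer match from a low-reputation one, which is the effect the examiner attributes to the Jannach-Kalluri-Ramarajan combination.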
Regarding Claim 13, Jannach, Kalluri and Ramarajan teach the method of Claim 1, wherein the most appropriate recommendation is automatically selected (Jannach: “An intelligent user-interface (UI) component would therefore automatically propose the insertion of a corresponding “rule generation” operator” [Section 1. Introduction]).

Regarding Claim 14, Jannach, Kalluri and Ramarajan teach the method of Claim 1, wherein the recommendation to be applied is selected by the user (Jannach: “Figure 5 shows the default position of the recommendation lists, which is close to the existing tree- or search-based operator selection window. In case the user finds a relevant operator in the recommendation list, they can drag and drop the operator into the modeling window, which in turn leads to a request to the server for a new recommendation list.” [Section 4.1. Implementation Architecture]).

Claims 3-4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Jannach in view of Kalluri and Ramarajan, as applied to Claim 1 above, in further view of Govan et al. (US 10868888 B1, filed 01/24/2019), hereinafter Govan. Govan was cited in a previous Office Action.

Regarding Claim 3, Jannach, Kalluri and Ramarajan teach the method of Claim 1. However, they fail to expressly disclose wherein the recommendations engine comprises a single multipurpose recommender. In the same field of endeavor, Govan teaches wherein the recommendations engine comprises a single multipurpose recommender (Govan: “This structure may provide a single personalization platform capable to effectively addressing a diverse variety of personalization tasks and datasets” [Col. 7, Lines 41-44]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the recommendations engine comprises a single multipurpose recommender, as taught by Govan to the system of Jannach, Kalluri and Ramarajan because both of these systems are directed towards generating recommendations based on user input. In making this combination, it would allow the system of Jannach, Kalluri and Ramarajan to “dynamically adapt to individual customers and to individual requests” by “facilitating a general-purpose, multi-tenant personalization-as-a-service platform rather than requiring custom-designed, application-specific personalization systems” ([Col. 7, Lines 44-48)]. Regarding Claim 4, Jannach, Kalluri and Ramarajan teach the method of Claim 1. However, they fail to expressly disclose wherein the recommendations engine comprises a plurality of specialized tuned recommenders. In the same field of endeavor, Govan teaches wherein the recommendations engine comprises a plurality of specialized tuned recommenders (Govan: “In some embodiments, models 202 include various specialized recommenders adapted to effectively attack specific types of personalization scenarios” [Col. 7, Lines 36-38]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the recommendations engine comprises a plurality of specialized tuned recommenders, as taught by Govan to the system of Jannach, Kalluri and Ramarajan because both of these systems are directed towards generating recommendations based on user input. In making this combination, it would allow the system of Jannach, Kalluri and Ramarajan to “effectively attack specific types of personalization scenarios” [Col. 7, Lines 37-38]. 
Regarding Claim 12, Jannach, Kalluri, Ramarajan and Govan teach the method of Claim 3, wherein the explicit inputs to the recommendations engine include minimum standards defined within an organization or community of users (Govan: “resources repository 322 may apply business rules (both pre-processed and applied in real time) to items in resources repository 322 in order to ensure that only recommendable candidates are returned by data pipeline 200 for the real-time scoring system to evaluate.” [Col. 5, Lines 10-14]).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Jannach in view of Kalluri and Ramarajan, as applied to Claim 1 above, in further view of Park et al. (US 20220237540 A1, filed 01/22/2021), hereinafter Park. Park was cited in a previous Office Action.

Regarding Claim 10, Jannach, Kalluri and Ramarajan teach the method of Claim 1. However, they fail to expressly disclose wherein the explicit inputs to the recommendations engine include at least one of manually defined problem statements, solution definitions and other user feedback. In the same field of endeavor, Park teaches wherein the explicit inputs to the recommendations engine include at least one of manually defined problem statements, solution definitions and other user feedback (Park: “feedback associated with user interaction patterns analyzed by the trained model may be used to not only provide updated feedback to users but also update the model (for example, through updated training with the newly-acquired feedback)” [0028], “Based on the tasks defined via the user software requirements 505, examples of each task (from a start point to an end point) may be automatically identified from the software verification tests (represented in FIG. 5 by reference numeral 510), the user sequential activity logs (represented in FIG. 5 by reference numeral 515), or a combination thereof.” [0031]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the explicit inputs to the recommendations engine include at least one of manually defined problem statements, solution definitions and other user feedback, as taught by Park to the method of Jannach, Kalluri and Ramarajan because both of these methods are directed towards generating recommendations based on user input. In making this combination, it would allow for “a particular user's performance to be re-assessed based on known outcomes” (Park: [0028]).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Jannach in view of Kalluri and Ramarajan, as applied to Claim 1 above, in further view of Baphna et al. (US 20180197428 A1, filed 03/05/2018), hereinafter Baphna. Baphna was cited in a previous Office Action.

Regarding Claim 11, Jannach, Kalluri and Ramarajan teach the method of Claim 1. However, they fail to expressly disclose wherein the explicit inputs to the recommendations engine include best practice decisions for each stage of a machine learning pipeline and different contexts. In the same field of endeavor, Baphna teaches wherein the explicit inputs to the recommendations engine include best practice decisions for each stage of a machine learning pipeline and different contexts (Baphna: “The machine learning module 212 calculates the deviation from the median or best practices approach for every action that a user attempts” [0095]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the explicit inputs to the recommendations engine include best practice decisions for each stage of a machine learning pipeline and different contexts, as taught by Baphna to the system of Jannach, Kalluri and Ramarajan because both of these systems are directed towards making recommendations for a user in response to user interaction with the system. In making this combination, it provides another knowledge source for the system of Jannach, Kalluri and Ramarajan to measure user interactions “against reference paths for peer users, experts, real-time live data from other users, and/or the like” ([0095]).

Response to Arguments

The Examiner acknowledges the Applicant’s amendments to Claims 1 and 17. Applicant’s arguments, filed 11/20/2025, with respect to the objections to the specification have been considered and are persuasive. The objections have been withdrawn. Applicant’s arguments, filed 11/20/2025, with respect to the objections to the drawings have been considered and are persuasive. The objections have been withdrawn. Applicant’s arguments, filed 11/20/2025, with respect to the rejection of Claims 1-14 and 17-18 under 35 U.S.C. 112(a) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of 35 U.S.C. § 112(a) and 35 U.S.C. § 112(b). Applicant’s arguments, filed 11/20/2025, with respect to the rejection of Claims 1-14 and 17-18 under 35 U.S.C. § 103 have been considered and are found moot in light of the new grounds of rejection (see rejection above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yao et al.
("ReputationNet: A Reputation Engine to Enhance ServiceMap by Recommending Trusted Services") discusses proposing heuristic algorithms to provide service recommendations for scientific workflows based on the reputation of previously captured workflows.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEGAN E HWANG whose telephone number is (703)756-1377. The examiner can normally be reached Monday-Friday 10:00-7:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.E.H./Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Jun 03, 2021
Application Filed
Nov 07, 2024
Non-Final Rejection — §103, §112
May 07, 2025
Applicant Interview (Telephonic)
May 07, 2025
Examiner Interview Summary
May 09, 2025
Response Filed
Aug 16, 2025
Final Rejection — §103, §112
Nov 20, 2025
Response after Non-Final Action
Dec 04, 2025
Request for Continued Examination
Dec 10, 2025
Response after Non-Final Action
Feb 23, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner in similar technology areas

Patent 12456093
Corporate Hierarchy Tagging
2y 5m to grant • Granted Oct 28, 2025
Patent 12437514
VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING
2y 5m to grant • Granted Oct 07, 2025
Patent 12437517
VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING
2y 5m to grant • Granted Oct 07, 2025
Patent 12437518
VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING
2y 5m to grant • Granted Oct 07, 2025
Patent 12437519
VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING
2y 5m to grant • Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
47%
Grant Probability
99%
With Interview (+60.2%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
