Prosecution Insights
Last updated: April 19, 2026
Application No. 17/947,937

SYSTEM AND METHOD FOR MULTI-TASK LIFELONG LEARNING ON PERSONAL DEVICE WITH IMPROVED USER EXPERIENCE

Non-Final Office Action (§101, §103)

Filed: Sep 19, 2022
Examiner: KOWALIK, SKIELER ALEXANDER
Art Unit: 2142
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 22% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 22% (2 granted / 9 resolved; -32.8% vs TC avg)
Interview Lift: +87.5% across resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline); 25 currently pending
Total Applications: 34 (career history, across all art units)

Statute-Specific Performance

§101: 41.0% (+1.0% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§102: 4.5% (-35.5% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 9 resolved cases.
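For context on the figures above, a short sketch showing how the headline percentages reduce to arithmetic on case counts. Only the 2 granted / 9 resolved figure is taken from this report; the Tech Center average and the with/without-interview rates below are hypothetical stand-ins chosen to be consistent with the displayed -32.8% and +87.5% values.

```python
# Illustrative sketch of how the headline statistics are derived.
# Only the 2 granted / 9 resolved count comes from this report; the
# TC average (55.0) and interview rates (0.75 / 0.40) are hypothetical.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def lift(rate_with: float, rate_without: float) -> float:
    """Relative improvement (in percent) of one rate over another."""
    return 100.0 * (rate_with - rate_without) / rate_without

career = allow_rate(2, 9)        # about 22.2%, shown as 22%
delta_vs_tc = career - 55.0      # hypothetical TC 2100 average of 55%

# A +87.5% interview lift means interviewed cases allow at nearly
# double the non-interview rate, e.g. 0.75 vs 0.40 (hypothetical):
interview_lift = lift(0.75, 0.40)

print(round(career, 1), round(delta_vs_tc, 1), round(interview_lift, 1))
```

With a 55% TC average, the computed delta lands on the -32.8% shown in the card, which is why that stand-in was chosen.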

Office Action

Grounds of rejection: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 5, 12, and 18 are objected to because of the following informalities: they recite “the trained the second machine learning model,” which should read “the trained second machine learning model.” Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 8-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because they recite “a personal assistant on a mobile device to provide recommendations to a user based on learned user behavior.” A “personal assistant,” as described by paragraph [0033], is a prediction system that resides on a server or computing device. From the wording of the disclosure and the functions disclosed therein, it is clear that the system is not a machine or process, but rather a learning model. Thus, a “personal assistant” is computer code, which is not one of the four statutory categories of invention.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Regarding claim 1, in Step 1 of the 101 analysis set forth in MPEP 2106, the claim recites a method for providing recommendations. A method is one of the four statutory categories of invention.
In Step 2A Prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:

grouping the user behavior data by labels, each of the grouped user behavior data labeled with a corresponding task classification and the grouped user behavior data training a first machine learning model; (one can mentally group data based on labels by simply evaluating the labels and making a determination based on them)

proactively predicting an expected user behavior data during a future time interval by applying the trained first machine learning model to the collected user behavior data; (one can mentally predict data from data by simply evaluating the data and making a determination based on it)

recommending a task to the first user based on the expected user behavior and a threshold associated with each task classification; (one can mentally determine a recommendation or selection from data by simply evaluating the data and making a determination based on it)

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
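Read as data-processing steps rather than legal conclusions, the grouped-training, prediction, and threshold-recommendation limitations describe a conventional supervised loop. A minimal illustrative sketch in Python, with hypothetical events, a deliberately trivial "model," and invented thresholds; nothing here is taken from the application itself:

```python
from collections import defaultdict

# Hypothetical behavior records: (task_classification, hour_of_day)
events = [("commute", 8), ("commute", 9), ("music", 21), ("music", 22)]

# Group the user behavior data by task-classification label.
by_label = defaultdict(list)
for label, hour in events:
    by_label[label].append(hour)

# "Train" a trivially simple per-task model: the mean hour at which
# each task classification has been observed.
model = {label: sum(hours) / len(hours) for label, hours in by_label.items()}

# Recommend a task when the prediction satisfies a per-classification
# threshold (here: the learned hour is close to the upcoming interval).
thresholds = {"commute": 1.0, "music": 1.0}  # hypothetical per-task thresholds

def recommend(current_hour):
    return [task for task, mean_hour in model.items()
            if abs(mean_hour - current_hour) <= thresholds[task]]

print(recommend(8.5))   # -> ['commute']
```

The sketch is small enough to make the examiner's point concrete: each step could, in principle, be performed by evaluating the data and making a determination.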
In Step 2A Prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

A computer-implemented method for providing recommendations to a user based on learned user behavior, comprising: (generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)))

collecting user behavior data, from one or more sources, of a first user during a current time interval in relation to a context of a surrounding environment, the collected user behavior data enriched with associated information; (adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)))

obtaining feedback from the first user and continuously learning patterns in the collected user behavior data to refine the trained first machine learning model based on the feedback and changes to the user behavior data; and storing the trained first machine learning model into a knowledge base for continued and multi-task learning. (adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)))

Since the claim does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, additional element (iv) recites generally linking the use of the judicial exception to a particular technological environment or field of use, and elements (v) and (vi) recite mere data gathering, which is not indicative of significantly more.
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 2, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 2 recites: The computer-implemented method of claim 1, further comprising collecting the user behavior data of one or more second users to continuously learn patterns in the collected user behavior data in which to predict the expected user behavior data. (adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g))) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 3, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1.
Further, claim 3 recites: The computer-implemented method of claim 1, wherein refining the trained machine learning model comprises: continuously tracking the first user to collect additional user behavior data, storing the additional user behavior data in a data buffer, wherein the additional user behavior data is stored in a time sequence; (adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g))) removing the additional user behavior data stored in the data buffer that appears earlier in the time sequence and appending the additional user behavior data stored in the data buffer that appears later in the time sequence, when the data buffer is full; (adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g))) and retraining the trained first machine learning model with the first user behavior data remaining in the data buffer. (In Step 2A Prong 2, training a model is a mere application of a computer tool (M.L. model), which is not indicative of integration into a practical application. In Step 2B, merely applying a computer tool is not indicative of significantly more.) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 4, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 4 recites: The computer-implemented method of claim 1, wherein the threshold is adaptively learned over a period of time and provides a basis of measurement in which to ensure that the predicting satisfies a level of confidence; (In Step 2A Prong 2, learning a value is a mere application of a computer tool (M.L. model), which is not indicative of integration into a practical application. In Step 2B, merely applying a computer tool is not indicative of significantly more.) and the task is recommended to the first user when the prediction satisfies the threshold. (generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 5, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 5 recites: The computer-implemented method of claim 1 further comprising detecting similarities by: comparing similarity metrics between the trained first machine learning model of the first user and a trained second machine learning model of a second user for a same task; (In Step 2A Prong 1, this recites an abstract idea but for the recitation of generic computer components, which is not indicative of integration into a practical application.) and computing the similarity metrics for the trained first machine learning model and the trained the second machine learning model. (In Step 2A Prong 1, this recites a mathematical concept but for the recitation of generic computer components, which is not indicative of integration into a practical application.) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 6, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 5.
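Claim 5's "computing the similarity metrics" between two users' trained models is the limitation the examiner maps to a mathematical concept. A minimal sketch of one plausible such metric, cosine similarity over hypothetical model weight vectors; the claim itself does not specify a metric, so this is an assumed stand-in:

```python
import math

# Hypothetical learned weight vectors for the same task, one per user.
# The claim does not fix a metric; cosine similarity is one common choice.
model_user1 = [0.8, 0.1, 0.6]
model_user2 = [0.7, 0.2, 0.5]

def cosine_similarity(a, b):
    """Cosine of the angle between two weight vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

similarity = cosine_similarity(model_user1, model_user2)
print(round(similarity, 3))   # a value near 1.0 indicates similar models
```

A metric of this shape is what a claim-6-style comparison across a set of commonly learned tasks would aggregate.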
Further, claim 6 recites: The computer-implemented method of claim 5, wherein detecting similarities comprises combining a set of commonly learned tasks for the first and second users to determine the similarity metrics between the first and second users based on the computed similarity metrics of learned models for the tasks in the set of commonly learned tasks. (In Step 2A Prong 1, this recites an abstract idea but for the recitation of generic computer components, which is not indicative of integration into a practical application.) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 7, it is dependent upon claim 6, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 6. Further, claim 7 recites: The computer-implemented method of claim 6, wherein detecting similarities comprises: determining a subset of tasks from the combined set of tasks, the subset of tasks having a same task classification as the task to be predicted; (In Step 2A Prong 1, this recites an abstract idea but for the recitation of generic computer components, which is not indicative of integration into a practical application.)
extracting meta-data from each of the tasks in the subset of tasks into a single document; (adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g))) applying an information retrieval method to measure the document similarity as the task similarity with the task to recommend; (merely reciting the words “apply it” (or an equivalent) with the judicial exception (MPEP 2106.05(f))) sorting and determining the most similar task within the group of tasks to the task to recommend; (In Step 2A Prong 1, this recites an abstract idea but for the recitation of generic computer components, which is not indicative of integration into a practical application.) and applying the associated learned machine model for the task to recommend. (merely reciting the words “apply it” (or an equivalent) with the judicial exception (MPEP 2106.05(f))) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claims 8-14 and claims 15-20, they comprise limitations similar to those of claims 1-7 and are therefore rejected under similar rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 8-9, 11, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over WANG (U.S. Patent No. 8,412,665 B2) in view of ANDERSON (U.S. Pub. No. US 20140280861 A1) and further in view of COHEN (U.S. Patent No. 7,885,844 B1).

Regarding claim 1, WANG substantially teaches the claim, including: A computer-implemented method for providing recommendations to a user based on learned user behavior, comprising: collecting user behavior data, from one or more sources, of a first user during a current time interval in relation to a context of a surrounding environment, the collected user behavior data enriched with associated information; ((Col. 1, lines 13-19) Behavioral targeting uses information collected based on an individual user's online behavior. Such information can include web pages/websites the user has visited, or search queries the user has performed. In particular, such web pages/websites are selected to provide services and content to the individual user. It is desirable to build user behavior models that understand and differentiate between users. (Col. 2, lines 33-43) … In order to understand temporal user behaviors, behavior representation considers both the user behavior and the time the user behavior occurred. For example, an online user behavior can be characterized by issued queries by the user and uniform resource locators (URLs) browsed by the user.
The scale of queries and/or URLs can be relatively large, and when time is considered with these user behaviors, the scale of the behavior representation can become even larger. To address this issue, a data scale is considered for short term user behavior representation, which is relatively small.) grouping the user behavior data by labels, each of the grouped user behavior data labeled with a corresponding task classification and the grouped user behavior data training a first machine learning model; ((Col. 5, lines 57-61) In phase II 206, the action prediction module 202 identifies whether a user has finished the action. Accordingly, two classes of users for classification are from U_patt. The history in one class of users includes the days before the action happened (i.e., h_p^L where p<0), while the history in another class of users involves the days after the action happened (i.e., h_p^L where p>0). (Col. 8, lines 9-12) Each model in the model family is to understand a specific behavior pattern before the action happened. Different training samples can be arranged from U_patt and U_rand to train each individual model. (classes are labels)) While WANG does teach gathering user behavior data and labeling it for training, it does not explicitly teach: proactively predicting an expected user behavior data during a future time interval by applying the trained first machine learning model to the collected user behavior data. However, in analogous art that similarly handles user data, ANDERSON teaches: proactively predicting an expected user behavior data during a future time interval by applying the trained first machine learning model to the collected user behavior data; (([0037]) In this multi-device embodiment, productivity agent 322 and productivity agent 324 collect operations and usage information from the operating system of computing devices 320 and 340, respectively.
Productivity agents 322 and 324 then create rules in a similar manner as described above by identifying usage patterns for each computing device based on the collected operations and usage information. As stated above, rules can also be input directly into computing device 320 or computing device 340 or into productivity agent database 312 and then downloaded via network 330. In the exemplary embodiment, rules 316 is coded to a specific user. Productivity agent 322 and/or 324 then determine(s) the number of times the user performs the rule and compares the number to a first threshold value. If productivity agent 322 and/or 324 determine(s) the number of times the user performs the rule surpasses the first threshold value, productivity agent 322 and/or 324 create(s) an automation. For example, when a first user logs onto computing device 320, productivity agent 322 downloads the rules of rules 316 that pertain to the first user via network 330. Productivity agent 322 then determines the number of times that the rule is performed by the first user, and if the number of times surpasses a first threshold value, productivity agent 322 creates an automation. In addition, in the background, productivity agent 322 continuously collects and analyzes operations and usage information of the first user, identifying usage patterns and creating rules if new usage patterns are identified. Once the first user logs off, productivity agent 322 uploads any newly created rules and automations for the first user to productivity agent database 312 via network 330. (here, the rules are the predicted expected user actions)) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to have combined, with a reasonable expectation of success, ANDERSON's behavior prediction with WANG's behavior data, yielding a method for predicting user behavior, as in ANDERSON, where the data is gathered from user actions, as found in WANG.
A person of ordinary skill would have been motivated to improve productivity (ANDERSON [0002]). While WANG, as modified by ANDERSON, does teach predicting the user actions, it does not explicitly teach: and recommending a task to the first user based on the expected user behavior and a threshold associated with each task classification. However, in analogous art that similarly provides tasks to a user, COHEN teaches: and recommending a task to the first user based on the expected user behavior and a threshold associated with each task classification; ((Col. 16, lines 44-55) After recommendations are automatically generated, they may be provided to the corresponding task performers in various ways, as discussed in greater detail elsewhere. In addition, when a particular task may be recommended to multiple distinct task performers, some or all of those task performers may be selected in various ways to actually receive the recommendations. For example, a recommendation of a task may be provided to task performers only if a sufficiently high similarity to previously performed tasks exists (e.g., higher than a predetermined threshold), or alternatively to a predetermined number or percentage of task performers having previously performed tasks with the highest similarity.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to have combined, with a reasonable expectation of success, COHEN's task recommendation with the behavior data of WANG, as modified by ANDERSON, yielding a method for recommending a task, as in COHEN, based on user behavior data, as found in WANG, as modified by ANDERSON. A person of ordinary skill would have been motivated to improve prediction quality (COHEN, Col. 2, lines 60-67 - Col. 3, lines 1-8).
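COHEN's two quoted selection strategies, recommending to performers whose similarity clears a threshold, or alternatively to the most similar performers, can be sketched in a few lines. The performer names and similarity scores below are hypothetical:

```python
# Hypothetical similarity scores between a candidate task and each task
# performer's previously performed tasks (names and values invented).
scores = {"alice": 0.92, "bob": 0.40, "carol": 0.75}

def by_threshold(scores, threshold):
    """Recommend only to performers whose similarity clears the threshold."""
    return sorted(p for p, s in scores.items() if s >= threshold)

def top_n(scores, n):
    """Alternatively, recommend to the n performers with the highest similarity."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(by_threshold(scores, 0.7))   # -> ['alice', 'carol']
print(top_n(scores, 1))            # -> ['alice']
```

The threshold variant corresponds to the claimed "threshold associated with each task classification"; the top-n variant is COHEN's stated alternative.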
ANDERSON further teaches: obtaining feedback from the first user and continuously learning patterns in the collected user behavior data to refine the trained first machine learning model based on the feedback and changes to the user behavior data; (([0037])If productivity agent 322 and/or 324 determine(s) the number of times the user performs the rule surpasses the first threshold value, productivity agent 322 and/or 324 create(s) an automation. For example, when a first user logs onto computing device 320, productivity agent 322 downloads the rules of rules 316 that pertain to the first user via network 330. Productivity agent 322 then determines the number of times that the rule is performed by the first user, and if the number of times surpasses a first threshold value, productivity agent 322 creates an automation. In addition, in the background, productivity agent 322 continuously collects and analyzes operations and usage information of the first user, identifying usage patterns and creating rules if new usage patterns are identified. Once the first user logs off, productivity agent 322 uploads any newly created rules and automations for the first user to productivity agent database 312 via network 330. The next time the first user logs on, the process starts all over again with the newly created rules and automations being downloaded along with the prior rules and automations. In the exemplary embodiment, productivity agent 324 operates in a similar fashion. (Changed user behavior being sent from the user is a type of user feedback)) and storing the trained first machine learning model into a knowledge base for continued and multi-task learning. 
(([0042]) The programs productivity agent 112, automations 114 and rules 116 in computing device 120; programs productivity agent database 312, automations 314 and rules 316 in server 310; programs productivity agent 322 in computing device 320; and program productivity agent 324 in computing device 340 are stored in persistent storage 408 for execution by one or more of the respective computer processors 404 via one or more memories of memory 406. In this embodiment, persistent storage 408 includes a magnetic hard disk drive. Alternatively, or in addition to a magnetic hard disk drive, persistent storage 408 can include a solid state hard drive, a semiconductor storage device, read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or any other computer-readable storage media that is capable of storing program instructions or digital information.)

Regarding claim 2, ANDERSON further teaches: The computer-implemented method of claim 1, further comprising collecting the user behavior data of one or more second users to continuously learn patterns in the collected user behavior data in which to predict the expected user behavior data. (([0024]) In addition, in the exemplary embodiment, productivity agent 112 can create rules specific to a user of computing device 120 based on usage and operations data of the user collected from the operating system. Therefore, productivity agent 112 can create a first rule or first set of rules for a first user of computing device 120 based on the first user's usage and operations activity collected from the operating system, and a second rule or second set of rules for a second user of computing device 120 based on the second user's usage and operations activity.)
Regarding claim 4, ANDERSON further teaches: The computer-implemented method of claim 1, wherein the threshold is adaptively learned over a period of time and provides a basis of measurement in which to ensure that the predicting satisfies a level of confidence; (([0026]) Productivity agent 112 then determines if the number of times that at least one of the rules has been performed surpasses a first threshold value (decision 208). In the exemplary embodiment, the first threshold value is 50, however, in other embodiments, the first threshold value may be another value left to the discretion of the programmer or user. In other embodiments, there may be multiple threshold values, with different rules having different threshold values. If productivity agent 112 determines that the number of times that at least one of the rules has been performed does not surpass a first threshold value (decision 208, “NO” branch), productivity agent 112 moves back to step 206 and once again determines the number of times each rule has been performed.)

COHEN further teaches: and the task is recommended to the first user when the prediction satisfies the threshold. ((Col. 16, lines 44-55) After recommendations are automatically generated, they may be provided to the corresponding task performers in various ways, as discussed in greater detail elsewhere. In addition, when a particular task may be recommended to multiple distinct task performers, some or all of those task performers may be selected in various ways to actually receive the recommendations.
For example, a recommendation of a task may be provided to task performers only if a sufficiently high similarity to previously performed tasks exists (e.g., higher than a predetermined threshold), or alternatively to a predetermined number or percentage of task performers having previously performed tasks with the highest similarity.)

Regarding claims 8-9 and 15-16, they comprise limitations similar to those of claims 1-2 and are therefore rejected under similar rationale. Regarding claim 11, it comprises limitations similar to those of claim 4 and is therefore rejected under similar rationale.

Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over WANG (U.S. Patent No. 8,412,665 B2), ANDERSON (U.S. Pub. No. US 20140280861 A1), and COHEN (U.S. Patent No. 7,885,844 B1) in further view of YUVAL (U.S. Pub. No. US 20090063590 A1).

Regarding claim 3, ANDERSON further teaches: The computer-implemented method of claim 1, wherein refining the trained machine learning model comprises: continuously tracking the first user to collect additional user behavior data, (([0037]) If productivity agent 322 and/or 324 determine(s) the number of times the user performs the rule surpasses the first threshold value, productivity agent 322 and/or 324 create(s) an automation. For example, when a first user logs onto computing device 320, productivity agent 322 downloads the rules of rules 316 that pertain to the first user via network 330. Productivity agent 322 then determines the number of times that the rule is performed by the first user, and if the number of times surpasses a first threshold value, productivity agent 322 creates an automation. In addition, in the background, productivity agent 322 continuously collects and analyzes operations and usage information of the first user, identifying usage patterns and creating rules if new usage patterns are identified.
Once the first user logs off, productivity agent 322 uploads any newly created rules and automations for the first user to productivity agent database 312 via network 330. The next time the first user logs on, the process starts all over again with the newly created rules and automations being downloaded along with the prior rules and automations. In the exemplary embodiment, productivity agent 324 operates in a similar fashion.) While ANDERSON does teach continuously tracking user behavior, it does not explicitly teach: storing the additional user behavior data in a data buffer, wherein the additional user behavior data is stored in a time sequence; removing the additional user behavior data stored in the data buffer that appears earlier in the time sequence and appending the additional user behavior data stored in the data buffer that appears later in the time sequence, when the data buffer is full; However, in analogous art that similarly analyzes user behavior online, YUVAL teaches: storing the additional user behavior data in a data buffer, wherein the additional user behavior data is stored in a time sequence; removing the additional user behavior data stored in the data buffer that appears earlier in the time sequence and appending the additional user behavior data stored in the data buffer that appears later in the time sequence, when the data buffer is full; ([0016] In operation, when the user is online (i.e., connected to a web server), the web pages accessed by the user can be automatically copied and stored in the memory. The memory may be an integral part of the user device or an external memory. In one implementation, the memory can be a first in first out (FIFO) buffer of pre-defined capacity. A FIFO buffer allows users to store data until the buffer reaches its full capacity. Once the buffer is full and new data has to be added, it automatically deletes data that was stored in the buffer initially. 
Thus the buffer maintains the latest data in memory for future reference, e.g., recent web pages surfed by the user. In one implementation, the users can specify the memory size or capacity of the memory. (the sequence of moving new buffer data in and removing the oldest data is FIFO, which is disclosed by YUVAL)) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to have combined, with a reasonable expectation of success, YUVAL's FIFO buffer with the behavior data of WANG, as modified by ANDERSON and COHEN, yielding a FIFO buffer that stores data, as in YUVAL, where the stored data is user behavior data, as found in WANG, as modified by ANDERSON and COHEN. A person of ordinary skill would have been motivated to improve data retention (YUVAL [0015]). WANG further teaches: and retraining the trained first machine learning model with the first user behavior data remaining in the data buffer. ((Col. 9, lines 1-7) As discussed, there can be some positive users in the user set U_rand due to the lack of supervised knowledge. In order to eliminate classifier bias introduced from these users, an optimized behavior model can be applied to filter the users in U_rand. For example, the user whose conditional probability P(y=1|u) is larger than a threshold can be removed from the training set and the ensemble model will be retrained.) Regarding claims 10 and 17, they comprise limitations similar to those of claim 3 and are therefore rejected under similar rationale.

Claims 5-6, 12-13, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over WANG (U.S. Patent No. 8,412,665 B2), ANDERSON (U.S. Pub. No. US 20140280861 A1), and COHEN (U.S. Patent No. 7,885,844 B1) in further view of SHEN (U.S. Pub. No.
US 20140058723 A1). Regarding claim 5, while WANG, as modified by ANDERSON and COHEN, does teach claim 1, upon which claim 5 depends, it does not explicitly teach: The computer-implemented method of claim 1 further comprising detecting similarities by: comparing similarity metrics between the trained first machine learning model of the first user and a trained second machine learning model of a second user for a same task; and computing the similarity metrics for the trained first machine learning model and the trained the second machine learning model. However, in analogous art that similarly handles user data, SHEN teaches: The computer-implemented method of claim 1 further comprising detecting similarities by: comparing similarity metrics between the trained first machine learning model of the first user and a trained second machine learning model of a second user for a same task; and computing the similarity metrics for the trained first machine learning model and the trained the second machine learning model. ([0027] Accordingly, in step 220, for each account of the first group of accounts, this method may calculate and compare the similarity of a plurality of language models corresponding to the first group of accounts, and clusters the first group of accounts according to the comparison result of the similarity. In step 230, this method downloads a plurality of new data from one or more monitoring sites during the first time interval, discovers near-synonyms of at least one monitored vocabulary set from the new added data. For each of updated time intervals, this method updates the near-synonyms to existed language models, and for each new account of group of accounts different from a previous group of accounts of different groups of accounts, this method re-establishes a language model to describe its post contents of the new account.
Then this method re-calculates and re-compares the similarity of the plurality of language models of the different groups of accounts, integrates the different groups of accounts and the previous group of accounts according to the re-comparison result of the similarity, and re-clusters an integrated group of accounts.) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined SHEN's model similarity calculation with the behavior data of WANG, as modified by ANDERSON and COHEN, with a reasonable expectation of success, such that the similarity of a model is calculated, as in SHEN, where the underlying data is user behavior data, as found in WANG, as modified by ANDERSON and COHEN. A person of ordinary skill would have been motivated to improve data security (SHEN [0010]). Regarding claim 6, SHEN further teaches: The computer-implemented method of claim 5, wherein detecting similarities comprises combining a set of commonly learned tasks for the first and second users to determine the similarity metrics between the first and second users based on the computed similarity metrics of learned models for the tasks in the set of commonly learned tasks. ([0027] Accordingly, in step 220, for each account of the first group of accounts, this method may calculate and compare the similarity of a plurality of language models corresponding to the first group of accounts, and clusters the first group of accounts according to the comparison result of the similarity. In step 230, this method downloads a plurality of new data from one or more monitoring sites during the first time interval, discovers near-synonyms of at least one monitored vocabulary set from the new added data.
For each of updated time intervals, this method updates the near-synonyms to existed language models, and for each new account of group of accounts different from a previous group of accounts of different groups of accounts, this method re-establishes a language model to describe its post contents of the new account. Then this method re-calculates and re-compares the similarity of the plurality of language models of the different groups of accounts, integrates the different groups of accounts and the previous group of accounts according to the re-comparison result of the similarity, and re-clusters an integrated group of accounts. (the combination comes in the re-establishment of the model as it adds the new data during this step)) Regarding claims 12-13 and 18-19, they comprise limitations similar to those of claims 5-6 and are therefore rejected for similar rationale. Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over WANG (U.S. Pat. No. 8,412,665), ANDERSON (U.S. Pub. No. US 20140280861 A1), COHEN (U.S. Pat. No. 7,885,844), and SHEN (U.S. Pub. No. US 20140058723 A1), further in view of CANTOR (U.S. Pub. No. US 20130325763 A1), and further in view of SCHOLTES (U.S. Pub. No. US 20140222928 A1). While WANG, as modified by ANDERSON, COHEN, and SHEN, does teach claim 6, upon which claim 7 depends, it does not explicitly teach: The computer-implemented method of claim 6, wherein detecting similarities comprises: determining a subset of tasks from the combined set of tasks, the subset of tasks having a same task classification as the task to be predicted; …sorting and determining the most similar task within the group of tasks to the task to recommend; and applying the associated learned machine model for the task to recommend.
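As an illustration of the similarity-metric limitation addressed above for claims 5-6 (comparing a first user's trained model with a second user's trained model for the same task), one common way to compute such a metric is cosine similarity over the models' parameter vectors. This is only a sketch: the choice of metric and the weight values are assumptions for illustration, not anything disclosed by SHEN or the application.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length parameter vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical flattened weight vectors from two users' models
# trained on the same task (values chosen for illustration only).
user1_weights = [0.2, 0.8, -0.1, 0.4]
user2_weights = [0.25, 0.7, -0.05, 0.5]

similarity = cosine_similarity(user1_weights, user2_weights)
print(round(similarity, 3))  # a value near 1.0 indicates similar models
```

A score near 1.0 would indicate that the two users' models for the task are close; the claimed method then compares such metrics across users.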
However, in analogous art that similarly uses learning algorithms, CANTOR teaches: The computer-implemented method of claim 6, wherein detecting similarities comprises: determining a subset of tasks from the combined set of tasks, the subset of tasks having a same task classification as the task to be predicted; ([0046] The learning algorithm may also comprise evaluating the accuracy of the various estimates of task effort produced for alternative subsets of tasks and alternative subsets of attributes. Based on the evaluation, the learning algorithm may determine the particular subsets of tasks and attributes that lead to the best overall prediction of the effort. An output produced from the learning algorithm may comprise one or more subsets of tasks for the project. (in order for the prediction to include one of the tasks, it must be the same classification)) sorting and determining the most similar task within the group of tasks to the task to recommend; and applying the associated learned machine model for the task to recommend. ([0088] Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks. (matching from the database must use sorting and is a type of determination)) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined CANTOR's determination of user tasks with the behavior data of WANG, as modified by ANDERSON, COHEN, and SHEN, with a reasonable expectation of success, such that the task is determined, as in CANTOR, from user behavior data, as found in WANG, as modified by ANDERSON, COHEN, and SHEN. A person of ordinary skill would have been motivated to improve prediction quality and reliability (CANTOR [0005]).
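The sort-and-match step CANTOR describes (matching a new task against known tasks and selecting the most similar) can be sketched as follows. The classification filter, the Jaccard metric over task meta-data keywords, and the sample tasks are all illustrative assumptions, not taken from CANTOR.

```python
def jaccard_similarity(a, b):
    """Jaccard similarity between two keyword sets (1.0 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical task records: (name, classification, meta-data keywords).
tasks = [
    ("set_alarm",    "scheduling", {"time", "alarm", "morning"}),
    ("book_meeting", "scheduling", {"time", "calendar", "invite"}),
    ("play_music",   "media",      {"audio", "playlist"}),
]

# The new task to be predicted, with its classification and keywords.
new_task = ("wake_up_call", "scheduling", {"time", "alarm", "phone"})

# Step 1: keep only the subset of tasks sharing the same classification.
candidates = [t for t in tasks if t[1] == new_task[1]]

# Step 2: sort candidates by similarity to the new task, most similar first.
ranked = sorted(candidates,
                key=lambda t: jaccard_similarity(t[2], new_task[2]),
                reverse=True)

best_match = ranked[0][0]
print(best_match)
```

Selecting `best_match` corresponds to CANTOR's matching of a new task to the most similar training tasks; in the claimed method, the learned model associated with that task would then be applied.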
While CANTOR does teach determining a task based on similarity, it does not explicitly teach: extracting meta-data from each of the tasks in the subset of tasks into a single document; applying an information retrieval method to measure the document similarity as the task similarity with the task to recommend; However, in analogous art that similarly manages user data, SCHOLTES teaches: extracting meta-data from each of the tasks in the subset of tasks into a single document; ([0048] At step 206, for each document in the document collection 111, name, authorship and alias resolution is automatically performed using one or more of the individual name, authorship and alias resolution methods (e.g., Jaro-Winkler, Jaccard Similarity and Authorship SVM) or combining the techniques in one method using a voting algorithm, and the like, in step 204. In step 205, the extracted information is stored in a record in the meta data information storage device 109 that belongs to the corresponding document in the document collection database 111. A user 207 can store preferences for normalization and alias resolution in the user settings storage device 201. Settings and machine learning models used by the Jaro-Winkler, Jaccard Similarity, Authorship SVM and Voting SVM are stored in data storage device 202.) applying an information retrieval method to measure the document similarity as the task similarity with the task to recommend; ([0049] FIG. 3 illustrates an exemplary process of three (3) individual name and alias resolution processes and of a combined voting process to disambiguate and normalize names and aliases. In FIG. 3, to determine the name of the author 301 of a document or to determine the actual reference of a name in the document, the three approaches are used in step 204 to normalize the name and to resolve the name reference or alias.
In step 302, a Jaro-Winkler similarity score is calculated between each name, reference or alias occurrence and each candidate known by the system. In step 303, the Jaccard score for the connected path is calculated for each name, reference or alias occurrence and each candidate known by the system. In step 304, an average prediction score of the authorship SVM by using information from the document in which the name or references occur, for each candidate known by the system. Optionally, step 305 can combine the output from steps 302, 303 and 304 using a suitable voting algorithm, and the like, and can be employed, for example, if steps 302, 303 and 304 have different values for name, reference and alias occurrences.) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined SCHOLTES's meta-data extraction and storage with the task data of WANG, as modified by ANDERSON, COHEN, SHEN, and CANTOR, with a reasonable expectation of success, such that meta-data is extracted from the tasks, as in SCHOLTES, where the task data derives from user behavior data, as found in WANG, as modified by ANDERSON, COHEN, SHEN, and CANTOR. A person of ordinary skill would have been motivated to improve large data management (SCHOLTES [0005]). Regarding claims 14 and 20, they comprise limitations similar to those of claim 7 and are therefore rejected for similar rationale. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SKIELER A KOWALIK whose telephone number is (571)272-1850. The examiner can normally be reached 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela D Reyes can be reached at (571)270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SKIELER ALEXANDER KOWALIK/Examiner, Art Unit 2142 /Mariela Reyes/Supervisory Patent Examiner, Art Unit 2142
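For reference, the FIFO-buffer behavior relied on in the rejection above (YUVAL [0016]: store behavior data in time sequence and evict the earliest entry once the buffer is full) can be sketched with a bounded deque. The capacity and event names are illustrative assumptions.

```python
from collections import deque

# Bounded FIFO buffer: a deque with maxlen automatically evicts the
# oldest entry when a new one is appended at full capacity.
behavior_buffer = deque(maxlen=3)  # capacity chosen for illustration

# Behavior events arrive in time sequence; once the buffer is full,
# the event that appears earliest is removed and the latest appended.
for event in ["open_app", "search", "click", "purchase"]:
    behavior_buffer.append(event)

print(list(behavior_buffer))  # the earliest event has been evicted
```

`deque(maxlen=...)` gives exactly the first-in-first-out eviction YUVAL describes: once full, appending the latest datum drops the earliest.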

Prosecution Timeline

Sep 19, 2022
Application Filed
Sep 23, 2025
Non-Final Rejection — §101, §103 (current)

Prosecution Projections

1-2
Expected OA Rounds
22%
Grant Probability
99%
With Interview (+87.5%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
