Prosecution Insights
Last updated: April 19, 2026
Application No. 17/167,549

HYPER-PERSONALIZED QUALIFIED APPLICANT MODELS

Non-Final OA §103
Filed: Feb 04, 2021
Examiner: STORK, KYLE R
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 5 (Non-Final)
Grant Probability: 64% (Moderate)
OA Rounds: 5-6
To Grant: 4y 0m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 554 granted / 865 resolved; +9.0% vs TC avg)
Interview Lift: +28.3% (strong lift for resolved cases with an interview vs. without)
Typical Timeline: 4y 0m avg prosecution (51 currently pending)
Career History: 916 total applications across all art units

Statute-Specific Performance

§101: 14.9% (-25.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Deltas relative to estimated Tech Center averages • Based on career data from 865 resolved cases

Office Action (§103)

DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This non-final office action is in response to the RCE and amendment filed 28 January 2026. Claims 1-20 are pending. Claims 1, 13, and 19 are independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C.
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. (US 2024/0037153, filed 4 October 2019, hereafter Cheng) in view of Zhang et al. (GLMix: Generalized Linear Mixed Models For Large-Scale Response Prediction, 2016, hereafter Zhang), further in view of Drew et al. (US 2014/0344195, published 20 November 2014, hereafter Drew), further in view of Bhide et al. (US 2022/0083899, filed 11 September 2020, hereafter Bhide), further in view of Ozcaglar et al. (US 2019/0163780, published 30 May 2019, hereafter Ozcaglar), and further in view of Coretti et al. (Seedless Fruit Is the Sweetest: Random Number Generation, Revisited, 2019, originally provided 8 September 2025, hereafter Coretti).

As per independent claim 1, Cheng discloses a system for training and testing a machine learning model, comprising:

a processing circuit (Figure 7, item 702: Here, a processor is a processing circuit);

a memory having instructions stored thereon, which, when executed by a processor, cause the system to perform operations comprising (paragraph 0086: Here, computer-readable media, such as solid state memories, floppy and other removable disks, hard disk drives, magnetic media, optical disks, DVDs and other similar non-transitory type media are disclosed):

obtain a first plurality of data samples, each of the data samples indicating a value for a first variable and a second value for a second variable (paragraph 0032: Here, a machine learning model may be trained using a set of training data. This data includes candidates, each having candidate features, including educational histories, professional experiences, and qualifications including education, prior experiences, prior job titles, prior projects, skills, and certifications);

identify a model to train with a first machine learning algorithm, the model having a global model and one or more different types of random effects model, each random effect model corresponding to a different variable in the first plurality of data samples (paragraphs 0032 and 0039: Here, a targeting filtering module utilizes a trained machine learning model (paragraphs 0031-0032). This model is a global model as all candidates are entered into the targeting filtering module to generate an initial candidate set. Additionally, a recruiter embedding module is applied to the candidate data set (paragraph 0039). This includes variables for identifying whether the candidate is considered, claimed, not considered, and/or not claimed);

wherein the global model is trained using data samples in the first training set (paragraph 0032: Here, a machine learning model is trained with a training data set based upon candidate features of past candidates);

training a first iteration of a model using the first training set (paragraphs 0032 and 0039);

obtain a second plurality of data samples, the data samples in the second plurality of data samples indicating a value for the first variable and a value for the second variable, at least some of the second plurality of data samples being identical to at least some of the first plurality of data samples (paragraphs 0038-0040: Here, the personalization module personalizes a set of candidates based upon a machine learning model. The personalization module is trained with a training set of recruiter and candidate embeddings. In this instance, the candidate embeddings are the first plurality of data samples used for training the targeting filtering module).

Cheng fails to specifically disclose wherein the model is a GLMix model and training a second iteration of the model. However, Zhang, which is analogous to the claimed invention because it is directed toward training a model for use in matching job applicants and job posters, discloses use of a GLMix model (Section 1) and training a second iteration of the model (Section 3.1: Here, a second iteration of the machine learning model is trained). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Zhang with Cheng, with a reasonable expectation of success, as it would have allowed for using ID-level regression coefficients in order to improve functionality of the models (Zhang: Abstract). This would have provided users with the advantage of parallelizing data to avoid the bottleneck present in most ID-level regression models and facilitate a better end user experience.
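To make the GLMix structure discussed above concrete (a global fixed-effect model whose score is adjusted by per-entity random-effect coefficients, e.g. per-user and per-job-posting models), here is a minimal illustrative sketch. The class, field names, and coefficient values are hypothetical and not drawn from any cited reference.

```python
import math

def sigmoid(z):
    """Logistic link used for response prediction."""
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class GLMixScorer:
    """Toy GLMix-style scorer: a global (fixed-effect) coefficient
    vector plus optional per-entity random-effect coefficient vectors,
    keyed by effect type (e.g. "user", "job") and entity ID."""

    def __init__(self, global_coefs, random_effects):
        self.global_coefs = global_coefs      # list of d coefficients
        self.random_effects = random_effects  # {"user": {id: coefs}, ...}

    def score(self, features, entity_ids):
        # Global contribution shared by every sample.
        z = dot(features, self.global_coefs)
        # Per-entity adjustments; IDs never seen in training simply
        # fall back to the global model.
        for effect_type, entity_id in entity_ids.items():
            coefs = self.random_effects.get(effect_type, {}).get(entity_id)
            if coefs is not None:
                z += dot(features, coefs)
        return sigmoid(z)

# Hypothetical coefficients: one global model, one per-user adjustment.
scorer = GLMixScorer(
    global_coefs=[0.5, -0.2],
    random_effects={"user": {"u1": [0.1, 0.0]}},
)
p = scorer.score([1.0, 2.0], {"user": "u1", "job": "j9"})  # "j9" unseen
```

The design point the rejection turns on is visible here: each random-effect type is keyed to a different variable of the data sample (user ID, job posting ID), and samples with unseen IDs are scored by the global model alone.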
Cheng fails to specifically disclose:

generate a set of random numbers using a random number generator, the random number generator using output of a hash function as a seed therefor, wherein the hash function takes as input a value for each variable;

assign a corresponding one of the set of random numbers to the data samples from the first plurality of data samples.

Further, Drew, which is analogous to the claimed invention because it is directed to generating random numbers, discloses: generate a set of random numbers using a random number generator, the random number generator using output of a hash function as a seed therefor, wherein the hash function takes as input a value for each variable (paragraph 0014). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Drew with Cheng-Zhang, with a reasonable expectation of success, as it would have allowed for separating data sets to classify data into different data sets for individual processing (Drew: paragraph 0141).
Cheng fails to specifically disclose:

assign a corresponding one of the set of random numbers to the data samples from the first plurality of data samples and randomly select data samples from the first plurality of data samples to assign to a first training set or a first holdout set;

randomly selecting data samples from the second plurality of data samples to assign to a second training set or a second holdout set using output of the hash function as a seed to the random number generator;

test both the first iteration of the model and the second iteration of the model using the second holdout set.

However, Bhide, which is analogous to the claimed invention because it is directed toward validating AI models using holdout sets, discloses:

assign a corresponding one of the set of random numbers to the data samples from the first plurality of data samples and randomly select data samples from the first plurality of data samples to assign to a first training set or a first holdout set using output of a hash function as a seed to a random number generator (Figure 1; paragraph 0041);

randomly select data samples from the second plurality of data samples to assign to a second training set or a second holdout set using output of the hash function as a seed to the random number generator (Figure 1; paragraph 0041);

test both the first iteration of the model and the second iteration of the model using the second holdout set (Figure 1; paragraph 0041).

It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Bhide with Cheng-Zhang-Drew, with a reasonable expectation of success, as it would have provided the advantage of both training and validating a model using a single data set (Bhide: paragraph 0041). This would have allowed a user to validate data models before placing them into production (Bhide: paragraph 0002).
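Outside the claim language, the splitting technique at issue (seeding a random number generator with the output of a hash over each sample's variable values, so that identical samples in the first and second pluralities receive the same train/holdout assignment) can be sketched as follows. The function name, delimiter, and split fraction are illustrative assumptions, not taken from any cited reference.

```python
import hashlib
import random

def assign_partition(variable_values, holdout_fraction=0.2):
    """Deterministically assign a sample to 'train' or 'holdout'.

    The sample's variable values (e.g. a user ID and a job posting
    ID) are hashed, and the hash output seeds the RNG. A sample that
    appears in both the first and second plurality of data samples
    therefore always lands in the same partition, so holdout data
    never leaks into training across dataset versions."""
    key = "|".join(map(str, variable_values)).encode()
    digest = hashlib.sha256(key).digest()
    seed = int.from_bytes(digest[:8], "big")
    return "holdout" if random.Random(seed).random() < holdout_fraction else "train"

# Identical samples from two dataset versions agree on their partition.
first_version = assign_partition(("user_42", "job_7"))
second_version = assign_partition(("user_42", "job_7"))
assert first_version == second_version
```

The seed comes from the data itself rather than from a clock or global RNG state, which is what makes the assignment reproducible between the first and second training/holdout splits.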
Additionally, Cheng-Zhang-Bhide fail to specifically disclose the hash function taking as input a value for each variable to which a random effects model in the model corresponds. However, Coretti, which is analogous to the claimed invention because it discloses generating a random number using cryptographic hashing, discloses generating a random number (Section 1.2). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Coretti with Cheng-Zhang-Bhide, with a reasonable expectation of success, as it would have allowed a user to improve the quality of random number generation (Coretti: Section 1.2), thereby preventing introduction of bias into the training/validation of data.

Finally, Cheng fails to specifically disclose the GLMix model having a global model and a set of different types of random effects models, wherein each type of random effects models corresponds to a different variable in the first plurality of data samples, and wherein a corresponding instance of each type of random effects model is trained using the value for the first variable or the value for the second variable. However, Ozcaglar, which is analogous to the claimed invention because it is directed toward training a GLMix model, discloses the GLMix model (paragraph 0018: Here, a generalized linear mix model is disclosed) having a global model and a set of different types of random effects models (paragraph 0021: Here, a GLMix model includes a global model (query-based model) and a set of different types of random effects models (user-based models based on a history of user actions; this includes selecting a subset of candidates)), wherein each type of random effects models corresponds to a different variable in the first plurality of data samples (paragraphs 0065-0066: Here, a plurality of feature sets are used in modeling a GLMix model. These features include candidate skills, candidate position, candidate interface language, region, position seniority, and company size. The training dataset is split into a training and a testing dataset), and wherein a corresponding instance of each type of random effects model is trained using the value for the first variable or the value for the second variable (paragraphs 0065-0066). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Ozcaglar with Cheng-Zhang-Drew-Bhide, with a reasonable expectation of success, as it would have allowed for training specially configured models to perform processing in order to reduce electronic resource consumption (paragraph 0019).

As per dependent claim 3, Cheng discloses wherein the training a first iteration of the model using the first training set includes:

training the global model using all data samples in the first training set (paragraph 0032: Here, a machine learning model may be trained using a set of training data. This data includes candidates, each having candidate features, including educational histories, professional experiences, and qualifications including education, prior experiences, prior job titles, prior projects, skills, and certifications);

training a random effect model of a first type using only data samples in the first training set that correspond to a particular value for the first variable (paragraphs 0038-0040: Here, a targeting filtering module utilizes a trained machine learning model (paragraphs 0031-0032). This model is a global model as all candidates are entered into the targeting filtering module to generate an initial candidate set. Additionally, a recruiter embedding module is applied to the candidate data set (paragraph 0039).
This includes variables for identifying whether the candidate is considered, claimed, not considered, and/or not claimed);

training a random effect model of a second type using only data samples in the first training set that correspond to a particular value for the second variable (paragraph 0043: Here, the ranking module uses a trained machine learning model for identifying users that are claimed by recruiters as positive examples and those that are not claimed as negative examples).

As per dependent claim 4, Cheng discloses wherein the first plurality of data samples and the second plurality of data samples include actions by users to apply for job postings (paragraph 0023: Here, a candidate applies for a job via a job posting), wherein the first variable is a user identification and the second variable is a job posting identification (paragraph 0023). Cheng fails to specifically disclose a graphical user interface to communicate with users who apply for job postings. However, the examiner takes official notice that it was notoriously well-known in the art at the time of the applicant's effective filing date to have provided a graphical user interface for interacting with users applying for a job posting. Such an interface would have allowed a user to enter their information, access job listings, and receive communications from potential employers. It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined this well-known feature with Cheng-Zhang-Bhide, with a reasonable expectation of success, as it would have facilitated entering information, accessing job listings, and receiving communications from potential employers.

As per dependent claim 5, Cheng discloses wherein the random effect model of the first type is a per-user model (paragraphs 0031-0032) and the random effect model of the second type is a per-job posting model (paragraph 0043).

As per dependent claim 6, Cheng discloses wherein the training the first iteration of the model and the training the second iteration of the model includes assigning a positive label to any data sample corresponding to a particular pair of user identification and job posting identification where the data sample or another data sample included a positive signal from an agent of an employer corresponding to the job posting identification in the particular pair (paragraph 0043: Here, a result is marked as a positive result if it features candidates associated with the recruiter).

As per dependent claim 7, Cheng discloses wherein the positive signal is a job offer (paragraph 0035: Here, a positive signal is a job offer/applicant being hired).

As per dependent claim 8, Cheng discloses wherein the positive signal is an interview request (paragraph 0039: Here, a positive signal is an interview).

As per dependent claim 9, Cheng discloses wherein the positive signal is a communication sent from the agent of the employer to the user corresponding to the user identification of the particular pair (paragraph 0039: Here, a positive signal is a message from a recruiter).

As per dependent claim 10, Cheng discloses wherein the training the first iteration of the model and the training the second iteration of the model includes, for a data sample not assigned a positive label within a preset time frame after the user corresponding to the user identification of the particular pair applied for the job posting corresponding with the job posting identification for the particular pair, assigning a negative label (paragraph 0043: Here, candidates that were not contacted within a threshold period of time are associated with a negative result).

As per dependent claim 11, Cheng discloses wherein the training the first iteration of the model and the training the second iteration of the model includes, for a data sample not assigned a positive label or a negative label, assigning a preliminary negative label to the data sample if a positive label has been assigned to at least one other data sample corresponding to the same job posting identification as the job posting identification for the particular pair but a different user identification, within the preset time frame (paragraph 0043).

As per dependent claim 12, Cheng, Zhang, Drew, and Bhide disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Bhide discloses automatically switching from the first iteration of the model to the second iteration of the model based on testing (Figure 2; paragraph 0043: Here, if it is determined that a model is not meeting performance criteria, it is rejected. This allows for replacing the model with a better model created from one of the other training datasets (Figure 4, item 440)). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Bhide with Cheng-Zhang, with a reasonable expectation of success, as it would have provided the advantage of both training and validating a model using a single data set (Bhide: paragraph 0041). This would have allowed a user to validate and/or replace models that fail to meet desired benchmarks (Bhide: paragraph 0002).

With respect to claim 13, the applicant discloses the limitations substantially similar to those in claims 1 and 12. Claim 13 is similarly rejected.

With respect to claim 15, the applicant discloses the limitations substantially similar to those in claim 4. Claim 15 is similarly rejected.

With respect to claim 16, the applicant discloses the limitations substantially similar to those in claim 6. Claim 16 is similarly rejected.

With respect to claim 17, the applicant discloses the limitations substantially similar to those in claim 10. Claim 17 is similarly rejected.

With respect to claim 18, the applicant discloses the limitations substantially similar to those in claim 11. Claim 18 is similarly rejected.

With respect to claim 19, the applicant discloses the limitations substantially similar to those in claim 13. Claim 19 is similarly rejected.

Allowable Subject Matter

Claims 2, 14, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Cheng, Zhang, Drew, Bhide, Ozcaglar, and Coretti.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Genner et al. (US 11496315): Discloses a hash transform that implements vector permutations according to a seed from a random number generator (column 12, lines 15-26).

Thomas (US 11475104): Discloses a pseudo-random number generator using a seed (claim 3).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571)272-4130. The examiner can normally be reached 8am - 2pm; 4pm - 6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at 571-272-2589.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128

Prosecution Timeline

Feb 04, 2021
Application Filed
Sep 30, 2024
Non-Final Rejection — §103
Dec 10, 2024
Examiner Interview Summary
Dec 10, 2024
Applicant Interview (Telephonic)
Dec 16, 2024
Response Filed
Mar 24, 2025
Non-Final Rejection — §103
Jun 10, 2025
Interview Requested
Jun 11, 2025
Applicant Interview (Telephonic)
Jun 13, 2025
Examiner Interview Summary
Jun 18, 2025
Response Filed
Sep 04, 2025
Final Rejection — §103
Oct 14, 2025
Interview Requested
Oct 23, 2025
Applicant Interview (Telephonic)
Oct 27, 2025
Response after Non-Final Action
Oct 27, 2025
Examiner Interview Summary
Nov 10, 2025
Final Rejection — §103
Jan 15, 2026
Examiner Interview Summary
Jan 28, 2026
Request for Continued Examination
Feb 06, 2026
Response after Non-Final Action
Feb 13, 2026
Non-Final Rejection — §103
Feb 17, 2026
Examiner Interview (Telephonic)
Feb 17, 2026
Examiner Interview Summary
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 10, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585935
EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585937
SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585869
RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579454
PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579412
SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 92% (+28.3%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
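The projection figures above appear to combine additively, as this minimal sketch shows; it assumes the +28.3-point interview lift is added directly to the 64% career allow rate, which is consistent with the 92% figure reported.

```python
base_grant_probability = 0.64  # examiner's career allow rate
interview_lift = 0.283         # observed percentage-point lift with an interview

# Additive combination, then rounded to the whole percent shown above.
with_interview = base_grant_probability + interview_lift
print(f"{with_interview:.0%}")
```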
