DETAILED ACTION
This action is responsive to the claims filed on 06/15/2022. Claims 1-20 are pending for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Statutory Categories
Claims 1-8 are directed to a system.
Claims 9-13 are directed to a method.
Claims 14-20 are directed to a method.
Independent Claims – Claim 1
Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes. Independent claim 1 recites limitations that are abstract ideas in the form of mental processes:
Claim 1 recites:
for each possible candidate on the list of possible candidates, determine a confidence regarding an identity of the possible candidate based at least on the enrichment data, (a mental process of evaluation which can reasonably be performed in one’s mind or with aid of pen and paper)
when the confidence regarding the identity of the possible candidate satisfies a threshold condition, determine, . . . , a probability that the possible candidate will take an action of interest, and when the probability meets a threshold probability, (a mental process of evaluation which can reasonably be performed in one’s mind or with aid of pen and paper)
then add the possible candidate to a list of candidates; (a mental process of evaluation which can reasonably be performed in one’s mind or with aid of pen and paper)
This claim further recites the following additional elements for the purposes of Step 2A Prong Two analysis:
A computing system, comprising: a logic subsystem; and a storage subsystem comprising instructions executable by the logic subsystem to (a recitation of using components stated at a high level of generality and with little context as to how they are implemented to perform the function is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
obtain a list of possible candidates and enrichment data for each possible candidate on the list of possible candidates; (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
by inputting information regarding the identity of the possible candidate and the enrichment data for the possible candidate into a trained machine learning model, (a recitation of using a trained machine learning model stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
and output the list of candidates. (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
The additional limitations fail Step 2A Prong 2 of the § 101 analysis because they do not integrate the judicial exception into a practical application. They are recited at a high level of generality and reflect no technical improvement that would make the concept practically useful; without clear utility or integration into a specific field, the claim is not tied to any particular application and does not meet the requirements of Step 2A Prong 2.
This claim recites the following additional elements for the purposes of Step 2B analysis:
A computing system, comprising: a logic subsystem; and a storage subsystem comprising instructions executable by the logic subsystem to (a recitation of using components stated at a high level of generality and with little context as to how they are implemented to perform the function is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
obtain a list of possible candidates and enrichment data for each possible candidate on the list of possible candidates; (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
by inputting information regarding the identity of the possible candidate and the enrichment data for the possible candidate into a trained machine learning model, (a recitation of using a trained machine learning model stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
then add the possible candidate to a list of candidates; and output the list of candidates. (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
The claim also fails Step 2B of the analysis because the additional limitations do not amount to significantly more than the abstract idea itself. They merely elaborate on the core concept without adding inventive or technical substance, and so do not move the claim beyond the abstract idea. The claim is unpatentable.
Claims 9 and 14 have limitations substantially similar to those of claim 1; therefore, a similar analysis applies.
Dependents of Claim 1
The remaining dependent claims of independent claim 1 do not recite additional elements, whether considered individually or in combination, that are sufficient to integrate the judicial exception into a practical application or to amount to significantly more than the judicial exception. The analysis is shown below:
The claims below recite additional limitations that fail Step 2A Prong 2 of the § 101 analysis because they do not integrate the judicial exception into a practical application. They are recited at a high level of generality and reflect no technical improvement that would make the concept practically useful; without clear utility or integration into a specific field, the claims are not tied to any particular application and do not meet the requirements of Step 2A Prong 2.
The claims also fail Step 2B of the analysis because the additional limitations do not amount to significantly more than the abstract idea itself. They merely elaborate on the core concept without adding inventive or technical substance, and so do not move the claims beyond the abstract idea. The claims are unpatentable.
Claim 2 recites the further limitation of:
The computing system of claim 1, wherein the enrichment data comprises one or more of demographic information, financial information, or lifestyle preferences information. (this is merely additional information for an aforementioned judicial exception which further describes the enrichment data in terms of the type of data and is considered insignificant extra-solution activity because it does not integrate the exception in Step 2A Prong 2.)
Claim 3 recites the further limitation of:
The computing system of claim 1, wherein the machine learning model comprises a gradient boosting classifier comprising an ensemble of decision trees. (a recitation of using a known classifier stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
Claim 4 recites the further limitation of:
The computing system of claim 1, wherein the machine learning model comprises a plurality of parameters having values determined based at least on an area under a receiver operating characteristic curve metric. (recites mathematical concepts that comprise mathematical algorithms, formulas, or calculations.)
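For context, the area-under-the-ROC-curve metric recited here (and in claims 11 and 18) is equivalent to a rank statistic: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal sketch with hypothetical labels and scores, not material from the record:

```python
# Illustrative sketch only: AUC computed via its rank-statistic equivalence.
# Labels and scores below are hypothetical examples.

def roc_auc(labels, scores):
    """AUC for binary labels (1 = took the action of interest) and model scores."""
    pairs = 0
    favorable = 0.0
    for lp, sp in zip(labels, scores):
        if lp != 1:
            continue
        for ln, sn in zip(labels, scores):
            if ln != 0:
                continue
            pairs += 1              # one (positive, negative) pair
            if sp > sn:
                favorable += 1.0    # positive ranked above negative
            elif sp == sn:
                favorable += 0.5    # ties count half
    return favorable / pairs

auc = roc_auc([1, 0, 1, 0, 1], [0.9, 0.2, 0.7, 0.6, 0.4])  # -> 5/6
```

Parameters "determined based at least on" this metric would be those chosen to maximize such a score on held-out data.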
Claim 5 recites the further limitation of:
The computing system of claim 1, wherein the machine learning model is trained based at least on a data set regarding a plurality of previous possible candidates, the data set including, for each previous possible candidate of the plurality of previous possible candidates, enrichment data regarding the previous possible candidate, and a label comprising an indication of whether the previous possible candidate took the action of interest. (a recitation of training by only stating data used, stated at a high level of generality, is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
Claim 6 recites the further limitation of:
The computing system of claim 5, wherein the plurality of previous possible candidates are members of a first organization, and wherein the possible candidate is a member of a second organization different from the first organization. (this is merely additional information for an aforementioned judicial exception which further describes the gathered enrichment data in terms of the type of data (candidate organization type) and is considered insignificant extra-solution activity because it does not integrate the exception in Step 2A Prong 2.)
Claim 7 recites the further limitation of:
The computing system of claim 1, further comprising instructions executable to determine the threshold probability based at least on one or both of a precision or a recall of the machine learning model. (a determination using data of a machine learning model, stated at a high level of generality, is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
Claim 8 recites the further limitation of:
The computing system of claim 1, wherein the possible candidate comprises one of an individual or an organization. (this is merely additional information for an aforementioned judicial exception which further describes the enrichment data in terms of the type of data (candidate type) and is considered insignificant extra-solution activity because it does not integrate the exception in Step 2A Prong 2.)
Independent Claims – Claim 9
Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes. Independent claim 9 recites limitations that are abstract ideas in the form of mental processes:
Claim 9 recites:
determining a confidence regarding an identity of the possible candidate based at least on the enrichment data, (a mental process of evaluation which can reasonably be performed in one’s mind or with aid of pen and paper)
and when the probability meets a threshold probability, then adding the possible candidate to a list of candidates for the possible contact for whom the probability is maximized (a mental process of evaluation which can reasonably be performed in one’s mind or with aid of pen and paper)
This claim further recites the following additional elements for the purposes of Step 2A Prong Two analysis:
A method, comprising obtaining a list of possible candidates; for each possible candidate on the list of possible candidates, obtaining enrichment data for the possible candidate, (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
when the confidence regarding the identity of the possible candidate satisfies a threshold condition, for a plurality of possible contact/possible candidate pairs, inputting information regarding the identity of the possible candidate, the enrichment data for the possible candidate, an identity of the possible contact, and enrichment data for the possible contact, (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
into a trained machine learning model to obtain a probability that the possible candidate will take an action of interest when contacted by the possible contact, (a recitation of using a trained model stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
and outputting the list of candidates for the possible contact. (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
The additional limitations fail Step 2A Prong 2 of the § 101 analysis because they do not integrate the judicial exception into a practical application. They are recited at a high level of generality and reflect no technical improvement that would make the concept practically useful; without clear utility or integration into a specific field, the claim is not tied to any particular application and does not meet the requirements of Step 2A Prong 2.
This claim recites the following additional elements for the purposes of Step 2B analysis:
A method, comprising obtaining a list of possible candidates; for each possible candidate on the list of possible candidates, obtaining enrichment data for the possible candidate, (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
when the confidence regarding the identity of the possible candidate satisfies a threshold condition, for a plurality of possible contact/possible candidate pairs, inputting information regarding the identity of the possible candidate, the enrichment data for the possible candidate, an identity of the possible contact, and enrichment data for the possible contact, (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
into a trained machine learning model to obtain a probability that the possible candidate will take an action of interest when contacted by the possible contact, (a recitation of using a trained model stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
and outputting the list of candidates for the possible contact. (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
The claim also fails Step 2B of the analysis because the additional limitations do not amount to significantly more than the abstract idea itself. They merely elaborate on the core concept without adding inventive or technical substance, and so do not move the claim beyond the abstract idea. The claim is unpatentable.
Dependents of Claim 9
The remaining dependent claims of independent claim 9 do not recite additional elements, whether considered individually or in combination, that are sufficient to integrate the judicial exception into a practical application or to amount to significantly more than the judicial exception. The analysis is shown below:
The claims below recite additional limitations that fail Step 2A Prong 2 of the § 101 analysis because they do not integrate the judicial exception into a practical application. They are recited at a high level of generality and reflect no technical improvement that would make the concept practically useful; without clear utility or integration into a specific field, the claims are not tied to any particular application and do not meet the requirements of Step 2A Prong 2.
The claims also fail Step 2B of the analysis because the additional limitations do not amount to significantly more than the abstract idea itself. They merely elaborate on the core concept without adding inventive or technical substance, and so do not move the claims beyond the abstract idea. The claims are unpatentable.
Claim 10 recites the further limitation of:
The method of claim 9, wherein the machine learning model comprises a gradient boosting classifier comprising an ensemble of decision trees. (a recitation of using a known classifier stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
Claim 11 recites the further limitation of:
The method of claim 9, wherein the machine learning model comprises a plurality of parameters having values determined based at least on an area under a receiver operating characteristic curve metric. (recites mathematical concepts that comprise mathematical algorithms, formulas, or calculations.)
Claim 12 recites the further limitation of:
The method of claim 9, further comprising outputting the list of candidates to a device associated with the possible contact and not to a device associated with another possible contact in the list of possible contacts. (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g); the choice of outputting to certain devices is considered a mental process of evaluation that can reasonably be performed in one’s mind.)
Claim 13 recites the further limitation of:
The method of claim 9, wherein the probability is obtained based further on information regarding an event occurring between a last contact to contact the possible candidate and the possible candidate. (this is merely additional information for an aforementioned judicial exception which further describes the enrichment data in terms of the type of data and is considered insignificant extra-solution activity because it does not integrate the exception in Step 2A Prong 2.)
Independent Claims – Claim 14
Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes. Independent claim 14 recites limitations that are abstract ideas in the form of mental processes:
Claim 14 recites:
tuning one or more parameters of the machine learning model based at least on a portion of the data set and an evaluation metric; (a mental process of evaluation which can reasonably be performed in one’s mind or with aid of pen and paper)
output a prediction, for an input possible candidate and input contact, regarding whether the input possible candidate eventually will take the action of interest; (a mental process of evaluation which can reasonably be performed in one’s mind or with aid of pen and paper)
This claim further recites the following additional elements for the purposes of Step 2A Prong Two analysis:
A method, comprising obtaining a training data set regarding a plurality of previous possible candidates and a plurality of previous contacts for the previous possible candidates, (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
the data set including, for each previous possible candidate of the plurality of previous possible candidates, enrichment data regarding the previous possible candidate, enrichment data regarding a previous contact that contacted the previous possible candidate, and a label comprising a binary indication of whether the previous possible candidate eventually took an action of interest when contacted by the previous contact; (this is merely additional information for an aforementioned judicial exception which further describes the enrichment data in terms of the type of data and is considered insignificant extra-solution activity because it does not integrate the exception in Step 2A Prong 2.)
training a machine learning model comprising a classifier based at least on a portion of the data set to train the machine learning model to (a recitation of using a known classifier stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
and outputting the machine learning model. (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
The additional limitations fail Step 2A Prong 2 of the § 101 analysis because they do not integrate the judicial exception into a practical application. They are recited at a high level of generality and reflect no technical improvement that would make the concept practically useful; without clear utility or integration into a specific field, the claim is not tied to any particular application and does not meet the requirements of Step 2A Prong 2.
This claim recites the following additional elements for the purposes of Step 2B analysis:
A method, comprising obtaining a training data set regarding a plurality of previous possible candidates and a plurality of previous contacts for the previous possible candidates, (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
the data set including, for each previous possible candidate of the plurality of previous possible candidates, enrichment data regarding the previous possible candidate, enrichment data regarding a previous contact that contacted the previous possible candidate, and a label comprising a binary indication of whether the previous possible candidate eventually took an action of interest when contacted by the previous contact; (this is merely additional information for an aforementioned judicial exception which further describes the enrichment data in terms of the type of data and is considered insignificant extra-solution activity because it does not integrate the exception in Step 2A Prong 2.)
training a machine learning model comprising a classifier based at least on a portion of the data set to train the machine learning model to (a recitation of using a known classifier stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
and outputting the machine learning model. (limitation is merely inputting/outputting information and is considered insignificant extra-solution activity, see MPEP 2106.05(g))
The claim also fails Step 2B of the analysis because the additional limitations do not amount to significantly more than the abstract idea itself. They merely elaborate on the core concept without adding inventive or technical substance, and so do not move the claim beyond the abstract idea. The claim is unpatentable.
Dependents of Claim 14
The remaining dependent claims of independent claim 14 do not recite additional elements, whether considered individually or in combination, that are sufficient to integrate the judicial exception into a practical application or to amount to significantly more than the judicial exception. The analysis is shown below:
The claims below recite additional limitations that fail Step 2A Prong 2 of the § 101 analysis because they do not integrate the judicial exception into a practical application. They are recited at a high level of generality and reflect no technical improvement that would make the concept practically useful; without clear utility or integration into a specific field, the claims are not tied to any particular application and do not meet the requirements of Step 2A Prong 2.
The claims also fail Step 2B of the analysis because the additional limitations do not amount to significantly more than the abstract idea itself. They merely elaborate on the core concept without adding inventive or technical substance, and so do not move the claims beyond the abstract idea. The claims are unpatentable.
Claim 15 recites the further limitation of:
The method of claim 14, wherein training the machine learning model comprises using gradient boosting. (a recitation of using a known training technique stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
Claim 16 recites the further limitation of:
The method of claim 14, wherein each previous possible candidate of the plurality of previous possible candidates belongs to one of two imbalanced classes, (this is merely additional information for an aforementioned judicial exception which further describes the enrichment data in terms of the type of data and is considered insignificant extra-solution activity because it does not integrate the exception in Step 2A Prong 2.)
further comprising balancing the imbalanced classes. (a recitation of using a known class-balancing technique stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
Claim 17 recites the further limitation of:
The method of claim 14, wherein tuning the one or more parameters comprises applying k-fold cross validation to at least the portion of the data set. (a recitation of using a known validation technique stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
Claim 18 recites the further limitation of:
The method of claim 14, wherein the evaluation metric comprises an area under a receiver operating characteristic curve metric. (recites mathematical concepts that comprise mathematical algorithms, formulas, or calculations.)
Claim 19 recites the further limitation of:
The method of claim 18, wherein the input contact is a member of an organization, (this is merely additional information for an aforementioned judicial exception which further describes the enrichment data in terms of the type of data and is considered insignificant extra-solution activity because it does not integrate the exception in Step 2A Prong 2)
further comprising collecting a replacement data set regarding a plurality of previous possible candidates that are members of the organization, and replacing at least a portion of the data set with the replacement data. (a mental process of evaluation which can reasonably be performed in one’s mind or with aid of pen and paper.)
Claim 20 recites the further limitation of:
The method of claim 14, wherein training the machine learning model comprises using a logistic loss function. (a recitation of using a known loss function stated at a high level of generality is being considered merely as instructions to apply an exception under MPEP 2106.05(f))
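For context, the k-fold cross-validation of claim 17 and the logistic loss of claim 20 can be sketched in plain Python. The fold count, labels, and predicted probabilities below are hypothetical illustrations, not material from the record:

```python
# Illustrative sketch only: interleaved k-fold splitting and per-fold logistic loss.
import math

def k_fold_indices(n, k):
    """Yield (train, validation) index lists for k interleaved folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, val in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

def logistic_loss(label, prob):
    """Logistic (log) loss for a binary label and a predicted probability."""
    return -(label * math.log(prob) + (1 - label) * math.log(1.0 - prob))

# Hypothetical labels (1 = previous candidate took the action of interest) and
# model-predicted probabilities; a real tuning loop would retrain the model on
# each training split before scoring its held-out fold.
labels = [1, 0, 1, 0, 1, 0]
preds = [0.8, 0.3, 0.6, 0.4, 0.9, 0.1]
fold_losses = []
for train, val in k_fold_indices(len(labels), k=3):
    fold_losses.append(sum(logistic_loss(labels[i], preds[i]) for i in val) / len(val))
mean_loss = sum(fold_losses) / len(fold_losses)
```

Parameter values would then be chosen to optimize the cross-validated score (here the mean loss; claim 18 would substitute the AUC metric).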
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 2, 3, 5, 6, 8, 9, 10, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Garg et al. (US 10803421 B1; patented Oct. 13, 2020), hereinafter referred to as Garg, in view of De Bruin (De Bruin, J. (2015). Probabilistic record linkage with the Fellegi and Sunter framework. Master of Science thesis, Delft University of Technology, p. 28.), hereinafter referred to as Bruin, and further in view of Schwarm et al. (US 20180101771 A1; published Apr. 12, 2018), hereinafter referred to as Schwarm.
Figure 3 of Garg
Claim 1: Garg teaches the following limitations:
A computing system, comprising: a logic subsystem; and a storage subsystem comprising instructions executable by the logic subsystem (Garg, col. 18, line 17, “A non-transitory computer-readable medium comprising machine-executable code that, when executed by a computing system comprising one or more processors, enables the computing system to perform operations that automatically predict a match between a candidate and a job position for an enterprise”, describes a “computing system” with memory (storage subsystem) and processors running stored instructions to perform each step (logic subsystem).)
to obtain a list of possible candidates (Garg, claim 17, “identify a plurality of candidates for the job position, each of the plurality of candidates being associated with a corresponding enriched talent profile”, a plurality of possible candidates are obtained/identified.)
and enrichment data for each possible candidate on the list of possible candidates; (Garg, col. 3, line 26, “creating an enriched talent profile for the candidate includes the profile data, the supplemental candidate data, the related-entity data, the similarity data, the derived personality and talent insights for the candidate, and the predicted next role”, Garg’s chart in Figure 3 (shown above) further shows retrieving, for each candidate, multiple types of enrichment data.)
Bruin, in the same field of likelihood estimation, teaches the following limitation, which the above prior art fails to teach:
for each possible candidate on the list of possible candidates, determine a confidence regarding an identity of the possible candidate based at least on the enrichment data, (Bruin, page 47, paragraph 1, “In some situations, the probabilities P(M = 1|Y = y) and P(M = 0|Y = y) are interesting for interpretation. It gives the probability of a true link or true non-link given the (known) comparison vector instead of conditioning on the latent variable M. In the Fellegi and Sunter model, the parameters of interest are the m, u and fi probabilities. Both probabilities P(M = 1|Y = y) and P(M = 0|Y = y) are expressible in these terms by using Bayes’ theorem… The result is:
P(M = 1 | Y = y) = P(Y = y | M = 1) P(M = 1) / [ P(Y = y | M = 1) P(M = 1) + P(Y = y | M = 0) P(M = 0) ]
”, this formula computes the posterior probability that two records refer to the same entity given the comparison vector Y = y, i.e., the enrichment data for a candidate. It is interpreted to meet the claim’s functional requirement to “determine a confidence regarding an identity… based at least on the enrichment data”, where P(M = 1|Y = y) is the confidence and Y = y is the enrichment data. The components P(Y = y|M = 1) and P(Y = y|M = 0) are the likelihoods of observing that enrichment pattern under a match and a non-match, respectively, and P(M = 1) is the prior probability of a true match.)
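For illustration only (not part of the record), the quoted Fellegi-Sunter computation can be sketched in Python under the usual assumption of conditionally independent comparison fields; the m-probabilities, u-probabilities, and prior below are hypothetical values, not figures from Bruin:

```python
# Illustrative sketch only: posterior "identity confidence" P(M=1 | Y=y)
# via Bayes' theorem, with hypothetical m- and u-probabilities.

def identity_confidence(y, m, u, p_match):
    """Posterior probability that a candidate record pair is a true match.

    y       -- comparison vector (1 = field agrees, 0 = field disagrees)
    m       -- per-field P(agreement | match)
    u       -- per-field P(agreement | non-match)
    p_match -- prior probability P(M = 1)
    """
    like_m = 1.0
    like_u = 1.0
    for yi, mi, ui in zip(y, m, u):
        like_m *= mi if yi else (1.0 - mi)  # P(Y = y | M = 1), fields independent
        like_u *= ui if yi else (1.0 - ui)  # P(Y = y | M = 0)
    num = like_m * p_match
    return num / (num + like_u * (1.0 - p_match))

conf = identity_confidence(y=[1, 1, 0], m=[0.95, 0.9, 0.8],
                           u=[0.3, 0.05, 0.4], p_match=0.1)
```

An agreement on a rare attribute (low u-probability) raises the confidence more than an agreement on a common one, consistent with the weighting rationale quoted from Bruin in the motivation statement below.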
when the confidence regarding the identity of the possible candidate satisfies a threshold condition, (Bruin, page 37, paragraph 4, “To divide the set of comparison vectors based on the likelihood ratio, two threshold values are needed. The subsets correspond with the positive link action, the positive non-link action and the possible link action. The threshold levels
[media_image3.png: the two threshold levels on the likelihood ratio]
The choice of error levels as proposed in (3-18) leads to linkage rule
[media_image4.png: linkage rule assigning the positive link, possible link, or positive non-link action by comparing the likelihood ratio m(yn)/u(yn) against the two thresholds, the upper being Tµ]
”, here, m(yn)/u(yn) is the likelihood ratio, which under the Bayes formula shown in the previous limitation corresponds directly to the posterior confidence P(M = 1|Y = y). The rule thus accepts as links only pairs for which the likelihood ratio m(yn)/u(yn) meets or exceeds the chosen threshold Tµ.)
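The two-threshold decision rule quoted above can be sketched as follows; the m/u values and thresholds are hypothetical, chosen only to show the three actions (link, possible link, non-link).

```python
def linkage_decision(m_y, u_y, t_upper, t_lower):
    """Fellegi-Sunter-style linkage rule: classify a record pair by its
    likelihood ratio m(y)/u(y) against two thresholds (the upper threshold
    corresponds to the Tµ in the quoted passage)."""
    ratio = m_y / u_y
    if ratio >= t_upper:
        return "link"          # positive link action
    if ratio <= t_lower:
        return "non-link"      # positive non-link action
    return "possible link"     # held for clerical review

# Hypothetical m/u probabilities and thresholds:
print(linkage_decision(m_y=0.8, u_y=0.01, t_upper=20.0, t_lower=0.5))  # link
```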
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Garg by incorporating the teachings of Bruin to include methods of calculating an identity confidence for candidates and thresholding candidates based on this confidence. A motivation for the combination would have been the automatic weighting of features based on the information they provide, putting more emphasis on rare-attribute matches and less on common ones. (Bruin, page 24, last paragraph, “The probabilistic model is well-known due to its ability to make use of the distinguishing power of identifying information [Yancey, 2002]. Two records with the same common surname should have a lower weight than the same records with an rare surname. The likelihood that two records share a rare surname is lower than the likelihood that two records share a common surname.”)
Schwarm, in the same field of machine learning candidate selection, teaches the following limitations, which the above prior art fails to teach:
determine, by inputting information regarding the identity of the possible candidate and the enrichment data for the possible candidate into a trained machine learning model, a probability that the possible candidate will take an action of interest, and (Schwarm, paragraph 128, “Once the new customer database is enriched with firmographic data as well as the employee data , at operation 511 the prediction engine is configured to employ the trained classifier to calculate a probability score for company employees and / or employee positions (e.g., management code , title ) likely to respond to a message and by what channels”, the identity (employee data) and the enrichment data (firmographic data) are inputted into a trained machine learning model to compute a probability that the candidate will take an action of interest (in this case a response to a message).)
when the probability meets a threshold probability, then add the possible candidate to a list of candidates and output the list of candidates. (Schwarm, paragraph 92, “At operation 606 the system then outputs the prioritized target list to the client user . to an output configured to output a list of classified , prioritized prospects . For example , the client user can receive an ordered set of 100 prospects based on the scores calculated .”, prioritized targets (the list of candidates) are selected based on the calculated probability scores. It is interpreted that a certain score level classifies prioritized targets, serving as the ‘threshold probability’ for adding possible candidates to the list of candidates. The final list is then output to the users/clients.)
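The thresholding-and-output step mapped above can be sketched as follows; the candidate names, scores, and threshold value are hypothetical.

```python
def select_candidates(scored_candidates, threshold):
    """Keep candidates whose predicted probability meets the threshold and
    return them as a prioritized (descending-score) list, mirroring the
    'prioritized target list' in the cited passage."""
    kept = [(name, p) for name, p in scored_candidates if p >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)

scored = [("candidate A", 0.92), ("candidate B", 0.40), ("candidate C", 0.75)]
print(select_candidates(scored, threshold=0.5))
# [('candidate A', 0.92), ('candidate C', 0.75)]
```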
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Garg by incorporating the teachings of Schwarm to include methods of calculating probability scores for prioritized candidate selection. A motivation for the combination would have been to quantify each prospect’s likelihood of response in order to dynamically rank and select the most promising targets. (Schwarm, paragraph 119, “Once the new customer database is enriched with firmographic data , at operation 406 the prediction engine is configured to employ the trained classifier to calculate a probability score for companies from the database of company names 405 likely to purchase a given product or a given service .”)
Claim 2: Garg, Bruin, and Schwarm teach the limitations of claim 1. Garg further teaches:
The computing system of claim 1, wherein the enrichment data comprises one or more of demographic information, financial information, or lifestyle preferences information. (Garg, col. 6, line 23, “In certain embodiments , profile data also includes the candidate's hobbies and interests.”, it is interpreted that the enrichment data of Garg comprises at least one of a candidate’s hobbies and interests (lifestyle preferences).)
Claim 3: Garg, Bruin, and Schwarm teach the limitations of claim 1. Schwarm further teaches:
The computing system of claim 1, wherein the machine learning model comprises a gradient boosting classifier comprising an ensemble of decision trees. (Schwarm, paragraph 7, “In at least one of the various embodiments , the classifier engine can comprise a machine learning classifier model builder selected from the group of : a decision tree , a random forest modeler , a cluster modeler , a K Means Cluster modeler , a neural net , a gradient boosted trees machine modeler , and a support vector machine ( SVM ) .”, it is interpreted that Schwarm’s gradient boosted trees machine modeler constitutes a gradient boosting classifier comprising an ensemble of decision trees.)
The rationale for the combination of Garg with Schwarm is similar to that applied in claim 1 above.
Claim 5: Garg, Bruin, and Schwarm teach the limitations of claim 1. Schwarm further teaches:
The computing system of claim 1, wherein the machine learning model is trained based at least on a data set regarding a plurality of previous possible candidates, the data set including, for each previous possible candidate of the plurality of previous possible candidates, enrichment data regarding the previous possible candidate, and a label comprising an indication of whether the previous possible candidate took the action of interest. (Schwarm, paragraph 101, “In at least one embodiment , at operation 305 the linked company win / loss database 301 data 310 , 312 , 314 and company entity data 316 , 318 from company entity database 304 are compiled into a training database 306 . In an embodiment , the database includes , for each company , one or more win / loss values 310 , firmographic and score data 316 , and company data 318 .”, it is interpreted that the data set of Schwarm comprises, for each previous candidate, enrichment data such as firmographic or company entity data, and a win/loss label indicating whether the candidate took the action of interest (win or loss).)
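The labeled training-set structure described in this limitation can be sketched as follows; the field names and values are hypothetical illustrations, not taken from Schwarm.

```python
# Each training record pairs a previous possible candidate's enrichment
# data with a label indicating whether that candidate took the action of
# interest (cf. Schwarm's win/loss values).
training_records = [
    ({"employees": 200, "prior_responses": 3}, 1),  # took the action
    ({"employees": 50,  "prior_responses": 0}, 0),  # did not
]

# Split into a feature matrix X and label vector y for model training.
X = [list(features.values()) for features, label in training_records]
y = [label for features, label in training_records]
print(X, y)  # [[200, 3], [50, 0]] [1, 0]
```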
The rationale for the combination of Garg with Schwarm is similar to that applied in claim 1 above.
Claim 6: Garg, Bruin, and Schwarm teach the limitations of claim 1. Schwarm further teaches:
The computing system of claim 5, wherein the plurality of previous possible candidates are members of a first organization, and wherein the possible candidate is a member of a second organization different from the first organization. (Schwarm, paragraph 109, “The classifier thus identifies and can generate list of companies firmographically similar to those sold or marketed to the past and grouped by profile characteristics trained on the client user ' s own data . Such lists can be generated in seconds using the trained classifier and can include from thousands to tens of thousands of companies as well as associated people and their contact information , per profile , all being Al qualified prospects .”, the model identifies targets that are not limited to a single company/organization.)
The rationale for the combination of Garg with Schwarm is similar to that applied in claim 1 above.
Claim 8: Garg, Bruin, and Schwarm teach the limitations of claim 1. Garg further teaches:
The computing system of claim 1, wherein the possible candidate comprises one of an individual or an organization. (Garg, col. 1, line 16 “This invention relates generally to machine learning in human resource applications, and more specifically to using machine learning and enriched candidate and job profiles to predict the candidates most likely to be hired and successful in a job.”, the candidates in Garg are individuals.)
Claim 9: Garg teaches the following limitations:
A method, comprising obtaining a list of possible candidates; (Garg, claim 17, “identify a plurality of candidates for the job position , each of the plurality of candidates being associated with a corresponding enriched talent profile”, a plurality of possible candidates are obtained/identified.)
for each possible candidate on the list of possible candidates, obtaining enrichment data for the possible candidate, (Garg, col. 3, line 26, “creating an enriched talent profile for the candidate includes the profile data , the supplemental candidate data , the related - entity data , the similarity data , the derived personality and talent insights for the candidate , and the predicted next role”, Garg’s chart in Figure 3 (shown above claim 1) further shows retrieving, for each candidate, multiple enrichment-data types.)
Bruin, in the same field of likelihood estimation, teaches the following limitation, which the above prior art fails to teach:
determining a confidence regarding an identity of the possible candidate based at least on the enrichment data, (Bruin, page 47, paragraph 1, “In some situations, the probabilities P(M = 1|Y = y) and P(M = 0|Y = y) are interesting for interpretation. It gives the probability of a true link or true non-link given the (known) comparison vector instead of conditioning on the latent variable M. In the Fellegi and Sunter model, the parameters of interest are the m, u and fi probabilities. Both probabilities P(M = 1|Y = y) and P(M = 0|Y = y) are expressible in these terms by using Bayes’ theorem… The result is:
[media_image2.png: P(M = 1|Y = y) = P(Y = y|M = 1)P(M = 1) / (P(Y = y|M = 1)P(M = 1) + P(Y = y|M = 0)P(M = 0))]
”, this formula computes the posterior probability that two records refer to the same entity given the comparison vector Y = y, i.e., the enrichment data for a candidate. It is interpreted that this meets the claim’s functional requirement to “determine a confidence regarding an identity… based at least on the enrichment data”, where P(M = 1|Y = y) is the confidence and Y = y is the enrichment data. In the formula, the numerator P(Y = y|M = 1)P(M = 1) is the likelihood of observing that enrichment pattern under a true match weighted by the prior probability of a match, and the denominator P(Y = y|M = 1)P(M = 1) + P(Y = y|M = 0)P(M = 0) is the total probability of observing that pattern under both the match and non-match hypotheses.)
when the confidence regarding the identity of the possible candidate satisfies a threshold condition, (Bruin, page 37, paragraph 4, “To divide the set of comparison vectors based on the likelihood ratio, two threshold values are needed. The subsets correspond with the positive link action, the positive non-link action and the possible link action. The threshold levels
[media_image3.png: the two threshold levels on the likelihood ratio]
The choice of error levels as proposed in (3-18) leads to linkage rule
[media_image4.png: linkage rule assigning the positive link, possible link, or positive non-link action by comparing the likelihood ratio m(yn)/u(yn) against the two thresholds, the upper being Tµ]
”, here, m(yn)/u(yn) is the likelihood ratio, which under the Bayes formula shown in the previous limitation corresponds directly to the posterior confidence P(M = 1|Y = y). The rule thus accepts as links only pairs for which the likelihood ratio m(yn)/u(yn) meets or exceeds the chosen threshold Tµ.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Garg by incorporating the teachings of Bruin to include methods of calculating an identity confidence for candidates and thresholding candidates based on this confidence. A motivation for the combination would have been the automatic weighting of features based on the information they provide, putting more emphasis on rare-attribute matches and less on common ones. (Bruin, page 24, last paragraph, “The probabilistic model is well-known due to its ability to make use of the distinguishing power of identifying information [Yancey, 2002]. Two records with the same common surname should have a lower weight than the same records with an rare surname. The likelihood that two records share a rare surname is lower than the likelihood that two records share a common surname.”). Garg further teaches:
for a plurality of possible contact/possible candidate pairs, (Garg, col. 12, line 5, “For each job candidate pair , data from the calibrated job profile and the candidates enriched talent profile are inputted into the DNN 750”)
inputting information regarding the identity of the possible candidate, the enrichment data for the possible candidate, (Garg, col. 9, line 7, “2.7 Enriched Talent Profile: The system creates an enriched talent profile for the candidate that includes the profile data ( 335 ) ; the similarity data 355 ; the supplemental candidate data ( 340 ) ; the related entity data ( 345 ) , the system - derived insights ( 350 ).”, the identity of the possible candidate (profile data) and the enrichment data of the possible candidate (supplemental data, similarity data, etc.) are encapsulated into an enriched talent profile and used as input, as shown further below.)
an identity of the possible contact, and enrichment data for the possible contact, (Garg, col. 5, line 36, “(Creating a Calibrated Job Profile ): The system obtains a job description and job requirements ( 120 ) for an open job position … The system creates a calibrated job profile with the job description and requirements , the ideal candidates and their corresponding experiences , skills , and traits , and the required / preferred personality and talent traits ( 270 ) .”, the identity of the possible contact is encapsulated in the “Calibrated Job Profile” created for each job: the job requirements (120) provide identity information for the contact, while the other encapsulated fields, such as the job description and ideal candidates, provide its enrichment data.)
into a trained machine learning model to obtain a probability that the possible candidate will take an action of interest when contacted by the possible contact (Garg, col. 12, line 8, “The DNN 750 outputs one or more hiring – related predictions ( 980 ) for each pair , resulting in a match score for each of the inputted candidates at Organization X. Examples of hiring - related predictions are the probability of the candidate being hired , and the probability of the candidate being successful in a job after a period of time ( e.g. , 1 year ). ”, the candidate identity and enrichment data (enriched talent profiles) and contact identity and enrichment data (job profiles) are input into a machine learning model (DNN) to compute a probability that a candidate will take an action of interest (in this case, the probability of hiring or retention).)
Schwarm, in the same field of machine learning candidate selection, teaches the following limitations, which the above prior art fails to teach:
and when the probability meets a threshold probability, then adding the possible candidate to a list of candidates for the possible contact for whom the probability is maximized and outputting the list of candidates for the possible contact. (Schwarm, paragraph 97, “At operation 606 the system then outputs the prioritized target list to the client user . to an output configured to output a list of classified , prioritized prospects . For example , the client user can receive an ordered set of 100 prospects based on the scores calculated .”, prioritized targets (the list of candidates) are selected based on the calculated probability scores. It is interpreted that a certain score level classifies prioritized targets, serving as the ‘threshold probability’ for adding possible candidates to the list of candidates. The final list is then output to the users/clients.
Paragraph 94, “The client user can then select targets companies and / or employees identified with the predictive score as the highest likelihood to be in “ win ” classes .”, the selection of target companies or employees is explicitly made by choosing those with the highest predicted likelihood (i.e., when the probability is maximized).)
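The per-contact, probability-maximizing selection recited in this limitation can be sketched as follows; the contact/candidate identifiers, scores, and threshold are hypothetical.

```python
def candidates_for_contact(pair_scores, contact, threshold):
    """From (contact, candidate) -> probability scores, keep the candidates
    for the given contact whose probability meets the threshold, ordered so
    the probability-maximizing candidate comes first."""
    matches = [(cand, p) for (c, cand), p in pair_scores.items()
               if c == contact and p >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

pair_scores = {
    ("contact 1", "candidate A"): 0.90,
    ("contact 1", "candidate B"): 0.30,
    ("contact 2", "candidate A"): 0.60,
}
print(candidates_for_contact(pair_scores, "contact 1", threshold=0.5))
# [('candidate A', 0.9)]
```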
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Garg by incorporating the teachings of Schwarm to include methods of calculating probability scores for prioritized candidate selection. A motivation for the combination would have been to quantify each prospect’s likelihood of response in order to dynamically rank and select the most promising targets. (Schwarm, paragraph 119, “Once the new customer database is enriched with firmographic data , at operation 406 the prediction engine is configured to employ the trained classifier to calculate a probability score for companies from the database of company names 405 likely to purchase a given product or a given service .”)
Claim 10 is substantially similar to claim 3; as such, a similar analysis applies.
Claim 12: Garg, Bruin, and Schwarm teach the limitations of claim 9. Garg further teaches:
The method of claim 9, further comprising outputting the list of candidates to a device associated with the possible contact and not to a device associated with another possible contact in the list of possible contacts. (Garg, col. 9, line 17, “The system initiates the process of creating a calibrated job profile in response to receiving a job description and job requirements for an open job position ( step 5