Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to papers filed on 1/27/2026.
Claims 1, 10, and 19 have been amended.
Claims 3, 4, 12, and 13 have been cancelled.
No claims have been added.
Claims 1, 2, 5-11, and 14-21 are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/27/2026 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 5-11, and 14-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
The claims are directed to a process (method as introduced in Claim 10), and/or system (Claim 1), and/or non-transitory computer-readable storage medium with executable instructions (Claim 19), thus Claims 1, 2, 5-11, and 14-21 fall within one of the four statutory categories. See MPEP 2106.03.
Step 2A, Prong 1:
The claimed invention recites an abstract idea according to MPEP § 2106.04. The limitations of the independent claims that recite the abstract idea are reproduced below.
Claims 1, 10, and 19 recite (as represented by the language of Claim 1):
store, in the memory, a plurality of questions to ask each applicant currently living in poverty from a large pool of applicants currently living in poverty, wherein the large pool of applicants exceeds forty applicants, wherein poverty is based on the International Poverty Line as defined by the World Bank;
store, in the memory, a plurality of historical applicant data, wherein the plurality of historical applicant data includes information about one or more previous programs for applicants to exit poverty based on the International Poverty Line as defined by the World Bank where each program of the plurality of programs includes a large pool of applicants and participants that have applied for and participated in the one or more previous programs;
train a model with machine learning and using the plurality of historical applicant data as training data, wherein training the model comprises determining correlations between applicant characteristics and actual success outcomes in the one or more previous programs to exit poverty based on the International Poverty Line as defined by the World Bank, and storing, in the memory, a plurality of predictive weights corresponding to those correlations, and wherein the model is trained to determine an applicant's chances of success in the program to exit poverty based on the International Poverty Line as defined by the World Bank and to determine information to improve the applicants' chances of success in the program to exit poverty based on the International Poverty Line as defined by the World Bank;
for each applicant of the large pool of applicants, the instructions cause the processor to:
instruct a display device to display a graphical user interface including the plurality of questions to a user;
receive from the user via an input device, a plurality of responses to the one or more displayed questions, wherein the plurality of responses includes responses associated with an income score, an expenses score, and an intangibles score;
compile an income score based on current income of the applicant in relation to a current income for each of the applicants, an expenses score based on current expenses of the applicant, and an intangibles score based on intangible qualities of the applicant based on the applicant's answers to the plurality of questions;
calculate a likelihood of the applicant successfully participating in the program to exit poverty based on the International Poverty Line as defined by the World Bank based on the income score, the expenses score, and the intangibles score;
calculate the applicant's overall score based on the calculated likelihood of the applicant successfully participating in the program to exit poverty based on the International Poverty Line as defined by the World Bank; and dynamically determine the applicant's ranking in comparison to the large pool of applicants, based on each applicant's overall score; and
output, to the display device, the rankings of the applicants;
receive information about one or more applicants from the large pool of applicants who graduated from the program to exit poverty based on the International Poverty Line as defined by the World Bank, the information comprising outcome data indicating whether the one or more applicants exited poverty based on the International Poverty Line as defined by the World Bank;
calculate an updated plurality of weights for the model based upon the information about one or more applicants from the large pool of applicants who graduated from the program to exit poverty based on the International Poverty Line as defined by the World Bank by comparing the outcome data with the calculated likelihood and the plurality of predictive weights; and
retrain the model in a closed-loop manner based on the updated plurality of weights and the outcome data from the one or more applicants who graduated from the program to exit poverty based on the International Poverty Line as defined by the World Bank, such that one or more of an accuracy and an efficiency of future likelihood calculations is improved by incorporating measured performance of prior participants.
The claim limitations identified above, as drafted, recite a process that, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) in the form of determining the best candidates for a program. Other than reciting a computer implementation, nothing in the claim elements precludes the steps from encompassing the management of personal behavior or relationships or interactions between people, which represents the abstract idea of certain methods of organizing human activity. But for the recitation of generic computer system components, the claimed invention merely recites a process for making predictions about a user's potential success in a program based on historical data comparing user characteristics with successful completions.
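Stripped of the generic computer components, the recited process is a generic weighted-scoring and ranking procedure. The following sketch is provided for illustration only; all names, the linear weighting, and the numeric values are hypothetical and are not drawn from Applicant's disclosure:

```python
# Illustration only: a generic weighted-scoring procedure consistent with the
# claim language. The weight values, scores, and linear model are hypothetical.

def train_model(historical_data):
    # Stand-in for "determining correlations ... and storing predictive
    # weights"; a real system would fit these from the historical data.
    return {"income": 0.5, "expenses": -0.3, "intangibles": 0.8}

def likelihood_of_success(weights, scores):
    # "calculate a likelihood ... based on the income score, the expenses
    # score, and the intangibles score" as a weighted combination.
    return sum(weights[k] * scores[k] for k in weights)

def rank_applicants(weights, applicants):
    # The overall score is based on the calculated likelihood; the pool is
    # then ranked by overall score.
    overall = {name: likelihood_of_success(weights, s)
               for name, s in applicants.items()}
    return sorted(overall, key=overall.get, reverse=True)

applicants = {
    "A": {"income": 0.2, "expenses": 0.9, "intangibles": 0.7},
    "B": {"income": 0.6, "expenses": 0.4, "intangibles": 0.9},
}
weights = train_model(historical_data=[])
print(rank_applicants(weights, applicants))  # → ['B', 'A']
```

Any generic weighted-scoring implementation produces the same score-and-rank behavior; the claim language requires nothing more specific.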
Step 2A, Prong 2:
This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements such as:
A computer device…the computer device including a processor in communication with a memory, the memory including computer-executable instructions that, when executed by the processor, cause the processor to;
A non-transitory computer-readable storage medium having computer-executable instructions embodied thereon, wherein when executed by a computing device having at least one processor coupled to a memory device, the computer-executable instructions cause the at least one processor to;
[storing questions, historical applicant data, weights, etc.] in the memory;
train a model with machine learning and using the plurality of historical applicant data as training data;
a display device to display a graphical user interface [to display questions and output results];
[receive user input] via an input device;
retrain the model in a closed-loop manner [based on the new information and the calculated/updated weights].
In particular, the additional elements cited above, beyond the abstract idea, are recited at a high level of generality and amount to no more than a generic recitation of basic computer functionality, i.e., mere instructions to apply the judicial exception using generic computer components.
Accordingly, since the specification describes the additional elements in general terms, without describing their particulars, the additional elements may be broadly but reasonably construed as generic computing components used to perform the judicial exception (see specification at [0036]). These additional elements merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement the abstract idea on a computer, or merely use a computer as a tool to perform the abstract idea, as discussed in MPEP 2106.05(f).
Thus, the additional claim elements are not indicative of integration into a practical application, because the claims do not involve improvements to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)), the claims do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)), the claims do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e)). Therefore, the claims do not, for example, purport to improve the functioning of a computer. Nor do they effect an improvement in any other technology or technical field. Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea and the claims are directed to an abstract idea.
It is also noted that the recitation of “such that one or more of an accuracy and an efficiency of future likelihood calculations is improved by incorporating measured performance of prior participants” does not demonstrate that such an improvement is achieved, nor does it demonstrate how or why retraining the model in a closed-loop manner based on the updated data would provide this improvement in a meaningful manner. The training and retraining of the model (including closed-loop retraining) are recited at a high level of generality, and merely reciting that an improvement (or any specific improvement) is achieved does not change the nature of the model within the claim limitations.
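At the level of generality recited, the closed-loop retraining amounts to a generic feedback update of stored weights. The following is a minimal sketch under a hypothetical error-proportional update rule; nothing in it is drawn from the claims or the specification:

```python
# Minimal closed-loop update sketch (hypothetical error-proportional rule):
# outcome data from graduated participants is compared against the prior
# prediction, and every stored weight is nudged by the resulting error.

def update_weights(weights, predicted, actual, lr=0.1):
    error = actual - predicted  # outcome vs. previously calculated likelihood
    return {k: w + lr * error for k, w in weights.items()}

weights = {"income": 0.5, "expenses": -0.3, "intangibles": 0.8}
# One feedback cycle: predicted likelihood 0.9, observed outcome 1.0 (success).
weights = update_weights(weights, predicted=0.9, actual=1.0)
```

Any number of conventional update rules satisfy the claim language equally well, which is consistent with the observation that the claims do not specify how the asserted accuracy or efficiency improvement is achieved.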
Step 2B:
The claims do not include additional elements, individually or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept at Step 2B. Thus, the claims are not patent eligible.
Dependent Claims:
Claims 2, 5-9, 11, 14-18, 20, and 21 recite further elements related to the analysis and prediction steps of the parent claims. These activities fail to differentiate the claims from the related activities in the parent claims and fail to provide any material rendering the claimed invention significantly more than the identified abstract ideas, as outlined below.
Claims 2 and 11 recite “wherein the income score is based on the current income of the applicant and the applicant's family members”, which further specifies additional types of data to be analyzed, but does not lead toward eligibility.
Claims 5 and 14 recite “wherein the expenses score is based on the current expenses of the applicant including a number of dependents that the applicant has”, which further specifies additional types of data to be analyzed, but does not lead toward eligibility.
Claims 6 and 15 recite “wherein the intangibles score is based on the intangible qualities of the applicant including at least one of entrepreneurial spirit, weaving ability, and behavior”, which further specifies additional types of data to be analyzed, but does not lead toward eligibility.
Claims 7 and 16 recite “modify the applicant's overall score based on the results of a visit to the applicant's home”, which further specifies additional types of data to be analyzed, but does not lead toward eligibility.
Claims 8 and 17 recite “receive and store an answer from the applicant; compare the answer to a plurality of historical answers from a plurality of historical participants in the model; and determine a weight for the answer based on the comparison”, which further specifies additional types of data to be analyzed and additional steps in the analysis, but does not lead toward eligibility.
Claims 9 and 18 recite “wherein the model includes a plurality of weights, and… update at least one weight of the plurality of weights in the model based on the received information about the one or more applicants who graduated from the program to exit poverty based on the International Poverty Line as defined by the World Bank”, which further specifies additional types of data to be analyzed and additional steps in the analysis, but does not lead toward eligibility.
Claim 20 recites “wherein the income score is based on the current income of the applicant and the applicant's family members, wherein the expenses score is based on the current expenses of the applicant including a number of dependents that the applicant has, wherein the intangibles score is based on the intangible qualities of the applicant including at least one of entrepreneurial spirit, weaving ability, and behavior.”, which further specifies additional types of data to be analyzed, but does not lead toward eligibility.
Claim 21 recites “wherein the updated plurality of weights are based upon the country of the current applicants and the performance of one or more participants in one or more nearby countries.”, which further specifies additional types of data to be analyzed, but does not lead toward eligibility.
The claims do not provide any new additional limitations or meaningful limits beyond the abstract idea that are not addressed above with respect to the independent claims; therefore, they do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea. Thus, after considering all claim elements, both individually and as a whole, it has been determined that the claims do not integrate the judicial exception into a practical application or provide an inventive concept. Therefore, Claims 2, 5-9, 11, 14-18, 20, and 21 are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5, 6, 8-11, 14, 15, and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over Powell et al. (Pub. No. US 2006/0265258 A1) in view of Ma (Patent No. US 7,818,254 B1), in further view of Ravallion (World Bank’s $1.25/day poverty measure - countering the latest criticisms, 2010).
Regarding Claims 1, 10, and 19, Powell discloses:
A device/method/system for selecting participants from an applicant pool to participate in a program to exit poverty, the computer device including a processor in communication with a memory, the memory including computer- executable instructions that, when executed by the processor, cause the processor to: ([0171]; [0172])
store, in the memory, a plurality of questions to ask each applicant from a large pool of applicants ([0014]; [0015]; [0032]; [0129], shows historical data regarding previous applicants (and whether or not those applicants succeeded) being used to determine profiles for each institution, those admission profiles used to determine what data collected from users will predict success, noting that the selectable answers (values) for given questions are tailored to the admission profiles of institutions), wherein a large pool of applicants exceeds forty applicants ([0026]; [0053], shows that enrollment pool can be large, enrollment would indicate previously accepted students that would be part of the historical database (it is noted that the database would also include graduated and unretained students, making the pool even larger))
store, in the memory, a plurality of historical applicant data, wherein the plurality of historical applicant data includes information about one or more previous programs for applicants where each program of the plurality of programs includes a large pool of previous applicants and participants that have applied for and participated in the one or more previous programs;
train a model with machine learning and using the plurality of historical applicant data as training data, wherein training the model comprises determining correlations between applicant characteristics and actual success outcomes in the one or more previous programs, wherein the model is trained to determine an applicant's chances of success in the program and to determine information to improve the applicants' chances of success in the program; ([0014]; Claim 12, shows that the likelihood of success is determined for multiple programs/institutions (“a plurality of previous programs for applicants”), including comparing the users' answers to those multiple programs/institutions; [0026]; [0053], shows that the enrollment pool can be large, enrollment would indicate previously accepted students that would be part of the historical database (it is noted that the database would also include graduated and unretained students, making the pool even larger), this enrollment data and historical data would be representative of each school (program/institution); [0014]; [0015]; [0032]; [0129]; [0133], shows historical data regarding a plurality of previous applicants (and whether or not those applicants succeeded) being used to determine profiles for each institution, those admission profiles used to determine what data collected from users will predict success, noting that the selectable answers (values) for given questions are tailored to the admission profiles of institutions; [0070]; [0073], a predictive model is used (on the profiles and historical data) to predict likelihood of admission; [0018]; [0124]; [0129]; Claim 22, demonstrates that part of the determination of admittance includes predicting likelihood of the applicant finishing the program (graduating/exiting from the program); [0070]-[0072], [expanding on the above description/examples] the source data grows and changes over time, additionally, as the database evolves over time (including new profiles and new
graduation/retention data), the updated data can be applied to the model, the model can also evolve and can provide updated data such as new trends and recommendations, the updating of trends would be related to identification of changes in attributes that affect the likelihood of getting into a program, changes/evolutions in criteria and what criteria would be more important to being accepted would indicate a change in the weights of those criteria/answers (see also, [0056]; [0058]; [0061]; [0067]; for further material describing the evolution of historical data modeling analysis over time); as demonstrated above, the reference discloses models (algorithms, engines, modules, predictive models, etc.) that process the data based on trends, correlations, rules, etc. developed over time based on historical data (including admission, enrollment, and success), see also [0066]; [0067]; [0072]; [0075]; [0125]; [0133]; and throughout the reference, provides additional material demonstrating the use of models used to make the predictions based on criteria learned from historical data (including, but not limited to, previous users' application data and success outcomes); [0072], additionally specifies the use of “genetic algorithms” to enhance the programming as historical data evolves over time (“…historic user profiles develop over time as users select, apply to, and receive acceptances from educational institutions using the methods disclosed herein. This allows for a database that can evolve over time. Thus, as genetic algorithms enhance programming, the changes in the applicant pool over time and adjustments in the policies and politics of the schools are captured such that future users benefit from enhanced data analysis.”); [0073], “…the invention compares current student profiles to historic student profiles.
As a next step, correlating positive characteristics between profiles in order to predict likelihood of admission…this correlation process uses positively correlated characteristics to inform application, interview and admissions strategies…” (the profile [made up of characteristics/attributes of past applicants and whether or not they were successful] used for these strategies, including correlating characteristics and success), one of ordinary skill, before the effective filing date of the claimed invention, would recognize and understand that genetic algorithms are used to train models and other algorithms, although the reference does not specifically use terms such as “training”, Examiner asserts that the reference provides enough explanations and examples that one of ordinary skill in the art would understand that the models used are trained based on at least historical data to automatically perform the analysis and correlations (including being updated/retrained as data evolves over time, which further demonstrates the use of historical data to determine how the models function and process the data)), and storing, in the memory, a plurality of predictive weights corresponding to those correlations, ([0065]-[0067]; [0073], historical data and comparisons with user profiles indicate the identification of attributes/answers that are important to likelihood of acceptance; [0066], [as previously provided in the rejection of Claims 8 and 17] the comparisons to historical data determine which answers are more important to being accepted into a program (“…understand what constitutes a successful admissions decision based on student academic performance and retention, the system's filters will automatically identify student profiles that are most often successful.
Conversely, a student applicant benefits from the historical data associated with the type of applicant that a particular college accepts. As an example, from the college admission perspective discussed in more detail below, one filter can be based on the constraint that many state universities find that retention rates are inversely related to the student's distance from their home. For the student applicant that is seeking admission, a suitable filter may be based on historical data that indicates that if the student is from State A and applying to College B, that there is an increased likelihood that they will be admitted.”), although the term “weighted” is not specifically used, the material clearly indicates that certain answers would be considered more important (or higher weighted) than others and used for making the predictions (predictive weights), for example, states/locations closer to the college would be weighted higher because the college considers applicants who live closer to have a higher likelihood of retention, the predictive weights are part of the filters (which in turn are part of the trained model) and are therefore stored along with the filters/models; [0067], [as previously provided in the rejection of Claims 8 and 17] example 2, answers indicating higher GPAs would be weighted higher for admittance to XYZ)
Additionally, Powell discloses that the previous applicants and participants can be very large ([0070]-[0072], as described above, the source data grows and changes over time, additionally, as the database evolves over time (including new profiles and new graduation/retention data); [0004], “…as the process of applying for a position with a particular entity generates large volumes of information, methods for using that information are also of value…”, (see also, [0056]; [0058]; [0061]; [0067]; for further material describing the evolution of historical data modeling analysis over time)),
for each applicant of the large pool of applicants, the instructions cause the processor to:
instruct a display device to display a graphical user interface including the plurality of questions to a user; ([0014], a graphical user interface displays questions to a user)
receive from the user via an input device, a plurality of responses to the one or more displayed questions, ([0014], allows users to select answers; [0115], shows the application process/profile creation including interviews related to financial data)
compile [scores for the multiple criteria used for matching the applicant and institution profiles] ([0076]-[0080], shows a points system for scoring different criteria related to admissions, the points being compiled into an overall score (continued in [0081]-[0096]); [0066], shows scoring and/or points matching correlated to determining success for an applicant) based on the applicant's answers to the plurality of questions; ([0077], the score is based on points that are applied to criteria in a profile; [0014]; [0063], profiles and profile characteristics are obtained through the questions and answers)
calculate a likelihood of the applicant successfully participating in the program based on the [multiple criteria scores]; ([0076]-[0080], shows a points system for scoring different criteria related to admissions, the points being compiled into an overall score (continued in [0081]-[0096]); [0066], shows scoring and/or points matching correlated to determining success for an applicant)
calculate the applicant's overall score based on the calculated likelihood of the applicant successfully participating in the program; ([0076]-[0080], shows a points system for scoring different criteria related to admissions, the points being compiled into an overall score (continued in [0081]-[0096]); [0066], shows scoring and/or points matching correlated to determining success for an applicant) [Examiner Note: As described in Applicant’s disclosure (including a combination of the specification and the original claim language), the calculation of likelihood and the overall score are both based on the one or more calculated scores. No significant differentiation is made between the calculated overall score and the calculated likelihood. Therefore, they will be treated as equivalent for purposes of prior art application. For example, the results of the calculation of the likelihood of success would represent the overall score (as the overall score in the claims is based on the calculated likelihood and which in the specification is based on the same one or more scores as the calculated likelihood in the claims).]
dynamically determine the [profile] ranking in comparison to the large pool of applicants, based on each [profile’s] overall score; ([0020]; [0076], shows a ranked list of recommendations based on scoring activities (referred to as “tiered”) [0075]; [0076], rankings are tailored to users and can include preferences provided by a user (dynamically determined))
output, to the display device, the rankings of the [profiles]; ([0020]; [0076], shows a ranked list of recommendations based on scoring activities (referred to as “tiered”) provided to the user)
Examiner Note: Although the specific example provided in the reference is drawn to the recommendations of institutions provided to the applicant, the reference is not limited to this use or set of results. It is noted that, Powell also discloses the use of the system for recommending the best applicants to the institutions (see at least [0018]; [0019]). The processes, activities, and results used in the examples provided above for recommending institutions to applicants can also be used to recommend applicants to institutions, as disclosed in the reference.
receive information about one or more applicants from the large pool of applicants who graduated from the program, the information comprising outcome data indicating whether the one or more applicants [succeeded]; ([0014]; [0015]; [0032]; [0129], shows historical data regarding previous applicants (and whether or not those applicants succeeded) being used to determine profiles for each institution, those admission profiles used to determine what data collected from users will predict success, noting that the selectable answers (values) for given questions are tailored to the admission profiles of institutions; [0073], a model is used (on the profiles and historical data) to predict likelihood of admission, “…compares current student profiles to historic student profiles…”; [0072], shows historic user profiles developing over time (described in further detail, below); [0026]; [0053], shows that the enrollment pool can be large, enrollment would indicate previously accepted students that would be part of the historical database (it is noted that the database would also include graduated and unretained students, making the pool even larger))
calculate an updated plurality of weights for the model based upon the information about one or more applicants from the large pool of applicants who graduated from the program by comparing the outcome data with the calculated likelihood and the plurality of predictive weights ([0070]-[0072], as described above, the source data grows and changes over time, additionally, as the database evolves over time (including new profiles and new graduation/retention data), the updated data can be applied to the model, the model can also evolve and can provide updated data such as new trends and recommendations, the updating of trends would be related to identification of changes in attributes that affect the likelihood of getting into a program, changes/evolutions in criteria and what criteria would be more important to being accepted would indicate a change in the weights of those criteria/answers, this demonstrates that outcomes are compared to previously calculated likelihoods, weights, etc. to determine changes in the trends, correlated characteristics, etc. (see also, [0056]; [0058]; [0061]; [0067]; for further material describing the evolution of historical data modeling analysis over time))
retrain the model in a closed-loop manner based on the updated plurality of weights and the outcome data from the one or more applicants who graduated from the program ([0070]-[0072], [expanding on the above description] the source data grows and changes over time, additionally, as the database evolves over time (including new profiles and new graduation/retention data), the updated data can be applied to the model, the model can also evolve and can provide updated data such as new trends and recommendations, the updating of trends would be related to identification of changes in attributes that affect the likelihood of getting into a program, changes/evolutions in criteria and what criteria would be more important to being accepted would indicate a change in the weights of those criteria/answers, this describes a closed loop retraining method in which feedback and new data is collected over time and used to update the model and its parameters, additionally specifies the use of “genetic algorithms” to enhance the programming as historical data evolves over time (“…historic user profiles develop over time as users select, apply to, and receive acceptances from educational institutions using the methods disclosed herein. This allows for a database that can evolve over time. Thus, as genetic algorithms enhance programming, the changes in the applicant pool over time and adjustments in the policies and politics of the schools are captured such that future users benefit from enhanced data analysis.”); [0073], “…the invention compares current student profiles to historic student profiles. 
As a next step, correlating positive characteristics between profiles in order to predict likelihood of admission…this correlation process uses positively correlated characteristics to inform application, interview and admissions strategies…”), one of ordinary skill, before the effective filing date of the claimed invention, would recognize and understand that genetic algorithms are used to continuously train models and other algorithms, although the reference does not specifically use terms such as “training”, Examiner asserts that the reference provides enough explanations and examples that one of ordinary skill in the art would understand that the models used are trained and retrained based on at least historical data to automatically perform the analysis and correlations (updated/retrained as data evolves over time to enhance the data and results) (see also, [0056]; [0058]; [0061]; [0067]; for further material describing the evolution of historical data modeling analysis over time)), such that one or more of an accuracy and an efficiency of future likelihood calculations is improved by incorporating measured performance of prior participants (one of ordinary skill in the art would recognize that the examples provided in the above citations would improve the accuracy and efficiency of future likelihood calculations, because the model is iteratively updated to match changing trends, applicant data, student data, etc., using an outdated model that makes predictions based on old data would be recognized as not being accurate and/or efficient and the level of skill in the art demonstrates this understanding, although not applied at this time in view of the prior art reference, it is noted for future reference, that this material may also be considered non-functional material and/or intended usage of the claims)
Examiner Note: Applicant’s amendment (filed 8/26/2025) altered the language from “update the model based on the received information about the one or more applicants who graduated from the program and the updated weights” to “retrain the model based on the received information about the one or more applicants who graduated from the program and the updated weights”. No material was found in the disclosure as originally filed regarding retraining of the models. Additionally, there is no clear distinction made between “retraining” and “updating” (as the current and previous claim language implies that both would be performed in the same manner using the same data) and the specification does not provide a distinction (as there is no specific “retraining” material present). Therefore, Examiner asserts that one of ordinary skill in the art would interpret these terms as being equivalent. For further consideration of the merits, Examiner will interpret these terms as being equivalent in scope, based on the disclosure material. Applicant is advised that should Applicant provide evidence or arguments that these terms/limitations are not equivalent in scope, this may result in additional 35 USC § 112(a) and/or 35 USC § 112(b) rejections (a full determination will be made if and when required).
Powell discloses the use of financial data and non-financial (such as intangibles) data ([0060]; [0105]). Powell also discloses the use of scores for multiple criteria being used to determine an overall score, as described above. Powell does not explicitly disclose the scoring criteria being specifically income, expense, and intangibles, however, Ma teaches:
wherein the plurality of responses include responses associated with an income score, an expenses score, and an intangibles score (column 14, line 24-column 16, table, shows questions and answers regarding a user’s current income, expenses, and intangible data (such as entrepreneurial spirit or behavior related to behaviors such as partnerships or multiple properties [see claim 4, below]))
compiling, by the computing device, [data related to income, expenses, and intangibles for determining scores]; (column 14, line 24-column 16, table, shows questions and answers regarding a user’s income, expenses, and intangible data (such as entrepreneurial spirit or behavior related to behaviors such as partnerships or multiple properties [see claim 4, below]))
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Powell so as to have included wherein the plurality of responses include responses associated with an income score, an expenses score, and an intangibles score and compiling, by the computing device, [data related to income, expenses, and intangibles for determining scores], as taught by Ma in order to create a more robust evaluation by incorporating financial and non-financial factors (Powell, [0032]; [0105]).
In regards to the material discussing the purpose of the data collection and scoring (i.e., “applicants living in poverty”, “likelihood of the applicant successfully participating in the program to exit poverty”, etc.), it has been deemed non-functional material related to the intended use of the claimed invention and is therefore given no patentable weight. The claim limitation “to ask each applicant from a pool of applicants, wherein each applicant is currently living in poverty” simply provides a reason for “storing, in the memory, a plurality of questions” and does not significantly affect how the data is stored and/or how it is used in the remaining activities of the claim. For example, storing the questions with the purpose of presenting them to a pool of applicants who are in poverty (Applicant’s claims) or storing the questions with the purpose of presenting them to a pool of applicants who are seeking a job (prior art) would not be patentably distinct, because the type of applicant to which the questions are intended does not affect or alter the manner in which the questions are stored or used. Similarly, basing ratings on an applicant’s chance of success at the ‘exiting poverty’ program for which the questions are designed (Applicant’s claims) or basing ratings on an applicant’s chance of success at performing the job for which the questions are designed (prior art) would not be patentably distinct, because the type of successful activity which the questions and ratings are intended to measure does not affect or alter the manner in which the ratings are calculated or used. Using the claimed system for the purposes of measuring a person in poverty’s chance of successfully leaving poverty does not alter the functioning of the claim. 
The claimed invention’s steps of storing questions, storing ratings related to answers based on historical data, compiling and calculating an applicant’s scores, and ranking those scores would not be affected or altered based on the intended use of that method/system. The claims would be performed in the same manner and have the same result regardless of the type of applicant and/or desired outcome.
Powell/Ma does not explicitly disclose the definition of poverty being based on the World Bank’s definition of an International Poverty Line, however, Ravallion teaches poverty being measured based on an international poverty line as defined by The World Bank (paragraphs 12-14, shows the World Bank defining the limits for an international poverty line for measuring poverty in various countries [including, but not limited to, the specification’s measurement of $1.25/day])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Powell/Ma so as to have included poverty being measured based on an international poverty line as defined by The World Bank, as taught by Ravallion in order to ensure clarity, understanding, and fairness of how applicants are evaluated by using a standard, uniform, and widely used definition of poverty (Ravallion, discusses determining global poverty standards (such as an international poverty line) by aggregating poverty data from a global set of countries). (See KSR [127 S Ct. at 1739] "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.")
In regards to Claims 2 and 11, as described above, the combination of Powell, Ma, and Ravallion disclose the determination of scores that include an applicant’s current income. Additionally, Powell discloses applicant data including family income ([0060]).
In regards to Claims 5 and 14, as described above, the combination of Powell, Ma, and Ravallion disclose the determination of scores that include an applicant’s current expenses. Additionally, the cited material of Ma includes dependent information (“How many dependents do you have?”). This material is included in the combination of references, as applied to Claims 1, 10, and 19, above.
In regards to Claims 6 and 15, as described above, the combination of Powell, Ma, and Ravallion disclose the determination of scores that include an applicant’s intangibles including behavior and/or entrepreneurial spirit (entrepreneurial spirit or behavior related to behaviors such as partnerships or multiple properties). This material is included in the combination of references, as applied to Claims 1, 10, and 19, above.
In regards to Claims 8 and 17, Powell discloses:
wherein the computer-executable instructions further cause the processor to:
receive and store an answer from the applicant ([0014]; [0021], applicants are asked questions and/or interviewed and presented a set of answers for each question; [0056], questions are stored in a database)
compare the answer to a plurality of historical answers from a plurality of historical participants in the model; ([0014]; [0015]; [0032], historic user data is used to correlate user profiles to graduating applicant profiles (profiles showing the collected and analyzed data related to retention and graduation of previous applicants), as discussed in the parent claims, the profiles consist of user attributes and characteristics that were built through the answers (thus equating user profile data to the answers provided by the user to build the profile); [0065]-[0067]; [0073], discusses the comparison of user attributes to historical user attributes to determine what criteria/attributes are important to increasing the likelihood of a user being admitted to a program (examples include identifying what states would be more likely to be accepted at College B and what SAT scores would be more likely to be accepted at school XYZ))
and determine a weight for the answer based on the comparison; ([0066], the comparisons to historical data determine which answers are more important to being accepted into a program (“…filters will automatically identify student profiles that are most often successful. Conversely, a student applicant benefits from the historical data associated with the type of applicant that a particular college accepts. As an example, from the college admission perspective discussed in more detail below, one filter can be based on the constraint that that many state universities find that retention rates are inversely related to the student's distance from their home. For the student applicant that is seeking admission, a suitable filter may be based on historical data that indicates that if the student is from State A and applying to College B, that there is an increased likelihood that they will be admitted.”), although the term “weighted” is not specifically used, the material clearly indicates that certain answers would be considered more important (or higher weighted) than others, for example, states/locations closer to the college would be weighted higher because the college considers applicants who live closer to have a higher likelihood of retention; [0067], example 2, answers indicating higher GPAs would be weighted higher for admittance to XYZ)
In regards to Claims 9 and 18, Powell discloses:
wherein the model includes a plurality of weights ([0065]-[0067]; [0073], historical data and comparisons with user profiles indicate the identification of attributes/answers that are important to likelihood of acceptance (see detailed explanation as applied to Claims 8 and 17 regarding importance of criteria as related to weighting)), and wherein the computer-executable instructions further cause the processor to update at least one weight of the plurality of weights in the model based on the received information about the one or more applicants who graduated from the program ([0070]-[0072], as described above, the source data grows and changes over time, additionally, as the database evolves over time (including new profiles and new graduation/retention data), the updated data can be applied to the model, the model can also evolve and can provide updated data such as new trends and recommendations, the updating of trends would be related to identification of changes in attributes that affect the likelihood of getting into a program, changes/evolutions in criteria and what criteria would be more important to being accepted would indicate a change in the weights of those criteria/answers (see also, [0056]; [0058]; [0061]; [0067]; for further material describing the evolution of historical data modeling analysis over time))
Powell/Ma does not explicitly disclose the definition of poverty being based on the World Bank’s definition of an International Poverty Line, however, Ravallion teaches poverty being measured based on an international poverty line as defined by The World Bank (paragraphs 12-14, shows the World Bank defining the limits for an international poverty line for measuring poverty in various countries [including, but not limited to, the specification’s measurement of $1.25/day])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Powell/Ma so as to have included poverty being measured based on an international poverty line as defined by The World Bank, as taught by Ravallion in order to ensure clarity, understanding, and fairness of how applicants are evaluated by using a standard, uniform, and widely used definition of poverty (Ravallion, discusses determining global poverty standards (such as an international poverty line) by aggregating poverty data from a global set of countries). (See KSR [127 S Ct. at 1739] "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.")
In regards to Claim 20, the claim is a combination of the same elements that appear in Claims 2, 5, and 6. The rejections applied above to Claims 2, 5, and 6 (including prior art, rationales, and motivations) also apply to Claim 20.
In regards to Claim 21, Powell discloses:
wherein the updated plurality of weights is based upon the [location] of the current applicants and the performance of one or more participants in one or more nearby [locations] ([0133], considers where an applicant comes from as part of success analysis; although Powell does not explicitly reference countries, one of ordinary skill in the art would recognize that where someone is from could include countries. Additionally, Powell does reference state origins of applicants as a factor in their success (at least [0066]), and one of ordinary skill would also understand that states would be used in the same manner as countries since they are both geographical delineations; the characteristic data regarding states for the applicants can be included in the weighting system described above ([0065]-[0067]; [0070]-[0075])).
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Powell in view of Ma in further view of Ravallion in further view of Kuh et al. (What Matters to Student Success: A Review of the Literature).
In regards to Claims 7 and 16, Powell/Ma/Ravallion does not explicitly disclose modifying the applicant's overall score based on the results of a visit to the applicant's home, however, Kuh teaches that home life is a factor relevant to student success (see at least page 12, “…the different sets of values and norms represented by home life and college need to be taken into account when studying various aspects of student success…”). Since Kuh determines that home life is a factor in student success, it would have been obvious to have included home visits as part of an evaluation for success in Powell/Ma and to have the data obtained from those visits factored into the overall score.
One of ordinary skill in the art would have recognized, before the effective filing date of the claimed invention, that applying the known technique of Kuh would have yielded predictable results. It would have been recognized that applying the technique of Kuh to Powell/Ma/Ravallion would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such academic success prediction features into similar systems. Further, using an additional factor, such as home life, that is known to be a predictor of academic success in conjunction with a system that uses multiple factors for predicting academic success, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow additional reliability of the results by incorporating additional known success factors. (See KSR [127 S Ct. at 1739] "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.")
Additional Prior Art Identified but not Relied Upon
Danson et al. (Pub. No. US 2015/0100528 A1). Discloses the use of historical data in a predictive model for predicting likelihood of graduating from a class/program and to training the predictive model (see at least [0015]; [0063]).
Deyo et al. (Pub. No. US 2009/0164311 A1). Discloses the use of historical answer data relating to performance data of previous applicants to determine the effectiveness of potential answers to predict job performance (see at least Abstract; Fig. 1; [0020]), the training of predictive models (algorithms) and the updating (retraining) of algorithms based on applicant performance (see at least Abstract; [0028]; [0040]; Claim 8), retraining using closed loop feedback (see at least [0038]), and use of models for accurate and efficient predictions (see at least [0013]; [0014]; [0042]).
Haggar et al. (Pub. No. US 2015/0026163 A1). Discloses the use of historical answer data to determine the rating for the answers (see at least Abstract).
Huddleston et al. (Pub. No. US 2004/0215430 A1). Discloses retraining of models to improve accuracy and/or efficiency (see at least [0020]; [0077]; [0155]).
O’Malley (Pub. No. US 2011/0276507 A1). Discloses the use of region data for previous applicants to analyze recruitment practices (see at least [0021]; [0022]).
Stimac (Pub. No. US 2003/0071852 A1). Discloses questions for an applicant pool, rating questions, compiling score based on answers, and ranking applicants (see at least Abstract; [0152]; Claim 1; claim 5; Claim 17).
Response to Arguments
Applicant’s arguments filed 9/11/2025 have been fully considered but they are not persuasive.
I. Rejection of Claims under 35 U.S.C. §101:
Applicant asserts that the claimed invention is comparable to Ex Parte Carmody and Ex Parte Desjardins. However, Applicant has not provided sufficient evidence to support these assertions, such as any comparison or analysis demonstrating how the characteristics of the claims are comparable to the findings of the cited decisions. The mere fact that the claims are drawn to machine learning or retraining in some capacity does not inherently provide such a relationship to the decisions. In regards to Carmody, Applicant additionally fails to identify the alleged additional elements and/or practical application and explain how/why they get beyond the abstract ideas. In regards to Desjardins, Applicant’s citations to the specification merely describe the training/retraining process. They do not clearly indicate how the alleged improvement is achieved in a meaningful manner and merely assert that it is improved. See MPEP 2106.05(a), Improvements to the Functioning of a Computer or To Any Other Technology or Technical Field.
Merely reciting in the claims that an improvement is achieved does not provide sufficient support or evidence to demonstrate the alleged improvement or to demonstrate how it is achieved in a significant manner. “[S]uch that one or more of an accuracy and an efficiency of future likelihood calculations is improved by incorporating measured performance of prior participants” does not show how/why the claim elements/features would provide this alleged improvement.
Applicant’s remarks on pages 17-18 merely describe the claims and assert that they are like Desjardins, but as stated above, the mere fact that both include similar elements does not indicate that Applicant’s claimed invention achieves the characteristics of Desjardins and/or that they are achieved in a manner comparable to Desjardins.
Applicant’s remarks in regards to Enfish are insufficient for the same reasons provided above. Applicant fails to provide comparison or analysis to demonstrate how the characteristics of the claims are comparable to the findings of the cited decision. Applicant summarizes Enfish, then the claims, and then asserts that there is an improvement similar to Enfish, but does not provide any explanation of how or why.
Applicant does not explain why the claims would not fall under any of the categories for certain methods of organizing human activity.
Applicant does not provide evidence to demonstrate “a specific, computer-implemented technological process…with technical improvements”, and merely asserts that it is similar to Desjardins.
II. Rejection of Claims under 35 U.S.C. §103:
Applicant’s remarks are drawn to the newly provided claim material and are therefore moot in view of the newly provided prior art rejections, citations, and/or explanations, provided above.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Conclusion
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin, can be reached on 571-272-6872. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.D.S/Examiner, Art Unit 3629 March 7, 2026
/SARAH M MONFELDT/Supervisory Patent Examiner, Art Unit 3629