Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/27/2026 has been entered.
Status of the Claims
Claims 1, 12, and 13 have been amended. Claims 2 and 5 are canceled. Claims 1, 3, 4, and 6-13 are pending.
Response to Arguments
Applicant's arguments filed 02/27/2026 regarding 35 U.S.C. 101 have been fully considered but they are not persuasive.
Applicant argues that the claims recite a technical improvement in automated evaluation systems by reducing unnecessary questions, controlling system flow, enabling machine-based semantic scoring rather than keyword counting or human judgment, and streamlining automated assessment without interviewer intervention. Examiner disagrees. The Federal Circuit has explained that "the 'directed to' inquiry applies a stage-one filter to claims, considered in light of the specification, based on whether 'their character as a whole is directed to excluded subject matter.'" Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335 (Fed. Cir. 2016) (quoting Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1346 (Fed. Cir. 2015)). It asks whether the focus of the claims is on a specific improvement in relevant technology or on a process that itself qualifies as an "abstract idea" for which computers are invoked merely as a tool. See id. at 1335-36. Here, it is clear from the Specification (including the claim language) that the independent claims focus on an abstract idea, and not on an improvement to technology and/or a technical field. The specification is titled "Method and System for Automatically Evaluating A Candidate," and observes in the Description that most recruiting processes in any industry require an evaluation of the candidate's technical abilities, knowledge of the position they are applying for, and level of experience. This evaluation usually requires asking the candidate a list of technical questions about the field of the position, at an appropriate difficulty, to assess their skill level based on their answers. This process can be time consuming, as it typically requires a person to orally ask the questions of the candidate. Depending on the position and the number of candidates, this can take a large amount of the interviewer's time.
Another drawback of this process is that the interviewer's subjectivity regarding the candidate may alter the evaluation and lead to rejecting a candidate who may nonetheless be suitable for the position or, conversely, to hiring a candidate who eventually proves unsuitable for the position. A known solution to try to remedy these problems is to use evaluation software to run through a set of questions and collect the corresponding answers of the candidate, thus avoiding the time loss for the interviewers. However, this solution still requires a human analysis of the collected answers, as such software is usually unable to assess by itself the validity of the candidates' technical answers. A known solution to try to address this issue is to use a multiple-choice questionnaire, but this requires questions that are simple enough, and it may thus not be accurate enough for a complex position with a specific and thorough set of skills, such as, e.g., a technical position. It is therefore an object of the present invention to provide an easy, reliable and efficient method and device to solve at least partly the drawbacks of the prior art. (See Spec. [0002]-[0009].) That is, as indicated in applicant's specification, the invention evaluates the technical abilities, knowledge, and level of experience of the candidate in a manner that is less time consuming for the interviewer, reduces the subjectivity of the interviewer, and improves the analysis of the validity of the answers to the questions. This solution is also indirectly captured in applicant's argument of "reducing unnecessary questions, controlling system flow, enabling machine-based semantic scoring rather than keyword counting or human judgment, and streamlining automated assessment without interviewer intervention."
The invention and claims are drawn towards automatically evaluating a candidate using a set of questions, and the claims recite limitations that directly correspond to certain methods of organizing human activity (managing personal interactions or relationships), as shown by claim limitations detailing selecting a question from the set of questions, selecting the question comprises retrieving a question having associated therewith a stored difficulty level, a stored set of expected keywords and a stored model answer; executing an iterative evaluation control loop and determining and updating a difficulty level; determining the difficulty level based on at least one previously received accuracy score and automatically selecting a subsequent question having a difficulty level corresponding to the difficulty level that is determined; sending the question that is selected to said [user interface], receiving a set of words corresponding to an answer from a candidate to the question that is selected and sent to said [user interface]; requesting said [language model module] to generate an accuracy score using said set of words that is received, said accuracy score reflecting an accuracy of the answer from the candidate relative to a model answer, to name a few of the limitations that correspond to the identified abstract idea sub-grouping (the list is not exhaustive). The claim limitations also directly correspond to mental processes (observation, evaluation, judgment, opinion), as evidenced by the claim limitations detailing steps that observe and analyze data and make a decision (judgment or opinion) based on the observed and evaluated data (e.g., evaluating the answer for a user and scoring the answer based on accuracy, as well as scoring an overall candidate using the accuracy score). The claims recite an abstract idea. The alleged improvement is clearly an improvement in the judicial exception itself, not an improvement in computers or technology.
It is important to keep in mind that an improvement in the judicial exception itself (e.g., a recited fundamental economic concept) is not an improvement in technology (emphasis added). For example, in Trading Technologies Int'l v. IBG LLC, the court determined that the claim simply provided a trader with more information to facilitate market trades, which improved the business process of market trading but did not improve computers or technology. Similarly, applicant's claim recitations are an improvement in the judicial exception, not an improvement in technology. Improving the process for evaluating a candidate by ultimately automating it via computer and removing human intervention does not constitute an improvement in computers or technology.
Desjardins is inapplicable to applicant's invention. In Desjardins, the application is directed to the field of machine learning techniques, and the claims are aimed at specific and technically detailed ways of performing machine learning. The application was highly technical in nature with regard to specific machine learning processes. Therefore, the claims fall into the category of an improvement to a computer/technology because of the specific technical nature of the claims. The specification of the application also gave a technical explanation of the improvement related to the specific machine learning techniques. Applicant's case is not analogous to Desjardins. Applicant's specification and claims describe an improvement in evaluating candidates, and simply utilize a computer to perform that automation. In referencing any sort of machine learning techniques, applicant's specification merely states that the language model module used in processing words and determining an accuracy score may function based on artificial intelligence and large language models. Nothing in the claims or specification describes improving machine learning techniques; instead, a computer is used to perform the limitations that have been identified as corresponding to the judicial exception. Thus, the additional elements (including the language model module), which are computer components recited at a high level of generality performing the above-mentioned limitations, amount to "apply it" or merely using a computer as a tool to implement the judicial exception. At best, applicant's invention is more analogous to Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025). The court in Recentive stated that "[p]atents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101." Id., slip op. at 18.
It is also important to note from Recentive that "the claimed methods are not rendered patent eligible by the fact that (using existing machine learning technology) they perform a task previously undertaken by humans with greater speed and efficiency than could previously be achieved." Id., slip op. at 15. Also from Recentive: "The requirements that the machine learning model be 'iteratively trained' or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement." Id., slip op. at 12. Examiner also notes that the decision in Desjardins does not alter the holding in Recentive.
Applicant references McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299 (Fed. Cir. 2016), but the claims and invention are in no way analogous to McRO. The basis for the court's decision in McRO was that the claims improved a computer-related technology by enabling the computer to perform functions that previously could not be performed by a computer and that required the subjective judgment of a human. The court emphasized both the specific claiming of the rules and the specification's explanation of how the claimed rules enabled the automation of specific animation tasks that previously could not be automated. This enabling of functionality that could not previously be performed by a computer was what amounted to the improvement in computer-related technology, not the simple recitation of a set of particular rules. Examiner notes that the claim limitations are not analogous to a computer-related technology that enables new functions that a computer could not have previously performed. Applicant argues that a human interviewer does not "execute a defined control loop; dynamically update a stored difficulty level parameter based on computed accuracy metrics; perform structured semantic and structural analysis relative to a stored model answer; or classify answers into predetermined rubrics within a machine scoring pipeline". To suggest that this is the case is intellectually disingenuous, at best. While using the computer as a tool may involve particular steps that are not performed by a human, those steps are not necessarily indicative of the absence of an abstract idea. Generally speaking, using the computer as a tool to perform certain functions necessarily requires steps that will not be performed by a human.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using a generic computer, and generally linking the judicial exception to a particular field of use. Mere instructions to apply an exception using a generic computer cannot provide an inventive concept. Thus, when viewed as an ordered combination, nothing in the claims adds significantly more (i.e. an inventive concept) to the abstract idea. The claims are not patent eligible.
The 35 U.S.C. 101 rejection is maintained.
Applicant’s arguments, see pg. 23, filed 02/27/2026, with respect to 35 U.S.C. 102 and 35 U.S.C. 103 have been fully considered and are persuasive. The 35 U.S.C. 102 and 35 U.S.C. 103 rejections have been withdrawn.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 4, and 6-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e. an abstract idea) without significantly more.
Claims 1, 3, 4, and 6-11 recite a method (i.e. process), claim 12 recites a non-transitory computer-readable medium (i.e. machine or article of manufacture), and claim 13 recites a system (i.e. machine). Therefore, claims 1, 3, 4, and 6-13 fall within one of the four statutory categories of invention.
Independent claims 1, 12, and 13 recite the limitations: a first phase being iterative and comprising, at each iteration, steps of selecting a question from the set of questions, said question that is selected being different from questions selected at previous iterations, if any, wherein said selecting the question comprises retrieving a question having associated therewith a stored difficulty level, a stored set of expected keywords and a stored model answer; executing an iterative evaluation control loop and determining and updating a difficulty level; determining the difficulty level based on at least one previously received accuracy score and automatically selecting a subsequent question having a difficulty level corresponding to the difficulty level that is determined; sending the question that is selected to said [user interface], receiving a set of words corresponding to an answer from a candidate to the question that is selected and sent to said [user interface], requesting said [language model module] to generate an accuracy score using said set of words that is received, said accuracy score reflecting an accuracy of the answer from the candidate relative to a model answer; receiving the accuracy score that is generated, wherein the generating the accuracy score comprises comparing the set of words that is received with the stored set of expected keywords, analyzing semantic relevance of the set of words that is received relative to the stored model answer, analyzing structural coherence of the set of words that is received relative to the stored model answer, and classifying the set of words that is received into predetermined rubrics to assign scores; updating the difficulty level that is determined based on the accuracy score that is generated; wherein the first phase is carried out until a predetermined condition has been reached, wherein the predetermined condition comprises achieving a target accuracy threshold corresponding to the difficulty
level, thereby terminating the iterative evaluation control loop, a second phase comprising computing an evaluation score of the candidate using the accuracy score that is received from each question of the set of questions. The invention and claims are drawn towards automatically evaluating a candidate using a set of questions, and the claims recite limitations that directly correspond to certain methods of organizing human activity (managing personal interactions or relationships), as shown by limitations detailing selecting a question from the set of questions, selecting the question comprises retrieving a question having associated therewith a stored difficulty level, a stored set of expected keywords and a stored model answer; executing an iterative evaluation control loop and determining and updating a difficulty level; determining the difficulty level based on at least one previously received accuracy score and automatically selecting a subsequent question having a difficulty level corresponding to the difficulty level that is determined; sending the question that is selected to said [user interface], receiving a set of words corresponding to an answer from a candidate to the question that is selected and sent to said [user interface]; requesting said [language model module] to generate an accuracy score using said set of words that is received, said accuracy score reflecting an accuracy of the answer from the candidate relative to a model answer, to name a few of the limitations that correspond to the identified abstract idea sub-grouping (the list is not exhaustive).
The claim limitations also directly correspond to mental processes (observation, evaluation, judgment, opinion), as evidenced by the claim limitations detailing steps that observe and analyze data and make a decision (judgment or opinion) based on the observed and evaluated data (e.g., evaluating the answer for a user and scoring the answer based on accuracy, as well as scoring an overall candidate using the accuracy score). The claims recite an abstract idea.
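For reference, the control flow recited in the independent claims (iterative selection of a question by stored difficulty level, scoring of the received answer, updating the difficulty level from the accuracy score, termination upon reaching a target accuracy threshold, and a second phase computing an overall evaluation score) can be illustrated with the following sketch. All identifiers and values are hypothetical, and the simple keyword-overlap scoring is a stand-in for the recited language model module; this illustrates the recited steps only and is not applicant's implementation:

```python
# Illustrative sketch of the claimed two-phase control flow. All names
# and data are hypothetical; keyword-overlap scoring stands in for the
# recited language model module.

QUESTIONS = [
    {"text": "Define a hash table.", "difficulty": 1,
     "keywords": {"key", "value", "bucket"}},
    {"text": "Explain collision resolution.", "difficulty": 2,
     "keywords": {"chaining", "probing"}},
]

def accuracy_score(answer_words, question):
    # Stand-in scoring: fraction of stored expected keywords present in
    # the candidate's answer (the claims instead recite semantic and
    # structural analysis relative to a stored model answer).
    matched = question["keywords"] & set(answer_words)
    return len(matched) / len(question["keywords"])

def evaluate(answers_by_question, target=0.8):
    difficulty, scores, asked = 1, [], set()
    # First phase: iterative evaluation control loop.
    while True:
        # Select an unasked question whose stored difficulty matches the
        # currently determined difficulty level.
        q = next((q for q in QUESTIONS
                  if q["difficulty"] == difficulty and q["text"] not in asked),
                 None)
        if q is None:
            break  # no further question available at this difficulty
        asked.add(q["text"])
        score = accuracy_score(answers_by_question[q["text"]].split(), q)
        scores.append(score)
        if score >= target:
            break  # predetermined condition: target accuracy threshold reached
        # Update the difficulty level based on the received accuracy score.
        difficulty = max(1, difficulty - 1)
    # Second phase: overall evaluation score across the asked questions.
    return sum(scores) / len(scores)
```

As the sketch shows, each step (question retrieval, scoring, difficulty update, thresholded termination, overall scoring) is an act of information selection and evaluation that is implemented here with generic data structures and control flow.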
Note: the features or elements in brackets in the above Step 2A Prong One section are inserted for reading clarity, but are analyzed as “additional elements” under Step 2A Prong Two and Step 2B below.
The judicial exception is not integrated into a practical application simply because the claims recite the additional elements of: an automatic evaluation system comprising a control module, a database, a user interface, a language model module, a non-transitory computer program (claim 12), and a computer (claim 12). The additional elements are computer components recited at a high level of generality performing the above-mentioned limitations. The combination of the additional elements is no more than mere instructions to apply the judicial exception using a generic computer. Additionally, the language model module amounts to generally linking the judicial exception to a particular field of use (generating accuracy scores in evaluating candidate answers). Accordingly, in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using a generic computer, and generally linking the judicial exception to a particular field of use. Mere instructions to apply an exception using a generic computer cannot provide an inventive concept. Thus, when viewed as an ordered combination, nothing in the claims adds significantly more (i.e. an inventive concept) to the abstract idea. The claims are not patent eligible.
Dependent claim 8 recites the limitation: prior to submitting the question that is selected to the [user interface], a step of converting words of the question that is selected into an audio stream by a [text conversion module], providing said audio stream to the [user interface] and diffusing said audio stream that is received by the [user interface] to the candidate. The claim is further directed to the abstract idea analyzed above. The claim also recites the additional elements of the user interface and a text conversion module. The additional elements amount to “apply it” or merely using a computer as a tool to implement the judicial exception. The text conversion module also amounts to generally linking the judicial exception to a particular field of use. Accordingly, in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Further, when viewed as an ordered combination, nothing in the claim adds significantly more (i.e. an inventive concept) to the abstract idea. The claim is not patent eligible.
Dependent claim 9 recites the limitation: when the answer has been given orally by the candidate, a step of recording said answer that has been given orally as an audio stream and converting said audio stream into a set of words by an [audio conversion module]. The claim is further directed to the abstract idea analyzed above. The claim also recites the additional element of the audio conversion module. The additional element amounts to “apply it” or merely using a computer as a tool to implement the judicial exception, and generally linking the judicial exception to a particular field of use. Accordingly, in combination, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Further, when viewed as an ordered combination, nothing in the claim adds significantly more (i.e. an inventive concept) to the abstract idea. The claim is not patent eligible.
Dependent claims 3, 4, 6, 7, 10, and 11 recite additional limitations that are further directed to the abstract idea analyzed in the rejected claims above. The claims also recite additional elements that have been analyzed in the rejected claims above. Thus, claims 3, 4, 6, 7, 10, and 11 are also rejected under 35 U.S.C. 101. The claims are not patent eligible.
Allowable Subject Matter
Claims 1, 3, 4, and 6-13 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 101, set forth in this Office action.
The closest patent or patent application prior art reference found that is relevant to applicant's invention is Wu (US 2018/0150739), which discloses a system and method for automatically interviewing a technical candidate and determining a relevance score for one or more provided answers from the candidate. The relevance scores are used to determine the next type of question and the appropriate difficulty level for the next question to ask the candidate during an automated interview. Wu does not appear to disclose some of the features of applicant's invention, such as an iterative evaluation control loop and a predetermined condition of achieving a target accuracy threshold corresponding to the difficulty level resulting in termination of the evaluation control loop. The claims appear to overcome the prior art reference.
The closest non-patent literature prior art reference found that is relevant to applicant's invention is the publication "HR based Chatbot using Deep Neural Network," which explores a project implementing an HR chatbot that uses natural language processing and deep learning to allow effective communication between HR and employees, as well as between HR and interview candidates. The publication does not appear to disclose the detailed features and limitations of applicant's invention. The claims appear to overcome the prior art reference.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIONE N SIMPSON whose telephone number is (571)272-5513. The examiner can normally be reached M-F, 7:30 a.m.-4:30 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shannon Campbell can be reached at 571-272-5587. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
DIONE N. SIMPSON
Primary Examiner
Art Unit 3628
/DIONE N. SIMPSON/ Primary Examiner, Art Unit 3628