Prosecution Insights
Last updated: April 17, 2026
Application No. 19/186,083

ARTIFICIAL INTELLIGENCE AGENT SYSTEM FOR INTERVIEW AND COMPREHENSIVE SKILLS ASSESSMENT OF CANDIDATES

Non-Final OA — §101, §103
Filed
Apr 22, 2025
Examiner
KIRK, BRYAN J
Art Unit
3628
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
unknown
OA Round
1 (Non-Final)
Grant Probability: 32% (At Risk)
OA Rounds: 1-2
To Grant: 3y 10m
With Interview: 75%

Examiner Intelligence

Grants only 32% of cases
Career Allow Rate: 32% (70 granted / 217 resolved; -19.7% vs TC avg)

Strong +43% interview lift
Interview Lift: +42.6% (resolved cases with interview vs. without)

Typical timeline
Avg Prosecution: 3y 10m (35 currently pending)

Career history
Total Applications: 252 (across all art units)
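The headline figures above follow from simple arithmetic on the panel's counts. A minimal sketch (the 70/217 counts, -19.7% delta, and +42.6% lift are transcribed from the panel; the split of the 217 resolved cases into with- and without-interview groups is not reported, so only the derived rates are computed):

```python
# Reproduce the examiner-panel arithmetic from the reported counts.
granted, resolved = 70, 217

career_allow_rate = granted / resolved          # ~0.323 -> "32%"
tc_delta = -0.197                               # panel: -19.7% vs TC avg
tc_avg_estimate = career_allow_rate - tc_delta  # ~0.520 implied TC average

interview_lift = 0.426                          # panel: +42.6%
with_interview = career_allow_rate + interview_lift  # ~0.749 -> "75%"

print(f"career allow rate: {career_allow_rate:.1%}")   # 32.3%
print(f"implied TC average: {tc_avg_estimate:.1%}")    # 52.0%
print(f"grant probability w/ interview: {with_interview:.1%}")  # 74.9%
```

This also shows where the rounded headline numbers come from: 70/217 is 32.3%, displayed as 32%, and 32.3% + 42.6% is 74.9%, displayed as 75%.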

Statute-Specific Performance

§101: 32.2% (-7.8% vs TC avg)
§103: 37.9% (-2.1% vs TC avg)
§102: 7.0% (-33.0% vs TC avg)
§112: 19.1% (-20.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 217 resolved cases
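The Tech Center average ("black line") is not reported directly, but each row implies it: examiner rate minus delta. A minimal sketch, with the rates and deltas transcribed from the panel above (the baseline is derived, not reported):

```python
# Recover the TC-average estimate implied by each statute's
# examiner allow rate and its delta vs the TC average.
panel = {  # statute: (examiner allow rate, delta vs TC avg)
    "§101": (0.322, -0.078),
    "§103": (0.379, -0.021),
    "§102": (0.070, -0.330),
    "§112": (0.191, -0.209),
}

for statute, (rate, delta) in panel.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
```

Notably, every row implies the same ~40.0% baseline, i.e. the panel appears to compare all four statutes against a single Tech Center average estimate.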

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is a non-final, first office action in response to the claims filed 04/22/2025. Claims 1 – 7 are currently pending and have been examined.

Claim Objections

Claim 1 is objected to because of the following informalities: the limitation “wherein a microphone of the client device is operable to audio responses of the candidate” is recited, instead of the grammatically-correct “wherein a microphone of the client device is operable to record audio responses of the candidate.” Appropriate correction is required.

Claim 1 is objected to because of the following informalities: the limitation “wherein a camara of the client device is operable to record video data” is recited in the “generating” step, instead of the grammatically-correct “wherein a camera of the client device is operable to record video data.” Appropriate correction is required.

Claim 4 is objected to because of the following informalities: the limitation “the transcript analyzer module” is recited, instead of properly reciting “the transcript analyzer assessment module,” in order to refer to the previously-introduced transcript analyzer assessment module in claim 3. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 – 7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Claims 1 – 7 are directed to a method (i.e., a process). Therefore, claims 1 – 7 all fall within one of the four statutory categories of invention. 
Step 2A, Prong One

Independent claim 1 substantially recites: generating a human expert-like fully interactive interview… analyzing a job post to extract skill and behavior traits of the job post… analyzing a candidate resume of a candidate to extract skills and work proof… curating a plurality of questions and an interview structure based on data of the candidate resume and the extracted skill and behavior traits of the job post… conducting an interview of the candidate… using the plurality of questions that are output to the candidate… using audio responses from the candidate.

The limitations stated above are processes that, under the broadest reasonable interpretation, cover performance of the limitation in a business relation or commercial interaction, or while managing personal behavior or relationships or interactions between people. That is, the functions in the context of this claim encompass managing a candidate interview for a job opportunity. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in a commercial interaction, or while managing personal behavior or relationships or interactions between people, but for the recitation of generic computer components, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas, e.g., “commercial or legal interactions (including marketing or sales activities or behaviors; business relations, and following rules or instructions).” Accordingly, claim 1 is directed to an abstract idea.

Step 2A, Prong Two

The judicial exception is not integrated into a practical application. Claim 1, as a whole, amounts to: (i) merely invoking generic components as a tool to perform the abstract idea or “apply it” (or an equivalent), (ii) adding insignificant extra-solution activity to the judicial exception, as well as (iii) generally linking the recited judicial exception to a particular field or technological environment. 
Claim 1 recites the additional computer-related elements of: “via an artificial intelligence agent system,” “via a job benchmark analysis module running on a computing platform,” “via a candidate resume analysis module running on the computing platform,” “via a question selector and plan generator module running on the computing platform,” “on a client device via an interactive interview agent module running on the computing platform,” “displayed on a display screen of the client device,” “wherein a speaker of the client device is operable to output audio data of the artificial intelligence agent,” and “via the speaker of the client device.” Claim 1 further recites the additional elements of “generating the artificial intelligence agent,” “wherein the artificial intelligence agent is displayed,” “wherein a camara of the client device is operable to record video data of the candidate, and wherein a microphone of the client device is operable to audio responses of the candidate,” “via the artificial intelligence agent, wherein the artificial intelligence agent interacts with the candidate using a large language model,” “recorded by the microphone of the client device, and using video data of the candidate recorded by the camera of the client device,” “wherein the audio responses are processed via the large language model so that the artificial intelligence agent can handle distractions, interruption, and arbitrary questions from the candidate,” and “the artificial intelligence agent records audio responses, via the microphone, from the candidate for each question of the plurality of questions.” The additional elements of “via an artificial intelligence agent system,” “via a job benchmark analysis module running on a computing platform,” “via a candidate resume analysis module running on the computing platform,” “via a question selector and plan generator module running on the computing platform,” “on a client device via an interactive interview agent module running 
on the computing platform,” “displayed on a display screen of the client device,” “wherein a speaker of the client device is operable to output audio data of the artificial intelligence agent,” and “via the speaker of the client device” are recited at a high level of generality, such that, when viewed as a whole/ordered combination, they amount to no more than mere instruction to apply the judicial exception using generic computer components or “apply it” (See MPEP 2106.05(f)). The additional elements of “wherein a camara of the client device is operable to record video data of the candidate, and wherein a microphone of the client device is operable to audio responses of the candidate,” “recorded by the microphone of the client device, and using video data of the candidate recorded by the camera of the client device,” and “the artificial intelligence agent records audio responses, via the microphone, from the candidate for each question of the plurality of questions” amount to insignificant extra-solution activity, such as mere data gathering (See MPEP 2106.05(g)). The additional elements of “generating the artificial intelligence agent,” “wherein the artificial intelligence agent is displayed,” “via the artificial intelligence agent, wherein the artificial intelligence agent interacts with the candidate using a large language model,” and “wherein the audio responses are processed via the large language model so that the artificial intelligence agent can handle distractions, interruption, and arbitrary questions from the candidate” merely generally link the recited judicial exception to a particular technological environment or field of use (See MPEP 2106.05(I)(A) & MPEP 2106.05(h)) and also amount to no more than mere instruction to apply the judicial exception using generic computer components or “apply it” (See MPEP 2106.05(f)). 
Accordingly, these additional elements, when viewed as a whole/ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claim is directed to an abstract idea.

Step 2B

As discussed above with respect to Step 2A, Prong Two, the additional elements amount to no more than: (i) merely invoking generic components as a tool to perform the abstract idea or “apply it” (or an equivalent), (ii) adding insignificant extra-solution activity to the judicial exception, as well as (iii) generally linking the recited judicial exception to a particular field or technological environment, and do not provide integration of the recited abstract ideas into a practical application. The same analysis applies here in Step 2B, i.e., (i) merely invoking the generic components as a tool to perform the abstract idea or “apply it” (See MPEP 2106.05(f)); (ii) adding insignificant extra-solution activity (e.g., pre-solution activity, such as mere data retrieval / electronic scanning) to the judicial exception (See MPEP 2106.05(g)); as well as (iii) generally linking the recited judicial exception to a particular technological environment or field of use (See MPEP 2106.05(I)(A) & MPEP 2106.05(h)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. 
Furthermore, the extra-solution functionality of “wherein a camara of the client device is operable to record video data of the candidate, and wherein a microphone of the client device is operable to audio responses of the candidate,” “recorded by the microphone of the client device, and using video data of the candidate recorded by the camera of the client device,” and “the artificial intelligence agent records audio responses, via the microphone, from the candidate for each question of the plurality of questions” is recited at a high-level of generality and performs generic computer functions (data gathering) that are well-understood, routine and conventional activities previously known in the industry (See MPEP 2106.05(d)(II)). Additionally, paragraph 25 of Applicant’s instant specification generically discloses wherein the “client device… is a type of computer or computing device comprising circuitry and configured to generally perform functions such as recording audio, photos, and videos,” while paragraph 129 discloses that “Audio data, recorded by a microphone 404C, and video data, recorded by a camera 404D, of the candidate 101 may be recorded,” which demonstrates the well-understood, routine, conventional nature of the audio and video recording functionality in the claims. 
Therefore, the additional elements as recited in claim 1: “via an artificial intelligence agent system,” “via a job benchmark analysis module running on a computing platform,” “via a candidate resume analysis module running on the computing platform,” “via a question selector and plan generator module running on the computing platform,” “on a client device via an interactive interview agent module running on the computing platform,” “displayed on a display screen of the client device,” “wherein a speaker of the client device is operable to output audio data of the artificial intelligence agent,” “via the speaker of the client device,” “generating the artificial intelligence agent,” “wherein the artificial intelligence agent is displayed,” “wherein a camara of the client device is operable to record video data of the candidate, and wherein a microphone of the client device is operable to audio responses of the candidate,” “via the artificial intelligence agent, wherein the artificial intelligence agent interacts with the candidate using a large language model,” “recorded by the microphone of the client device, and using video data of the candidate recorded by the camera of the client device,” “wherein the audio responses are processed via the large language model so that the artificial intelligence agent can handle distractions, interruption, and arbitrary questions from the candidate,” and “the artificial intelligence agent records audio responses, via the microphone, from the candidate for each question of the plurality of questions” fail to integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. 
There is no indication that the combination of elements, taken both individually and as an ordered combination, improves the functioning of a computer or improves any other technology. Thus, the claims are not patent eligible.

Furthermore, dependent claims 2 – 7 are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception. The additional limitations of “transcript analyzer assessment module” in claim 3, “transcript analyzer module” in claim 4, and “traits analyzer assessment module” in claim 7 amount to no more than mere instruction to apply the judicial exception using generic computer components or “apply it” (See MPEP 2106.05(f)). The limitations of the claims, when considered both individually and as an ordered combination, do not transform the abstract idea that they recite into patent-eligible subject matter because the claims simply instruct the practitioner to implement the abstract idea with generic computer components that conduct generic computer functions within a certain field of use, and thus are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 – 7 are rejected under 35 U.S.C. 103 as being unpatentable over Gomes et al. (US 20200327505 A1), in view of Nagar et al. (US 20230376328 A1), in view of Gupta et al. (US 20240096312 A1).

As per claim 1, Gomes discloses a method ([0002] & claim 1 of Gomes) for generating a human expert-like fully interactive interview via an artificial intelligence agent system, the method comprising:
• analyzing a job post to extract skill and behavior traits of the job post via a job benchmark analysis module running on a computing platform ([0022], [0024] – [0026], [0034], & [0037], analyzing a new job opportunity description and identifying “target items” such as “target education degree value for the candidate of at least one Bachelor of Arts or Science from a four-year college,” “an expected minimum time (for example, six months, one year, etc.) 
of job experience within previous employment history,” and ““Hadoop” programming skills” which are required candidate parameters for the new job opportunity.);
• analyzing a candidate resume of a candidate to extract skills and work proof via a candidate resume analysis module running on the computing platform ([0012] – [0013], [0015] – [0018] & [0020] – [0021], analyzing a candidate resume and related searched documents to extract “activities details encompassing (or descriptive of) skill sets and past experiences” to generate ““resume” values of attributes of the candidate.”);
• curating a plurality of questions and an interview structure based on data of the candidate resume and the extracted skill and behavior traits of the job post via a question selector and plan generator module running on the computing platform (See [0023], [0025] – [0026], [0029] & [0033] – [0034], & [0036], noting that an “AI bot” generates queries designed to acquire “the target data values” (e.g., required job post skills) that are not yet known for the candidate based on a resume review, such as “a chat bot query of “How many years of Hadoop programming experience do you have?.”” As per [0043], Gomes discloses wherein “interactive question and answer dialogues” are curated based on “finding similarities between transcript conversation, job description and resume,” and as per [0047], the “AI chat bot interviews” are designed to obtain “target item values” associated with the “job opening requirements.”);
• generating the artificial intelligence agent on a client device via an interactive interview agent module running on the computing platform ([0028], generating “an Artificial Intelligence (AI) chat bot agent to engage the candidate in an automated interview process” “on a smart phone of the candidate.” Also see [0050] – [0052], noting “computer server 110” interacting with a candidate using “the AI chat bot interview process” implemented on “smartphone 102b” of the candidate.). To the 
extent to which Gomes does not appear to explicitly disclose the following limitations, Nagar teaches:
• wherein the artificial intelligence agent is displayed on a display screen of the client device ([0018], [0074], [0091], & [0097], the “virtual agent 603” is displayed as an “avatar” on a display screen of the client device.),
• wherein a speaker of the client device is operable to output audio data of the artificial intelligence agent ([0074], [0091], [0097], “UI module 513 may interface with the user 601 as part of the conversational workflow using the selected persona by conversing via audio components that include the persona of a real or fictitious person or character.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the aforementioned teachings of Nagar in the invention of Gomes with the motivation to provide “a personalized interaction experience with AI-directed virtual agents or assistants that offer an engaging experience to users, that keeps the customer engaged with the interactions, improves user retention and/or increases a completion rate of the users' intended transactions, reducing and/or avoiding occurrences of premature termination of sessions with the virtual agent,” as evidenced by Nagar ([0016]).
Gomes further discloses:
• wherein a camara of the client device is operable to record video data of the candidate, and wherein a microphone of the client device is operable to audio responses of the candidate ([0028], recording “audio and image (video, still images, etc.) 
response data from the candidate via a microphone and camera devices.”);
• and conducting an interview of the candidate via the artificial intelligence agent, …using audio responses from the candidate recorded by the microphone of the client device, and using video data of the candidate recorded by the camera of the client device, …while the artificial intelligence agent records audio responses, via the microphone, from the candidate for each question of the plurality of questions ([0028], “the configured processor evokes an Artificial Intelligence (AI) chat bot agent to engage the candidate in an automated interview process that acquires audio and image (video, still images, etc.) response data from the candidate via a microphone and camera devices.”).
To the extent to which Gomes / Nagar does not appear to explicitly disclose the following limitation, Gupta teaches:
• wherein the artificial intelligence agent interacts with the candidate using a large language model… and wherein the audio responses are processed via the large language model so that the artificial intelligence agent can handle distractions, interruption, and arbitrary questions from the candidate (See [0049] & [0051] – [0053], noting that “conversation server 108 dynamically generates a first question associated with the first conversation state by obtaining a prompt for a large language model (LLM) at the large language model server 114” and “monitors in real-time, using the custom ML models 112 and the LLM with the content boundary, a second response provided by the user 102A to the first follow-up question asked by the artificially intelligent bot 110 at the user device 104A.” Examiner’s note: The limitation “so that the artificial intelligence agent can handle distractions, interruption, and arbitrary questions from the candidate” appears to be an intended use of the method/system, cannot affect the limitations recited in the claim, and is therefore given little to no patentable weight, as 
the claimed method is not capable of performing the intended use, i.e., to enforce or control the aforementioned functions.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the aforementioned teachings of Gupta in the invention of Gomes / Nagar with the motivation to provide “an efficient automated conversation that takes less time to retrieve contextual information from the user and enhances efficiency and effectiveness of conversational AI driven interactions,” as evidenced by Gupta ([0057]).
To the extent to which Gomes does not appear to explicitly disclose conducting an interview of the candidate:
• using the plurality of questions that are output to the candidate via the speaker of the client device, Nagar teaches this functionality in at least [0074], [0091], [0097]. The rationale to combine Nagar persists.

As per claim 2, Gomes / Nagar / Gupta discloses the limitations of claim 1. Gomes further discloses:
• wherein the interactive interview agent module is configured to dynamically generate a set of questions required to assess a specific domain skill of the candidate (See [0023], [0025] – [0026], [0029] & [0033] – [0034], & [0036], noting that an “AI bot” generates queries designed to acquire “the target data values” (e.g., required job post skills) that are not yet known for the candidate based on a resume review, such as “a chat bot query of “How many years of Hadoop programming experience do you have?.”” As per [0043], Gomes discloses wherein “interactive question and answer dialogues” are curated based on “finding similarities between transcript conversation, job description and resume,” and as per [0047], the “AI chat bot interviews” are designed to obtain “target item values” associated with the “job opening requirements.”).

As per claim 3, Gomes / Nagar / Gupta discloses the limitations of claim 1. 
Gomes further discloses:
• the step of performing an automated skills assessment for arbitrary skills of the candidate via a transcript analyzer assessment module ([0026] & [0034] – [0035], assessing a candidate’s “target interpersonal communication speaking characteristics or tendency values that are required or strongly associated to the new job opportunity” based on candidate query responses.).

As per claim 4, Gomes / Nagar / Gupta discloses the limitations of claim 3. Gomes further discloses:
• wherein after the interview of the candidate is conducted via the artificial intelligence agent, the transcript analyzer module is configured to generate and analyze transcripts of the audio responses and video data to score the competency level of the specific domain skill of the candidate ([0026] & [0034] – [0035], “determining at 220 that the requisite target item values have been achieved” after receiving responses during an interview from a candidate, generating a score of “target interpersonal communication speaking characteristics or tendency values.”).

As per claim 5, Gomes / Nagar / Gupta discloses the limitations of claim 4. Gomes further discloses:
• wherein the interactive interview agent module scores the competency level of the specific domain skill using scores of each skill required in the analyzed job post ([0026] & [0034] – [0035], “determining at 220 that the requisite target item values have been achieved” after receiving responses during an interview from a candidate, generating a score of “target interpersonal communication speaking characteristics or tendency values” based on “the candidate for suitability for the new job opportunity as a function of the entirety of the candidate resume metadata inclusive of the updated target item values.”).

As per claim 6, Gomes / Nagar / Gupta discloses the limitations of claim 5. 
Gomes further discloses:
• wherein the interactive interview agent module provides a rationale for the scoring ([0026] & [0034] – [0035], generating a score of “target interpersonal communication speaking characteristics or tendency values” based on rationale such as “the candidate for suitability for the new job opportunity as a function of the entirety of the candidate resume metadata inclusive of the updated target item values.”).

As per claim 7, Gomes / Nagar / Gupta discloses the limitations of claim 6. Gomes further discloses:
• wherein the interactive interview agent module uses questions with situational context to probe behavior of the candidate ([0028] – [0032], “the configured processor determines the value of the answer in years to a chat bot query of “How many years of Hadoop programming experience do you have?”); and assigns quantitative values to qualitative interpersonal behavior values (for example, demeanor, attention, clarity of diction values to qualities of speech (wordiness or amount of usage of redundant words or phrases, clarity of audio, alignment of face or body to camera, strength of match of facial images to face images labelled with different (happy, sad, angry, etc.) expressions, etc.).”

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRYAN J KIRK whose telephone number is (571) 272-6447. The examiner can normally be reached Monday - Friday, 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shannon Campbell, can be reached at (571) 272-5587. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRYAN J KIRK/
Examiner, Art Unit 3628

Prosecution Timeline

Apr 22, 2025
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12511607
READER DEVICE TECHNOLOGY FOR DETERMINING THAT AN ASSET IS LOADED TO THE ASSIGNED LOGISTICS VEHICLE
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12469092
MOBILE DEVICE CROSS-SERVICE BROKER
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12469350
PAPERLESS VENUE ENTRY AND LOCATION-BASED SERVICES
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12417425
SYSTEM AND METHOD FOR CONDITIONAL DELIVERY OF A TRANSPORT CONTAINER
Granted Sep 16, 2025 (2y 5m to grant)
Patent 12380518
Slot Based Allocation For Fueling
Granted Aug 05, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 32%
With Interview: 75% (+42.6%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 217 resolved cases by this examiner. Grant probability derived from career allow rate.
