DETAILED ACTION
Status of Claims
This communication is in response to the application filed on 11/05/2025.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 5-7 are canceled. Claims 1-4 and 8-9 are amended, are currently pending, and have been examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4 and 8-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1-4 and 8-9 are drawn to a server (comprising a memory and a processor), which is within the four statutory categories (i.e., a machine).
Since the claims are directed toward statutory categories, it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea). Based upon consideration of all of the relevant factors with respect to the claims as a whole, Claims 1-4 and 8-9 are determined to be directed to an abstract idea. The rationale for this determination is explained below:
The independent claim as a whole is directed toward the abstract idea of generating interview questions (i.e., acquire answer sentences given by an interviewee for an interviewer's interview questions about a job that the interviewee applies for; evaluate each of a plurality of evaluation items required for evaluating suitability for the job on the basis of the acquired answer sentences; select an additional item to be additionally evaluated from among the plurality of evaluation items on the basis of the evaluation; generate a tail question on the basis of at least one of the selected additional item, the interview questions, or the acquired answer sentences, and provide the tail question to the interviewee; further evaluate the interviewee's level based on the answer sentences given by the interviewee to the interview questions; adjust a difficulty of the tail question to be higher than that of the interview questions when the interviewee's level is above a predetermined level, and adjust the difficulty of the tail question to be lower than that of the interview questions when the interviewee's level is below the predetermined level; identify, from the acquired answer sentences, the situation, task, action, and result items defined in a situation, task, action, and result (STAR) technique, identify a missing item from among the items, and generate a tail question for the interviewee about the missing items; and, after providing the tail question to the interviewee, additionally provide an example answer for the tail question to the interviewee when the interviewee's nervousness exceeds a predetermined level, the interviewee is silent for a predetermined period of time or more, or the interviewee stops speaking while answering).
This abstract idea (conducting and adapting an interview) falls under the mental processes grouping, since the steps recited above can reasonably be performed in the human mind: evaluating answers, selecting items, adjusting difficulty, identifying STAR elements, and deciding whether to give example answers are all mental judgments or observations that could be made by a human interviewer without a computer.
Because the claim recites abstract ideas, the analysis proceeds to determine whether the claim recites additional elements that integrate the abstract ideas into a practical application. According to MPEP 2106.04(d), additional elements that merely instruct one to apply the abstract ideas using a server (comprising a processor, a memory, and various units) and a pre-trained answer analysis model, or that generally link the use of the abstract ideas to a particular technological environment or field of use, are not indicative of a practical application. Here, the additional elements of the processor and memory fail to integrate the abstract ideas into a practical application because they are mere instructions to apply the abstract ideas using computers. Therefore, the claim as a whole fails to recite a practical application of the abstract ideas.
The dependent claims merely further define the abstract idea and are, therefore, directed to an abstract idea for similar reasons as given above.
Regarding Claim 2: The claim recites abstract ideas similar to those discussed above in connection with the independent claim, i.e., understanding a content of data submitted by the interviewee, which includes a cover letter, a resume, credentials, and a portfolio, and generating an additional question about the content of the submitted data.
Regarding Claim 3: The claim recites abstract ideas similar to those discussed above in connection with the independent claim, wherein the server selects and combines some of all questions and answers provided to and by the interviewee, as well as an immediately previous question and answer, to generate the tail question.
Regarding Claim 4: The claim recites abstract ideas similar to those discussed above in connection with the independent claim, wherein the plurality of items include key competencies required for the job.
Regarding Claim 8: The claim recites abstract ideas similar to those discussed above in connection with the independent claim, i.e., extracting an item that is unidentified from the content of the data submitted by the interviewee and generating an additional question for evaluating the item.
Regarding Claim 9: The claim recites abstract ideas similar to those discussed above in connection with the independent claim, i.e., when the interviewee's nervousness exceeds a certain level, or the interviewee is silent for a certain period of time or more or stops speaking while answering, generating an icebreaking question for changing an atmosphere or outputting a nervousness relaxation message.
Therefore, claims 1-4 and 8-9 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Wong et al. (US 11,238,411 B1) in view of Niemi (US 2016/0293036 A1), and further in view of NPL-1 ("How to Use ChatGPT to Prepare for Behavioral Interviews," January 1, 2023).
Regarding Claim 1:
Wong teaches a server for generating a question on the basis of artificial intelligence (AI), the server comprising:
Wong teaches a memory configured to store at least one instruction; and a processor configured to execute the at least one instruction, wherein the processor comprises: (See at least fig. 6 as well as associated text.)
Wong teaches a collection unit configured to acquire answer sentences given by an interviewee for an interviewer's interview questions about a job that the interviewee applies for; an evaluation unit configured to evaluate each of a plurality of evaluation items required for evaluating suitability for the job on the basis of the acquired answer sentences; (By disclosing, In one novel aspect, the adaptive recruitment computer system generates a question bank based on a job description, selects questions adaptively from the question bank during an interview with the candidate, and generates a feedback report for the candidate based on the evaluation of the candidate's answers. In one embodiment, the computer system categorizes a job requirement into a set of job skills based on a body of knowledge (BOK) skill knowledge base, generates a question bank comprising a list of questions based on the set of job skills and a BOK question knowledge base, selects adaptively a subset of questions from the generated question bank for an online interview with a candidate based on a trained learning model, wherein each question selected is based on evaluations of one or more answers from the candidate to corresponding prior questions using a recurrent neural network (RNN) model, and generates a feedback report for the candidate, wherein the feedback report based on evaluations of answers from the candidate and a BOK candidate knowledge base, wherein the BOK candidate knowledge base receives updates from the computer system. In one embodiment, each job skill has a set of attributes comprising a multi-level industry taxonomy, a skill level, and cross disciplinary references. In another embodiment, each question in the BOK question knowledge base has a skill level attribute, and wherein the generated question bank includes questions of different skill levels based on the skill level of the job skill attributes. 
In one embodiment, a data mining program is implemented to create and update one or more BOK knowledge bases comprising the BOK skill knowledge base, the BOK question knowledge base, and the BOK candidate knowledge base. In another embodiment, the computer system further obtains candidate information prior to the interview and generates a candidate profile and authentication information. In one embodiment, the candidate profile is generated from the candidate information based on the BOK candidate knowledge base. See at least col 1-2, col 9-10)
Wong teaches a selection unit configured to select an additional item to be additionally evaluated from among the plurality of evaluation items on the basis of the evaluation; and a generation unit configured to generate a tail question on the basis of at least one of the selected additional item, the interview questions, or the acquired answer sentences, and to provide the tail question to the interviewee; (By disclosing, At step 521, the adaptive recruitment computer system determines if the skill evaluation for the current skill is concluded. In one embodiment, the combined assessment/evaluation is done using an RNN model based on a BOK question with answers knowledge base. The skill evaluation is done when the combination evaluation indicates a skill level requirement satisfied or a skill level requirement not satisfied and a predefined threshold of trying is reached. If step 521 determines yes, the adaptive recruitment computer system, at step 522, selects a new question designed for a new skill. The new skill is selected based on the BOK skill knowledge base 501. If step 521 determines no, the adaptive recruitment computer system moves to step 531 and determines if the current skill level needs to be adjusted. If step 531 determines yes, a new question with a higher or a lower level is selected. If step 531 determines no, a new question in the same skill set and the same skill level is selected. See at least col 1-2, col 9-10)
Wong teaches wherein the generation unit is further configured to: further evaluate the interviewee's level based on the answer sentences given by the interviewee to the interview questions, (By disclosing, Each question is assigned/labeled with one or more attributes including an industry taxonomy, a skill name/index, and a skill level. At step 511, a question is selected from question bank 503 based on the evaluation of the candidate's prior answers. At step 512, the candidate's answer to the selected question is obtained and evaluated. In one embodiment, both the answer to the question and the emotional factor of the answer are evaluated. The evaluation of this question and evaluations of the prior questions are combined to generate a skill evaluation report. See at least col 1-2, col 9-10)
Wong does not specifically disclose adjusting a difficulty of the tail question to be higher than that of the interview questions when the interviewee's level is above a predetermined level, and adjusting the difficulty of the tail question to be lower than that of the interview questions when the interviewee's level is below the predetermined level. However, Niemi teaches adjusting a difficulty of the tail question to be higher than that of the interview questions when the interviewee's level is above a predetermined level, and adjusting the difficulty of the tail question to be lower than that of the interview questions when the interviewee's level is below the predetermined level. (By disclosing, An embodiment of the present invention is a computer adaptive assessment tool that adjusts the difficulty of test items according to the estimated abilities of individual test taker(s). The tool uses a customized system including an Item Response Theory (IRT) engine in order to generate more difficult items for higher-performing test takers and easier items for lower-performing test takers. The level of difficulty of the first question is assessed, and a second question is provided having a higher level of difficulty than the first question. If a test taker incorrectly answers a first question, then the second question can have the same or a lower level of difficulty as the first question. For example, FIGS. 6 and 13 show a chart of questions provided and their difficulty levels. As a question was correctly answered, or passed, the level of difficulty of the subsequent question increased. When a question was incorrectly answered, or failed, the level of difficulty of the subsequent question decreased. By providing questions responsive to a difficulty level, an assessment of the test taker's skills and abilities can be determined. FIGS. 6 and 13 show, for example, that an ability level is correlated with the difficulty level at which the test taker begins to incorrectly answer questions. FIG. 12 shows an embodiment of a user interface indicating the test results of a test taker. For example, the test results can indicate a proficiency level, a raw and/or scaled score, and the amount of time spent on the test. The results can also provide information such as time spent and raw/scaled score information for question types and/or categories of questions answered, so that a test taker can identify knowledge gaps. See at least paragraphs [0008] & [0056]-[0082].) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Wong with Niemi's technique of adjusting the difficulty of the tail question to be higher than that of the interview questions when the interviewee's level is above a predetermined level, and lower than that of the interview questions when the interviewee's level is below the predetermined level, in order to identify skill assessment gaps of the interviewee. Furthermore, merely combining well-known elements in the prior art with predictable results does not render an invention patentably distinct over such combination.
Wong in view of Niemi does not specifically disclose identifying, from the acquired answer sentences, the situation, task, action, and result items defined in a situation, task, action, and result (STAR) technique. However, NPL-1 teaches the STAR technique. (See at least pages 7-9.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Wong with the known STAR technique as disclosed by NPL-1 to evaluate experience stories told by a candidate. Furthermore, merely combining well-known elements in the prior art with predictable results does not render an invention patentably distinct over such combination.
Wong teaches identifying a missing item from among the items and generating a tail question for the interviewee about the missing items. (See at least col. 1-2, col. 9-10.)
Wong does not specifically disclose, after providing the tail question to the interviewee, additionally providing an example answer for the tail question to the interviewee when the interviewee's nervousness exceeds a predetermined level, the interviewee is silent for a predetermined period of time or more, or the interviewee stops speaking while answering. However, NPL-1 teaches providing an example answer for the tail question to the interviewee. (See at least pages 7-9.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Wong with the technique of providing an example answer for the tail question to the interviewee as disclosed by NPL-1 to help the candidate understand how to answer the question. Furthermore, merely combining well-known elements in the prior art with predictable results does not render an invention patentably distinct over such combination. (Examiner’s Note: Per MPEP 2111.04, a clause introduced by “when,” “if,” or similar language is given patentable weight only in situations where the condition is satisfied. In an anticipation or obviousness analysis, if the prior art satisfies the condition, the conditional step must be present; otherwise, the clause may not limit the art. Accordingly, if an embodiment or the prior art does not meet the recited “nervousness/silence/stops speaking” condition, the additional step of “providing an example answer” is not required for that comparison.)
Regarding Claim 2:
Wong in view of Niemi and NPL-1 teaches the limitations shown above. Wong further teaches that the generation unit is configured to understand a content of data submitted by the interviewee, which includes a cover letter, a resume, credentials, and a portfolio, and to generate an additional question about the content of the submitted data. (By disclosing, Procedure 310 collects candidate information and generates a candidate profile including authentication information; procedure 320 performs authentication; procedure 330 conducts the adaptive interview; and procedure 340 generates feedback to the candidate. In procedure 310, the adaptive recruitment system collects information from potential candidates in an initial phase. At step 301, candidate information 301 is collected. Candidate information takes various formats and comes to the system on different channels. In one embodiment, the candidate information is collected through a third-party channel, such as Talent Scout (TS). The candidate information includes one or more subjects, including resume, recommendations, and reference list. The candidate information is updated throughout the process, including initial interview evaluation, audio/video assessment results, and authentication information. In one embodiment, the authentication is a candidate's voice sample extracted from an interview. At step 311, the candidate information packet is compiled. See at least col. 4.) Wong in view of Niemi does not specifically disclose that the content of data submitted by the interviewee includes a cover letter. However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the known technique of a cover letter, by which an applicant shows genuine interest in and motivation for the job.
Furthermore, merely combining well-known elements in the prior art with predictable results does not render an invention patentably distinct over such combination. Wong in view of Niemi does not specifically disclose generating an additional question about the content of the submitted data. However, NPL-1 teaches generating an additional question about the content of the submitted data. (See at least pages 7-9.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Wong with the technique of generating an additional question about the content of the submitted data as disclosed by NPL-1 to learn more detail about the interviewee's experience. Furthermore, merely combining well-known elements in the prior art with predictable results does not render an invention patentably distinct over such combination.
Regarding Claim 3:
Wong in view of Niemi teaches the limitations shown above. Wong further teaches wherein the generation unit is further configured to select and combine some of all questions and answers provided to and by the interviewee, as well as an immediately previous question and answer, to generate the tail question. (By disclosing, At step 521, the adaptive recruitment computer system determines if the skill evaluation for the current skill is concluded. In one embodiment, the combined assessment/evaluation is done using an RNN model based on a BOK question with answers knowledge base. The skill evaluation is done when the combination evaluation indicates a skill level requirement satisfied or a skill level requirement not satisfied and a predefined threshold of trying is reached. If step 521 determines yes, the adaptive recruitment computer system, at step 522, selects a new question designed for a new skill. The new skill is selected based on the BOK skill knowledge base 501. If step 521 determines no, the adaptive recruitment computer system moves to step 531 and determines if the current skill level needs to be adjusted. If step 531 determines yes, a new question with a higher or a lower level is selected. If step 531 determines no, a new question in the same skill set and the same skill level is selected. See at least col. 1-2, col. 9-10.)
Regarding Claim 4:
Wong in view of Niemi teaches the limitations shown above. Wong further teaches wherein the plurality of items include key competencies required for the job. (By disclosing, the one or more processor 601 is configured to categorize a job requirement into a set of job skills using a recurrent neural network (RNN) based on a body of knowledge (BOK) skill knowledge base; generate a question bank comprising a list of questions based on the set of job skills and a BOK question knowledge base; select adaptively a subset of questions from the generated question bank for an online interview with a candidate based on a predefined rule, wherein each question selected is based on evaluations of one or more answers from the candidate to corresponding prior questions using a RNN model; and generate a feedback report for the candidate, wherein the feedback report using the RNN model based on evaluations of answers from the candidate and a BOK candidate knowledge base, wherein the BOK candidate knowledge base receives updates from the computer system. See at least fig. 6 as well as associated text.)
Regarding Claim 8:
Wong in view of Niemi teaches the limitations shown above. Wong further teaches wherein the generation unit is further configured to extract an item that is unidentified from the content of the data submitted by the interviewee and to generate an additional question for evaluating the item. (See at least col. 1-2, col. 9-10.)
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Wong et al. (US 11,238,411 B1) in view of Niemi (US 2016/0293036 A1), further in view of NPL-1 ("How to Use ChatGPT to Prepare for Behavioral Interviews," January 1, 2023), and further in view of Wong1 (US 10,937,446 B1).
Regarding Claim 9:
Wong in view of Niemi teaches the limitations shown above. Wong does not specifically disclose that, when the interviewee's nervousness exceeds a certain level, or the interviewee is silent for a certain period of time or more or stops speaking while answering, the configuration unit is further configured to generate an icebreaking question for changing an atmosphere or to output a nervousness relaxation message. However, Wong1 teaches assessment of emotion and generation of an emotion response message. (By disclosing, Speech emotion response is generated by the computer system. In a traditional face-to-face or online video interview, the emotion response is observed by the interviewer and may be used to generate a more comprehensive result. However, face-to-face or video emotion recognition by the interviewer is highly subjective and varies with the interviewer. Speech emotion response generated by the computer system combines the assessment result of the answer's contents with the sentiment classifier generated by the computer system indicating the concurrent emotional reactions. The combined results provide a more comprehensive result than the current online test result and a more objective interpretation of the emotional reaction. See at least columns 2-6 & 9-10.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Wong in view of Niemi with the technique of generating an icebreaking question for changing an atmosphere or outputting a nervousness relaxation message when the interviewee's nervousness exceeds a certain level, or the interviewee is silent for a certain period of time or more or stops speaking while answering, because human sentiment identification and response are disclosed by Wong1, in order to create a more human-like atmosphere for the interview.
Furthermore, merely combining well-known elements in the prior art with predictable results does not render an invention patentably distinct over such combination. (Examiner’s Note: Per MPEP 2111.04, a clause introduced by “when,” “if,” or similar language is given patentable weight only in situations where the condition is satisfied. In an anticipation or obviousness analysis, if the prior art satisfies the condition, the conditional step must be present; otherwise, the clause may not limit the art. Accordingly, if an embodiment or the prior art does not meet the recited “nervousness/silence/stops speaking” condition, the additional step of “generate an icebreaking question for changing an atmosphere or output a nervousness relaxation message” is not required for that comparison.)
Response to Arguments
As to the remarks, Applicant asserted the following:
Amended claim 1 as a whole integrates the recited judicial exception of the abstract idea of generating interview questions into the practical application of using AI to improve the depth and quality of job interviews by analyzing an interviewee's answers during an interview to identify STAR items, generating tail questions to inquire about the missing items, adjusting the difficulty of the tail questions in response to analysis of the interviewee's answers, and detecting specific triggers such as the interviewee's nervousness level, silence, or speaking cessation in order to provide the interviewee with assistance in the form of an example answer; withdrawal of the rejection of claims 1-9 under 35 USC § 101 is respectfully requested.
While the Wong patent discloses a system that uses AI to generate questions based on interviewee answers and provides feedback to the interviewee, there is no suggestion of analyzing the answer to identify missing STAR items and generating tail questions to inquire about the missing items, adjusting the tail questions based on an analysis of the interviewee's "evaluation level," or generating additional tail questions in response to detection of specific triggers such as the interviewee's nervousness or silence, much less of providing a sample answer as claimed.
Claim 9 recites the generation of an "icebreaking question" or a "relaxation message" to put a nervous interviewee at ease. The Applicant can find no teaching in the Wong1 patent, which is alleged to teach the limitations of claim 9, of any message resembling the claimed "icebreaking question" (also known as "small talk") or "relaxation message." Wong1 merely teaches evaluation of an interviewee's emotional response to questions, rather than generation of questions or messages intended to change the emotional state of the interviewee.
Examiner respectfully traverses Applicant's remarks for the following reasons:
With respect to (a), Examiner would like to point out to Applicant that the claim reads like an automated interview method (a business/human activity) implemented with generic computer units.
Cases such as Electric Power Group v. Alstom (830 F.3d 1350), SAP v. InvestPic (898 F.3d 1161), and Alice Corp. v. CLS Bank (573 U.S. 208) hold that collecting, analyzing, and presenting information in a particular field is abstract unless tied to a specific technical improvement.
The recited “units” (collection unit, evaluation unit, selection unit, generation unit) perform functions that are essentially:
1. Collecting information (answer sentences from an interviewee).
2. Evaluating information (suitability for a job based on evaluation items).
3. Selecting additional evaluation items.
4. Generating follow-up questions ("tail questions") based on prior answers or items.
5. Adjusting difficulty based on the "interviewee's level."
6. Identifying STAR elements (Situation, Task, Action, Result) and missing items.
7. Providing example answers based on nervousness, silence, or stopping speaking.
These steps are likely to be characterized as organizing human activity (conducting and adapting job interviews, hiring decisions) and/or mental processes (evaluating answers, selecting items, deciding on question difficulty, recognizing missing STAR elements, deciding whether to give example answers), and possibly mathematical concepts (predetermined levels, thresholds) in adjusting difficulty. Under the 2019 PEG, these are judicial exceptions.
Because the “units” can be generic modules on a standard computer, the abstract idea is merely being implemented on generic computer components, without integration into a practical application that improves the computer itself or another technology.
With respect to (b)-(c), Examiner would like to point out to Applicant the following limitations. Claim 1 recites: “after providing the tail question to the interviewee, additionally provide an example answer for the tail question to the interviewee when the interviewee's nervousness exceeds a predetermined level, or the interviewee is silent for a predetermined period of time or more, or the interviewee stops speaking while answering.” Claim 9 recites: “when the interviewee's nervousness exceeds a certain level or the interviewee is silent for a certain period of time or more or stops speaking while answering, … generate an icebreaking question for changing an atmosphere or outputs a nervousness relaxation message.” These are conditional steps that carry only limited patentable weight. Per MPEP 2111.04, a clause introduced by “when,” “if,” or similar language is given patentable weight only in situations where the condition is satisfied. In an anticipation or obviousness analysis, if the prior art satisfies the condition, the conditional step must be present; otherwise, the clause may not limit the art. Accordingly, if an embodiment or the prior art does not meet the recited “nervousness/silence/stops speaking” condition, the additional steps of “providing an example answer” and “generat[ing] an icebreaking question for changing an atmosphere or [outputting] a nervousness relaxation message” are not required for that comparison.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NEHA PATEL whose telephone number is (571)270-1492. The examiner can normally be reached Monday-Friday, 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tariq Hafiz can be reached at (571) 272-5350. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NEHA PATEL/Supervisory Patent Examiner, Art Unit 3699