Prosecution Insights
Last updated: April 19, 2026
Application No. 18/635,225

METHOD AND SYSTEM FOR PERFORMING VIDEO-BASED AUTOMATIC IDENTITY VERIFICATION

Non-Final OA (§103, §112)
Filed: Apr 15, 2024
Examiner: SHERMAN, STEPHEN G
Art Unit: 2621
Tech Center: 2600 — Communications
Assignee: Hyperverge Technologies Private Limited
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82%, above average (1334 granted / 1626 resolved; +20.0% vs TC avg)
Interview Lift: +17.2% across resolved cases with an interview
Avg Prosecution: 2y 7m; 30 applications currently pending
Total Applications: 1656 across all art units

Statute-Specific Performance

§101: 2.9% (-37.1% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Based on career data from 1626 resolved cases.

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Drawings

The drawings are objected to under 37 CFR 1.83(a) because they fail to show the “sentence level embedding matching module [304]” and the “speech to text unit [124]” as described in the specification (pages 17 and 27). Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d).

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims.
Therefore, the “a set of facial landmarks on the first human object” and “one or more regions of interest (ROIs) based on the set of facial landmarks” of claim 2, for example, must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: Page 20 states “the processing unit [104]”; however, the processing unit is labeled as 102 in the Drawings. Page 20 states “face verification unit [102]”; however, the face verification unit is labeled as 120 in the Drawings. Page 23 states “prompt generation unit [102]”; however, the prompt generation unit is labeled as [108] in the Drawings. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C.
112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “prompt generation unit [108] configured to…” in claim 13 (No Detailed Disclosure.); “capturing unit [106] configured to…” in claim 13 (No Detailed Disclosure.); “deepfake detection unit [110] configured to…” in claim 13 (No Detailed Disclosure.); “lip reading unit [112] configured to…” in claim 13 (No Detailed Disclosure.); “face detection unit [116] configured to…” in claim 13 (No Detailed Disclosure.); and “face verification unit [120] configured to…” in claim 13 (No Detailed Disclosure.).

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 13-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim limitations “prompt generation unit [108] configured to…”, “capturing unit [106] configured to…”, “deepfake detection unit [110] configured to…”, “lip reading unit [112] configured to…”, “face detection unit [116] configured to…” and “face verification unit [120] configured to…” in claim 13 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Regarding the “capturing unit”, the disclosure is devoid of any structure that performs the function in the claim.
Regarding the “prompt generation unit”, “deepfake detection unit”, “lip reading unit”, “face detection unit” and “face verification unit”, no association between the structure and the function can be found in the specification. Specifically, these units are disclosed as part of a “processing unit” which also lacks structure: page 8 says “the user device may also comprise a "processor" or "processing unit", wherein…” and then describes what a processor can be, but never describes what a “processing unit” is structurally. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Claims 14-24 are rejected due to their dependency from claim 13.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

For examination purposes, the examiner will interpret that any structure that performs the claimed functions is the “units” as claimed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 6, 9-10, 12-13, 16, 19 and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Ortiz et al. (US 2021/0173916) in view of Benkreira et al. (US 10,452,897).

Regarding claim 1, Ortiz et al.
disclose a method [300] for performing video-based automatic identity verification of a first user, the method comprising:

generating, by a prompt generation unit [108], one or more prompts (Figure 4, 402 and paragraph [0236]);
capturing, via a capturing unit [106], a video, wherein the video comprises the first user speaking the one or more prompts wherein the face of the first user is a first human object (Figure 4, 403 and paragraph [0236]);
generating, by a deepfake detection unit [110], a deepfake detection score of the video based on one or more deepfake-techniques (Figure 3, validation, Figure 4, 404 and paragraph [0237], and Figure 5 and paragraph [0239], step A2);
detecting, by a lip reading unit [112], a correctness of a speech in the video based on the deepfake detection score, a visual cues match, and a transcription match, wherein the speech is associated with the first user speaking the one or more prompts, and wherein the visual cues match comprises one of a valid lip contour movement match and an invalid lip contour movement match (Figure 4, steps 405 and 406, Figure 5, steps B4-B6, and paragraphs [0036], [0237] and [0240]);
detecting, by a face detection unit [116], in one or more frames of the video, the first human object based on the correctness of the speech (Figure 4, steps 404-406 and 115, Figure 5, steps B7-B9, and paragraphs [0237] and [0240]);
generating, by a face verification unit [120], a first similarity score, based on the detection of the first human object (paragraphs [0033]-[0034], [0036] and [0192]), wherein the first similarity score is generated based on a similarity within a plurality of feature vectors of the first human object present in a plurality of frames of the video (paragraphs [0033]-[0034], [0036], [0187] and [0192]); and
automatically performing, by the face verification unit [120], the identity verification of the first user, based on the first similarity score (Figure 4, 115).

Ortiz et al.
fail to teach:

capturing, via the capturing unit [106], an image of an identification document comprising an image of a second user wherein the face of the second user is a second human object;
detecting, by the face detection unit [116], in one or more frames of the video, in the image of the identification document, the second human object;
generating, by the face verification unit [120], a second similarity score, based on the detection of the second human object, wherein the second similarity score is generated based on a similarity between the plurality of feature vectors of the first human object present in the plurality of frames of the video and a feature vector of the second human object present in the image of the identification document; and
automatically performing, by the face verification unit [120], the identity verification of the first user, based on the second similarity score.

Benkreira et al. disclose a method for performing video-based automatic identity verification of a first user, the method comprising:

capturing, via a capturing unit [106], an image of an identification document comprising an image of a second user wherein the face of the second user is a second human object (Figure 2, step 201, and column 12, lines 18-24);
detecting, by a face detection unit [116], in one or more frames of the video, in the image of the identification document, the second human object (Figure 2, steps 201-202, and column 12, lines 44-54);
generating, by a face verification unit [120], a second similarity score, based on the detection of the second human object (Figure 2, step 202 [facial match score], and column 12, lines 44-54), wherein the second similarity score is generated based on a similarity between the plurality of feature vectors of a first human object present in the plurality of frames of the video and a feature vector of the second human object present in the image of the identification document (Figure 2, step 202 and column 12, lines 44-54); and automatically
performing, by the face verification unit [120], the identity verification of the first user, based on the second similarity score (Figure 2, steps 206-207, column 13, lines 28-50, and column 14, lines 9-25).

Hence the prior art includes each element claimed, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of the actual combination of the elements in a single prior art reference. In combination, Ortiz et al. performs the same function as it does separately of performing video-based automatic identity verification of a first user using a video of the first user speaking prompts, and Benkreira et al. performs the same function as it does separately of performing video-based automatic identity verification of a first user using an image containing an identification document. Therefore, one of ordinary skill in the art before the effective filing date of the claimed invention could have combined the elements as claimed by known methods, and in combination, each element merely performs the same function as it does separately. The results of the combination would have been predictable and resulted in performing video-based automatic identity verification of a first user based on the first user speaking prompts and an identification document. Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 3, Ortiz et al. and Benkreira et al. disclose the method [300] as claimed in claim 1, wherein the valid lip contour movement match and the invalid lip contour movement match are based on the one or more prompts (Ortiz et al.: paragraphs [0152], [0237], [0289] and [0292]).

Regarding claim 6, Ortiz et al. and Benkreira et al.
disclose the method [300] as claimed in claim 1, wherein the valid lip contour movement match is generated in an event: one or more word sequence predictions in the list of most probable word sequence predictions match with one or more word sequences associated with the one or more prompts (Ortiz et al.: paragraph [0181]); a pre-defined threshold number of words of the one or more word sequence predictions match with a pre-defined threshold number of words in the one or more prompts (Ortiz et al.: paragraphs [0350] and [0431]); and the pre-defined threshold number of words of the one or more word sequence predictions are present in a same order as the pre-defined threshold number of words in the one or more prompts (Ortiz et al.: paragraphs [0430]-[0431]).

Regarding claim 9, Ortiz et al. and Benkreira et al. disclose the method [300] as claimed in claim 1, wherein prior to automatically performing, by the face verification unit [120], the identity verification of the first user, the method comprises checking, by a liveness checking unit [118], a liveness of at least one of the first human object in the plurality of frames of the video and the identification document comprising the second human object (Ortiz et al.: paragraph [0223]).

Regarding claim 10, Ortiz et al. and Benkreira et al. disclose the method [300] as claimed in claim 1, wherein the detecting, by the lip reading unit [112], the correctness of the speech is further based on: an event where the deepfake detection score of the video is above a deepfake threshold (Ortiz et al.: Figure 4, 404 and paragraph [0034]).

Regarding claim 12, Ortiz et al. and Benkreira et al. disclose the method [300] as claimed in claim 2, wherein the one or more mapping techniques comprises a fuzzy match technique, a phonetics match technique (paragraph [0019]) or a combination thereof.

Regarding claim 13, this claim is rejected under the same rationale as claim 1.
Regarding claim 16, this claim is rejected under the same rationale as claim 3. Regarding claim 19, this claim is rejected under the same rationale as claim 6. Regarding claim 22, this claim is rejected under the same rationale as claim 9. Regarding claim 23, this claim is rejected under the same rationale as claim 10.

Claims 11 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Ortiz et al. (US 2021/0173916) in view of Benkreira et al. (US 10,452,897) and further in view of Kato et al. (US 2003/0179933).

Regarding claim 11, Ortiz et al. and Benkreira et al. disclose the method [300] as claimed in claim 1. Ortiz et al. and Benkreira et al. fail to teach wherein the detecting, by the face detection unit [116], in one or more frames of the video, the first human object and in the image of the identification document, the second human object, is further based on a neural network-based rotation-invariant model implemented in the face detection unit [116]. However, using neural network-based rotation-invariant models for face detection was well known, as evidenced by Kato et al., which teaches a neural network-based rotation-invariant model implemented in a face detection unit (paragraph [0008]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the rotation-invariant teachings of Kato et al. in the method taught by the combination of Ortiz et al. and Benkreira et al. The motivation to combine would have been to estimate an angle with resistance to a change in the input image by virtue of the generalization ability of the neural net, so that a normalized image can be obtained stably (see paragraph [0008] of Kato et al.).

Regarding claim 24, this claim is rejected under the same rationale as claim 11.
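For orientation, the two-score flow that the §103 rejection maps across Ortiz (video branch) and Benkreira (ID-document branch) can be sketched as follows. This is an illustrative sketch only: the function names, the cosine metric, and the 0.8 threshold are assumptions of this summary, not taken from the application or the cited references.

```python
# Hypothetical sketch of the combined claim-1 flow: a first similarity
# score compares face vectors across video frames (Ortiz branch), and a
# second compares video faces to the ID-document face (Benkreira branch).

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def verify_identity(video_vectors, id_vector, threshold=0.8):
    """First score: consistency of the face across consecutive video
    frames. Second score: video face vs. the ID-document face. Identity
    is verified only if both scores clear the (illustrative) threshold."""
    first_score = min(
        cosine_similarity(video_vectors[i], video_vectors[i + 1])
        for i in range(len(video_vectors) - 1)
    )
    second_score = min(cosine_similarity(v, id_vector) for v in video_vectors)
    return first_score >= threshold and second_score >= threshold
```

Using min-pooling over frame pairs is one design choice among many; the claims recite only that the scores are "generated based on a similarity" between feature vectors, not how frame-level similarities are aggregated.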
Allowable Subject Matter

Claims 2, 4-5 and 7-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 14-15, 17-18 and 20-21 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter:

The primary reason for indicating allowable subject matter in claim 2 is the inclusion of the limitations reciting “wherein the visual cues match is performed based on: performing re-iteratively, a set of steps until an occurrence of an end-of-sentence token…wherein the VSR model unit [1122] is a neural network based unit comprising a conformer encoder [202] and the transformer decoder [204]; obtaining, by the conformer encoder [202], a set of lip features based on the performance of the VSR on the ROIs, wherein the set of lip features comprises a corresponding vector for one or more sets of frames corresponding to a phoneme; and predicting, by the transformer decoder [204] using a language model unit [206], a next most probable phoneme based on the set of lip features; generating, by the transformer decoder [204], one or more word sequence predictions based on the set of steps; predicting, by a beam search unit [208], a list of most probable word sequence predictions based on the one or more word sequence predictions, wherein the list comprises a pre-defined number of the most probable word sequence predictions; mapping, by the beam search unit [208], each word in the list of the most probable word sequence predictions to a corresponding nearest word of interest from a pre-defined list of probable words using one or more mapping techniques; and performing, by the lip
reading unit [112], the visual cues match based on the mapping” which, in combination with the other recited features, is not taught and/or suggested either singularly or in combination within the prior art. Claims 4-5 are objected to due to their dependency from claim 2.

The primary reason for indicating allowable subject matter in claim 7 is the inclusion of the limitations reciting “wherein the transcription match is performed based on: generating, by a sentence level embedding matching module, one or more first embeddings corresponding to a transcription of the speech…generating, by the sentence level embedding matching module, one or more second embeddings corresponding to the one or more prompts; calculating, by the sentence level embedding matching module, a similarity metric between the one or more first embeddings and the one or more second embeddings; and performing, by the sentence level embedding matching module, the transcription match based on the similarity metric” which, in combination with the other recited features, is not taught and/or suggested either singularly or in combination within the prior art.
The primary reason for indicating allowable subject matter in claim 8 is the inclusion of the limitations reciting “wherein the automatically performing, by the face verification unit [120], the identity verification of the first user, further comprises: performing, by the face verification unit [120], a first comparison of the first similarity score with a pre-defined first threshold; performing, by the face verification unit [120], a second comparison of the second similarity score with a pre-defined second threshold; generating, by the face verification unit [120], one of a successful identity verification prompt and an unsuccessful identity verification prompt based on the first comparison and the second comparison, wherein the successful identity verification prompt is generated in an event the first similarity score is higher than the pre-defined first threshold, and the second similarity score is higher than the pre-defined second threshold, and the unsuccessful identity verification prompt is generated in an event at least one of: the first similarity score is lower than the pre-defined first threshold, and the second similarity score is lower than the pre-defined second threshold; and automatically performing, by the face verification unit [120], the identity verification of the first user based on one of the successful identity verification prompt and the unsuccessful identity verification prompt” which, in combination with the other recited features, is not taught and/or suggested either singularly or in combination within the prior art.

Claim 14 is indicated as having allowable subject matter for the same reasons as claim 2. Claims 15 and 17-18 are objected to due to their dependency from claim 14. Claim 20 is indicated as having allowable subject matter for the same reasons as claim 7. Claim 21 is indicated as having allowable subject matter for the same reasons as claim 8.
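The threshold logic recited in claim 8 reduces to a two-comparison gate: the verification prompt is "successful" only when both similarity scores clear their respective pre-defined thresholds. A minimal sketch, with all names and values illustrative rather than taken from the claims:

```python
# Hypothetical rendering of claim 8's comparison logic: two similarity
# scores, two pre-defined thresholds, success only if both pass.

def identity_verification_prompt(first_score, second_score,
                                 first_threshold, second_threshold):
    """Return the prompt the face verification unit would generate."""
    if first_score > first_threshold and second_score > second_threshold:
        return "successful identity verification"
    return "unsuccessful identity verification"
```

Note the claim's asymmetry: success requires both scores to be higher than their thresholds, while failure is triggered by at least one score being lower; the sketch above follows the "both higher" branch as the success condition.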
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN G SHERMAN whose telephone number is (571) 272-2941. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AMR AWAD, can be reached at (571) 272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEPHEN G SHERMAN/
Primary Examiner, Art Unit 2621
6 February 2026

Prosecution Timeline

Apr 15, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603045: ELECTRONIC DEVICE FOR REDUCING OUTPUT VARIATION FACTORS OF PIXEL CIRCUITS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597219: HEAD MOUNTABLE DISPLAY (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592044: Systems and Methods for Providing Real-Time Composite Video from Multiple Source Devices Featuring Augmented Reality Elements (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591302: GENERATING AI-CURATED AR CONTENT BASED ON COLLECTED USER INTEREST LABELS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586407: IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR ADJUSTING IMAGE PARAMETERS (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+17.2% lift)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 1626 resolved cases by this examiner. Grant probability is derived from the career allow rate.
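The "With Interview" figure appears consistent with adding the interview lift to the career allow rate and capping at 100%. This formula is an assumption about how the dashboard computes it, not documented behavior; the quick check below just reproduces the displayed numbers:

```python
# Assumed reconstruction of the dashboard's "With Interview" figure:
# career allow rate + interview lift, capped at 100%.

career_allow_rate = 1334 / 1626   # ~82%, from "1334 granted / 1626 resolved"
interview_lift = 0.172            # the displayed +17.2% lift

with_interview = min(career_allow_rate + interview_lift, 1.0)
print(round(with_interview * 100))  # 99, matching the dashboard
```

If the lift were applied multiplicatively or computed on a different base, the result would differ; the additive capped form is simply the one that matches the displayed 82% / +17.2% / 99% triple.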
