Prosecution Insights
Last updated: April 19, 2026
Application No. 18/237,517

SYSTEMS AND METHODS FOR REAL-TIME USER VERIFICATION AND MODIFICATION OF A MULTIMEDIA STREAM

Final Rejection §103
Filed: Aug 24, 2023
Examiner: MAI, KEVIN S
Art Unit: 2499
Tech Center: 2400 — Computer Networks
Assignee: AnonyDoxx, LLC
OA Round: 2 (Final)

Grant Probability: 29% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 5y 3m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 29% (125 granted / 428 resolved; -28.8% vs TC average). This examiner grants only 29% of cases.
Interview Lift: +25.5% higher allowance among resolved cases with an interview than among those without.
Avg Prosecution: 5y 3m typical timeline; 39 applications currently pending.
Career History: 467 total applications across all art units.
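These headline numbers follow directly from the raw counts above, so they are easy to sanity-check. A minimal sketch of the arithmetic (Python; the counts are taken from this page, and the implied TC average is an inference from the reported delta, not a figure the page states):

```python
# Career allow rate from the raw counts reported above.
granted = 125
resolved = 428
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 29.2%, shown as 29%

# The page reports -28.8% vs the TC average, which implies a
# TC 2400 baseline of roughly allow_rate + 0.288 (an estimate).
implied_tc_average = allow_rate + 0.288
print(f"Implied TC average: {implied_tc_average:.1%}")  # ~58.0%
```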

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 428 resolved cases.
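One detail worth verifying in this table: every row backs out the same Tech Center baseline. A short sketch of that check (Python; the rates and deltas are copied from the table, the 40% baseline is the computed result, and reading the delta as examiner-rate-minus-average is an assumption):

```python
# Examiner rate and delta vs TC average, per statute (from the table above).
rows = {
    "§101": (0.165, -0.235),
    "§103": (0.525, +0.125),
    "§102": (0.074, -0.326),
    "§112": (0.218, -0.182),
}
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta  # delta = examiner rate minus TC average
    print(f"{statute}: examiner {rate:.1%}, implied TC average {tc_avg:.1%}")
# Every row backs out the same ~40.0% baseline, consistent with a single
# TC-wide estimate rather than per-statute averages.
```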

Office Action

§103
DETAILED ACTION

This Office Action has been issued in response to Applicant's Amendment filed February 6, 2026. Claims 1, 3, 11, and 20 have been amended. Claims 1-20 have been examined and are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed February 6, 2026 have been fully considered, but they are moot in view of the new grounds of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5-8, 10-13, 15-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US Pub. No. 2020/0143329 to Gamaliel (hereinafter “Gamaliel”) in view of US Pub. No. 2020/0285836 to Davis et al. (hereinafter “Davis”), and further in view of US Pub. No. 2021/0042399 to Foster et al. (hereinafter “Foster”).
As to Claim 1, Gamaliel discloses a system for real-time verification and modification of a multimedia stream, comprising: a processor; and a memory, including instructions stored thereon, which, when executed by the processor, cause the system to: access a real-time multimedia stream that includes a user, wherein the real-time multimedia stream includes a real-time image of the user and real-time speech of the user (Paragraph [0047] of Gamaliel discloses the processing techniques, correlating and combining may occur in real time, i.e., continuously and concurrently, with a live interview (whether audio or audio-video)); determine a facial [mapping] of the user based on the real-time image of the user (Paragraph [0079] of Gamaliel discloses the facial recognition unit is provided to verify identification of the candidate at the time of interview); authenticate an identity of the user based on the facial [mapping] of the user and a [multi-point biometric still frame scan with a liveness check] performed prior to the accessing of the real-time multimedia stream (Paragraph [0079] of Gamaliel discloses the facial recognition unit is provided to verify identification of the candidate at the time of interview. Paragraph [0042] of Gamaliel discloses identity of the candidate to be interviewed is typically verified before starting the interview); in response to the authenticated identity of the user, generate a mask of the user based on the facial [mapping] of the user (Paragraph [0045] of Gamaliel discloses all of the personal attributes of the video images of a video file may be masked or altered (or both) to produce a modified video file); modify the real-time multimedia stream to display the mask on the user (Paragraph [0045] of Gamaliel discloses all of the personal attributes of the video images of a video file may be masked or altered (or both) to produce a modified video file); determine feedback based on analyzing the real-time multimedia stream of the user, wherein the feedback includes a feedback score or a visual representation of a quality of an engagement of the user (Paragraph [0051] of Gamaliel discloses emotion recognition analysis, or emotion detection analysis, to produce one or more emotion data files, each containing one or more emotional indicators (or emotional labels) expressed or manifested by one or more users participating in the communication); and display the feedback in real time (Paragraph [0051] of Gamaliel discloses the emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview).

Gamaliel does not explicitly disclose mapping and a multi-point biometric still frame scan. However, Davis discloses this. Paragraph [0021] of Davis discloses the images may then be provided to a machine-learning, facial recognition classifier system for training. For example, the images may be provided to a deep neural network model that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Paragraph [0029] of Davis discloses 128-dimensional embedding vectors of the plurality of facial images. Paragraph [0023] of Davis discloses in order to capture a live image of the user. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the facial recognition system as disclosed by Gamaliel with the facial mapping as disclosed by Davis.
One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device ready for improvement to yield predictable results. Gamaliel and Davis are both directed toward facial recognition, and as such it would be obvious to use the techniques of one in the other. Paragraph [0021] of Davis discloses this allows methods such as clustering, similarity decisions and classification to be done more easily.

Gamaliel does not explicitly disclose a liveness check. However, Foster discloses this. Paragraph [0051] of Foster discloses the artificial intelligence engine 270 (e.g., face analyzer 210) is trained to generate the image score (e.g., face score) based on a liveness analysis of the image (e.g., face image). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the facial recognition system as disclosed by Gamaliel with performing a liveness check as disclosed by Foster. One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device ready for improvement to yield predictable results. Gamaliel and Foster are both directed toward facial recognition, and as such it would be obvious to use the techniques of one in the other. Paragraph [0040] of Foster discloses generating the authentication score based on the provided image score (e.g., face score). Foster discloses liveness as a known aspect to consider when authenticating.

As to Claim 2, Gamaliel-Davis-Foster discloses the system of claim 1, wherein the instructions, when executed by the processor, further cause the system to generate a modified real-time speech of the user based on the real-time speech of the user, wherein the modified real-time speech of the user is different from the real-time speech of the user (Paragraph [0045] of Gamaliel discloses all of the personal attributes in the audio data of an audio file may be masked or altered (or both) to produce a modified audio file).

As to Claim 3, Gamaliel-Davis-Foster discloses the system of claim 2, wherein the modified real-time speech includes a modified emotion and/or demeanor (Paragraph [0053] of Gamaliel discloses the audio data may be altered to protect gender identity, as well as minimizing regional or ethnic accents and voice inflections. Paragraph [0053] of Gamaliel further discloses voice inflections and tone expressed by the candidate, without allowing the interviewer to directly witness such behavior and attributes by the candidate).

As to Claim 5, Gamaliel-Davis-Foster discloses the system of claim 1, wherein the instructions, when executed by the processor, further cause the system to: determine that the feedback score is below a threshold value; and provide a visual warning that the feedback is below the threshold value (Paragraph [0051] of Gamaliel discloses emotion recognition analysis, or emotion detection analysis, to produce one or more emotion data files, each containing one or more emotional indicators (or emotional labels) expressed or manifested by one or more users participating in the communication. The emotions identified by the emotional indicators include, for example without limitation: anger, disgust, fear, neutral, sadness, happy, curious, interested, confident, surprised, confused, defensive, concerned, engaged, enthusiastic, combinations thereof, and any others desired to be analyzed and identified.
The emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview. Identifying any particular emotional state necessarily compares a value to a threshold).

As to Claim 6, Gamaliel-Davis-Foster discloses the system of claim 1, wherein the real-time multimedia stream of a user includes biometric data, wherein determining feedback includes: providing biometric data as an input to a machine learning network (Paragraph [0054] of Gamaliel discloses acquiring, analyzing and/or recording biometric information of a candidate during all or a portion of audio or audio-video interviews. Paragraph [0021] of Davis discloses the images may then be provided to a machine-learning, facial recognition classifier system for training); and predicting the feedback by the machine learning network (Paragraph [0054] of Gamaliel discloses biometric information and techniques relating to same may be useful as part of the methods for unbiased recruitment, for example, to verify identity of candidates being interviewed, or to provide data for emotional or personality analysis and reporting. Paragraph [0021] of Davis discloses the images may then be provided to a machine-learning, facial recognition classifier system for training). Examiner recites the same rationale to combine used for claim 1.

As to Claim 7, Gamaliel-Davis-Foster discloses the system of claim 1, wherein the instructions, when executed by the processor, further cause the system to: display a review screen, wherein the review screen includes a multimedia review; and replay the multimedia stream (Paragraph [0075] of Gamaliel discloses a reporting component which collects and records statistics and other information concerning the interview and participating users and provides a summary report including facial emotion statistics, voice emotion statistics, LSM/rapport score, hyperlink to transcripts, and hyperlink to audio or audio-visual recording).

As to Claim 8, Gamaliel-Davis-Foster discloses the system of claim 1, wherein authenticating the identity of the user is further based on a unique ID stored in a blockchain (Paragraph [0008] of Davis discloses (i) retrieve a block in a blockchain corresponding to the username and (ii) extract at least one of a machine learning classifier model and a plurality of facial images from the retrieved block; and (d) the processor is further configured to (i) determine, with at least one of the machine learning classifier model and the plurality of facial images, whether the live image matches the plurality of facial images and (ii) selectively provide access to the device based on the determination). Examiner recites the same rationale to combine used for claim 1.

As to Claim 10, Gamaliel-Davis-Foster discloses the system of claim 1, wherein the instructions, when executed by the processor, further cause the system to generate a final report after completion of the real-time multimedia stream, wherein the final report includes a scrollable timeline and feedback correlated to a time of the timeline (Paragraph [0075] of Gamaliel discloses a reporting component which collects and records statistics and other information concerning the interview and participating users and provides a summary report including facial emotion statistics, voice emotion statistics, LSM/rapport score, hyperlink to transcripts, and hyperlink to audio or audio-visual recording. Audio and video recordings inherently have a timeline.
Paragraph [0051] of Gamaliel discloses the emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview).

As to Claim 11, Gamaliel discloses a computer-implemented method for real-time verification and modification of a multimedia stream, the method comprising: accessing a real-time multimedia stream that includes a user, the real-time multimedia stream including a real-time image of the user and a real-time speech of the user (Paragraph [0047] of Gamaliel discloses the processing techniques, correlating and combining may occur in real time, i.e., continuously and concurrently, with a live interview (whether audio or audio-video)); determining a facial [mapping] of the user based on the real-time image of the user (Paragraph [0079] of Gamaliel discloses the facial recognition unit is provided to verify identification of the candidate at the time of interview); authenticating an identity of the user based on the facial [mapping] of the user and a [multi-point biometric still frame scan with a liveness check] performed prior to the accessing of the real-time multimedia stream (Paragraph [0079] of Gamaliel discloses the facial recognition unit is provided to verify identification of the candidate at the time of interview); generating a mask of the user based on the facial [mapping] of the user in response to the authenticated identity of the user (Paragraph [0045] of Gamaliel discloses all of the personal attributes of the video images of a video file may be masked or altered (or both) to produce a modified video file); modifying the real-time multimedia stream to display the mask on the user (Paragraph [0045] of Gamaliel discloses all of the personal attributes of the video images of a video file may be masked or altered (or both) to produce a modified video file); determining feedback based on analyzing the real-time multimedia stream (Paragraph [0051] of Gamaliel discloses emotion recognition analysis, or emotion detection analysis, to produce one or more emotion data files, each containing one or more emotional indicators (or emotional labels) expressed or manifested by one or more users participating in the communication); and displaying the feedback in real time (Paragraph [0051] of Gamaliel discloses the emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview).

Gamaliel does not explicitly disclose mapping and a multi-point biometric still frame scan. However, Davis discloses this. Paragraph [0021] of Davis discloses the images may then be provided to a machine-learning, facial recognition classifier system for training. For example, the images may be provided to a deep neural network model that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Paragraph [0029] of Davis discloses 128-dimensional embedding vectors of the plurality of facial images. Paragraph [0023] of Davis discloses in order to capture a live image of the user. Examiner recites the same rationale to combine used for claim 1.

Gamaliel does not explicitly disclose a liveness check. However, Foster discloses this. Paragraph [0051] of Foster discloses the artificial intelligence engine 270 (e.g., face analyzer 210) is trained to generate the image score (e.g., face score) based on a liveness analysis of the image (e.g., face image).
Examiner recites the same rationale to combine used for claim 1.

As to Claim 12, Gamaliel-Davis-Foster discloses the computer-implemented method of claim 11, further comprising generating a modified real-time speech of the user based on the real-time speech of the user, wherein the modified real-time speech of the user is different from the real-time speech of the user (Paragraph [0045] of Gamaliel discloses all of the personal attributes in the audio data of an audio file may be masked or altered (or both) to produce a modified audio file).

As to Claim 13, Gamaliel-Davis-Foster discloses the computer-implemented method of claim 12, wherein the modified real-time speech includes a modified accent, emotion, demeanor, and/or inflection (Paragraph [0053] of Gamaliel discloses the audio data may be altered to protect gender identity, as well as minimizing regional or ethnic accents and voice inflections).

As to Claim 15, Gamaliel-Davis-Foster discloses the computer-implemented method of claim 11, wherein the feedback includes a feedback score, and wherein the method further comprises: determining that the feedback score is below a threshold value; and providing a warning that the feedback is below the threshold value (Paragraph [0051] of Gamaliel discloses emotion recognition analysis, or emotion detection analysis, to produce one or more emotion data files, each containing one or more emotional indicators (or emotional labels) expressed or manifested by one or more users participating in the communication. The emotions identified by the emotional indicators include, for example without limitation: anger, disgust, fear, neutral, sadness, happy, curious, interested, confident, surprised, confused, defensive, concerned, engaged, enthusiastic, combinations thereof, and any others desired to be analyzed and identified. The emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview. Identifying any particular emotional state necessarily compares a value to a threshold).

As to Claim 16, Gamaliel-Davis-Foster discloses the computer-implemented method of claim 11, wherein the real-time multimedia stream of a user includes biometric data, wherein determining feedback includes: providing biometric data as an input to a machine learning network (Paragraph [0054] of Gamaliel discloses acquiring, analyzing and/or recording biometric information of a candidate during all or a portion of audio or audio-video interviews. Paragraph [0021] of Davis discloses the images may then be provided to a machine-learning, facial recognition classifier system for training); and predicting the feedback by the machine learning network (Paragraph [0054] of Gamaliel discloses biometric information and techniques relating to same may be useful as part of the methods for unbiased recruitment, for example, to verify identity of candidates being interviewed, or to provide data for emotional or personality analysis and reporting. Paragraph [0021] of Davis discloses the images may then be provided to a machine-learning, facial recognition classifier system for training). Examiner recites the same rationale to combine used for claim 1.
As to Claim 17, Gamaliel-Davis-Foster discloses the computer-implemented method of claim 11, wherein authenticating the identity of the user is further based on a unique ID stored in a blockchain (Paragraph [0008] of Davis discloses (i) retrieve a block in a blockchain corresponding to the username and (ii) extract at least one of a machine learning classifier model and a plurality of facial images from the retrieved block; and (d) the processor is further configured to (i) determine, with at least one of the machine learning classifier model and the plurality of facial images, whether the live image matches the plurality of facial images and (ii) selectively provide access to the device based on the determination). Examiner recites the same rationale to combine used for claim 1.

As to Claim 19, Gamaliel-Davis-Foster discloses the computer-implemented method of claim 11, further comprising generating a final report after completion of the real-time multimedia stream, wherein the final report includes a scrollable timeline and feedback correlated to a time of the timeline (Paragraph [0075] of Gamaliel discloses a reporting component which collects and records statistics and other information concerning the interview and participating users and provides a summary report including facial emotion statistics, voice emotion statistics, LSM/rapport score, hyperlink to transcripts, and hyperlink to audio or audio-visual recording. Audio and video recordings inherently have a timeline. Paragraph [0051] of Gamaliel discloses the emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview).

As to Claim 20, Gamaliel discloses a non-transitory computer-readable medium storing instructions which, when executed by a processor, cause the processor to perform a computer-implemented method for real-time verification and modification of a multimedia stream, comprising: accessing a real-time multimedia stream that includes a user, the real-time multimedia stream including a real-time image of the user and a real-time speech of the user (Paragraph [0047] of Gamaliel discloses the processing techniques, correlating and combining may occur in real time, i.e., continuously and concurrently, with a live interview (whether audio or audio-video)); determining a facial [mapping] of the user based on the real-time image of the user (Paragraph [0079] of Gamaliel discloses the facial recognition unit is provided to verify identification of the candidate at the time of interview); authenticating an identity of the user based on the facial [mapping] of the user and [a multi-point biometric still frame scan with a liveness check] performed prior to the accessing of the real-time multimedia stream (Paragraph [0079] of Gamaliel discloses the facial recognition unit is provided to verify identification of the candidate at the time of interview); generating a mask of the user based on the facial [mapping] of the user in response to the authenticated identity of the user (Paragraph [0045] of Gamaliel discloses all of the personal attributes of the video images of a video file may be masked or altered (or both) to produce a modified video file); modifying the real-time multimedia stream to display the mask on the user (Paragraph [0045] of Gamaliel discloses all of the personal attributes of the video images of a video file may be masked or altered (or both) to produce a modified video file); determining feedback based on analyzing the
real-time multimedia stream (Paragraph [0051] of Gamaliel discloses emotion recognition analysis, or emotion detection analysis, to produce one or more emotion data files, each containing one or more emotional indicators (or emotional labels) expressed or manifested by one or more users participating in the communication); and displaying the feedback in real time (Paragraph [0051] of Gamaliel discloses the emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview).

Gamaliel does not explicitly disclose mapping and a multi-point biometric still frame scan. However, Davis discloses this. Paragraph [0021] of Davis discloses the images may then be provided to a machine-learning, facial recognition classifier system for training. For example, the images may be provided to a deep neural network model that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Paragraph [0029] of Davis discloses 128-dimensional embedding vectors of the plurality of facial images. Paragraph [0023] of Davis discloses in order to capture a live image of the user. Examiner recites the same rationale to combine used for claim 1.

Gamaliel does not explicitly disclose a liveness check. However, Foster discloses this. Paragraph [0051] of Foster discloses the artificial intelligence engine 270 (e.g., face analyzer 210) is trained to generate the image score (e.g., face score) based on a liveness analysis of the image (e.g., face image). Examiner recites the same rationale to combine used for claim 1.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gamaliel-Davis-Foster and further in view of US Pub. No. 2022/0245594 to Baid (hereinafter “Baid”).

As to Claim 4, Gamaliel-Davis-Foster discloses the system of claim 1, wherein the real-time image of the user is modified by a [pre-determined] avatar [selected by the user] (Paragraph [0053] of Gamaliel discloses the candidate's image on one or more video images is altered (e.g., blurred, modified, or even replaced such as with an emoji, avatar, etc.)). Gamaliel-Davis-Foster does not explicitly disclose a pre-determined avatar selected by the user. However, Baid discloses this. Paragraph [0071] of Baid discloses the interviewee avatar image may be manually set, e.g., based on a selection by the interviewee. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the anonymous interview system as disclosed by Gamaliel with a user-selected avatar as disclosed by Baid. One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device ready for improvement to yield predictable results. Gamaliel and Baid are both directed toward anonymous interview systems, and as such it would be obvious to use the techniques of one in the other. Paragraph [0053] of Gamaliel discloses the candidate's image on one or more video images is altered (e.g., blurred, modified, or even replaced such as with an emoji, avatar, etc.). Paragraph [0071] of Baid discloses randomly selecting or manually selecting as known alternative methods of avatar selection.
As to Claim 14, Gamaliel-Davis-Foster discloses the computer-implemented method of claim 11, wherein the real-time image of the user is modified by a [pre-determined] avatar [selected by the user] (Paragraph [0053] of Gamaliel discloses the candidate's image on one or more video images is altered (e.g., blurred, modified, or even replaced such as with an emoji, avatar, etc.)). Gamaliel-Davis-Foster does not explicitly disclose a pre-determined avatar selected by the user. However, Baid discloses this. Paragraph [0071] of Baid discloses the interviewee avatar image may be manually set, e.g., based on a selection by the interviewee. Examiner recites the same rationale to combine used for claim 4.

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gamaliel-Davis-Foster and further in view of US Pub. No. 2020/0242343 to Schwindt et al. (hereinafter “Schwindt”).

As to Claim 9, Gamaliel-Davis-Foster discloses the system of claim 1, wherein the visual representation of the quality of an engagement of the user includes a [circular graph and/or a bar chart] (Paragraph [0051] of Gamaliel discloses the emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview). Gamaliel-Davis-Foster does not explicitly disclose a circular graph and/or a bar chart. However, Schwindt discloses this. Paragraph [0041] of Schwindt discloses the facial expression 138 can be analyzed by facial recognition software 53 for determining a likely emotional state of the pilot, and that these can be captured and displayed as bar graphs. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the facial recognition system as disclosed by Gamaliel with using bar graphs as disclosed by Schwindt. One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device ready for improvement to yield predictable results. Gamaliel and Schwindt are both directed toward facial recognition systems, and as such it would be obvious to use the techniques of one in the other. Paragraph [0051] of Gamaliel discloses the emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, and Schwindt discloses bar graphs as other known graphic symbols for presenting emotion.

As to Claim 18, Gamaliel-Davis-Foster discloses the computer-implemented method of claim 11, wherein the feedback further includes a [circular graph and/or a bar chart] (Paragraph [0051] of Gamaliel discloses the emotional indicators may be provided as text, or as emoticons, emojis, or other graphic symbols, in one or more of the video images of the interview). Gamaliel-Davis-Foster does not explicitly disclose a circular graph and/or a bar chart. However, Schwindt discloses this. Paragraph [0041] of Schwindt discloses the facial expression 138 can be analyzed by facial recognition software 53 for determining a likely emotional state of the pilot, and that these can be captured and displayed as bar graphs. Examiner recites the same rationale to combine used for claim 9.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kevin S Mai, whose telephone number is (571) 270-5001. The examiner can normally be reached Monday to Friday, 9 AM to 5 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Philip Chea, can be reached at (571) 272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEVIN S MAI/
Primary Examiner, Art Unit 2499
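For readers wading through the dense claim mapping above, independent claims 1, 11, and 20 all recite essentially the same pipeline: access the stream, compute a facial mapping, authenticate against a prior biometric scan with a liveness check, mask the user, then analyze and display feedback. A hypothetical sketch of that pipeline as the claims recite it (Python; every name, value, and threshold below is invented for illustration and is taken from none of the cited references):

```python
"""Illustrative sketch of the pipeline recited in independent claim 1.

All names, values, and thresholds are hypothetical; none are taken from
the Gamaliel, Davis, or Foster references cited in the rejection.
"""
from dataclasses import dataclass


@dataclass
class Frame:
    pixels: bytes  # one real-time video frame


def facial_mapping(frame: Frame) -> list[float]:
    # A Davis-style mapping would embed the face into a vector space
    # (e.g., a 128-dimensional embedding); a constant stub here.
    return [0.0] * 128


def authenticate(mapping: list[float], enrollment: list[float]) -> bool:
    # Compare the live mapping to the prior multi-point biometric
    # still-frame scan; a liveness check (per Foster) would gate this step.
    dist = sum((a - b) ** 2 for a, b in zip(mapping, enrollment)) ** 0.5
    return dist < 0.6  # threshold is illustrative only


def process(frame: Frame, speech: bytes, enrollment: list[float]) -> dict:
    # Speech modification (claims 2-3, 12-13) is omitted for brevity.
    mapping = facial_mapping(frame)              # determine facial mapping
    if not authenticate(mapping, enrollment):    # authenticate identity
        raise PermissionError("identity not verified")
    mask = {"landmarks": mapping}                # generate mask from mapping
    feedback = {"engagement_score": 0.8}         # feedback from stream analysis
    return {"mask": mask, "feedback": feedback}  # displayed in real time
```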

Prosecution Timeline

Aug 24, 2023: Application Filed
Sep 06, 2025: Non-Final Rejection — §103
Feb 06, 2026: Response Filed
Feb 27, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12506731
Conference Data Sharing Method and Conference Data Sharing System Capable of Communicating with Remote Conference Members
2y 5m to grant • Granted Dec 23, 2025
Patent 12413610
ASSESSING SECURITY OF SERVICE PROVIDER COMPUTING SYSTEMS
2y 5m to grant • Granted Sep 09, 2025
Patent 12406064
PRE-BOOT CONTEXT-BASED SECURITY MITIGATION
2y 5m to grant • Granted Sep 02, 2025
Patent 12363200
PROVIDING EVENT STREAMS AND ANALYTICS FOR ACTIVITY ON WEB SITES
2y 5m to grant • Granted Jul 15, 2025
Patent 12204570
SYSTEM AND METHOD FOR PROVIDING MESSAGE CONTENT BASED ROUTING
2y 5m to grant • Granted Jan 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 29%
With Interview: 55% (+25.5%)
Median Time to Grant: 5y 3m
PTA Risk: Moderate

Based on 428 resolved cases by this examiner. Grant probability is derived from the career allow rate.
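As the note states, these projections derive from the career allow rate; under that assumption, the with-interview figure is simply the base rate plus the measured interview lift. A minimal sketch (Python; the cap at 100% is my assumption):

```python
# Projections as the note describes: base grant probability is the career
# allow rate; the with-interview figure adds the +25.5% interview lift.
base = 125 / 428                        # career allow rate -> ~29.2%
lift = 0.255                            # interview lift reported above

with_interview = min(base + lift, 1.0)  # capped at 100% (my assumption)
print(f"Grant probability: {base:.0%}")           # 29%
print(f"With interview:   {with_interview:.0%}")  # 55%
```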
