Prosecution Insights
Last updated: April 19, 2026
Application No. 18/641,043

USER FEEDBACK AND CONTENT ADAPTATION FOR INFORMATION ASSIMILATION

Status: Non-Final OA (§103)
Filed: Apr 19, 2024
Examiner: HUERTA, ALEXANDER Q
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Logitech Europe S.A.
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 68%, above average (351 granted / 520 resolved; +9.5% vs TC avg)
Interview Lift: +12.8% in resolved cases with interview (moderate)
Avg Prosecution: 2y 6m (typical timeline)
Career History: 536 total applications across all art units; 16 currently pending

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)

Allowance rates vs. Tech Center average estimates • Based on career data from 520 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 11, 2025 has been entered.

Response to Arguments

On pages 8-9 of the Applicant’s Response, Applicant argues that Aronsson does not cure the deficiencies of Gates and Bustamante. Specifically, Applicant argues that the combination does not disclose “wherein the first instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data.” The Examiner respectfully disagrees because Aronsson discloses a system for dynamic content modification based on user reactions. Aronsson further teaches a sensor module that detects various sensor measurements from a user that can be indicative of a user emotional state in reaction to viewed content, which can include a variety of physical parameters such as facial expressions and features, heart rate, blood pressure, pupil size, etc. A controller or processing device is configured to receive the various sensor inputs, and determines the state or emotional condition of the user in reaction to the viewed content. For example, a combination of high heart rate, high blood pressure, and small pupil size may be associated with an excited state, whereas the reverse may be associated with a relaxed or even bored state ([0042]).
Additionally, Aronsson discloses that “a user dislikes horror-type content, or if a fear reaction is determined as to a scene that perhaps is not intended to be scary (e.g., a young viewer becomes afraid during a scene that an adult actually may find humorous), the ‘fear’ reaction would be indicative of an unfavorable attitude toward the content” ([0067]). In other words, Aronsson teaches a system that is continuously monitoring the user’s emotional states and reactions to scenes and thus compares the viewer’s previously recorded emotional state to the viewer’s current emotional state, such as a viewer becoming afraid during a scene or becoming bored. Therefore, Aronsson discloses “wherein the first instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data.”

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gates, III et al. (US Pub. 2014/0040932) in view of Bustamante et al. (US Pub. 2023/0164387), Aronsson et al. (US Pub. 2013/0283162) and in further view of Selen et al. (US Pub. 2017/0352285), herein referenced as Gates, Bustamante, Aronsson, and Selen, respectively.

Regarding claim 1, Gates discloses “A method of presenting a video stream that is to be received by one or more content consumers ([0009], [0017], [0020], Figs. 1-5), comprising: receiving, by a first electronic device used by a content consumer, the video stream … ([0017]-[0022], Figs. 1-4, i.e., receiving audio-visual core portion 102, such as a television broadcast); collecting by a second camera device connected to the first electronic device, by use of a first program executed by a first processor of the first electronic device, content consumer data corresponding to the content consumer that is receiving the video stream ([0127], [0129], Figs. 1-5, i.e., sensor 250 monitors at least one characteristic of at least one viewer, such as facial features. Additionally, the sensing device can include a Microsoft Kinect consisting of a camera); receiving, by use of the first program, the content consumer data ([0036], [0127], [0129], Fig. 27, i.e., selection signals indicative of viewer preference including monitored viewer characteristics are provided to one or more dynamic customization service providers 420); sending, by use of the first program, first instructions based on the content consumer data to the first processor … and updating a visible characteristic of the video stream being provided to the content consumer based on the sent first instructions ...” ([0019], [0024], [0031]-[0032], [0130]-[0135], Figs. 1-5, 30-33, i.e., receiving dynamically customized audio-visual content based on monitored viewer characteristics and reactions).
Gates fails to explicitly disclose receiving, by a first electronic device used by a content consumer, the video stream generated from a first camera device connected to a second electronic device in communication with the first electronic device. Bustamante teaches the technique of receiving, by a first electronic device used by a content consumer, the video stream generated from a first camera device connected to a second electronic device in communication with the first electronic device ([0011], [0015], [0017], [0035], [0050], Figs. 2-3, i.e., cameras providing alternative or simultaneous perspectives on an event. For instance, watching a sports team score from multiple cameras in the stadium. Additionally, the system could be used in a multi-view application, such as a video conference). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of receiving, by a first electronic device used by a content consumer, the video stream generated from a first camera device connected to a second electronic device in communication with the first electronic device as taught by Bustamante, to improve the dynamically customized audio-visual system of Gates for the predictable result of capturing a plurality of different views and perspectives providing a richer experience ([0013]). The combination fails to explicitly disclose wherein the first instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data; and updating a visible characteristic of the video stream being provided to the content consumer based on the sent first instructions to recapture the loss of attention of the content consumer. 
Aronsson teaches the technique of providing wherein the first instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data; and updating a visible characteristic of the video stream being provided to the content consumer based on the sent first instructions to recapture the loss of attention of the content consumer ([0008]-[0010], [0051], [0054], [0064], [0067], [0084], Figs. 1, 7-8, i.e., sensor measurements may be indicative of a user's emotional state or condition as the user is watching the audiovisual content, such as excited versus bored and attentive vs distracted. The system provides dynamic content modifications based on user reactions to provide an enhanced viewing experience and thus recapture the loss of attention). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the first instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data; and updating a visible characteristic of the video stream being provided to the content consumer based on the sent first instructions to recapture the loss of attention of the content consumer as taught by Aronsson, to improve the dynamically customized audio-visual system of Gates for the predictable result of determining if a viewer is engaged or not to better tailor content to their preferences. The combination still fails to disclose generating a screen flash, providing a warning to the content consumer, and combinations thereof. Selen teaches the technique of generating a screen flash, providing a warning to the content consumer, and combinations thereof ([0012], [0017], [0035], [0052], Fig. 1, i.e., a warning issued by the Student Application 230 to a distracted student may be the illumination or flashing of a light, display, or LED of the student's mobile device 120, or displaying a message on the screen of the student's mobile device 120). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of generating a screen flash, providing a warning to the content consumer, and combinations thereof as taught by Selen, to improve the dynamically customized audio-visual system of Gates for the predictable result of gaining the viewer’s attention to alert them of content modification.

Regarding claim 2, Gates discloses “further comprising updating an audible characteristic of the video stream being provided to the content consumer, wherein updating the audible characteristic comprises … reducing an amount of content provided in the video stream, and combinations thereof.” ([0079]-[0080], [0089], [0135], i.e., deleting or omitting scenes and removing profanity). The combination fails to explicitly disclose “further comprising updating an audible characteristic of the video stream being provided to the content consumer, wherein updating the audible characteristic comprises increasing a volume of the video stream … and combinations thereof.” Aronsson teaches the technique of updating an audible characteristic of the video stream being provided to the content consumer, wherein updating the audible characteristic comprises increasing a volume of the video stream … and combinations thereof ([0079], [0085], [0089], Fig. 7, i.e., the music volume is increased).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of updating an audible characteristic of the video stream being provided to the content consumer, wherein updating the audible characteristic comprises increasing a volume of the video stream … and combinations thereof as taught by Aronsson, to improve the dynamically customized audio-visual system of Gates for the predictable result of determining if a viewer is engaged or not to better tailor content to their preferences. Therefore, the combination teaches “further comprising updating an audible characteristic of the video stream being provided to the content consumer, wherein updating the audible characteristic comprises increasing a volume of the video stream, adding subtitles to the video stream, reducing an amount of content provided in the video stream, and combinations thereof.”

Regarding claim 3, Gates discloses “wherein the content consumer data includes information relating to an emotional state of the content consumer.” ([0127], [0132], i.e., monitoring at least one characteristic of at least one viewer including facial features, smile, frown, scowl, displeasure, interest, lack of interest, laughter, tears, fear, anxiety, sadness, disgust, shock, distaste, etc.).

Regarding claim 4, Gates discloses “wherein the content consumer data comprises biodata of the content consumer, and scheduling information of the content consumer.” ([0127], [0131], [0144], i.e., facial, body and voice recognition and providing a time period available for viewing for at least one viewer at 3206 (e.g. receiving a manual input from a viewer, reading a viewer's calendar or scheduling software).
Regarding claim 5, the combination fails to explicitly disclose “wherein the emotional state of the content consumer comprises at least one of a level of confusion, a level of attentiveness, and a level of comprehension of the content consumer.” Aronsson teaches the technique of providing wherein the emotional state of the content consumer comprises at least one of a level of confusion, a level of attentiveness, and a level of comprehension of the content consumer ([0051], [0064], [0084], i.e., sensor measurements may be indicative of a user's emotional state or condition as the user is watching the audiovisual content, such as attentive vs distracted). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the emotional state of the content consumer comprises at least one of a level of confusion, a level of attentiveness, and a level of comprehension of the content consumer as taught by Aronsson, to improve the dynamically customized audio-visual system of Gates for the predictable result of determining if a viewer is engaged or not to better tailor content to their preferences.

Regarding claim 6, Gates discloses “providing, by use of the first processor, status signals to a second program in the second electronic device used by a content producer based on the first instructions; providing, by use of the second program in the second electronic device, second instructions to a second processor in the second electronic device based on the status signals; and providing, by use of the second processor, suggestions to the content producer on how to update an audible or visible characteristic of the video stream based on the second instructions.” ([0042]-[0044], Fig. 5, i.e., one or more core content providers 510 receive the one or more selection inputs 512 (or default inputs if specific inputs are not provided), and modify an audio-visual core portion using the one or more dynamic customization systems 512 to provide a dynamically customized audio-visual content 470 to a display 472 visible to one or more viewers 440, 442 in a viewing area 460).

Regarding claim 7, the combination fails to explicitly disclose “wherein the suggestions include a suggestion to increase a volume of audio of the video stream, slow down a cadence of audio of the video stream, provide recommendations to the content consumer, or combinations thereof.” Aronsson teaches the technique of providing wherein the suggestions include a suggestion to increase a volume of audio of the video stream, slow down a cadence of audio of the video stream, provide recommendations to the content consumer, or combinations thereof ([0079]-[0080], [0085], [0089], i.e., based on the modification instructions such liked scene can be extended, the music volume increased). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the suggestions include a suggestion to increase a volume of audio of the video stream, slow down a cadence of audio of the video stream, provide recommendations to the content consumer, or combinations thereof as taught by Aronsson, to improve the dynamically customized audio-visual system of Gates for the predictable result of determining if a viewer is engaged or not to better tailor content to their preferences.
Regarding claim 8, Gates discloses “collecting, by use of the first program, content consumer metadata corresponding to the content consumer prior to initiation of the video stream; sending, by use of the first program, second instructions based on the content consumer metadata to the first processor in the first electronic device; and updating, by use of the first processor, an audible or visible characteristic of the video stream based on the second instructions.” ([0129]-[0133], [0136]-[0137], Figs. 30-31, i.e., providing dynamically customized audio-visual content based on a viewing history).

Regarding claim 9, Gates discloses “wherein the content consumer metadata is based on content consumer data collected and processed during previous video streams.” ([0129]-[0133], [0136]-[0137], Figs. 30-31, i.e., providing dynamically customized audio-visual content based on a viewing history).

Regarding claim 10, Gates discloses “A video streaming system ([0009], [0017], [0020], Figs. 1-4) comprising: a first electronic device used by a content consumer that is configured to receive a video stream … ([0017]-[0022], Figs. 1-5, i.e., receiving audio-visual core portion 102, such as a television broadcast), the first electronic device comprising a first program that is executed by a first processor of a second camera device ([0022], [0025], [0127], Figs. 1-5), the first program configured to: receive content consumer data corresponding to the content consumer that is receiving the video stream captured by the second camera device and collected by the first ([0127], [0129], Figs. 1-5, i.e., sensor 250 monitors at least one characteristic of at least one viewer, such as facial features. Additionally, the sensing device can include a Microsoft Kinect consisting of a camera); and send first instructions based on the content consumer data to the first processor … and the first processor is configured to update a visible characteristic of the video stream based on the first instructions ...” ([0019], [0024], [0031]-[0032], [0130]-[0135], Figs. 1-5, 30-33, i.e., receiving dynamically customized audio-visual content based on monitored viewer characteristics and reactions). Gates fails to explicitly disclose a first electronic device used by a content consumer that is configured to receive a video stream generated from a first camera device connected to a second electronic device in communication with the first electronic device. Bustamante teaches the technique of providing a first electronic device used by a content consumer that is configured to receive a video stream generated from a first camera device connected to a second electronic device in communication with the first electronic device ([0011], [0015], [0017], [0035], [0050], Figs. 2-3, i.e., cameras providing alternative or simultaneous perspectives on an event. For instance, watching a sports team score from multiple cameras in the stadium. Additionally, the system could be used in a multi-view application, such as a video conference). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a first electronic device used by a content consumer that is configured to receive a video stream generated from a first camera device connected to a second electronic device in communication with the first electronic device as taught by Bustamante, to improve the dynamically customized audio-visual system of Gates for the predictable result of capturing a plurality of different views and perspectives providing a richer experience ([0013]).
The combination fails to explicitly disclose wherein the first instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data, and the first processor is configured to update a visible characteristic of the video stream based on the first instructions to recapture the loss of attention of the content consumer. Aronsson teaches the technique of providing wherein the first instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data, and the first processor is configured to update a visible characteristic of the video stream based on the first instructions to recapture the loss of attention of the content consumer ([0008]-[0010], [0051], [0054], [0064], [0067], [0084], Figs. 1, 7-8, i.e., sensor measurements may be indicative of a user's emotional state or condition as the user is watching the audiovisual content, such as excited versus bored and attentive vs distracted. The system provides dynamic content modifications based on user reactions to provide an enhanced viewing experience and thus recapture the loss of attention). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the first instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data, and the first processor is configured to update a visible characteristic of the video stream based on the first instructions to recapture the loss of attention of the content consumer as taught by Aronsson, to improve the dynamically customized audio-visual system of Gates for the predictable result of determining if a viewer is engaged or not to better tailor content to their preferences.
The combination still fails to disclose generating a screen flash, providing a warning to the content consumer, and combinations thereof. Selen teaches the technique of generating a screen flash, providing a warning to the content consumer, and combinations thereof ([0012], [0017], [0035], [0052], Fig. 1, i.e., a warning issued by the Student Application 230 to a distracted student may be the illumination or flashing of a light, display, or LED of the student's mobile device 120, or displaying a message on the screen of the student's mobile device 120). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of generating a screen flash, providing a warning to the content consumer, and combinations thereof as taught by Selen, to improve the dynamically customized audio-visual system of Gates for the predictable result of gaining the viewer’s attention to alert them of content modification.

Regarding claim 11, claim 11 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 2. Regarding claim 12, claim 12 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 3. Regarding claim 13, claim 13 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 4. Regarding claim 14, claim 14 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 5. Regarding claim 15, claim 15 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 6. Regarding claim 16, claim 16 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 8.

Regarding claim 17, Gates discloses “A first electronic device comprising: a program executed by a processor of a first camera device ([0022], [0025], [0127], Figs. 1-5), the program configured to: receive content consumer data corresponding to a content consumer captured by the first camera device and collected by the processor ([0127], [0129], Figs. 1-5, i.e., sensor 250 monitors at least one characteristic of at least one viewer, such as facial features. Additionally, the sensing device can include a Microsoft Kinect consisting of a camera); and send instructions based on the content consumer data to the processor… and the processor is configured to update a visible characteristic of a video stream received by the first electronic device … that is generated from a second camera device connected to a second electronic device in communication with the first electronic device based on the instructions.” ([0019], [0024], [0031]-[0032], [0130]-[0135], Figs. 1-5, 30-33, i.e., receiving dynamically customized audio-visual content based on monitored viewer characteristics and reactions). Gates fails to explicitly disclose a video stream received by the first electronic device that is generated from a second camera device connected to a second electronic device in communication with the first electronic device. Bustamante teaches the technique of providing a video stream received by the first electronic device that is generated from a second camera device connected to a second electronic device in communication with the first electronic device ([0011], [0015], [0017], [0035], [0050], Figs. 2-3, i.e., cameras providing alternative or simultaneous perspectives on an event. For instance, watching a sports team score from multiple cameras in the stadium. Additionally, the system could be used in a multi-view application, such as a video conference).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a video stream received by the first electronic device that is generated from a second camera device connected to a second electronic device in communication with the first electronic device as taught by Bustamante, to improve the dynamically customized audio-visual system of Gates for the predictable result of capturing a plurality of different views and perspectives providing a richer experience ([0013]). The combination fails to explicitly disclose wherein the instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data, and the processor is configured to update a visible characteristic of a video stream received by the first electronic device to recapture the loss of attention of the content consumer. Aronsson teaches the technique of providing wherein the instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data, and the processor is configured to update a visible characteristic of a video stream received by the first electronic device to recapture the loss of attention of the content consumer ([0008]-[0010], [0051], [0054], [0064], [0067], [0084], Figs. 1, 7-8, i.e., sensor measurements may be indicative of a user's emotional state or condition as the user is watching the audiovisual content, such as excited versus bored and attentive vs distracted. The system provides dynamic content modifications based on user reactions to provide an enhanced viewing experience and thus recapture the loss of attention). 
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the instructions are based on detecting a loss of attention of the content consumer by comparing the content consumer data with previously collected content consumer data, and the processor is configured to update a visible characteristic of a video stream received by the first electronic device to recapture the loss of attention of the content consumer as taught by Aronsson, to improve the dynamically customized audio-visual system of Gates for the predictable result of determining if a viewer is engaged or not to better tailor content to their preferences. The combination still fails to disclose generating a screen flash, providing a warning to the content consumer, and combinations thereof. Selen teaches the technique of generating a screen flash, providing a warning to the content consumer, and combinations thereof ([0012], [0017], [0035], [0052], Fig. 1, i.e., a warning issued by the Student Application 230 to a distracted student may be the illumination or flashing of a light, display, or LED of the student's mobile device 120, or displaying a message on the screen of the student's mobile device 120). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of generating a screen flash, providing a warning to the content consumer, and combinations thereof as taught by Selen, to improve the dynamically customized audio-visual system of Gates for the predictable result of gaining the viewer’s attention to alert them of content modification.

Regarding claim 18, Gates discloses “wherein the first camera device and the program are disposed within the first electronic device.” ([0022], [0127], Figs. 1-3, i.e., sensor 150 may be a separate component or may alternately be integrated into the same component with the display 130 or the processing component 110, wherein the sensor may be a Microsoft Kinect).

Regarding claim 19, Gates discloses “wherein the camera device is external to the electronic device and the program is disposed within the camera device.” ([0022], [0127], Figs. 1-3, i.e., sensor 150 may be a separate component or may alternately be integrated into the same component with the display 130 or the processing component 110, wherein the sensor may be a Microsoft Kinect).

Regarding claim 20, claim 20 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 4.

Citation of Pertinent Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Woods et al. (US Pub. 2015/0106829) discloses a system for compensating for disabilities when presenting a media asset.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Q Huerta whose telephone number is (571)270-3582. The examiner can normally be reached M-F 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALEXANDER Q HUERTA/Primary Examiner, Art Unit 2425 February 11, 2026

Prosecution Timeline

Apr 19, 2024
Application Filed
May 12, 2025
Non-Final Rejection — §103
Aug 13, 2025
Examiner Interview Summary
Aug 13, 2025
Applicant Interview (Telephonic)
Aug 14, 2025
Response Filed
Sep 09, 2025
Final Rejection — §103
Nov 25, 2025
Applicant Interview (Telephonic)
Nov 25, 2025
Examiner Interview Summary
Dec 11, 2025
Request for Continued Examination
Dec 19, 2025
Response after Non-Final Action
Feb 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604061
CLOSED CAPTIONING SUMMARIZATION
2y 5m to grant · Granted Apr 14, 2026
Patent 12593088
METHODS AND APPARATUS TO DETERMINE MEDIA EXPOSURE OF A PANELIST
2y 5m to grant · Granted Mar 31, 2026
Patent 12587717
FACILITATING VIDEO GENERATION
2y 5m to grant · Granted Mar 24, 2026
Patent 12587694
METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR VIDEO GENERATION
2y 5m to grant · Granted Mar 24, 2026
Patent 12563266
USER-BASED CONTENT FILTERING
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 80% (+12.8%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
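The headline percentages follow directly from the raw career counts reported in the Examiner Intelligence section. A minimal sketch of the arithmetic (Python; variable names are illustrative, and the tool's exact rounding and probability model are assumptions, not its actual implementation):

```python
# Reproduce the dashboard's derived metrics from the raw counts shown above.
granted = 351          # career grants
resolved = 520         # resolved cases (note: 520 resolved + 16 pending = 536 total)
interview_lift = 12.8  # percentage-point lift observed in cases with an interview

allow_rate = 100 * granted / resolved         # 67.5, displayed as 68%
with_interview = allow_rate + interview_lift  # 80.3, displayed as 80%

print(round(allow_rate), round(with_interview))  # 68 80
```

This also explains why "Career History: 536" and "520 resolved cases" coexist: the 16 currently pending applications are excluded from the allow-rate denominator.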
