Prosecution Insights
Last updated: April 19, 2026
Application No. 18/899,569

PROVIDING ENHANCED CONTENT WITH IDENTIFIED COMPLEX CONTENT SEGMENTS

Final Rejection §103
Filed: Sep 27, 2024
Examiner: DANG, HUNG Q
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Adeia Guides Inc.
OA Round: 2 (Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 68% (1257 granted / 1841 resolved), above average (+10.3% vs TC avg)
Interview Lift: +18.3% across resolved cases with interview (strong)
Avg Prosecution: 3y 1m (typical timeline); 95 applications currently pending
Career History: 1936 total applications across all art units
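These headline figures can be reproduced from the raw counts in the card above. A minimal sanity-check sketch (the rounding convention is an assumption):

```python
# Reproduce the examiner's headline stats from the counts shown above.
granted = 1257
resolved = 1841

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 68.3%, displayed as 68%

# The report pairs the 68% baseline with 87% "with interview", which is
# consistent with adding the +18.3 point interview lift to the base rate.
with_interview = allow_rate + 18.3
print(f"With interview: {round(with_interview)}%")  # 87%
```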

Statute-Specific Performance

§101: 4.2% (-35.8% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 1841 resolved cases
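Since each rate is paired with its delta from the Tech Center average, the baseline itself can be recovered by simple subtraction. A quick sketch (assuming the deltas are plain percentage-point differences):

```python
# Examiner's per-statute rate and reported delta vs the Tech Center
# average, in percentage points, as listed above.
stats = {
    "§101": (4.2, -35.8),
    "§103": (54.1, +14.1),
    "§102": (23.6, -16.4),
    "§112": (11.6, -28.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # implied Tech Center baseline
    print(f"{statute}: examiner {rate}%, implied TC avg {tc_avg}%")
```

Notably, every delta implies the same 40.0% Tech Center baseline.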

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/10/2025 have been considered but are moot in view of a new ground of rejection. Further, on page 8, Applicant argues that, "… However, nowhere does Goyal teach or contemplate adjusting a threshold complexity score based on the inputs of other users, as required by Applicant's amended independent claims …" In response, Examiner respectfully submits that, without acquiescing to any of Applicant's characterization of the cited prior art, the claim recites "adjusting recorded complexity score," which is the complexity score recorded in the user profile, not adjusting the "complexity threshold" established based on the recorded complexity score. Nevertheless, the amendment necessitated a new ground of rejection as detailed below.

Claim Objections

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 4 and 14 are objected to because they fail to specify a further limitation of the subject matter claimed.
Claims 4 and 14 recite limitations that do not further limit the subject matter of corresponding independent claims 1 and 11. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-8, 11-14, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Goyal et al. (US 2018/0288481 A1 – hereinafter Goyal), Candelore (US 2016/0050467 A1 – hereinafter Candelore), and Abou Mahmoud et al. (US 2017/0064244 A1 – hereinafter Abou Mahmoud).

Regarding claim 1, Goyal discloses a method, comprising: displaying a content item on a device, the content item comprising a plurality of scenes ([0057]; [0060]; [0066]-[0067]; [0072]; [0078] – displaying a media content, the media content comprising a plurality of scenes, i.e. at least a scene corresponds to a lesson so that the user while in the current lesson can "skip to the next lesson" in [0059]); receiving input indicating a first scene of the plurality of scenes is complex ([0057]; [0060]; [0066]-[0067]; [0072] – receiving input, which is derived from the user's actions and/or note-taking, etc., indicating a first scene is complex, for example, when the user slows down playback speed, replays the lesson, or pauses playback and/or takes notes); in response to receiving the input indicating the first scene of the plurality of scenes is complex: enabling the user to better understand or expand upon the subject matter ([0058] – pausing presentation or replaying some portion of the media); recording a comprehension level corresponding to the first scene in a user profile ([0067]; [0073] – recording a corresponding comprehension level in the form of a mapping of the comprehension level to the complexity of the subject matter); and establishing, based on the recorded comprehension level, a complexity threshold to identify segments of the content item that may be complex ([0067]-[0068]; [0073] – based on the recorded comprehension level as mapped to the complexity of the subject matter of the first scene, establishing a rule comprising a complexity threshold to identify scenes that may be complex, i.e. using the complexity of the subject matter of the first scene as the complexity threshold, determining scenes of the same or similar level of complexity; if the user indicates the first scene is complex, second scenes having the same or similar level of complexity are also determined to be complex, as further described at least in [0078]); determining, based on the recorded comprehension level, that a second scene of the plurality of scenes is at least as complex as the first scene, wherein the second scene is displayed after the first scene ([0067]-[0068]; [0073]; [0078] – determining a second scene, e.g. a second lesson or a subsequent portion, is at least as complex as an earlier lesson or portion having the same or a similar level of complexity); and, based at least in part on determining the second scene is at least as complex as the first scene, enabling the user to better understand or expand upon the subject matter ([0067]; [0078] – based on the second scene or portion having the same or a similar level of complexity as the earlier scene or portion, enabling the user to better understand or expand upon the subject matter).

However, Goyal does not disclose: 1) "enabling the user better understand or expand upon the subject matter" as "generating a text description of each of the first scene and the second scene, wherein the text description is different than closed captions for the corresponding scene; and displaying the text description of the first or the second scene simultaneously with the first or second scene, respectively"; 2) the "comprehension level" as a "complexity score" as recited; 3) adjusting the recorded complexity score based on inputs received from other users; and 4) the determining step being based on the adjusted recorded complexity score.

Candelore discloses 1): enabling a user to better understand or expand upon a subject matter by generating a text description of a scene, wherein the text description is different than closed captions for the scene ([0044]-[0046] – generating a text description of the scene as shown in Fig. 4 along with, and thus different from, the closed captioning data); and displaying the text description of the scene simultaneously with the scene (Fig. 4; [0044]-[0046] – displaying the text description simultaneously with the scene, in view of Goyal disclosing the scene).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Candelore into each of the first scene and the second scene in the method taught by Goyal to provide better explanation data to help viewers understand the subject matter of the corresponding scene.

Goyal and Candelore do not disclose 2), 3), and 4) above. Abou Mahmoud discloses 2) the "comprehension level" as a "complexity score" (Fig. 2; [0041] – recording the user comprehension level as a target rate, which corresponds to a complexity score); 3) adjusting a recorded complexity score based on inputs received from other users ([0054] – receiving inputs from other users in a group identified by a group identifier, where each input contributes to a corresponding target rate, then aggregating, e.g. calculating an average target rate, the target rates for a particular recording); and 4) a determining step based on the adjusted recorded complexity score ([0041]; [0054] – setting a target rate for a segment for playback based on one or more of the target rates set by group identifier, thus determining whether a subsequent segment is complex to a user using the aggregated target rate). One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Abou Mahmoud into the method taught by Goyal and Candelore to facilitate management of user profiles by aggregating the user profiles of users who share group characteristics.

Regarding claim 2, see the teachings of Goyal, Candelore, and Abou Mahmoud as discussed in claim 1 above, in which Candelore in view of Goyal also discloses that the text description of the second scene comprises a synopsis of the second scene ([0044]-[0046]; Fig. 4 – the text describes a summary of the scene, which, in view of Goyal, is the second scene). The motivation for incorporating the teachings of Candelore into the method has been discussed in claim 1 above.

Regarding claim 3, see the teachings of Goyal, Candelore, and Abou Mahmoud as discussed in claim 1 above, in which Candelore in view of Goyal also discloses that the text description of the second scene comprises an identification of persons involved in the second scene and a description of events of the second scene ([0044]-[0046]; Fig. 4 – the text mentions the names of the persons involved in the scene, which, in view of Goyal, is the second scene).

The scope of claim 4 is accommodated by the scope of claim 1. Thus, claim 4 is rejected for the same reason as discussed in claim 1.

Regarding claim 7, Goyal in view of Candelore and Abou Mahmoud also discloses the method of claim 1, further comprising: identifying a first complexity score for the first scene ([0057]; [0060]; [0066]-[0067]; [0072] – identifying a complexity level for the first scene or portion, in view of Abou Mahmoud disclosing the complexity level as a complexity score); and identifying a second complexity score for the second scene, wherein determining the second scene is at least as complex as the first scene comprises determining the second complexity score is equal to or greater than the first complexity score ([0067]; [0078] – determining a second scene, e.g. a second lesson or a subsequent portion, is at least as complex as an earlier lesson or portion having the same or a similar level of complexity, in view of Abou Mahmoud disclosing the complexity level as a complexity score).
Regarding claim 8, Goyal in view of Candelore and Abou Mahmoud also discloses the method of claim 1, further comprising: based at least in part on receiving the input indicating the first scene is complex, automatically determining an issue with the first scene ([0057]; [0060]; [0066]-[0067]; [0072] – receiving input, which is derived from the user's actions, facial expressions, gestures, and/or note-taking, etc., indicating a first scene is complex, for example, when the user slows down playback speed, replays the lesson, or pauses playback and/or takes notes, and determining an issue as the subject matter being difficult to understand), wherein determining the second scene is at least as complex as the first scene comprises automatically determining the issue exists with the second scene ([0066]-[0068] – the second scene is as complex when determining the issue exists, e.g. having similar subject matter).

Claim 11 is rejected for the same reason as discussed in claim 1 above, in view of Goyal also disclosing a system (Fig. 5; [0069]-[0073]) comprising: control circuitry configured to display a content item on a device, the content item comprising a plurality of scenes ([0069] – control circuitry to display a media content comprising a plurality of scenes, as discussed in claim 1 above); and input/output circuitry configured to perform the recited steps (Fig. 5; [0069]-[0073] – input/output circuitry configured to perform the recited steps as discussed in claim 1 above).

Claim 12 is rejected for the same reason as discussed in claim 2 above. Claim 13 is rejected for the same reason as discussed in claim 3 above. Claim 14 is rejected for the same reason as discussed in claim 4 above. Claim 17 is rejected for the same reason as discussed in claim 7 above. Claim 18 is rejected for the same reason as discussed in claim 8 above.

Claims 5, 9-10, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Goyal, Candelore, and Abou Mahmoud as applied to claims 1-4, 7-8, 11-14, and 17-18 above, and further in view of Bostick et al. (US 2018/0270283 A1 – hereinafter Bostick).

Regarding claim 5, see the teachings of Goyal, Candelore, and Abou Mahmoud as discussed in claim 1 above. Goyal also discloses receiving input indicating a reason why the first scene is complex ([0057]; [0060]; [0066]-[0067]; [0072] – receiving input, which is derived from the user's actions and/or note-taking, etc., indicating a first scene is complex, for example, when the user slows down playback speed, replays the lesson, or pauses playback and/or takes notes), wherein determining the second scene is at least as complex as the first scene is based at least in part on the received input indicating the reason the first scene is complex ([0060]-[0061]). However, Goyal, Candelore, and Abou Mahmoud do not disclose displaying a prompt on the device requesting information on why the first scene is complex. Bostick discloses displaying a prompt on a device requesting information on why a first scene is complex (Figs. 5A-5B). One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Bostick into the method taught by Goyal, Candelore, and Abou Mahmoud to enhance the user interface of the method by confirming with the user regarding the complexity of the scene, thus avoiding errors in determining the user's capability of understanding.

Regarding claim 9, see the teachings of Goyal, Candelore, and Abou Mahmoud as discussed in claim 8 above. However, Goyal, Candelore, and Abou Mahmoud do not disclose that the issue comprises at least one of dialogue complexity, timeline difficulty, display issues, or audio issues.
Bostick discloses an issue associated with the complexity of a scene comprising at least one of dialogue complexity, timeline difficulty, display issues, or audio issues ([0046] – at least dialogue complexity). One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Bostick into the method taught by Goyal, Candelore, and Abou Mahmoud to identify the reason why the scene is complex in order to provide a correct way to enable the user to better understand the subject matter.

Regarding claim 10, see the teachings of Goyal, Candelore, and Abou Mahmoud as discussed in claim 1 above. Goyal also discloses receiving input indicating the third scene is one of complex or not complex ([0057]; [0060]; [0066]-[0067]; [0072] – receiving input, which is derived from the user's actions and/or note-taking, etc., indicating any scene is complex, for example, when the user slows down playback speed, replays the lesson, or pauses playback and/or takes notes during playback of a third scene); and, based at least in part on the received input, identifying the third scene as one of complex or not complex ([0057]; [0060]; [0066]-[0067]; [0072] – determining the third scene as complex when the user slows down playback speed, replays the lesson, or pauses playback and/or takes notes). However, Goyal, Candelore, and Abou Mahmoud do not disclose displaying a prompt on the device asking whether a third scene of the plurality of scenes is complex. Bostick discloses a prompt on a device asking whether a scene of a plurality of scenes is complex (Figs. 5A-5B); receiving input indicating the scene is one of complex or not complex (Figs. 5A-5B – selecting to take an action or to do nothing); and, based at least in part on the received input, identifying the scene as one of complex or not complex (Figs. 5A-5B – determining the scene is complex when the user selects to replay, to turn on subtitles, to slow down playback speed, etc., and determining the scene is not complex when the user selects to do nothing). One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Bostick into the method taught by Goyal, Candelore, and Abou Mahmoud to enhance the user interface of the method by confirming with the user regarding the complexity of the scene, thus avoiding errors in determining the user's capability of understanding.

Claim 15 is rejected for the same reason as discussed in claim 5 above. Claim 19 is rejected for the same reason as discussed in claim 9 above. Claim 20 is rejected for the same reason as discussed in claim 10 above.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Goyal, Candelore, and Abou Mahmoud as applied to claims 1-4, 7-8, 11-14, and 17-18 above, and further in view of Sullivan et al. (US 9,852,215 B1 – hereinafter Sullivan).

Regarding claim 6, see the teachings of Goyal, Candelore, and Abou Mahmoud as discussed in claim 1 above. Goyal also discloses identifying commentary indicating the first scene is complex, wherein determining the second scene is at least as complex as the first scene comprises identifying commentary indicating the second scene is complex ([0057]; [0059]). However, Goyal, Candelore, and Abou Mahmoud do not disclose that the commentary is posted on a social network. Sullivan discloses identifying commentary posted on a social network indicating a first portion of media is complex, wherein determining a second portion is at least as complex as the first portion comprises identifying commentary posted on the social network indicating the second portion is complex (column 14, lines 32-53 – the complexity of any portion of the media can be given directly by users' comments on a social network).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Sullivan into determining the complexity of each of the scenes in the method taught by Goyal, Candelore, and Abou Mahmoud to enhance the source of data for determining the complexity levels of the scenes, making the method more robust.

Claim 16 is rejected for the same reason as discussed in claim 6 above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG Q DANG, whose telephone number is (571) 270-1116. The examiner can normally be reached IFT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Q Tran, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUNG Q DANG/
Primary Examiner, Art Unit 2484

Prosecution Timeline

Sep 27, 2024: Application Filed
Aug 27, 2025: Non-Final Rejection — §103
Dec 10, 2025: Response Filed
Feb 11, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594460: MANAGING BLOBS FOR TRACKING OF SPORTS PROJECTILES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12588818: DETECTION OF A MOVABLE OBJECT WHEN 3D SCANNING A RIGID OBJECT
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592258: METHOD AND APPARATUS FOR INTERACTIVE VIDEO EDITING PLATFORM TO CREATE OVERLAY VIDEOS TO ENHANCE ENTERTAINMENT VIDEO GAMES WITH EDUCATIONAL CONTENT
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587693: ARTIFICIALLY INTELLIGENT AD-BREAK PREDICTION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12574649: ENCODING AND DECODING METHOD, ELECTRONIC DEVICE, COMMUNICATION SYSTEM, AND STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 87% (+18.3%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 1841 resolved cases by this examiner. Grant probability derived from career allow rate.
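The footnote implies a simple model: the projected grant probability is the examiner's career allow rate, and the interview figure adds the examiner-specific lift. A minimal sketch of that interpretation (the function name and structure are hypothetical, not the product's actual model):

```python
def project_grant_probability(granted: int, resolved: int,
                              interview_lift_pts: float,
                              with_interview: bool = False) -> float:
    """Estimate grant probability (%) from an examiner's career record.

    Hypothetical reconstruction of the dashboard's stated method:
    base probability = career allow rate; conducting an interview adds
    the examiner-specific lift in percentage points.
    """
    base = granted / resolved * 100
    return base + interview_lift_pts if with_interview else base

# Figures from this report: 1257 granted / 1841 resolved, +18.3 pt lift.
print(round(project_grant_probability(1257, 1841, 18.3)))        # 68
print(round(project_grant_probability(1257, 1841, 18.3, True)))  # 87
```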
