Prosecution Insights
Last updated: April 19, 2026
Application No. 19/037,688

SYSTEMS AND METHODS FOR FAST, INTUITIVE, AND PERSONALIZED LANGUAGE LEARNING FROM VIDEO SUBTITLES

Non-Final OA: §102, §103, Double Patenting

Filed: Jan 27, 2025
Examiner: KHALID, OMER
Art Unit: 2422
Tech Center: 2400 (Computer Networks)
Assignee: Adeia Guides Inc.
OA Round: 1 (Non-Final)

Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 66% (324 granted / 488 resolved; +8.4% vs TC avg, above average)
Interview Lift: strong, +23.2% for resolved cases with an interview
Typical Timeline: 2y 10m average prosecution; 25 applications currently pending
Career History: 513 total applications across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 50.8% (+10.8% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Comparisons are against a Tech Center average estimate; based on career data from 488 resolved cases.
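As a sanity check, the headline figures above can be recomputed from the raw career counts. The Tech Center baseline below is back-derived from the reported delta rather than taken from any official source, so it is illustrative only.

```python
# Recomputing the dashboard figures from the career counts shown above.
# The Tech Center baseline is inferred from the reported "+8.4% vs TC avg"
# delta (an assumption, not an official USPTO statistic).

granted, resolved = 324, 488

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # ~66.4%, displayed as 66%

tc_avg = allow_rate - 0.084                     # back out the implied TC average
print(f"Implied TC average: {tc_avg:.1%}")      # ~58.0%
```

The same arithmetic applies to the statute-specific deltas, e.g. the §103 figure implies a TC baseline of roughly 50.8% - 10.8% = 40%.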

Office Action

§102 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 7-14, and 17-20 of U.S. Patent No. 12,238,385. Although the claims at issue are not identical, they are not patentably distinct from each other because the differences between the claimed subject matter and the patented claims amount to obvious variations of the same invention. The patented claims disclose a method that includes identifying a vocabulary level of a user in a first language and utilizing subtitle information to assist language learning.
The instant claims recite substantially the same core concept, but additionally recite operations such as displaying subtitle information, predicting vocabulary familiarity, and identifying unknown words. These differences would have been obvious to one of ordinary skill in the art at the time of invention because they represent routine and predictable processing steps inherently associated with subtitle-based language analysis systems. Displaying subtitles is a conventional operation in subtitle processing systems, since subtitles must be presented to the user in order to enable analysis or extraction of linguistic information. Predicting or determining vocabulary familiarity represents a predictable analytical refinement of identifying a user's vocabulary level. Identifying unknown words is merely a logical and expected result of comparing subtitle content to a user's vocabulary level. Therefore, the instant claims are not patentably distinct from the patented claims.

Claim comparison: Instant Application 19/037,688 vs. U.S. Patent No. 12,238,385

Instant Application 19/037,688 (rejected claims):

1. A method comprising: identifying, using control circuitry and based on a user profile associated with a user, a vocabulary level of the user in a first language; during playback, on a first device associated with the user, of a media asset, displaying subtitles in the first language for at least a portion of the media asset, the subtitles comprising a plurality of words in the first language; determining, based on the vocabulary level and the user profile, a subset of words of the plurality words that are unknown to the user; and generating for display, on a second device associated with the user, the subset of words and a respective explanation of each word of the subset of words.

5.
The method of claim 1, further comprising: receiving an input to bookmark a word of the subset of words; and based on receiving the input: identifying a start time and an end time for display of a subtitle in which the word is contained; and storing the start time, the end time, and an identifier of the media asset in association with the word.

6. The method of claim 5, further comprising: storing a segment of the media asset beginning at the start time and ending at the end time; and based on receiving a selection of the bookmark, replaying the stored segment.

2. The method of claim 1, wherein identifying a vocabulary level of the user in the first language further comprises: determining, based on the user profile, a language learning history related to the first language; and determining, based on the language learning history, a familiarity level of the user with the first language.

3. The method of claim 1, wherein generating for display, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words further comprises: transmitting, to the second device, the subset of words and the respective explanations.

4. The method of claim 1, wherein generating for display, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words further comprises: adding the subset of words and the respective explanation of each word to a list of words and corresponding explanations that have previously been generated for display during playback of the media asset; and automatically scrolling the list such that the subset of words is visible on a display of the second device.

7.
The method of claim 5, further comprising: based on receiving a selection of the bookmark: retrieving the identifier of the media asset; and based on the identifier of the media asset, the start time, and the end time, replaying a segment of the media asset beginning at the start time and ending at the end time.

8. The method of claim 7, wherein replaying the segment of the media asset corresponding to the display time comprises instructing the first device to perform a rewind operation to the start time.

9. The method of claim 7, wherein replaying the segment of the media asset corresponding to the display time comprises: instructing the first device to pause playback of the media asset; retrieving, at the second device, the segment of the media asset; and replaying at least audio data of the segment on the second device.

10. The method of claim 1, wherein generating for display, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words further comprises: retrieving a portion of the subtitles, the portion including a word of the subset of words; and transmitting, to the second device, the portion and the respective explanation of the word of the subset of words.

11. A system comprising: input/output circuitry; and control circuitry configured to: identify, based on a user profile associated with a user, a vocabulary level of the user in a first language; during playback of a media asset on a first device associated with the user, display subtitles in the first language for at least a portion of the media asset, the subtitles comprising a plurality of words in the first language; determine, based on the vocabulary level and the user profile, a subset of words of the plurality words that are unknown to the user; generate for display, on a second device associated with the user, the subset of words and a respective explanation of each word of the subset of words.

15.
The system of claim 11, wherein the control circuitry is further configured to: receive an input to bookmark a word of the subset of words; and based on receiving the input: identify a start time and an end time for display of a subtitle in which the word is contained; and store the start time, the end time, and an identifier of the media asset in association with the word.

16. The system of claim 15, wherein the control circuitry is further configured to: store a segment of the media asset beginning at the start time and ending at the end time; and based on receiving a selection of the bookmark, replay the stored segment.

12. The system of claim 11, wherein the control circuitry is configured, when identifying a vocabulary level of the user in the first language, to: determine, based on the user profile, a language learning history related to the first language; and determine, based on the language learning history, a familiarity level of the user with the first language.

13. The system of claim 11, wherein the control circuitry is configured, when generating for display, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words, to: transmit, to the second device, the subset of words and the respective explanations.

14. The system of claim 11, wherein the control circuitry is configured, when generating for display, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words, to: adding the subset of words and the respective explanation of each word to a list of words and corresponding explanations that have previously been generated for display during playback of the media asset; and automatically scrolling the list such that the subset of words is visible on a display of the second device.

17.
The system of claim 15, wherein the control circuitry is further configured to: based on receiving a selection of the bookmark: retrieve the identifier of the media asset; and based on the identifier of the media asset, the start time, and the end time, replay a segment of the media asset beginning at the start time and ending at the end time.

18. The system of claim 17, wherein replaying the segment of the media asset corresponding to the display time comprises instructing the first device to perform a rewind operation to the start time.

19. The system of claim 17, wherein the control circuitry is configured, when replaying the segment of the media asset corresponding to the display time, to: instruct the first device to pause playback of the media asset; retrieve, at the second device, the segment of the media asset; and replay at least audio data of the segment on the second device.

20. The system of claim 11, wherein the control circuitry is configured, when generating for display, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words, to: retrieve a portion of the subtitles, the portion including a word of the subset of words; and transmit, to the second device, the portion and the respective explanation of the word of the subset of words.

U.S. Patent No. 12,238,385 (reference claims):

1.
A method for fast, intuitive, personalized language learning from subtitles, the method comprising: identifying a vocabulary level of a user in a first language; during playback, on a first device, of a media asset, extracting subtitles in the first language for at least a portion of the media asset, the subtitles comprising a plurality of words in the first language; predicting, based on the vocabulary level, a subset of words of the plurality words that are new to the user; generating for display, on a second device associated with the user, the subset of words and a respective explanation of each word of the subset of words; receiving an input to bookmark a word of the subset of words; based on receiving the input: identifying a start time and an end time for display of a subtitle in which the word is contained; and storing the start time, the end time, and an identifier of the media asset in association with the word; storing a segment of the media asset beginning at the start time and ending at the end time; and based on receiving a selection of the bookmark, replaying the stored segment.

2. The method of claim 1, wherein identifying a vocabulary level of the user in the first language further comprises: retrieving, from a user profile associated with the user, a language learning history for the first language; and determining, based on the language learning history, a familiarity level of the user with the first language.

3.
The method of claim 1, wherein generating for display, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words further comprises: transmitting, to the second device, the subset of words and the respective explanations.

4. The method of claim 1, wherein generating for display, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words further comprises: adding the subset of words and the respective explanation of each word to a list of words and corresponding explanations that have previously been generated for display during playback of the media asset; and automatically scrolling the list such that the subset of words is visible on a display of the second device.

7. The method of claim 1, further comprising: in response to receiving a selection of the bookmark: retrieving the identifier of the media asset; and based on the identifier of the media asset, the start time, and the end time, replaying a segment of the media asset beginning at the start time and ending at the end time.

8. The method of claim 7, wherein replaying the segment of the media asset corresponding to the display time comprises instructing the first device to perform a rewind operation to the start time.

9. The method of claim 7, wherein replaying the segment of the media asset corresponding to the display time comprises: instructing the first device to pause playback of the media asset; retrieving, at the second device, the segment of the media asset; and replaying at least audio data of the segment on the second device.

10.
The method of claim 1, wherein generating for display, on a second device associated with the user, the subset of words and a respective explanation of each word of the subset of words further comprises: retrieving a sentence from the subtitles, the sentence including a word of the subset of words; and transmitting, to the second device, the sentence and the respective explanation of the word of the subset of words.

11. A system for fast, intuitive, personalized language learning from subtitles, the system comprising: input/output circuitry; and control circuitry configured to: identify a vocabulary level of a user in a first language; during playback, on a first device, of a media asset, extract subtitles in the first language for at least a portion of the media asset, the subtitles comprising a plurality of words in the first language; predict, based on the vocabulary level, a subset of words of the plurality words that are new to the user; generate for display, via the input/output circuitry, on a second device associated with the user, the subset of words and a respective explanation of each word of the subset of words; receive an input to bookmark a word of the subset of words; based on receiving the input: identify a start time and an end time for display of a subtitle in which the word is contained; and store the start time, the end time, and an identifier of the media asset in association with the word; store a segment of the media asset beginning at the start time and ending at the end time; and based on receiving a selection of the bookmark, replay the stored segment.

12.
The system of claim 11, wherein the control circuitry configured to identify a vocabulary level of the user in the first language is further configured to: retrieve, using the input/output circuitry, from a user profile associated with the user, a language learning history for the first language; and determine, based on the language learning history, a familiarity level of the user with the first language.

13. The system of claim 11, wherein the control circuitry configured to generate for display, via the input/output circuitry, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words is further configured to: transmit, to the second device, the subset of words and the respective explanations.

14. The system of claim 11, wherein the control circuitry configured to generate for display, via the input/output circuitry, on the second device associated with the user, the subset of words and a respective explanation of each word of the subset of words is further configured to: add the subset of words and the respective explanation of each word to a list of words and corresponding explanations that have previously been generated for display during playback of the media asset; and automatically scroll the list such that the subset of words is visible on a display of the second device.

17.
The system of claim 11, wherein the control circuitry is further configured to, in response to receiving a selection of the bookmark: retrieve the identifier of the media asset; and based on the identifier of the media asset, the start time, and the end time, replay a segment of the media asset beginning at the start time and ending at the end time.

18. The system of claim 17, wherein the control circuitry configured to replay the segment of the media asset corresponding to the display time is further configured to instruct the first device to perform a rewind operation to the start time.

19. The system of claim 17, wherein the control circuitry configured to replay the segment of the media asset corresponding to the display time is further configured to: instruct the first device to pause playback of the media asset; retrieve, at the second device, the segment of the media asset; and replay at least audio data of the segment on the second device.

20. The system of claim 11, wherein the control circuitry configured to generate for display, using the input/output circuitry, on a second device associated with the user, the subset of words and a respective explanation of each word of the subset of words is further configured to: retrieve a sentence from the subtitles, the sentence including a word of the subset of words; and transmit, to the second device, using the input/output circuitry, the sentence and the respective explanation of the word of the subset of words.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
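Stripped of claim formalities, both columns of the chart above describe the same pipeline: look up the user's vocabulary level, scan the subtitle words against it, and send the unfamiliar words plus explanations to a second device. A minimal sketch of that pipeline, using hypothetical data structures (neither claim set specifies an implementation; the sample definition mirrors the circumambulate example the examiner quotes from Krasadakis):

```python
# Illustrative sketch only; function and variable names are assumptions,
# not language from the claims or the cited references.

def find_unknown_words(subtitle_words, known_vocabulary, glossary):
    """Return (word, explanation) pairs for subtitle words that the
    user's vocabulary profile does not cover."""
    unknown = []
    for word in subtitle_words:
        w = word.lower().strip(".,!?")
        if w not in known_vocabulary:
            # Fall back to a placeholder when no explanation is available.
            unknown.append((w, glossary.get(w, "(no explanation available)")))
    return unknown

# Toy example
subtitles = "He chose to circumambulate the ancient temple".split()
known = {"he", "chose", "to", "the", "ancient", "temple"}
glossary = {"circumambulate": "to walk all the way around something"}
print(find_unknown_words(subtitles, known, glossary))
# -> [('circumambulate', 'to walk all the way around something')]
```

The patentability dispute is not over this core loop, which both claim sets share, but over the surrounding operations (display vs. extraction of subtitles, bookmarking, segment replay, second-device transmission).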
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

1. Claims 1, 2, 5, 11, 12, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2017/0344530 (Krasadakis).

2. Regarding Claim 1, Krasadakis discloses a method comprising: identifying (Fig. 2; [0039], "System 200 analyzes content, identifies words"), using control circuitry (Fig. 10; [0041], "Application-specific Integrated Circuits (ASICs)") and based on a user profile associated with a user ([0022], "user segments or profiles can be used to set up an initial language profile for a user that can be modified as additional activity is received… update the person's expected vocabulary." [0096], "profiles can be used to set up an initial language profile for a user"), a vocabulary level of the user in a first language (Fig. 5: 342; Fig. 6: 354; [0070], "The user language data store 226 can include raw language data gathered by the user activity monitor 230. The raw language data can include a user's reading data and reading patterns. The raw language data can include knowledge about an individual user, such as the languages used by the user and proficiency the user has in each language. The user language data store 226 can also include a phrasebook that lists recently looked up words. In one aspect, the phrasebook can be part of the Vocabulary Analytics Store (VAS) 227"… [0071], "The VAS 227 can comprise a subset of user-specific language data.
The subset can include all of the signals described previously as input to the machine classifier or other statistical modeling technique"… [0091], "determine the user's vocabulary and ultimately determine whether other words are known or unknown"… [0095], "The binary classification machine can be trained to analyze user data in the VAS related to vocabulary knowledge"); during playback ([0030], "a user consumes content (videos)") of a media asset on a first device associated with the user (Fig. 1: 102a-102n user device [e.g., 102a]; [0036], "user device may be a video player"); displaying ([0045], FIG. 3, a content display) subtitles in the first language for at least a portion of the media asset, the subtitles comprising a plurality of words in the first language ([0043], "video content, the annotation could take the form of overlaid text or rich text aligned with the exact moment of the appearance of the unknown word in the video… annotation can take the form of subtitles aligned with the exact moment of the appearance of the unknown word in the audio"); determining, based on the vocabulary level ([0091], "user's vocabulary…") and the user profile ([0095]-[0096], "The user can be associated with profiles. These profiles can be used to determine whether a word is likely known or unknown"), a subset of words of the plurality words that are unknown to the user (Figs. 2, 4-6, 9: 930, determine that a subset of words in the text are potentially unknown to the user; see [0100], "The vocabulary enrichment component 220 includes an unknown word predictor 222, an annotation engine 224, a user language data store 226, a public language data store 228, and a user activity monitor 230. These components can work together to identify words that are potentially unknown to the user and generate annotations within the content that can allow the user to better understand unknown words within the particular context"); and generating for display (Fig.
1: 102(a-n), see [0036], "smart watch, mobile device" [e.g., each device generates for display]), on a second device associated with the user (Fig. 1: user device 102b [e.g., second device]), the subset of words and a respective explanation of each word of the subset of words (Figs. 4-6, [0039], "vocabulary enrichment component 220 could be located on different computing devices." [0050], "The definition 342 of circumambulate 320 is 'to walk all the way around something.'").

3. Regarding Claim 2, Krasadakis discloses the method of claim 1, wherein identifying a vocabulary level of the user in the first language (Fig. 5: 342; Fig. 6: 354; [0070], "The raw language data can include knowledge about an individual user, such as the languages used by the user and proficiency the user has in each language." [0091], "determine the user's vocabulary and ultimately determine whether other words are known or unknown"…) further comprises: determining, based on the user profile ([0095], "These profiles can provide additional input to the classifier and be used to determine whether a word is likely known or unknown"), a language learning history related to the first language ([0022], "user segments or profiles can be used to set up an initial language profile for a user that can be modified as additional activity is received… update the person's expected vocabulary." [0057], "The reading data can include text from content read by the user, a classification of content read by the user, reading analytics, and other data related to the user's reading habits." [0070], "The user language data store 226 can include raw language data gathered by the user activity monitor 230. The raw language data can include a user's reading data and reading patterns.
The raw language data can include knowledge about an individual user, such as the languages used by the user and proficiency the user has in each language" [0095], "The binary classification machine can be trained to analyze user data in the VAS related to vocabulary knowledge"); and determining, based on the language learning history, a familiarity level of the user with the first language ([0053], "the system can learn the languages understood by an individual person, derive preferences from observing user events, and select the language of the annotation accordingly. a translation of an unknown word from a first language into a user's native language (or any language with which the user has a higher fluency level than the content language) is provided when all available synonyms in the first language are also likely unknown to the user. The user's known languages can be explicitly provided by the user or learned by observing the language of content the user consumes or composes").

4. Regarding Claim 5, Krasadakis in view of Wan discloses the method of claim 1, further comprising: receiving an input to bookmark a word of the subset of words ([0070], "The user language data store 226 can also include a phrasebook [e.g., bookmark] that lists recently looked up words." [0092], "The requested words can be entered in a phrasebook that can be part of or separate from the VAS." Fig. 9: block 950, "select an unknown word from the subset that has a confidence score that is higher"); and based on receiving the input: identifying ([0044], "components [e.g., 220, 222, 224, 226, 228, and 230] can work together to identify words that are potentially unknown to the user") a start time and an end time for display of a subtitle in which the word is contained ([0034], "session organizes all the words consumed by the user along timestamps [e.g.,
start and end times] and is a key input for VAS post processing and enrichment"); and storing the start time [e.g., the timestamp stores the start time], the end time [e.g., the timestamp stores the end time], and an identifier of the media asset in association with the word ([0039], "identifies words in the content that are likely unknown for a particular user at a point in time." [0044], "components [e.g., 220, 222, 224, 226, 228, and 230] can work together to identify words that are potentially unknown to the user").

5. Claim 11 is a system claim, rejected with respect to the same limitations rejected in the method claim 1.

6. Claim 12 is a system claim, rejected with respect to the same limitations rejected in the method claim 2.

7. Claim 15 is a system claim, rejected with respect to the same limitations rejected in the method claim 5.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 3, 10, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2017/0344530 (Krasadakis) in view of U.S. Patent Application Publication No. 2011/0164175 to Chung et al. (hereinafter Chung).

9. Regarding Claim 3, Krasadakis discloses the method of claim 1, wherein generating for display (Fig. 1: 102(a-n), see [0036], smart watch, mobile device), on the second device associated with the user (Fig.
1: user device 102b), the subset of words and a respective explanation of each word of the subset of words (Fig. 5: 342, [0050], "The definition 342 of circumambulate 320 is 'to walk all the way around something.'") further comprises: the subset of words (Fig. 9; [0100], "a subset of words in the text") and the respective explanations (Figs. 4-6, 9, [0039], "vocabulary enrichment component 220 could be located on different computing devices." [0049]-[0050], "The definition 342 of circumambulate 320 is 'to walk all the way around something.'").

Krasadakis does not explicitly disclose transmitting, to the second device. However, Chung teaches transmitting, to the second device (Figs. 1-2, [0037], "Communications path 424 may allow transfer of data such as audio, video, text, etc. between wireless communications device 406 and user equipment 402 and user computer equipment 404." [0038], "subtitles may be streamed from user equipment 402 (e.g., a set-top box) to wireless communications device 406 over communications path 424… subtitles may be obtained by wireless communications device 406 from a media provider (e.g., media content source 416 (FIG. 12)) via the Internet").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Krasadakis to allow transmission of subtitles and the explanations of a subset of words onto a second device as taught in Chung for the purposes of improving the user's language learning seamlessly in a connected environment.

10. Regarding Claim 10, Krasadakis discloses the method of claim 1, wherein generating for display (Fig. 1: 102(a-n), see [0036], smart watch, mobile device), on the second device associated with the user (Fig. 1: user device 102b), the subset of words and a respective explanation of each word of the subset of words further comprises (Fig.
5: 342, [0050], The definition 342 of circumambulate 320 is “to walk all the way around something.”): retrieving a portion of the subtitles, the portion including a word of the subset of words (Fig. 4: 330; [0042], retrieve content 210 for identification of unknown words [i.e., portion] upon receiving an indication that the user is accessing content. [0049], [0051], The system can retrieve real examples of the unknown word being used, from the content consumed across users, geographies, and context. The system can select the most relevant and/or most popular examples. The example can be a complete sentence using the particular word); and

Krasadakis does not explicitly disclose transmitting, to the second device. However, Chung teaches transmitting, to the second device (Figs. 1-2, [0037], Communications path 424 may allow transfer of data such as audio, video, text, etc. between wireless communications device 406 and user equipment 402 and user computer equipment 404. [0038], subtitles may be streamed from user equipment 402 (e.g., a set-top box) to wireless communications device 406 over communications path 424… subtitles may be obtained by wireless communications device 406 from a media provider (e.g., media content source 416 (FIG. 12)) via the Internet).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Krasadakis to allow transmission of subtitles and the explanations of a subset of words onto a second device as taught in Chung for the purposes of improving the user’s language learning seamlessly in a connected environment.

11. Claim 13 is a system claim, rejected with respect to the same limitation rejected in the method claim 3.

12. Claim 20 is a system claim, rejected with respect to the same limitation rejected in the method claim 10.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

13. Claim(s) 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application 2017/0344530, Krasadakis, in view of U.S. Patent Application 2022/0382448, Lu et al. (hereinafter Lu).

14. Regarding Claim 4, Krasadakis discloses The method of claim 1, wherein generating for display (Fig. 1: 102(a-n), see [0036], smart watch, mobile device), on the second device associated with the user (Fig. 1: user device 102b), the subset of words and a respective explanation of each word of the subset of words further comprises (Fig. 5: 342, [0050], The definition 342 of circumambulate 320 is “to walk all the way around something.”): adding the subset of words ([0075], words that are searched by the user are added to the phrasebook. For example, words looked up in a dictionary or submitted to a translation service could be included in the phrasebook) and the respective explanation of each word to a list of words (Fig. 5: 342, [0049]-[0050], The definition 342 of circumambulate 320 is “to walk all the way around something.”
[0065], a general list of unknown words for a specific user, such as those found in a phrasebook associated with the user, could be used to generate a list of synonyms for these words) and corresponding explanations that have previously been generated for display during playback ([0087], [0099], video recording) of the media asset ([0046], The user could have seen a word previously… [0097], recently looked up words by the user are automatically added to the phrasebook); and

Krasadakis does not explicitly disclose automatically scrolling the list such that the subset of words is visible on a display of the second device. However, Lu teaches automatically scrolling the list such that the subset of words is visible on a display of the second device (Fig. 7c; [0021], interface that currently displays the first language (which may also be referred to as the source text), and automatically scrolls down a screen).

It would have been obvious to modify Krasadakis to add automatic scrolling on a smartphone, tablet, or smartwatch device as taught in Lu for the purpose of minimizing user input while going through a list of words, thereby improving the user's reading experience in a digital space.

15. Claim 14 is a system claim, rejected with respect to the same limitation rejected in the method claim 4.

Allowable Subject Matter

Claims 6-9 and 16-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
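As an editorial illustration only (not part of the office action or the cited art), the automatic-scrolling limitation of claim 4 amounts to computing a scroll position so that a subset of words becomes visible in a fixed-size viewport. The function name, list model, and viewport parameter below are hypothetical:

```python
# Hypothetical sketch of the claim 4 auto-scroll behavior: given a word
# list, a subset of words to surface, and a viewport height in rows,
# return the first visible row so the subset fits on screen if possible.

def scroll_offset(word_list, subset, visible_rows):
    """Return the top visible row index that reveals the subset."""
    indices = [i for i, w in enumerate(word_list) if w in subset]
    if not indices:
        return 0  # nothing to reveal; keep the list at the top
    first, last = min(indices), max(indices)
    # Scroll just far enough that the last matched word is on screen,
    # without scrolling past the first matched word.
    offset = max(0, last - visible_rows + 1)
    return min(offset, first)

words = ["walk", "around", "circumambulate", "definition", "example"]
print(scroll_offset(words, {"circumambulate"}, 2))  # -> 1
```

A real client would translate the row offset into pixels and animate the scroll; the arithmetic above is the minimal core of "scrolling the list such that the subset of words is visible."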
The following is a statement of reasons for the indication of allowable subject matter:

Regarding Claim 6, Krasadakis discloses The method of claim 5, further comprising: storing a segment of the media asset beginning at the start time and ending at the end time ([0034], This session organizes all the words consumed by the user along with timestamps and is a key input for VAS post processing and enrichment); and in response to receiving a selection of the bookmark, replaying the stored segment. Krasadakis does not explicitly disclose storing a segment of the media asset; in response to receiving a selection of the bookmark, replaying the stored segment.

Claim 16 is a system claim, objected to with respect to the same limitation objected to in the method claim 6.

Regarding Claim 7, Krasadakis discloses The method of claim 5, further comprising: retrieving the identifier of the media asset ([0039], identifies words in the content that are likely unknown for a particular user at a point in time); and Krasadakis does not explicitly disclose in response to receiving a selection of the bookmark; based on the identifier of the media asset, the start time, and the end time, replaying a segment of the media asset beginning at the start time and ending at the end time.

Claim 17 is a system claim, objected to with respect to the same limitation objected to in the method claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OMER KHALID whose telephone number is (571) 270-5997. The examiner can normally be reached Monday-Friday, 9am-7pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Miller, can be reached at (571) 272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OMER KHALID/
Examiner, Art Unit 2422

/JOHN W MILLER/
Supervisory Patent Examiner, Art Unit 2422
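As an editorial illustration only (not from the application or the cited art), the bookmark limitation indicated as allowable in claims 6-7 (storing a word with a media asset identifier and start/end times, then replaying that segment on selection) can be modeled with a minimal record and a stub player. All names below are hypothetical:

```python
# Hypothetical sketch of the claims 6-7 bookmark-and-replay limitation:
# a bookmark ties a word to the media asset identifier and the start/end
# times of its subtitle segment; selecting it replays that segment.
from dataclasses import dataclass

@dataclass(frozen=True)
class Bookmark:
    word: str        # the unknown word the user bookmarked
    media_id: str    # identifier of the media asset
    start_ms: int    # segment start time, in milliseconds
    end_ms: int      # segment end time, in milliseconds

class SegmentPlayer:
    """Toy player stub; a real client would wrap a media framework."""
    def __init__(self):
        self.log = []

    def replay(self, bm: Bookmark):
        # Seek to the stored start time and play through the end time.
        self.log.append((bm.media_id, bm.start_ms, bm.end_ms))

player = SegmentPlayer()
bm = Bookmark("circumambulate", "asset-42", 61_000, 64_500)
player.replay(bm)     # selecting the bookmark replays the segment
print(player.log[0])  # -> ('asset-42', 61000, 64500)
```

The point of the sketch is the data shape: because the record keys the segment by (media identifier, start time, end time), replay works whether the segment itself was stored (claim 6) or is re-fetched from the asset on demand (claim 7).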

Prosecution Timeline

Jan 27, 2025
Application Filed
Mar 05, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598399
IMAGE SYNCHRONIZATION FOR MULTIPLE IMAGE SENSORS
2y 5m to grant Granted Apr 07, 2026
Patent 12576814
Method for Determining a Cleaning Information, Method for Training of a Neural Network Algorithm, Control Unit, Camera Sensor System, Vehicle, Computer Program and Storage Medium
2y 5m to grant Granted Mar 17, 2026
Patent 12563165
INSTALLATION INFORMATION ACQUISITION METHOD, CORRECTION METHOD, PROGRAM, AND INSTALLATION INFORMATION ACQUISITION SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12549690
VIDEO TRANSMISSION SYSTEM, VIDEO TRANSMISSION APPARATUS, VIDEO TRANSMISSION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Feb 10, 2026
Patent 12548344
VIDEO PROCESSING DEVICE AND VIDEO PROCESSING SYSTEM
2y 5m to grant Granted Feb 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
90%
With Interview (+23.2%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 488 resolved cases by this examiner. Grant probability derived from career allow rate.
