Prosecution Insights
Last updated: April 19, 2026
Application No. 18/122,467

SYNCHRONIZING FILTER METADATA WITH A MULTIMEDIA PRESENTATION

Non-Final OA §103
Filed: Mar 16, 2023
Examiner: BARNES JR, CARL E
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Clearplay Inc.
OA Round: 5 (Non-Final)
Grant Probability: 32% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 4y 4m
Grant Probability with Interview: 57%

Examiner Intelligence

Career Allow Rate: 32% (65 granted / 202 resolved; -22.8% vs TC avg)
Interview Lift: +25.2% (resolved cases with vs. without interview)
Avg Prosecution: 4y 4m (32 applications currently pending)
Total Applications: 234 (across all art units)
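The headline figures above are internally consistent, which can be checked with simple arithmetic (assuming "interview lift" is the with-interview allowance rate minus the without-interview rate; the dashboard's exact methodology is not stated):

```python
# Cross-check the dashboard figures above (assumed definitions, not the
# vendor's documented methodology).
granted, resolved = 65, 202
career_allow_rate = granted / resolved   # 65/202 ~= 0.322, the reported 32%

with_interview = 0.57   # reported allowance rate with interview
lift = 0.252            # reported interview lift
implied_without = with_interview - lift  # ~= 0.318, close to the career rate

print(round(career_allow_rate, 3), round(implied_without, 3))  # prints: 0.322 0.318
```

The implied without-interview rate (~31.8%) sitting just below the blended 32% career rate is what one would expect if only a minority of the 202 resolved cases had interviews.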

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 62.6% (+22.6% vs TC avg)
§102: 9.0% (-31.0% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 202 resolved cases.
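One consistency check on the rows above: subtracting each "vs TC avg" delta from the examiner's rate recovers the same ~40.0% Tech Center baseline for every statute, which supports the "estimate" caveat (this assumes delta = examiner rate minus TC average):

```python
# Recover the implied Tech Center baseline from each statute row above.
examiner_rate = {"101": 14.3, "103": 62.6, "102": 9.0, "112": 8.7}
delta_vs_tc   = {"101": -25.7, "103": 22.6, "102": -31.0, "112": -31.3}

tc_baseline = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_baseline)   # every statute resolves to the same 40.0 baseline
```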

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application is being examined under the pre-AIA first to invent provisions. Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/03/2025 has been entered. Response to Amendment Claims 1, 3-12, 14, and 16-25 were previously pending and subject to a final action mailed 06/03/2025. In the response filed on 12/03/2025, claims 1 and 14 were amended. Therefore, claims 1, 3-12, 14, and 16-25 are currently pending and subject to the non-final action below. Response to Arguments Applicant’s arguments, see pages 6-9, filed 12/03/2025 with respect to claims 1, 3-12, 14, and 16-25 under 35 U.S.C. 103 have been fully considered but they are not persuasive. Applicant’s argument: The Present Application at paragraph [0030] discloses synchronization information as including raw samples (e.g., pixels or waveforms) for direct matching against the presentation. Robson's sync info is metadata (timestamps/codes), not the content data itself. In particular, Robson's VBI/CC codes are pre-inserted markers (e.g., duration/start/intensity), not comprising raw content samples for analysis. See Robson, paras. [0151]-[0153]. As such, Robson's "analysis" is mere extraction/parsing of codes/words from close captioning, not "locat[ing] a matching attribute... comprising... image data, audio data, or close caption data." 
Robson teaches no matching of raw image pixels (e.g., frame patterns) or audio waveforms, only text words in close caption data for heuristic mute estimation, which is timing-based, not attribute-matching of raw data. Further, the starting position of the portion of Robson is not "offset within the multimedia content presentation from the matched attribute of the multimedia content presentation". Rather, Robson only defines the start time of the filtered portion of the content such that no synchronization of the filter code and the presentation is required or performed. Thus, for at least these reasons, Robson does not disclose the features of independent claim 1 such that the claim is allowable over Robson. Yun also fails to describe the feature such that Yun cannot remedy the deficiencies of Robson discussed above. Similar to Robson, Yun utilizes a start time of the video from a filter table to determine the filtering action such that Yun fails to disclose "analyzing the multimedia content presentation to locate a matching attribute of the multimedia content presentation comprising the at least one of image data, audio data, or close caption data of the multimedia content presentation" of the synchronization information "comprising at least one of image data, audio data, or close caption data of the multimedia content presentation." In particular, Yun teaches filter tables with fixed start/end timecodes (Fig. 8; Abstract) for skipping portions, but these are pre-defined timestamps, not offsets derived from matched raw attributes. Therefore, Yun cannot remedy the deficiencies of Robson such that amended claim 1 is allowable over the combination of Robson in view of Yun. Similar to Robson and Yun, Durden fails to disclose the features of amended claim 1 such that the reference fails to remedy the deficiencies of Robson in view of Yun. 
Similar to Robson and Yun above, Durden only uses timestamps to synchronize the filter data to the multimedia content such that Durden cannot disclose the features of "analyzing the multimedia content presentation to locate a matching attribute of the multimedia content presentation comprising the at least one of image data, audio data, or close caption data of the multimedia content presentation" of the synchronization information comprising "at least one of image data, audio data, or close caption data of the multimedia content presentation." The system of Durden uses pre-assigned timestamps/offsets for ratings/attributes, which are metadata-driven, not derived from "matching" raw content attributes as claimed. Even assuming arguendo the suggestions of the Office action, there is no teaching, suggestion, or motivation in the references to combine them as proposed without hindsight from the Present Application. Robson and Yun use pre-inserted codes/timecodes for fixed filtering, while Durden adds timestamps for attributes. None recognizes or addresses the problem solved by the claims: dynamic synchronization via raw content matching for variable-timing streams (e.g., delays in live broadcasts). Although not identical, amended independent claim 14 includes similar features of amended claim 1 such that claim 14 is also allowable over Robson in view of Yun and Durden for at least the reasons provided above. Examiner Response: After careful consideration and review, the examiner respectfully disagrees. During examination, the claims must be interpreted as broadly as their terms reasonably allow. In re American Academy of Science Tech Center, 367 F.3d 1359, 1369, 70 U.S.P.Q.2d 1827, 1834 (Fed. Cir. 2004). 
Regarding independent claim 1, Robson teaches: A method for applying multimedia content filter data with a multimedia content presentation comprising: (Robson − [0021] The present invention achieves these objects and others by providing a system, method, and computer program product for the selective filtering of objectionable content from a program. The selective filtering of objectionable content from a program is accomplished by applying an encoding process and a decoding process to the audio and/or video signal of the program.) receiving data comprising: (Robson − [0032] As used herein, the term "audio-video device" is intended to refer to any device adapted to receive an audio and/or video signal, including but not limited to, a set-top box (STB), television set, video cassette recorder (VCR), digital video recorder (DVR), radio receiver, personal computer, digital television (DTV) receiver, or like devices and components.) synchronization information comprising at least one of image data, audio data, or close caption data of the multimedia content presentation; (Robson − [0215] In yet another embodiment using digital video, the filtering information could be… sent in a separate stream according to the Synchronized Accessible Media Interchange (SAMI) format or the Synchronized Multimedia Integration Language (SMIL).) and filter data identifying a portion of the multimedia content presentation to be skipped, (Robson – [0210] To determine when filtering should start, the system calculates how many words in the caption need to be skipped [0042-0043] If the filtering information matches any of the filtering criteria, then the program material is filtered according to the filtering information at step 140. 
Thus, the filtering information also includes information sufficient to permit the filtering device to filter the objectionable content, such as information sufficient to locate the content (temporally and/or spatially) [0159] If, at step 350, it is determined that the intensity level of the potentially objectionable content is of a level that should be filtered, the area of the video display to be filtered is identified from the filter code's AREA attribute at step 360, and at step 370, the start and stop points for filtering the video are determined by using the DURATION and START attributes as described above.) analyzing the multimedia content presentation to locate a matching attribute of the multimedia content presentation comprising the at least one of image data, audio data, or close caption data of the multimedia content presentation; (Robson – [0021] [0151-0153] B. Decoding Process, Upon being supplied the encoded program in the form of an audio and/or video signal representing the encoded program--either by reception of a transmission or by playback from a recorded medium--the filtering device performs the decoding process. [0158] If, at step 340, it is determined that the filter code of the filtering information does match the filtering criteria, (indicating that the potentially objectionable content is of a type that should be filtered), then at step 350 the intensity level of the filter code is compared with corresponding filtering criteria to determine if the material should be filtered. [0200-0205] The filtering device includes filtering words in closed caption that match a word list for closed caption data.) determining, based on the location of the matching of the attribute of the multimedia content presentation with the at least one of image data, audio data, or close caption data of the synchronization information, (Robson − [0021] [0081-0082] d. 
Duration Attribute, [0082] The DURATION attribute, which is required for all codes, specifies the number of frames (or alternately, thirtieths of a second) the program is to be filtered from the reception of the START attribute (discussed below). In the case of A-codes, it specifies how long the audio will be filtered (e.g., muted). For V-codes and S-codes, the DURATION attribute specifies how long a portion of the screen will be blanked. [0085] The DURATION attribute could identify an absolute stop time, a relative stop time, a duration time, or a stop location within the signal or recording. Matching V-codes with the length (duration) of content.) and altering, based on the filter data, a presentation of the portion of the multimedia content presentation on a display device. (Robson – [0035] Filtering, when used in the context of filtering of the video, means blocking a portion of the video by, for example, covering over, overwriting, blanking, changing, or obscuring the video displayed on the display screen or any other process that prevents the objectionable content of the video from being displayed on the display screen. These terms are used interchangeably throughout this disclosure. [0160] At step 380, the video is filtered according to the processed filtering information and, at step 420, is output for viewing, transmission, or recording.) Robson does not explicitly teach: a beginning of a skip location of the portion of the multimedia content presentation However, Yun teaches: a beginning of a skip location of the portion of the multimedia content presentation, (Yun – [abstract] determining by calculating the sum of portions to be skipped over. Fig. 
8 filter table for event 1 start time 00:04:15:19 and end time 00:04:48:26 to skip during filter condition “violence & sex”) the portion of the multimedia content presentation to be skipped offset from a position of the at least one of image data, audio data, or close caption data of the multimedia content presentation; (Yun – [abstract] determining by calculating the sum of portions to be skipped over. Fig. 8 filter table for event 1 start time 00:04:15:19 and end time 00:04:48:26 to skip during filter condition “violence & sex”) Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson and Yun, as both inventions are related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters. Robson does not explicitly teach: the position of the portion offset from the matched attribute of the multimedia content presentation; However, Durden teaches: the beginning of the skip location position of the portion offset within the multimedia content presentation from the matched attribute of the multimedia content presentation; (Durden − [0058] [0060] Timestamp 44 associates program content and program control data with a particular time interval within a program 40. The time interval may be represented as an offset from the start of program 40 and may be expressed in hours, minutes, and seconds or in any smaller unit of time, such as the length of a video frame. [0060] [0069] [0075-0076] [0076] "00:11:15:02 R; L audio change to 14" is the second timestamp offset. These values indicate that 11 minutes, 15 seconds, and 2 frames from the start of the program, the programs rating has changed to "R" and the show now has a content attribute "L" which indicates coarse "Language". 
The "audio" value indicates that the rating is only associated with the audio portion of the program. For the purposes of this example, assume that a character in the program has uttered a brief vulgarity. In this case, at the offset shown, presentation control system 36 may switch to an alternative audio track if the ratings and content attributes equal or exceed those indicated by the user's parental control settings.) and altering, based on the filter data, a presentation of the portion of the multimedia content presentation on a display device. (Durden − [0076] [0101-0103] [0104] Blocking processor 66 sends "block video" control instruction if viewer 18 selects the "partial block" disabling method per instruction 76 and program component 50 is "video.") Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of claimed invention, to have combine Robson, Yun and Durden as both invention are related to removing objectionable content from media. The motivation to combine provides the improvement of selective blocking of objectionable content offensive to the reviewer using filtering parameters. The claim recites, synchronization information comprising at least one of, therefore close caption data that is synchronized. Robson teaches synchronizing video data, and close caption data. Furthermore, the examiner respectfully disagrees that metadata is not content itself, since the metadata is close caption data that is part of the video data. The claim recites that “synchronization information comprising at least one of image data, audio data, or close caption data of the multimedia content presentation;”. Independent claim 1 and claim 14 does not recite limitations of “raw samples, e.g., pixels or waveforms for direct matching against the presentation. 
Therefore, Robson teaches the limitation of “synchronization information comprising at least one of image data, audio data, or close caption data of the multimedia content presentation; and filter data identifying a portion of the multimedia content presentation to be skipped, analyzing the multimedia content presentation to locate a matching attribute of the multimedia content presentation comprising the at least one of image data, audio data, or close caption data of the multimedia content presentation;” as recited above. Yun is relied upon for teaching the limitation of “a beginning of a skip location of the portion of the multimedia content presentation, the portion of the multimedia content presentation to be skipped offset from a position of the at least one of image data, audio data, or close caption data of the multimedia content presentation;” and Durden is relied upon for teaching the limitation of “the beginning of the skip location position of the portion offset within the multimedia content presentation from the matched attribute of the multimedia content presentation; and altering, based on the filter data, a presentation of the portion of the multimedia content presentation on a display device,” and the examiner maintains that the prior art of Robson, Yun, and Durden teaches the limitations recited in independent claims 1 and 14. Claim Rejections - 35 USC § 103 The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. 
Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a). Claims 1, 3-6, 8-9, 11, 14, 16-19, 21-22, and 24 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Robson et al. (US PGPUB: 20040006767, hereinafter "Robson") in view of Yun (US PGPUB: 20060056808 A1, hereinafter “Yun”) and further in view of Durden (US PGPUB: 20040261099 A1, hereinafter “Durden”). Regarding independent claim 1, Robson teaches: A method for applying multimedia content filter data with a multimedia content presentation comprising: (Robson − [0021] The present invention achieves these objects and others by providing a system, method, and computer program product for the selective filtering of objectionable content from a program. The selective filtering of objectionable content from a program is accomplished by applying an encoding process and a decoding process to the audio and/or video signal of the program.) receiving data comprising: (Robson − [0032] As used herein, the term "audio-video device" is intended to refer to any device adapted to receive an audio and/or video signal, including but not limited to, a set-top box (STB), television set, video cassette recorder (VCR), digital video recorder (DVR), radio receiver, personal computer, digital television (DTV) receiver, or like devices and components.) 
synchronization information comprising at least one of image data, audio data, or close caption data of the multimedia content presentation; (Robson − [0215] In yet another embodiment using digital video, the filtering information could be… sent in a separate stream according to the Synchronized Accessible Media Interchange (SAMI) format or the Synchronized Multimedia Integration Language (SMIL).) and filter data identifying a portion of the multimedia content presentation to be skipped, (Robson – [0210] To determine when filtering should start, the system calculates how many words in the caption need to be skipped [0042-0043] If the filtering information matches any of the filtering criteria, then the program material is filtered according to the filtering information at step 140. Thus, the filtering information also includes information sufficient to permit the filtering device to filter the objectionable content, such as information sufficient to locate the content (temporally and/or spatially) [0159] If, at step 350, it is determined that the intensity level of the potentially objectionable content is of a level that should be filtered, the area of the video display to be filtered is identified from the filter code's AREA attribute at step 360, and at step 370, the start and stop points for filtering the video are determined by using the DURATION and START attributes as described above.) analyzing the multimedia content presentation to locate a matching attribute of the multimedia content presentation comprising the at least one of image data, audio data, or close caption data of the multimedia content presentation; (Robson – [0021] [0151-0153] B. Decoding Process, Upon being supplied the encoded program in the form of an audio and/or video signal representing the encoded program--either by reception of a transmission or by playback from a recorded medium--the filtering device performs the decoding process. 
[0158] If, at step 340, it is determined that the filter code of the filtering information does match the filtering criteria, (indicating that the potentially objectionable content is of a type that should be filtered), then at step 350 the intensity level of the filter code is compared with corresponding filtering criteria to determine if the material should be filtered. [0200-0205] The filtering device includes filtering words in closed caption that match a word list for closed caption data.) determining, based on the location of the matching of the attribute of the multimedia content presentation with the at least one of image data, audio data, or close caption data of the synchronization information, (Robson − [0021] [0081-0082] d. Duration Attribute, [0082] The DURATION attribute, which is required for all codes, specifies the number of frames (or alternately, thirtieths of a second) the program is to be filtered from the reception of the START attribute (discussed below). In the case of A-codes, it specifies how long the audio will be filtered (e.g., muted). For V-codes and S-codes, the DURATION attribute specifies how long a portion of the screen will be blanked. [0085] The DURATION attribute could identify an absolute stop time, a relative stop time, a duration time, or a stop location within the signal or recording. Matching V-codes with the length (duration) of content.) and altering, based on the filter data, a presentation of the portion of the multimedia content presentation on a display device. (Robson – [0035] Filtering, when used in the context of filtering of the video, means blocking a portion of the video by, for example, covering over, overwriting, blanking, changing, or obscuring the video displayed on the display screen or any other process that prevents the objectionable content of the video from being displayed on the display screen. These terms are used interchangeably throughout this disclosure. 
[0160] At step 380, the video is filtered according to the processed filtering information and, at step 420, is output for viewing, transmission, or recording.) Robson does not explicitly teach: a beginning of a skip location of the portion of the multimedia content presentation However, Yun teaches: a beginning of a skip location of the portion of the multimedia content presentation, (Yun – [abstract] determining by calculating the sum of portions to be skipped over. Fig. 8 filter table for event 1 start time 00:04:15:19 and end time 00:04:48:26 to skip during filter condition “violence & sex”) the portion of the multimedia content presentation to be skipped offset from a position of the at least one of image data, audio data, or close caption data of the multimedia content presentation; (Yun – [abstract] determining by calculating the sum of portions to be skipped over. Fig. 8 filter table for event 1 start time 00:04:15:19 and end time 00:04:48:26 to skip during filter condition “violence & sex”) Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson and Yun, as both inventions are related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters. Robson does not explicitly teach: the position of the portion offset from the matched attribute of the multimedia content presentation; However, Durden teaches: the beginning of the skip location position of the portion offset within the multimedia content presentation from the matched attribute of the multimedia content presentation; (Durden − [0058] [0060] Timestamp 44 associates program content and program control data with a particular time interval within a program 40. 
The time interval may be represented as an offset from the start of program 40 and may be expressed in hours, minutes, and seconds or in any smaller unit of time, such as the length of a video frame. [0060] [0069] [0075-0076] [0076] "00:11:15:02 R; L audio change to 14" is the second timestamp offset. These values indicate that 11 minutes, 15 seconds, and 2 frames from the start of the program, the programs rating has changed to "R" and the show now has a content attribute "L" which indicates coarse "Language". The "audio" value indicates that the rating is only associated with the audio portion of the program. For the purposes of this example, assume that a character in the program has uttered a brief vulgarity. In this case, at the offset shown, presentation control system 36 may switch to an alternative audio track if the ratings and content attributes equal or exceed those indicated by the user's parental control settings.) and altering, based on the filter data, a presentation of the portion of the multimedia content presentation on a display device. (Durden − [0076] [0101-0103] [0104] Blocking processor 66 sends "block video" control instruction if viewer 18 selects the "partial block" disabling method per instruction 76 and program component 50 is "video.") Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson, Yun, and Durden, as all three inventions are related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters. Regarding dependent claim 3, which depends on claim 1, Robson teaches: wherein the synchronization information is at least two of image data, audio data, or close caption data. (Robson – [0146] At step 230, the system determines whether the potentially objectionable content identified is video content. 
If it is, then at step 240, the area of the video display where the potentially objectionable video content is present and potentially to be filtered is identified. [0151] Finally, at step 290, the filtering information is applied to the program by marking the program with the filtering information. As discussed above… [0200-0205] The filtering device includes filtering words in closed caption that match a word list for closed caption data.) Regarding dependent claim 4, which depends on claim 1, Robson teaches: obtaining the synchronization information from a content server to synchronize the filter data with the portion of the multimedia content presentation. (Robson ─ [0123-0129] Example 3: The individual portion of 75 frames contains objectionable content covering a 2 and ½ second duration. The area obscures the 75 frames of sexual content, which was identified by the content filter code) Regarding dependent claim 5, which depends on claim 1, Robson teaches: providing for transmission of the multimedia content presentation from a remote content server to a client device. (Robson – [0032-0033] [0041-0042] device adapted to receive an audio and/or video signal, including but not limited to, a set-top box (STB), television set, video cassette recorder (VCR), digital video recorder (DVR), radio receiver, personal computer, digital television (DTV) receiver, or like devices and components. The audio and/or video signal referred to herein may be delivered to the audio-video device by satellite, by cable, or any other wired or wireless means for transmitting an audio and/or video signal.) Regarding dependent claim 6, which depends on claim 1, Robson teaches: providing for reading of the multimedia content presentation from a removable storage media. 
(Robson – [0032-0033] [0041-0042] device adapted to receive an audio and/or video signal, including but not limited to, a set-top box (STB), television set, video cassette recorder (VCR), digital video recorder (DVR), radio receiver, personal computer, digital television (DTV) receiver, or like devices and components. The audio and/or video signal referred to herein may be delivered to the audio-video device by satellite, by cable, or any other wired or wireless means for transmitting an audio and/or video signal. The incoming audio and/or video signal may also be the output from an audio and/or video playback device, such as for example, a DVD player, video cassette recorder (VCR), digital video recorder (DVR), laser disc player, compact disc (CD) player, or like devices and components.) Regarding dependent claim 8, which depends on claim 1, Robson teaches: decoding an encoded portion of the multimedia content presentation prior to matching the attribute of the multimedia content presentation with the synchronization information. (Robson – [0151-0153] B. Decoding Process, Upon being supplied the encoded program in the form of an audio and/or video signal representing the encoded program--either by reception of a transmission or by playback from a recorded medium--the filtering device performs the decoding process. [0158] If, at step 340, it is determined that the filter code of the filtering information does match the filtering criteria, (indicating that the potentially objectionable content is of a type that should be filtered), then at step 350 the intensity level of the filter code is compared with corresponding filtering criteria to determine if the material should be filtered.) 
Regarding dependent claim 9, which depends on claim 1, Robson teaches: wherein determining the position of the portion of the multimedia content presentation comprises: determining a time lapse between a position of the matched attribute of the multimedia content presentation to the portion of the multimedia content presentation of the filter data. (Robson ─ [0021] [0081-0082] d. Duration Attribute, [0082] The DURATION attribute, which is required for all codes, specifies the number of frames (or alternately, thirtieths of a second) the program is to be filtered from the reception of the START attribute (discussed below). In the case of A-codes, it specifies how long the audio will be filtered (e.g., muted). For V-codes and S-codes, the DURATION attribute specifies how long a portion of the screen will be blanked. [0085] The DURATION attribute could identify an absolute stop time, a relative stop time, a duration time, or a stop location within the signal or recording. [0123-0129] Example 3: The individual portion of 75 frames contains objectionable content covering a 2 and ½ second duration. The area obscures the 75 frames of sexual content, which was identified by the content filter code. The 2 and ½ second duration is the time lapse.) Regarding dependent claim 11, which depends on claim 1, Robson teaches: wherein the portion of the multimedia content comprises objectionable content. (Robson − [0144-0146] Referring now to FIG. 2, The encoding process 200 encodes the filtering information that identifies potentially objectionable video content according to the method steps shown in FIG. 2. At step 210, identifies potentially objectionable materials. Once the potentially objectionable content is identified at step 210, the start and stop of the potentially objectionable material is identified at step 220. 
This information might be the start frame and stop frame of the material, the start frame and duration of the material, start time and stop time (as measured from the beginning of the program), or any information suitable to facilitate reasonably precise filtering of the program.) Regarding independent claim 14 is directed to a system and a non-transitory computer-readable medium. Claim 14 have similar/same technical features/limitations as claim 1 and the claims are rejected under the same rationale. Regarding dependents claim 16, depends on claim 14, Robson teaches: wherein the synchronization information is at least two of image data, audio data, or close caption data. Robson – [0146] At step 230, the system determines whether the potentially objectionable content identified is video content. If it is, then at step 240, the area of the video display where the potentially objectionable video content is present and potentially to be filtered is identified. [0151] Finally, at step 290, the filtering information is applied to the program by marking the program with the filtering information. As discussed above… [0200-0205] The filtering device include filtering words in closed caption that match a word list for closed caption data.) Regarding dependents claim 17, depends on claim 14, Robson teaches: wherein the instructions further cause the processing device to perform the operation of: obtaining the synchronization information from a content server to synchronize the filter data with the portion of the multimedia content presentation. (Robson ─ [0123-0129] Example 3 The individual portions of 75 frames contain objectives contents covering 2 and ½ seconds duration. 
The area obscured over the 75 frames of sexual content was identified by the content filter code.)

Regarding dependent claim 18, which depends on claim 14, Robson teaches: wherein the instructions further cause the processing device to perform the operation of: providing for transmission of the multimedia content presentation from a remote content server to a client device. (Robson – [0032]-[0033], [0041]-[0042] device adapted to receive an audio and/or video signal, including but not limited to, a set-top box (STB), television set, video cassette recorder (VCR), digital video recorder (DVR), radio receiver, personal computer, digital television (DTV) receiver, or like devices and components. The audio and/or video signal referred to herein may be delivered to the audio-video device by satellite, by cable, or any other wired or wireless means for transmitting an audio and/or video signal.)

Regarding dependent claim 19, which depends on claim 14, Robson teaches: wherein the instructions further cause the processing device to perform the operation of: providing for reading of the multimedia content presentation from a removable storage media. (Robson – [0032]-[0033], [0041]-[0042] device adapted to receive an audio and/or video signal, including but not limited to, a set-top box (STB), television set, video cassette recorder (VCR), digital video recorder (DVR), radio receiver, personal computer, digital television (DTV) receiver, or like devices and components. The audio and/or video signal referred to herein may be delivered to the audio-video device by satellite, by cable, or any other wired or wireless means for transmitting an audio and/or video signal. The incoming audio and/or video signal may also be the output from an audio and/or video playback device, such as, for example, a DVD player, video cassette recorder (VCR), digital video recorder (DVR), laser disc player, compact disc (CD) player, or like devices and components.)
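Robson's DURATION attribute, quoted above for the time-lapse limitations, counts frames at thirty per second, which is exactly the arithmetic in Example 3: 75 frames of flagged content corresponds to 2.5 seconds of filtering. A minimal sketch of that conversion follows; the helper names and the sample start frame are illustrative assumptions, not from Robson.

```python
FRAMES_PER_SECOND = 30  # Robson: DURATION is a frame count, i.e. thirtieths of a second

def duration_seconds(duration_frames: int, fps: int = FRAMES_PER_SECOND) -> float:
    """Convert a DURATION attribute (frame count) into seconds."""
    return duration_frames / fps

def stop_frame(start_frame: int, duration_frames: int) -> int:
    """Relative stop position: filtering runs DURATION frames from the START attribute."""
    return start_frame + duration_frames

print(duration_seconds(75))  # 2.5 -- Example 3's 75 frames of objectionable content
print(stop_frame(1200, 75))  # 1275 (hypothetical start frame)
```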
Regarding dependent claim 21, which depends on claim 14, Robson teaches: wherein the instructions further cause the processing device to perform the operation of: decoding an encoded portion of the multimedia content presentation prior to matching the attribute of the multimedia content presentation with the synchronization information. (Robson – [0153] B. Decoding Process: Upon being supplied the encoded program in the form of an audio and/or video signal representing the encoded program--either by reception of a transmission or by playback from a recorded medium--the filtering device performs the decoding process. [0158] If, at step 340, it is determined that the filter code of the filtering information does match the filtering criteria (indicating that the potentially objectionable content is of a type that should be filtered), then at step 350 the intensity level of the filter code is compared with corresponding filtering criteria to determine if the material should be filtered.)

Regarding dependent claim 22, which depends on claim 14, Robson teaches: wherein determining the position of the portion of the multimedia content presentation comprises: determining a time lapse between a position of the matched attribute of the multimedia content presentation to the portion of a multimedia content presentation of the filter data. (Robson ─ [0021], [0081]-[0082] d. Duration Attribute: [0082] The DURATION attribute, which is required for all codes, specifies the number of frames (or alternately, thirtieths of a second) the program is to be filtered from the reception of the START attribute (discussed below). In the case of A-codes, it specifies how long the audio will be filtered (e.g., muted). For V-codes and S-codes, the DURATION attribute specifies how long a portion of the screen will be blanked. [0085] The DURATION attribute could identify an absolute stop time, a relative stop time, a duration time, or a stop location within the signal or recording. [0123]-[0129] Example 3: The individual portions of 75 frames contain objectionable content covering a duration of 2½ seconds. The area obscured over the 75 frames of sexual content was identified by the content filter code; the 2½-second duration is the time lapse.)

Regarding dependent claim 24, which depends on claim 14, Robson teaches: wherein the portion of the multimedia content comprises objectionable content. (Robson − [0144]-[0146] Referring now to FIG. 2: the encoding process 200 encodes the filtering information that identifies potentially objectionable video content according to the method steps shown in FIG. 2. Step 210 identifies potentially objectionable material. Once the potentially objectionable content is identified at step 210, the start and stop of the potentially objectionable material is identified at step 220. This information might be the start frame and stop frame of the material, the start frame and duration of the material, the start time and stop time (as measured from the beginning of the program), or any information suitable to facilitate reasonably precise filtering of the program.)

Claims 7, 10, 20, and 23 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Robson, Yun, and Durden as applied to claims 1 and 14 above, and further in view of Candelore et al. (US PGPUB: 20060130121, Filing Date: Dec. 1, 2005, hereinafter "Candelore-0121").

Regarding dependent claim 7, which depends on claim 1, Robson does not explicitly teach: wherein the synchronization information comprises compressed data corresponding to the attribute of the multimedia content presentation. However, Candelore-0121 teaches: wherein the synchronization information comprises compressed data corresponding to the attribute of the multimedia content presentation.
(Candelore-0121 – [0035] According to one embodiment, the program may contain multiple identifiers such as Packet Identifiers (PIDs) when the program is MPEG (Moving Pictures Expert Group) compliant compressed video.)

Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson, Yun, Durden, and Candelore-0121, as the inventions are all related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters.

Regarding dependent claim 10, which depends on claim 1, Robson does not explicitly teach: wherein the synchronization information comprises at least one of a plurality of pixels of a frame of the multimedia content presentation, a row of pixels of the frame of the multimedia content presentation, or the entirety of the pixels of the frame of the multimedia content presentation. However, Candelore-0121 teaches this limitation. (Candelore-0121 – [0029] In short, the content filter unit provides scene-by-scene filtration, and even to the granularity of frame-by-frame, to block or replace individual scenes or words. Such blocking may be accomplished by the content filter unit residing with the customer through screen blocking or obscuring pixels for a particular image or muting audio.)

Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson, Yun, Durden, and Candelore-0121, as the inventions are all related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters.

Regarding dependent claim 20, which depends on claim 14, Robson does not explicitly teach: wherein the synchronization information comprises compressed data corresponding to the attribute of the multimedia content presentation. However, Candelore-0121 teaches this limitation. (Candelore-0121 – [0035] According to one embodiment, the program may contain multiple identifiers such as Packet Identifiers (PIDs) when the program is MPEG (Moving Pictures Expert Group) compliant compressed video.)

Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson, Yun, Durden, and Candelore-0121, as the inventions are all related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters.

Regarding dependent claim 23, which depends on claim 14, Robson does not explicitly teach: wherein the synchronization information comprises at least one of a plurality of pixels of a frame of the multimedia content presentation, a row of pixels of the frame of the multimedia content presentation, or the entirety of the pixels of the frame of the multimedia content presentation. However, Candelore-0121 teaches this limitation.
(Candelore-0121 – [0029] In short, the content filter unit provides scene-by-scene filtration, and even to the granularity of frame-by-frame, to block or replace individual scenes or words. Such blocking may be accomplished by the content filter unit residing with the customer through screen blocking or obscuring pixels for a particular image or muting audio.)

Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson, Yun, Durden, and Candelore-0121, as the inventions are all related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters.

Claims 12 and 25 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Robson, Yun, and Durden as applied to claims 1 and 14 above, and further in view of Candelore et al. (US PGPUB: 20060130119 A1, Filing Date: Sep. 15, 2005, hereinafter "Candelore-0119").

Regarding dependent claim 12, which depends on claim 1, Robson does not explicitly teach: wherein the portion of the multimedia content comprises a commercial. However, Candelore-0119 teaches this limitation. (Candelore-0119 − [0045] The below-described content filter unit, however, is adapted to selectively control the playback of commercials.)

Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson, Yun, Durden, and Candelore-0119, as the inventions are all related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters.

Regarding dependent claim 25, which depends on claim 14, Robson does not explicitly teach: wherein the portion of the multimedia content comprises a commercial. However, Candelore-0119 teaches this limitation.
(Candelore-0119 − [0045] The below-described content filter unit, however, is adapted to selectively control the playback of commercials.)

Accordingly, it would have been obvious to one of ordinary skill in the art, at the time of the claimed invention, to have combined Robson, Yun, Durden, and Candelore-0119, as the inventions are all related to removing objectionable content from media. The motivation to combine is the improvement of selectively blocking objectionable content offensive to the viewer using filtering parameters.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: HEJNA, JR., US 20020013949, aligning/synchronizing content of playback data to remove commercial advertisements or other offensive content.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL E BARNES JR, whose telephone number is (571) 270-3395. The examiner can normally be reached Monday-Friday, 9 am-6 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CARL E BARNES JR/
Examiner, Art Unit 2178

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178

Prosecution Timeline

Mar 16, 2023
Application Filed
Jul 20, 2023
Non-Final Rejection — §103
Jan 29, 2024
Response Filed
Apr 10, 2024
Final Rejection — §103
Oct 22, 2024
Request for Continued Examination
Oct 24, 2024
Response after Non-Final Action
Nov 14, 2024
Non-Final Rejection — §103
Jan 31, 2025
Interview Requested
Feb 10, 2025
Applicant Interview (Telephonic)
Feb 11, 2025
Examiner Interview Summary
Feb 13, 2025
Response Filed
May 27, 2025
Final Rejection — §103
Dec 03, 2025
Request for Continued Examination
Dec 10, 2025
Response after Non-Final Action
Dec 27, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12584932
SLIDE IMAGING APPARATUS AND A METHOD FOR IMAGING A SLIDE
2y 5m to grant Granted Mar 24, 2026
Patent 12541640
COMPUTING DEVICE FOR MULTIPLE CELL LINKING
2y 5m to grant Granted Feb 03, 2026
Patent 12536464
SYSTEM FOR CONSTRUCTING EFFECTIVE MACHINE-LEARNING PIPELINES WITH OPTIMIZED OUTCOMES
2y 5m to grant Granted Jan 27, 2026
Patent 12530765
SYSTEMS AND METHODS FOR CALCIUM-FREE COMPUTED TOMOGRAPHY ANGIOGRAPHY
2y 5m to grant Granted Jan 20, 2026
Patent 12530523
METHOD, APPARATUS, SYSTEM, AND COMPUTER PROGRAM FOR CORRECTING TABLE COORDINATE INFORMATION
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
32%
Grant Probability
57%
With Interview (+25.2%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 202 resolved cases by this examiner. Grant probability derived from career allow rate.
