Prosecution Insights
Last updated: April 19, 2026
Application No. 18/324,481

SYSTEMS AND METHODS FOR AUTOMATIC CONTENT RECOGNITION

Non-Final Office Action (§103)
Filed: May 26, 2023
Examiner: TAYLOR, JOSHUA D
Art Unit: 2426
Tech Center: 2400 (Computer Networks)
Assignee: Comcast Cable Communications LLC
OA Round: 3 (Non-Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 8m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 58% (307 granted / 525 resolved; +0.5% vs Tech Center average)
Interview Lift: +30.5% (strong; allowance rate for resolved cases with an interview vs. without)
Typical Timeline: 3y 8m average prosecution
Currently Pending: 36 applications
Career History: 561 total applications across all art units

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 55.4% (+15.4% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates; based on career data from 525 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 5, 2025 has been entered. Claims 1-8 and 21-32 are pending.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 7, 21, 22, 24-26 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Ikizyan et al. (Pub. No.: US 2013/0163957) in view of Yang et al. (Pub. No.: US 2012/0087583).
Regarding claim 1, Ikizyan discloses a method comprising: determining, by a device, one or more frames associated with a content item (paras. [0012] and [0022]; “Additionally, the time difference (e.g., an amount of time, number of frames, etc.) between successive scene changes is also detected.”); determining, based on the one or more frames, timing information associated with one or more shot changes of the content item (Fig. 1, element 101 and Δt1-Δt6, para. [0012]); generating, based on the one or more shot changes, one or more shot signatures (Fig. 2, any of the 12 rows labeled S1-S4 can be seen as a shot signature, paras. [0013]-[0016]; “A scene start time of at least a subset of the scene changes in the video can be associated with at least two time differences between scene changes that are subsequent to the particular scene change. In other words, the fingerprint 201 associates a scene change with a pairing of time intervals between the next two successive scene changes as shown. Accordingly, in the depicted example, the first entry in the table of the video fingerprint 201 associates a scene change start time S1 with Δt1 and Δt2, which represent a time difference between the scene change occurring at time S1 in the video and the next two scene changes in the video, S2 and S3.”); and generating, based on the one or more shot signatures and the timing information, a video signature associated with the content item (Fig. 2, element 203, paras. [0013]-[0016]; “The video fingerprint 201 can include a representation of a table that comprises at least three types of data.”).

It could be argued that Ikizyan does not explicitly disclose wherein each shot signature of the one or more shot signatures comprises a representation of a frame of the one or more frames. However, in analogous art, Yang discloses that “[c]urrent video signature schemes are divided into two categories. In one category, a single key frame is selected to represent a shot, and an image hash is taken of the single key frame to be used as a shot signature. The first category takes advantage of the image hash, of which the solution is well developed (para. [0004]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan to allow for each shot signature of the one or more shot signatures to comprise a representation of a frame of the one or more frames. This would have produced predictable and desirable results, in that it would allow for a well-known technique to be used in order to represent a shot.

Regarding claim 2, the combination of Ikizyan and Yang discloses the method of claim 1, and further discloses wherein the timing information comprises one or more of a time duration between a first shot change and a second shot change or a number of frames between a first shot change and a second shot change (Ikizyan, paras. [0012] and [0022]; “Additionally, the time difference (e.g., an amount of time, number of frames, etc.) between successive scene changes is also detected.”).

Regarding claim 4, the combination of Ikizyan and Yang discloses the method of claim 1, and further discloses wherein determining, based on the one or more frames, the timing information associated with the one or more shot changes of the content item comprises: determining, based on a difference between every two adjacent frames of the one or more frames satisfying a threshold, the one or more shot changes (Fig. 6, paras. [0035] and [0036]); and determining, based on the one or more shot changes, the timing information associated with the one or more shot changes (Ikizyan, Fig. 6, paras. [0035] and [0036]).
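For orientation, the mechanism Ikizyan is cited for in claims 1 and 4 (threshold-based shot-change detection, plus a fingerprint pairing each shot-change time with the intervals to the next two shot changes) can be sketched as follows. This is a minimal illustration, not code from any cited reference; the brightness-valued frames, threshold, and frame rate are all assumed for the example.

```python
# Illustrative sketch only: frames are modeled as single brightness values;
# a real system would compare pixel data between adjacent frames.

def detect_shot_changes(frames, threshold):
    """Return indices where the adjacent-frame difference meets the threshold."""
    changes = []
    for i in range(1, len(frames)):
        if abs(frames[i] - frames[i - 1]) >= threshold:
            changes.append(i)
    return changes

def build_fingerprint(change_indices, fps=30.0):
    """Map each shot-change start time to (dt1, dt2): the time differences
    to the next two shot changes, as in Ikizyan's fingerprint table."""
    times = [i / fps for i in change_indices]
    table = {}
    for k in range(len(times) - 2):
        dt1 = times[k + 1] - times[k]
        dt2 = times[k + 2] - times[k]
        table[round(times[k], 3)] = (round(dt1, 3), round(dt2, 3))
    return table

frames = [10, 11, 12, 80, 81, 20, 21, 22, 90, 91]  # toy brightness series
changes = detect_shot_changes(frames, threshold=30)
print(changes)                      # frame indices of detected shot changes
print(build_fingerprint(changes))   # {shot start time: (dt1, dt2)}
```

Because the fingerprint stores only relative timing, it is insensitive to where in the stream capture begins, which is the property the examiner's mapping of claim 1 relies on.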
Regarding claim 7, the combination of Ikizyan and Yang discloses the method of claim 1, and further discloses further comprising: receiving, from one or more user devices, one or more target video signatures associated with one or more content items; and identifying, based on the one or more target video signatures and the video signature, the content item (Ikizyan, Fig. 3A, paras. [0010] and [0017]).

Regarding claim 21, Ikizyan discloses an apparatus comprising: one or more processors; and a memory storing processor-executable instructions that, when executed by the one or more processors, cause the apparatus to: determine one or more frames associated with a content item (paras. [0012] and [0022]; “Additionally, the time difference (e.g., an amount of time, number of frames, etc.) between successive scene changes is also detected.”); determine, based on the one or more frames, timing information associated with one or more shot changes of the content item (Fig. 1, element 101 and Δt1-Δt6, para. [0012]); generate, based on the one or more shot changes, one or more shot signatures (Fig. 2, any of the 12 rows labeled S1-S4 can be seen as a shot signature, paras. [0013]-[0016]; “A scene start time of at least a subset of the scene changes in the video can be associated with at least two time differences between scene changes that are subsequent to the particular scene change. In other words, the fingerprint 201 associates a scene change with a pairing of time intervals between the next two successive scene changes as shown. Accordingly, in the depicted example, the first entry in the table of the video fingerprint 201 associates a scene change start time S1 with Δt1 and Δt2, which represent a time difference between the scene change occurring at time S1 in the video and the next two scene changes in the video, S2 and S3.”); and generate, based on the one or more shot signatures and the timing information, a video signature associated with the content item (Fig. 2, element 203, paras. [0013]-[0016]; “The video fingerprint 201 can include a representation of a table that comprises at least three types of data.”).

It could be argued that Ikizyan does not explicitly disclose wherein each shot signature of the one or more shot signatures comprises a representation of a frame of the one or more frames. However, in analogous art, Yang discloses that “[c]urrent video signature schemes are divided into two categories. In one category, a single key frame is selected to represent a shot, and an image hash is taken of the single key frame to be used as a shot signature. The first category takes advantage of the image hash, of which the solution is well developed (para. [0004]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan to allow for each shot signature of the one or more shot signatures to comprise a representation of a frame of the one or more frames. This would have produced predictable and desirable results, in that it would allow for a well-known technique to be used in order to represent a shot.

Regarding claim 22, the combination of Ikizyan and Yang discloses the apparatus of claim 21, and further discloses wherein the timing information comprises one or more of a time duration between a first shot change and a second shot change or a number of frames between a first shot change and a second shot change (Ikizyan, paras. [0012] and [0022]; “Additionally, the time difference (e.g., an amount of time, number of frames, etc.) between successive scene changes is also detected.”).
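The identification step recited in claim 7 (matching a target video signature received from a user device against stored signatures to name the content item) can be sketched as below. Signatures are modeled as dicts mapping shot-start times to (dt1, dt2) interval pairs; the tolerance, library entries, and scoring rule are illustrative assumptions, not the matching algorithm of any cited reference.

```python
# Hypothetical matching sketch: score a target signature against each
# library entry by counting interval pairs that agree within a tolerance.

def match_score(target, reference, tol=0.05):
    """Count target (dt1, dt2) pairs found in the reference within tol seconds."""
    hits = 0
    for dt_pair in target.values():
        for ref_pair in reference.values():
            if all(abs(a - b) <= tol for a, b in zip(dt_pair, ref_pair)):
                hits += 1
                break
    return hits

def identify_content(target, library):
    """Return the library item whose signature best matches the target."""
    return max(library, key=lambda name: match_score(target, library[name]))

library = {
    "item_a": {0.0: (0.5, 1.2), 2.0: (0.7, 1.5)},
    "item_b": {0.0: (0.9, 2.1), 3.1: (0.3, 0.8)},
}
# Same interval pairs as item_a, but shifted start times: matching is
# driven by the intervals, not by absolute timestamps.
target = {10.0: (0.5, 1.21), 12.0: (0.7, 1.49)}
print(identify_content(target, library))
```

A production system would index interval pairs for sub-linear lookup; the nested scan here is only to keep the idea visible.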
Regarding claim 24, the combination of Ikizyan and Yang discloses the apparatus of claim 21, and further discloses wherein the processor-executable instructions that, when executed by the one or more processors, cause the apparatus to determine, based on the one or more frames, the timing information associated with the one or more shot changes of the content item, further cause the apparatus to: determine, based on a difference between every two adjacent frames of the one or more frames satisfying a threshold, the one or more shot changes (Ikizyan, Fig. 6, paras. [0035] and [0036]); and determine, based on the one or more shot changes, the timing information associated with the one or more shot changes (Ikizyan, Fig. 6, paras. [0035] and [0036]).

Regarding claim 25, Ikizyan discloses one or more non-transitory computer-readable media storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to: determine, by a device, one or more frames associated with a content item (paras. [0012] and [0022]; “Additionally, the time difference (e.g., an amount of time, number of frames, etc.) between successive scene changes is also detected.”); determine, based on the one or more frames, timing information associated with one or more shot changes of the content item (Fig. 1, element 101 and Δt1-Δt6, para. [0012]); generate, based on the one or more shot changes, one or more shot signatures (Fig. 2, any of the 12 rows labeled S1-S4 can be seen as a shot signature, paras. [0013]-[0016]; “A scene start time of at least a subset of the scene changes in the video can be associated with at least two time differences between scene changes that are subsequent to the particular scene change. In other words, the fingerprint 201 associates a scene change with a pairing of time intervals between the next two successive scene changes as shown. Accordingly, in the depicted example, the first entry in the table of the video fingerprint 201 associates a scene change start time S1 with Δt1 and Δt2, which represent a time difference between the scene change occurring at time S1 in the video and the next two scene changes in the video, S2 and S3.”); and generate, based on the one or more shot signatures and the timing information, a video signature associated with the content item (Fig. 2, element 203, paras. [0013]-[0016]; “The video fingerprint 201 can include a representation of a table that comprises at least three types of data.”).

It could be argued that Ikizyan does not explicitly disclose wherein each shot signature of the one or more shot signatures comprises a representation of a frame of the one or more frames. However, in analogous art, Yang discloses that “[c]urrent video signature schemes are divided into two categories. In one category, a single key frame is selected to represent a shot, and an image hash is taken of the single key frame to be used as a shot signature. The first category takes advantage of the image hash, of which the solution is well developed (para. [0004]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan to allow for each shot signature of the one or more shot signatures to comprise a representation of a frame of the one or more frames. This would have produced predictable and desirable results, in that it would allow for a well-known technique to be used in order to represent a shot.

Regarding claim 26, the combination of Ikizyan and Yang discloses the non-transitory computer-readable media of claim 25, and further discloses wherein the timing information comprises one or more of a time duration between a first shot change and a second shot change or a number of frames between a first shot change and a second shot change (Ikizyan, paras. [0012] and [0022]; “Additionally, the time difference (e.g., an amount of time, number of frames, etc.) between successive scene changes is also detected.”).

Regarding claim 28, the combination of Ikizyan and Yang discloses the non-transitory computer-readable media of claim 25, and further discloses wherein the processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to determine, based on the one or more frames, the timing information associated with the one or more shot changes of the content item, further cause the at least one processor to: determine, based on a difference between every two adjacent frames of the one or more frames satisfying a threshold, the one or more shot changes (Ikizyan, Fig. 6, paras. [0035] and [0036]); and determine, based on the one or more shot changes, the timing information associated with the one or more shot changes (Ikizyan, Fig. 6, paras. [0035] and [0036]).

Claims 3, 23 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Ikizyan et al. (Pub. No.: US 2013/0163957) in view of Yang et al. (Pub. No.: US 2012/0087583) and Younessian et al. (Pub. No.: US 2022/0019809).

Regarding claim 3, the combination of Ikizyan and Yang discloses the method of claim 1, but it could be argued that Ikizyan does not explicitly disclose wherein each representation of the one or more representations comprises a color layout descriptor associated with the corresponding frame of the content item. However, in analogous art, Younessian discloses that “[a] video fingerprint 218a,b for a shot may comprise a video fingerprint for a single frame of the shot, such as the first frame of the shot. A video fingerprint 218a,b may comprise a block-level RGB (red-green-blue) descriptor of a frame. A video fingerprint 218a,b may comprise a CLD (color layer descriptor) of a frame. A video fingerprint 218a,b may comprise an alphanumeric value, such as a 10-digit hash of the CLD of the frame (para. [0038]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan and Yang to allow for each representation of the one or more representations to comprise a color layout descriptor associated with the corresponding frame of the content item. This would have produced predictable and desirable results, in that it would allow for a well-known frame identifier to be used as part of the representation, which could improve the ability of the system to properly match related content.

Regarding claim 23, the combination of Ikizyan and Yang discloses the apparatus of claim 21, but it could be argued that Ikizyan does not explicitly disclose wherein each representation of the one or more representations comprises a color layout descriptor associated with the corresponding frame of the content item. However, in analogous art, Younessian discloses that “[a] video fingerprint 218a,b for a shot may comprise a video fingerprint for a single frame of the shot, such as the first frame of the shot. A video fingerprint 218a,b may comprise a block-level RGB (red-green-blue) descriptor of a frame. A video fingerprint 218a,b may comprise a CLD (color layer descriptor) of a frame. A video fingerprint 218a,b may comprise an alphanumeric value, such as a 10-digit hash of the CLD of the frame (para. [0038]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan and Yang to allow for each representation of the one or more representations to comprise a color layout descriptor associated with the corresponding frame of the content item. This would have produced predictable and desirable results, in that it would allow for a well-known frame identifier to be used as part of the shot signature, which could improve the ability of the system to properly match related content.

Regarding claim 27, the combination of Ikizyan and Yang discloses the non-transitory computer-readable media of claim 25, but it could be argued that Ikizyan does not explicitly disclose wherein each representation of the one or more representations comprises a color layout descriptor associated with the corresponding frame of the content item. However, in analogous art, Younessian discloses that “[a] video fingerprint 218a,b for a shot may comprise a video fingerprint for a single frame of the shot, such as the first frame of the shot. A video fingerprint 218a,b may comprise a block-level RGB (red-green-blue) descriptor of a frame. A video fingerprint 218a,b may comprise a CLD (color layer descriptor) of a frame. A video fingerprint 218a,b may comprise an alphanumeric value, such as a 10-digit hash of the CLD of the frame (para. [0038]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan and Yang to allow for each representation of the one or more representations to comprise a color layout descriptor associated with the corresponding frame of the content item. This would have produced predictable and desirable results, in that it would allow for a well-known frame identifier to be used as part of the shot signature, which could improve the ability of the system to properly match related content.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Ikizyan et al. (Pub. No.: US 2013/0163957) in view of Yang et al. (Pub. No.: US 2012/0087583) and Deng (Pub. No.: US 2013/0259323).
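The key-frame hashing idea cited from Yang and Younessian for claims 3, 23 and 27 (represent a shot by its first frame, reduce the frame to a coarse color layout, and hash the layout to a short alphanumeric value) can be sketched as follows. The 2x2 block averaging and 10-hex-digit truncation are illustrative choices, not the descriptor defined in any cited reference.

```python
import hashlib

def color_layout(frame, blocks=2):
    """Average pixel values over a blocks x blocks grid (a crude stand-in
    for a color layout descriptor)."""
    h = len(frame) // blocks
    w = len(frame[0]) // blocks
    layout = []
    for by in range(blocks):
        for bx in range(blocks):
            cells = [frame[y][x]
                     for y in range(by * h, (by + 1) * h)
                     for x in range(bx * w, (bx + 1) * w)]
            layout.append(sum(cells) // len(cells))
    return layout

def shot_signature(shot_frames):
    """Hash the color layout of the shot's first (key) frame to a short value."""
    layout = color_layout(shot_frames[0])
    digest = hashlib.sha256(bytes(layout)).hexdigest()
    return digest[:10]  # short alphanumeric value, per the cited idea

frame = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [40, 40, 90, 90],
         [40, 40, 90, 90]]
print(shot_signature([frame]))  # deterministic 10-character signature
```

Because the descriptor is coarse, small pixel-level noise leaves the layout, and hence the hash, unchanged, which is what makes hashed layouts usable as matchable shot signatures.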
Regarding claim 5, the combination of Ikizyan and Yang discloses the method of claim 1, but it could be argued that Ikizyan does not explicitly disclose wherein generating, based on the one or more shot changes, the one or more shot signatures comprises: determining, based on the one or more shot changes, one or more groups of frames associated with the one or more shot changes; and generating, based on a first frame of each group of frames of the one or more groups of frames, the one or more shot signatures. However, in analogous art, Deng discloses “[a]fter a scene is detected, the scene change detector 225 of the illustrated example also determines key frame(s) and an image signature of the key frame(s) (referred to herein as the key image signature) that are to be representative of the scene. The scene change detector 225 stores the key frame(s) and the key image signature(s) in the image store 220. For example, the key frame and the key image signature for the scene may be chosen to be the frame and signature corresponding to the first frame in the scene, the last frame in the scene, the midpoint frame in the scene, etc. (para. [0030]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan and Yang to allow for generating, based on the one or more shot changes, the one or more shot signatures to comprise: determining, based on the one or more shot changes, one or more groups of frames associated with the one or more shot changes; and generating, based on a first frame of each group of frames of the one or more groups of frames, the one or more shot signatures. This would have produced predictable and desirable results, in that it would let representative frames be used in generating the signature of the shot, which makes logical sense in the context of the art.

Claims 6, 8, 29, 30 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Ikizyan et al. (Pub. No.: US 2013/0163957) in view of Yang et al. (Pub. No.: US 2012/0087583) and Yabu (Pub. No.: US 2016/0088365).

Regarding claim 6, the combination of Ikizyan and Yang discloses the method of claim 1, but it could be argued that Ikizyan does not explicitly disclose further comprising: sending the video signature; and receiving, based on the video signature, one or more of viewing history information or a content recommendation. However, in analogous art, Yabu discloses that a “video recognition device 20 can grasp a viewing status of video reception device 40 by the content specifying processing performed based on the fingerprint (terminal video recognition information) transmitted from video reception device 40, and accordingly, can also be configured to update the viewing history of video reception device 40, which is stored in storage unit 23, based on the result of the content specifying processing (para. [0142]),” wherein “[v]ideo recognition device 20 receives the next viewing information transmitted from video reception device 40, and performs retrieval from the online database based on the information included in the received next viewing information. If content corresponding to the information (information on the channel estimated to be viewed next time and the time zone thereof) included in the next viewing information can be found from the online database, then video recognition device 20 generates a local database having a fingerprint (server video recognition information) and analysis information regarding the content. Note that this analysis information may include broadcast program meta information of an electronic broadcast program guide and the like. Then, video recognition device 20 transmits the generated local database to video reception device 40 through communication network 16 (step S85) (para. [0147]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan and Yang to allow for sending the video signature, and receiving, based on the video signature, one or more of viewing history information or a content recommendation. This would have produced predictable and desirable results, in that it would allow for the video signatures of Ikizyan to be used in a well-known manner to maintain a record of programming that had been consumed by a user.

Regarding claim 8, the combination of Ikizyan and Yang discloses the method of claim 7, but it could be argued that Ikizyan does not explicitly disclose further comprising: determining viewing history information; and updating the viewing history information with the identification of the content item. However, in analogous art, Yabu discloses that a “video recognition device 20 can grasp a viewing status of video reception device 40 by the content specifying processing performed based on the fingerprint (terminal video recognition information) transmitted from video reception device 40, and accordingly, can also be configured to update the viewing history of video reception device 40, which is stored in storage unit 23, based on the result of the content specifying processing (para. [0142]),” wherein “[v]ideo recognition device 20 receives the next viewing information transmitted from video reception device 40, and performs retrieval from the online database based on the information included in the received next viewing information. If content corresponding to the information (information on the channel estimated to be viewed next time and the time zone thereof) included in the next viewing information can be found from the online database, then video recognition device 20 generates a local database having a fingerprint (server video recognition information) and analysis information regarding the content.
Note that this analysis information may include broadcast program meta information of an electronic broadcast program guide and the like. Then, video recognition device 20 transmits the generated local database to video reception device 40 through communication network 16 (step S85) (para. [0147]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan and Yang to allow for determining viewing history information, and updating the viewing history information with the identification of the content item. This would have produced predictable and desirable results, in that it would allow for the video signatures of Ikizyan to be used in a well-known manner to maintain a record of programming that had been consumed by a user.

Regarding claim 29, Ikizyan discloses a system comprising: a first computing device configured to: determine one or more frames associated with a content item (paras. [0012] and [0022]; “Additionally, the time difference (e.g., an amount of time, number of frames, etc.) between successive scene changes is also detected.”), determine, based on the one or more frames, timing information associated with one or more shot changes of the content item (Fig. 1, element 101 and Δt1-Δt6, para. [0012]), generate, based on the one or more shot changes, one or more shot signatures (Fig. 2, any of the 12 rows labeled S1-S4 can be seen as a shot signature, paras. [0013]-[0016]; “A scene start time of at least a subset of the scene changes in the video can be associated with at least two time differences between scene changes that are subsequent to the particular scene change. In other words, the fingerprint 201 associates a scene change with a pairing of time intervals between the next two successive scene changes as shown.
Accordingly, in the depicted example, the first entry in the table of the video fingerprint 201 associates a scene change start time S1 with Δt1 and Δt2, which represent a time difference between the scene change occurring at time S1 in the video and the next two scene changes in the video, S2 and S3.”), and generate, based on the one or more shot signatures and the timing information, a video signature associated with the content item (Fig. 2, element 203, paras. [0013]-[0016]; “The video fingerprint 201 can include a representation of a table that comprises at least three types of data.”).

It could be argued that Ikizyan does not explicitly disclose wherein each shot signature of the one or more shot signatures comprises a representation of a frame of the one or more frames. However, in analogous art, Yang discloses that “[c]urrent video signature schemes are divided into two categories. In one category, a single key frame is selected to represent a shot, and an image hash is taken of the single key frame to be used as a shot signature. The first category takes advantage of the image hash, of which the solution is well developed (para. [0004]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan to allow for each shot signature of the one or more shot signatures to comprise a representation of a frame of the one or more frames. This would have produced predictable and desirable results, in that it would allow for a well-known technique to be used in order to represent a shot.

It could be argued that the combination of Ikizyan and Yang does not explicitly disclose and a second computing device configured to receive the video signature.
However, in analogous art, Yabu discloses that a “video recognition device 20 can grasp a viewing status of video reception device 40 by the content specifying processing performed based on the fingerprint (terminal video recognition information) transmitted from video reception device 40, and accordingly, can also be configured to update the viewing history of video reception device 40, which is stored in storage unit 23, based on the result of the content specifying processing (para. [0142]),” wherein “[v]ideo recognition device 20 receives the next viewing information transmitted from video reception device 40, and performs retrieval from the online database based on the information included in the received next viewing information. If content corresponding to the information (information on the channel estimated to be viewed next time and the time zone thereof) included in the next viewing information can be found from the online database, then video recognition device 20 generates a local database having a fingerprint (server video recognition information) and analysis information regarding the content. Note that this analysis information may include broadcast program meta information of an electronic broadcast program guide and the like. Then, video recognition device 20 transmits the generated local database to video reception device 40 through communication network 16 (step S85) (para. [0147]).” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan and Yang to allow for a second computing device configured to receive the video signature. This would have produced predictable and desirable results, in that it would allow for the video signatures of Ikizyan to be used in a well-known manner to maintain at a second device a record of programming that had been consumed by a user. 
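The Yabu-style flow relied on for claims 6, 8 and 29 (a second device receives a video signature, identifies the content, and updates the sending device's viewing history) can be sketched as below. Identification is reduced to an exact dictionary lookup, and the class, method, and identifier names are illustrative assumptions, not from any cited reference.

```python
from collections import defaultdict

class RecognitionServer:
    """Toy stand-in for a second computing device that receives signatures."""

    def __init__(self, library):
        self.library = library            # signature -> content title
        self.history = defaultdict(list)  # device id -> viewed titles

    def receive_signature(self, device_id, signature):
        """Identify the content item and record it in the device's history."""
        title = self.library.get(signature)
        if title is not None:
            self.history[device_id].append(title)
        return title

server = RecognitionServer({"sig-001": "News at Nine", "sig-002": "Movie"})
print(server.receive_signature("stb-42", "sig-001"))  # prints News at Nine
print(server.history["stb-42"])                       # ['News at Nine']
```

An unknown signature simply returns None and leaves the history untouched; a recommendation step, as in claim 6, would read from the accumulated history.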
Regarding claim 30, the combination of Ikizyan, Yang and Yabu discloses the system of claim 29, and further discloses wherein the timing information comprises one or more of a time duration between a first shot change and a second shot change or a number of frames between a first shot change and a second shot change (Ikizyan, paras. [0012] and [0022]; “Additionally, the time difference (e.g., an amount of time, number of frames, etc.) between successive scene changes is also detected.”).

Regarding claim 32, the combination of Ikizyan, Yang and Yabu discloses the system of claim 29, and further discloses wherein the first computing device is configured to determine, based on the one or more frames, the timing information associated with the one or more shot changes of the content item, the first computing device is further configured to: determine, based on a difference between every two adjacent frames of the one or more frames satisfying a threshold, the one or more shot changes (Ikizyan, Fig. 6, paras. [0035] and [0036]); and determine, based on the one or more shot changes, the timing information associated with the one or more shot changes (Ikizyan, Fig. 6, paras. [0035] and [0036]).

Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over Ikizyan et al. (Pub. No.: US 2013/0163957) in view of Yang et al. (Pub. No.: US 2012/0087583), Yabu (Pub. No.: US 2016/0088365) and Younessian et al. (Pub. No.: US 2022/0019809).

Regarding claim 31, the combination of Ikizyan, Yang and Yabu discloses the system of claim 29, but it could be argued that Ikizyan does not explicitly disclose wherein each representation of the one or more representations comprises a color layout descriptor associated with the corresponding frame of the content item. However, in analogous art, Younessian discloses that “[a] video fingerprint 218a,b for a shot may comprise a video fingerprint for a single frame of the shot, such as the first frame of the shot.
A video fingerprint 218a,b may comprise a block-level RGB (red-green-blue) descriptor of a frame. A video fingerprint 218a,b may comprise a CLD (color layer descriptor) of a frame. A video fingerprint 218a,b may comprise an alphanumeric value, such as a 10-digit hash of the CLD of the frame (para. [0038])."

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ikizyan, Yang and Yabu to allow for each representation of the one or more representations comprises a color layout descriptor associated with the corresponding frame of the content item. This would have produced predictable and desirable results, in that it would allow for a well-known frame identifier to be used as part of the shot signature, which could improve the ability of the system to properly match related content.

Response to Arguments

Applicant's arguments filed November 5, 2025 have been fully considered but they are moot in view of the new grounds of rejection in view of Yang.

Conclusion

Claims 1-8 and 21-32 are rejected.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joshua D Taylor whose telephone number is (571)270-3755. The examiner can normally be reached Monday - Friday 8 am - 6 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Joshua D Taylor/
Primary Examiner, Art Unit 2426
January 28, 2026
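The adjacent-frame differencing that the Office Action attributes to Ikizyan for claims 30 and 32 (shot changes detected where the difference between two adjacent frames satisfies a threshold, with timing information such as the number of frames between successive shot changes) can be sketched as follows. This is an illustrative reconstruction, not Ikizyan's actual algorithm: the function names, the mean-absolute-difference metric, and the toy threshold are all assumptions.

```python
# Illustrative sketch only: detect shot changes by thresholding the
# mean absolute difference between every two adjacent frames, then
# derive timing information as the number of frames between
# successive shot changes.

def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equal-size frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def detect_shot_changes(frames, threshold):
    """Return indices i where frame i starts a new shot."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) >= threshold]

def timing_info(shot_changes):
    """Number of frames between successive shot changes."""
    return [b - a for a, b in zip(shot_changes, shot_changes[1:])]

# Toy 4-pixel grayscale frames: cuts occur at frames 2 and 5.
frames = [[10, 10, 10, 10], [12, 11, 10, 9],        # shot A
          [200, 200, 200, 200], [198, 201, 199, 200],
          [199, 200, 200, 201],                      # shot B
          [50, 50, 50, 50]]                          # shot C
changes = detect_shot_changes(frames, threshold=50)
print(changes)               # [2, 5]
print(timing_info(changes))  # [3]
```

A real detector would operate on decoded video frames and a tuned threshold; the claim language also contemplates expressing the same timing as a time duration rather than a frame count.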

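The Younessian fingerprint quoted for claim 31 (a block-level RGB descriptor of a frame, optionally reduced to a 10-digit hash of the CLD) can likewise be sketched. This is a minimal stand-in under stated assumptions: `block_rgb_descriptor` and the md5-based decimal truncation are hypothetical, since the quoted passage does not specify how the descriptor or the 10-digit hash is actually computed.

```python
# Illustrative sketch only: build a coarse block-level RGB descriptor
# for a frame and reduce it to a 10-digit decimal hash, in the spirit
# of the fingerprint description quoted from Younessian.
import hashlib

def block_rgb_descriptor(frame, blocks=2):
    """Average (R, G, B) per block of a frame given as a 2-D grid of
    (r, g, b) tuples; a crude stand-in for a color layout descriptor."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // blocks, w // blocks
    desc = []
    for by in range(blocks):
        for bx in range(blocks):
            pixels = [frame[y][x]
                      for y in range(by * bh, (by + 1) * bh)
                      for x in range(bx * bw, (bx + 1) * bw)]
            n = len(pixels)
            desc.append(tuple(sum(p[c] for p in pixels) // n
                              for c in range(3)))
    return desc

def fingerprint(desc):
    """10-digit decimal hash of the descriptor (assumed scheme)."""
    digest = hashlib.md5(repr(desc).encode()).hexdigest()
    return int(digest, 16) % 10_000_000_000

# 2x2 toy frame; with blocks=2 each block is a single pixel.
frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
desc = block_rgb_descriptor(frame, blocks=2)
print(desc)  # [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(f"{fingerprint(desc):010d}")  # zero-padded 10-digit fingerprint
```

Comparing such per-shot fingerprints (together with the shot-change timing above) is the kind of matching the rejection says would "improve the ability of the system to properly match related content."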
Prosecution Timeline

May 26, 2023
Application Filed
Mar 20, 2025
Non-Final Rejection — §103
Jun 23, 2025
Response Filed
Jul 15, 2025
Final Rejection — §103
Sep 15, 2025
Response after Non-Final Action
Nov 05, 2025
Request for Continued Examination
Nov 10, 2025
Response after Non-Final Action
Feb 05, 2026
Non-Final Rejection — §103
Apr 09, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604065
Systems and Methods for Broadcasting Data Contents Related to Media Contents Using a Media Device
2y 5m to grant • Granted Apr 14, 2026
Patent 12604051
METHODS AND SYSTEMS FOR GENERATING A MULTIPLE USER PROFILE
2y 5m to grant • Granted Apr 14, 2026
Patent 12598350
METHODS, SYSTEMS, ARTICLES OF MANUFACTURE, AND APPARATUS FOR ADAPTIVE METERING
2y 5m to grant • Granted Apr 07, 2026
Patent 12556777
LIVE VIDEO RENDERING AND BROADCASTING SYSTEM
2y 5m to grant • Granted Feb 17, 2026
Patent 12556488
NETWORK TRAFFIC ARBITRATION BASED ON PACKET PRIORITY
2y 5m to grant • Granted Feb 17, 2026
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview (+30.5%): 89%
Median Time to Grant: 3y 8m
PTA Risk: High
Based on 525 resolved cases by this examiner. Grant probability derived from career allow rate.
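The projected figures appear to follow directly from the career data shown earlier on the page (307 granted of 525 resolved, plus a 30.5-point interview lift). A hedged reconstruction of that arithmetic, assuming the lift is simply added to the unrounded allow rate before rounding:

```python
# Assumed reconstruction of the dashboard's grant-probability math;
# the additive-lift model is an inference, not a documented formula.
granted, resolved = 307, 525   # examiner's career data from the page
allow_rate = 100 * granted / resolved      # ~58.48%
interview_lift = 30.5                      # percentage points

print(round(allow_rate))                   # 58 (displayed base rate)
print(round(allow_rate + interview_lift))  # 89 (displayed with-interview rate)
```

Under this assumption both displayed percentages (58% and 89%) fall out of the raw counts, which is consistent with the note that grant probability is "derived from career allow rate."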
