Prosecution Insights
Last updated: April 19, 2026
Application No. 16/196,907

SYSTEMS AND METHODS FOR INDEXING A CONTENT ASSET

Final Rejection (§102, §103, §112)
Filed: Nov 20, 2018
Examiner: WILLIS, AMANDA LYNN
Art Unit: 2156
Tech Center: 2100 — Computer Architecture & Software
Assignee: Comcast Cable Communications, LLC
OA Round: 11 (Final)
Grant Probability: 36% (At Risk)
OA Rounds: 12-13
To Grant: 4y 8m
With Interview: 62%

Examiner Intelligence

Career Allow Rate: 36% (123 granted / 345 resolved); -19.3% vs TC avg
Interview Lift: strong, +26.6% across resolved cases with interview
Avg Prosecution: 4y 8m typical; 25 applications currently pending
Total Applications: 370 across all art units

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 345 resolved cases.
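As a quick consistency check on the figures above, the implied Tech Center average can be recovered from each statute's delta. This is a sketch assuming the displayed delta convention is examiner rate minus TC average:

```python
# Sanity arithmetic on the dashboard figures above. Assumes the displayed
# delta convention is (examiner rate - Tech Center average), so the implied
# TC average is rate - delta; these are dashboard estimates, not USPTO data.
rates = {"101": (14.0, -26.0), "103": (44.8, 4.8),
         "102": (13.1, -26.9), "112": (21.5, -18.5)}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
# every statute implies the same TC average estimate: 40.0%

allow_rate = round(100 * 123 / 345, 1)   # career allow rate: 35.7, shown as "36%"
```

All four statute rows back out to the same 40.0% Tech Center average, which supports the single-average reading of the chart legend.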

Office Action

§102 §103 §112
DETAILED ACTION

Receipt of Applicant’s Amendment, filed January 30, 2026, is acknowledged. Claim 22 was amended. Claims 4-6, 9, and 20 were cancelled. Claims 1-3, 7-19, and 21-25 are pending in this Office action.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 22 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

With regard to claim 22, the claim recites “wherein the element comprises faces and wherein based on the quantity of faces satisfying the threshold, the segment label comprises at least one of an advertisement, an interview, or a monologue.” This claim limitation lacks antecedent basis and logically contradicts the limitations of the parent claim. Parent claim 1 recites “a quantity of an element in each keyframe” and “determining, based on the average quantity of the element satisfying a threshold, a segment label”. Claim 1 dictates that the quantity that satisfies the threshold to determine the segment label is the “average quantity”, not the “quantity”. Yet claim 22 references “the quantity”, not the “average quantity”.
When read in context, claim 22 does not appear to recite a new threshold check or label determination; instead, claim 22 appears to detail that the element comprises a face and that the segment label comprises at least one of an advertisement, an interview, or a monologue. For examination purposes the claimed “quantity” has been construed as referring to the --average quantity-- recited in the parent claim.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 7-8, 10-18, and 21-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dimitrova [6754389].

With regard to claim 1 Dimitrova teaches A method comprising: determining, by a computing device as an image processor (Dimitrova, Column 2, lines 51-53 “FIG. 1 illustrates an example block diagram of an image processor 100 for classifying a sequence of image frames based on object trajectories.”), a plurality of keyframes as frames of a video segment (Dimitrova, Column 2, lines 3-7 “The object of this invention, and others, are achieved by providing a content-based classification system that detects the presence of objects within a frame and determines the path, or trajectory, of each object through multiple frames of a video segment.”; Column 3, line 20-48) from a portion of a content asset as a video stream, e.g.
a video broadcast or recording (Dimitrova, Column 4, line 2 “video stream 10”; Column 1 lines 62-65 “It is an object of this invention to provide a method and system that facilitate an automated classification of content material within segments, or clips, of a video broadcast or recording.”); determining, based on at least one of facial-recognition as face tracking (Dimitrova, Column 3, lines 49-51 “The face tracking system 300 identifies faces in each segment of the video stream 10, and tracks: each face from frame to frame in each of the image frames of the segment.”) or object-recognition (Dimitrova, Column 3, lines 9 “an "other object" tracker 500”; Column 7, lines 65-67 “types. The application of these techniques, and others, to identify and track other, object types will be evident to one of ordinary skill in the art in view of this disclosure.”), a quantity of an element in each keyframe of the plurality of keyframes (Dimitrova, Column 3, lines 60-63 “Other trajectory information, such as the duration of time, or number of frames, that the face appears within the segment are also included in the parameters of each face trajectory 301”; Column 6, lines 9-14 “It has been found that the number of object trajectories for each object type per unit time and their average duration are fairly effective separating features because they represent the "density" of particular objects, such as faces or text, in the segment of the video stream.”); determining, based on the quantity of the element in each keyframe of the plurality of keyframes (Dimitrova, Column 3, lines 60-63; Column 6, lines 9-14), an average quantity as the average duration the object trajectories (Id) of the element in the plurality of keyframes as the object trajectories within the unit of time in the segment of the video (Column 6, lines 9-19); determining, based on the average quantity of the element satisfying a threshold (Dimitrova, Column 6, lines 16-17 “a preferred embodiment utilizes the 
number of each object type trajectories with a duration that exceeds a threshold,”), a segment label for the portion of the content asset as the classification (Dimitrova, Column 6, lines 19-21 “particular features of particular object types can be used to further facilitate the classification process.”; Column 3, line 66 – Column 4, line 2 “The classifier 200 uses the face trajectories 301 of the various segments of the video stream 10 to determine the classification 201 of each segment, or set of segments 202, of the video stream 10”); and generating metadata as generating a symbol (Dimitrova, Column 7, lines 19-21 “The symbol generator 210 generates the appropriate symbol for each frame of the sequence of frames forming the segment 10' using, for example, the above list of symbols.”), wherein the metadata indicates (Dimitrova, Column 7, lines 29-33 “In response to the sequence of observation symbols, each HMM 220a-d provides a probability measure that relates to the likelihood that this sequence of observed symbols would have been produced by a video segment having the designated classification.”) an association between the segment label as the designated classification (Id) and the portion of the content asset as the video segment (Id) and wherein the metadata facilitates navigation to the portion of the content asset (Dimitrova, Column 1, lines 45-51 “Video recorders also allow viewers to select. specific portions of recorded programs for viewing. For example, commercial segments may be skipped while viewing an entertainment or news program, or, all non-news material may be skipped to provide a consolidation of the day's news at select times.”). 
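For orientation, the claim-1 pipeline as characterized in this mapping (count an element per keyframe, average the counts, threshold the average into a segment label, and emit navigation metadata) can be sketched as follows. The function name, threshold value, labels, and the "segment-1" identifier are illustrative assumptions, not part of the claim or the record:

```python
# Hedged sketch of the claim-1 pipeline: count faces per keyframe, average,
# threshold the average into a segment label, and generate metadata that
# associates the label with the portion of the content asset.
from statistics import mean

def label_segment(keyframe_face_counts, threshold=2.0):
    """Return a (label, metadata) pair for one portion of a content asset."""
    avg = mean(keyframe_face_counts)      # "average quantity of the element"
    # labels below are illustrative stand-ins for advertisement/interview/monologue
    label = "interview" if avg >= threshold else "monologue"
    # metadata indicates the label-to-portion association, enabling navigation
    metadata = {"segment_label": label, "portion": "segment-1", "avg_faces": avg}
    return label, metadata

label, meta = label_segment([2, 3, 2, 3])   # average 2.5 satisfies threshold 2.0
```

On the sample counts the average is 2.5, so the sketch returns the "interview" placeholder label.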
With regard to claim 2 Dimitrova further teaches wherein the element comprises at least one of a face (Dimitrova, Column 3, lines 49 “identifies faces in each segment of the video stream 10”), an inanimate object as objects, such as vehicle figure objects (Dimitrova, Column 3, line 9 “an "other object" tracker 500”; Column 3, line 3-6 “As will be evident to one of ordinary skill in the art, the principles presented herein are applicable to other object types, such as human figure objects, animal figure objects, vehicle figure objects, hand (gesture) objects, and so”), or an advertisement (Dimitrova, Column 3, line 46 “frames classified as commercial”). With regard to claim 7 Dimitrova further teaches wherein the element comprises an inanimate object as objects, such as vehicle figure objects (Dimitrova, Column 3, line 9 “an "other object" tracker 500”; Column 3, line 3-6 “As will be evident to one of ordinary skill in the art, the principles presented herein are applicable to other object types, such as human figure objects, animal figure objects, vehicle figure objects, hand (gesture) objects, and so”) and wherein determining the quantity of the element in each keyframe of the plurality of keyframes (Dimitrova, Column 3, lines 60-63; Column 6, lines 9-14) comprises applying an image classifier to each keyframe of the plurality of keyframes (Dimitrova, Column 3, lines 65 – Column 4, line 2 “The classifier 200 uses the face trajectories 301 of the various segments of the video stream 10 to determine the classification 201 of each segment, or sets of segments 202, of the video stream 10.”). 
With regard to claim 8 Dimitrova further teaches wherein the element comprises an inanimate object and wherein determining the segment label further comprises determining, based on a quantity as difference exceeding a threshold (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”) of matches (Dimitrova, Column 5, lines 3-6 “At a more analytical level, statistical techniques, such as multivariate correlation analysis, and graphic techniques, such as pattern matching, can be used to effect this classification.”) between at least one inanimate object of a segment profile as particular classifications, such as weather report, or commercial segment (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a commercial segment…”) and the at least one inanimate object identified by the object-recognition as the location of faces in the sequence of the image frame over time (Id) satisfying a second threshold (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”), the segment label as the classification (Id). 
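The claim-8 limitation as mapped (a quantity of matches between inanimate objects of a segment profile and objects identified by object-recognition, satisfying a second threshold) reduces to a set-intersection count. This minimal sketch uses an invented object vocabulary and threshold:

```python
# Illustrative sketch of the claim-8 match-count check; the profile contents,
# object names, and threshold value are all assumptions for illustration.
def match_profile(profile_objects, recognized_objects, second_threshold=2):
    """True when the quantity of matches satisfies the (assumed) second threshold."""
    matches = len(set(profile_objects) & set(recognized_objects))
    return matches >= second_threshold

# an invented "weather report" segment profile vs. objects from object-recognition
weather_profile = {"map", "temperature-graphic", "studio-logo"}
seen_objects = {"map", "temperature-graphic", "anchor-desk"}
is_weather = match_profile(weather_profile, seen_objects)   # 2 matches
```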
With regard to claim 10 Dimitrova teaches A method comprising: determining, by a computing device as an image processor (Dimitrova, Column 2, lines 51-53 “FIG. 1 illustrates an example block diagram of an image processor 100 for classifying a sequence of image frames based on object trajectories.”) , from a portion of a content asset as a video stream, e.g. a video broadcast or recording (Dimitrova, Column 4, line 2 “video stream 10”; Column 1 lines 62-65 “It is an object of this invention to provide a method and system that facilitate an automated classification of content material within segments, or clips, of a video broadcast or recording.”), a plurality of keyframes as frames of a video segment (Dimitrova, Column 2, lines 3-7 “The object of this invention, and others, are achieved by providing a content-based classification system that detects the presence of objects within a frame and determines the path, or trajectory, of each object through multiple frames of a video segment.”; Column 3, line 20-48); determining, based on the plurality of keyframes as frames of a video segment (Dimitrova, Column 2, lines 3-7 “The object of this invention, and others, are achieved by providing a content-based classification system that detects the presence of objects within a frame and determines the path, or trajectory, of each object through multiple frames of a video segment.”; Column 3, line 20-48), a number (Dimitrova, Column 3, lines 60-63 “Other trajectory information, such as the duration of time, or number of frames, that the face appears within the segment are also included in the parameters of each face trajectory 301”; Column 6, lines 9-14 “It has been found that the number of object trajectories for each object type per unit time and their average duration are fairly effective separating features because they represent the "density" of particular objects, such as faces or text, in the segment of the video stream.”) of people (Dimitrova, Column 3, line 27 “dialog 
between two people”) and a first plurality of disparate inanimate objects as objects, such as vehicle figure objects (Dimitrova, Column 3, line 9 “an "other object" tracker 500”; Column 3, line 3-6 “As will be evident to one of ordinary skill in the art, the principles presented herein are applicable to other object types, such as human figure objects, animal figure objects, vehicle figure objects, hand (gesture) objects, and so”); determining, based on the number of people and based on a quantity as difference exceeding a threshold (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”) of matches (Dimitrova, Column 5, lines 3-6 “At a more analytical level, statistical techniques, such as multivariate correlation analysis, and graphic techniques, such as pattern matching, can be used to effect this classification.”) between the first plurality of disparate inanimate objects as the location of faces in the sequence of the image frame over time (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a commercial segment…”) when the tracked element is an object (Dimitrova, Column 3, line 9; Column 3, line 3-6) and a second plurality of disparate inanimate objects as the patterns common to particular classifications (Id), e.g. 
the set of characterizations for the categories (Dimitrova, Column 6, lines 45-65) when the tracked element is an object (Dimitrova, Column 3, line 9; Column 3, line 3-6) satisfying a threshold (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”), a segment profile indicating a category as particular classifications, such as weather report, or commercial segment (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a commercial segment…”) of segment in the content asset as the sequence of images (Id); and generating metadata as generating a symbol (Dimitrova, Column 7, lines 19-21 “The symbol generator 210 generates the appropriate symbol for each frame of the sequence of frames forming the segment 10' using, for example, the above list of symbols.”), wherein the metadata indicates (Dimitrova, Column 7, lines 29-33 “In response to the sequence of observation symbols, each HMM 220a-d provides a probability measure that relates to the likelihood that this sequence of observed symbols would have been produced by a video segment having the designated classification.”) an association between the category of the segment as the designated classification (Id) and the portion of the content asset as the video segment (Id) and wherein the metadata facilitates navigation to the portion of the content asset (Dimitrova, Column 1, lines 45-51 “Video recorders also allow viewers to select. 
specific portions of recorded programs for viewing. For example, commercial segments may be skipped while viewing an entertainment or news program, or, all non-news material may be skipped to provide a consolidation of the day's news at select times.”). With regard to claim 11 Dimitrova further teaches wherein determining the first plurality of disparate inanimate objects as objects, such as vehicle figure objects (Dimitrova, Column 3, line 9 “an "other object" tracker 500”; Column 3, line 3-6 “As will be evident to one of ordinary skill in the art, the principles presented herein are applicable to other object types, such as human figure objects, animal figure objects, vehicle figure objects, hand (gesture) objects, and so”) comprises applying an image classifier to the plurality of keyframes (Dimitrova, Column 3, lines 65 – Column 4, line 2 “The classifier 200 uses the face trajectories 301 of the various segments of the video stream 10 to determine the classification 201 of each segment, or sets of segments 202, of the video stream 10.”; Column 3, lines 10-19 “For ease of reference and understanding, because face tracking and text tracking serve as the paradigm for tracking other objects, the "other object" tracker 500 and corresponding "other" trajectories 501 are not discussed further herein, their function and embodiment being evident to one of ordinary skill in the art in light of the detail presentation below of the function and embodiment of the face 300 and text 400 tracking systems, and corresponding face 301 and text 401 trajectories”). 
With regard to claim 12 Dimitrova further teaches wherein determining the first plurality of disparate inanimate objects comprises: determining, based on object-recognition as the location of faces in the sequence of the image frame over time (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a commercial segment…”), for at least one inanimate object of the first plurality of disparate inanimate objects as objects, such as vehicle figure objects (Dimitrova, Column 3, line 9; Column 3, line 3-6), a confidence score of a plurality of confidence scores, wherein the plurality of confidence scores as a probability (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”) indicate an association as the classification corresponds to the segment (Id) to one or more identifiable inanimate objects as the location of the faces, e.g. 
objects (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a commercial segment…”); and determining that at least one confidence score of the plurality of confidence scores satisfies a second threshold (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”). With regard to claim 13 Dimitrova further teaches wherein determining the plurality of keyframes is based on determining a quantity of changes between a plurality of frames of the content asset as significant changes (Dimitrova, Column 9, lines 9-11 “MPEG and other digital encodings of video information use differential encoding, wherein a subsequent frame is encoded based on the difference from a prior frame.”; Column 9, lines 11-19). 
With regard to claim 14 Dimitrova further teaches wherein determining the plurality of confidence scores (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”) comprises: generating a data structure comprising a multi-dimensional vector as movement vectors in MPEG are two-dimensional vectors (Column 3, lines 53-57 “The face trajectory 301 includes such trajectory information as the coordinates of the face within each frame, the coordinates of the. face in an initial frame and a movement vector that describes the path of the face through the segment”; Column 7, lines 58-65 “Existing and proposed video encoding standards, such as MPEG-4 and MPEG-7, allow for the explicit identification of objects within each frame or sequence of frames and their corresponding movement vectors from frame to frame. 
The following describes techniques that can be utilized in addition to, or in conjunction with, such explicit object tracking techniques for tracking the paradigm face and text object types.”), wherein at least one dimension of the multi-dimension vector corresponds to an identifiable inanimate object of the one or more identifiable inanimate objects as the object whose trajectory is being tracked (Id); and storing each confidence score of the plurality of confidence scores (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”) in a corresponding dimension of the multi-dimension vector (Column 9, lines 50- “an HMM classifier 200' as presented in FIG. 2, the face trajectories 301 will preferably contain information related to each frame of the segment, but the information can merely be the location of the face relative to a distance from a camera, .... Using a parametric classifier 200, the face trajectories 301 may contain a synopsis of the movement of the face,… or it may contain the determined location of the face in each frame of the segment.”; Column 7, lines 58-67 “Existing and proposed video encoding standards, such as MPEG-4 and MPEG-7, allow for the explicit identification of objects within each frame or sequence of frames and their corresponding movement vectors from frame to frame. The following describes techniques that can be utilized in addition to, or in conjunction with, such explicit object tracking techniques for tracking the paradigm face and text object types.”). 
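The claim-14 data structure (a multi-dimensional vector in which each dimension corresponds to an identifiable inanimate object and holds that object's confidence score) can be sketched as a fixed-order vector. The object vocabulary and scores here are invented for illustration:

```python
# Sketch of the claim-14 structure: one dimension per identifiable inanimate
# object, each storing that object's recognition confidence score.
# OBJECT_VOCAB and the example scores are assumptions, not from the record.
OBJECT_VOCAB = ["vehicle", "text-overlay", "desk", "logo"]

def confidence_vector(scores):
    """Pack per-object confidence scores into a fixed-order multi-dimensional vector."""
    return [scores.get(obj, 0.0) for obj in OBJECT_VOCAB]

vec = confidence_vector({"vehicle": 0.91, "logo": 0.42})
# vec[0] is the "vehicle" dimension, vec[3] the "logo" dimension
```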
With regard to claim 15 Dimitrova further teaches determining, based on the plurality of keyframes and facial-recognition as face tracking (Dimitrova, Column 3, lines 49-51 “The face tracking system 300 identifies faces in each segment of the video stream 10, and tracks: each face from frame to frame in each of the image frames of the segment.”), the number of people (Dimitrova, Column 3, line 27 “dialog between two people”) associated with the portion of the content asset as a video stream, e.g. a video broadcast or recording (Dimitrova, Column 4, line 2 “video stream 10”; Column 1 lines 62-65). With regard to claim 16 Dimitrova teaches determining, by a computing device as an image processor (Dimitrova, Column 2, lines 51-53 “FIG. 1 illustrates an example block diagram of an image processor 100 for classifying a sequence of image frames based on object trajectories.”), a plurality of keywords of a natural language description as the terms news, commercial, sitcom, soap, weather, sports-news, market-news, political news (Column 6, line 67 - Column 7, line 1 “In the example classifier 200', four classification types are defined: news, commercial, sitcom, and soap.”; Column 4, lines 23-33 “News… weather… sports-news… market-news… political-news”) of a segment profile as particular classifications, such as weather report, or commercial segment (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a commercial segment…”), wherein the plurality of keywords indicate as the terms news, commercial, sitcom, soap, weather, sports-news, market-news, political news (Column 6, line 67 - Column 7, line 1; Dimitrova, Column 6, lines 37-42 “Hidden Markov Models (HMMs) are used to facilitate the classification process. 
The Hidden Markov Model approach is particularly well suited for classification based on trajectories, because trajectories represent temporal events, and the Hidden Markov Model inherently incorporates a time-varying model.”) a plurality of disparate inanimate objects as objects, such as vehicle figure objects (Dimitrova, Column 3, line 9 “an "other object" tracker 500”; Column 3, line 3-6 “As will be evident to one of ordinary skill in the art, the principles presented herein are applicable to other object types, such as human figure objects, animal figure objects, vehicle figure objects, hand (gesture) objects, and so”) of the segment profile as the classification (Dimitrova, Column 6, lines 45-65); determining, from a portion of a content asset as a video stream, e.g. a video broadcast or recording (Dimitrova, Column 4, line 2 “video stream 10”; Column 1 lines 62-65 “It is an object of this invention to provide a method and system that facilitate an automated classification of content material within segments, or clips, of a video broadcast or recording.”), a plurality of keyframes as frames of a video segment (Dimitrova, Column 2, lines 3-7 “The object of this invention, and others, are achieved by providing a content-based classification system that detects the presence of objects within a frame and determines the path, or trajectory, of each object through multiple frames of a video segment.”; Column 3, line 20-48); determining, based on the plurality of keyframes and an image classifier (Column 3, line 66 – Column 4, line 2 “The classifier 200 uses the face trajectories 301 of the various segments of the video stream 10 to determine the classification 201 of each segment, or set of segments 202, of the video stream 10”), a second plurality of disparate inanimate objects as the patterns common to particular classifications (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates 
distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a commercial segment…”) when the tracked element is an object (Dimitrova, Column 3, line 9; Column 3, line 3-6), e.g. the set of characterizations for the categories (Dimitrova, Column 6, lines 45-65) from the portion of the content asset as the sequence of image frames (Column 5, lines 3-18); and generating, based on a number of people(Dimitrova, Column 3, line 27 “dialog between two people”) in at least one keyframe of the plurality of keyframes as frames of a video segment (Dimitrova, Column 2, lines 3-7 “The object of this invention, and others, are achieved by providing a content-based classification system that detects the presence of objects within a frame and determines the path, or trajectory, of each object through multiple frames of a video segment.”; Column 3, line 20-48) and based on a quantity as difference exceeding a threshold (Dimitrova, Column 7, lines 35-42 “Generally, the classification corresponding to the, HMM having the highest probability is assigned to the segment, although other factors may also be utilized, particular when the difference among the highest reported probabilities from the HMMs 220a-d are not significantly different, or when the highest reported probability does not exceed a minimum threshold level.”) of matches (Dimitrova, Column 5, lines 3-6 “At a more analytical level, statistical techniques, such as multivariate correlation analysis, and graphic techniques, such as pattern matching, can be used to effect this classification.”) between the second plurality of disparate inanimate objects as the patterns common to particular classifications (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a 
commercial segment…”) when the tracked element is an object (Dimitrova, Column 3, line 9; Column 3, line 3-6), e.g. the set of characterizations for the categories (Dimitrova, Column 6, lines 45-65) and the plurality of keywords as the terms news, commercial, sitcom, soap, weather, sports-news, market-news, political news (Column 6, line 67 - Column 7, line 1; Dimitrova, Column 6, lines 37-42 “Hidden Markov Models (HMMs) are used to facilitate the classification process. The Hidden Markov Model approach is particularly well suited for classification based on trajectories, because trajectories represent temporal events, and the Hidden Markov Model inherently incorporates a time-varying model.”) indicating (Dimitrova, Column 6, lines 37-42 “Hidden Markov Models (HMMs) are used to facilitate the classification process. The Hidden Markov Model approach is particularly well suited for classification based on trajectories, because trajectories represent temporal events, and the Hidden Markov Model inherently incorporates a time-varying model.”) the plurality of disparate inanimate objects as objects, such as vehicle figure objects (Dimitrova, Column 3, line 9 “an "other object" tracker 500”; Column 3, line 3-6 “As will be evident to one of ordinary skill in the art, the principles presented herein are applicable to other object types, such as human figure objects, animal figure objects, vehicle figure objects, hand (gesture) objects, and so”), metadata as generating a symbol (Dimitrova, Column 7, lines 19-21 “The symbol generator 210 generates the appropriate symbol for each frame of the sequence of frames forming the segment 10' using, for example, the above list of symbols.”), wherein the metadata indicates (Dimitrova, Column 7, lines 29-33 “In response to the sequence of observation symbols, each HMM 220a-d provides a probability measure that relates to the likelihood that this sequence of observed symbols would have been produced by a video segment having the 
designated classification.”) an association between a segment label of the segment profile as the designated classification (Id) and the portion of the content asset as the video segment (Id) and wherein the metadata facilitates navigation to the portion of the content asset (Dimitrova, Column 1, lines 45-51 “Video recorders also allow viewers to select. specific portions of recorded programs for viewing. For example, commercial segments may be skipped while viewing an entertainment or news program, or, all non-news material may be skipped to provide a consolidation of the day's news at select times.”). With regard to claim 17, Dimitrova further teaches wherein the metadata further indicates (Dimitrova, Column 7, lines 29-33 “In response to the sequence of observation symbols, each HMM 220a-d provides a probability measure that relates to the likelihood that this sequence of observed symbols would have been produced by a video segment having the designated classification.”) an association between a plurality of segment labels as the designated classification (Id) and a plurality of portions of the content asset as the video segment (Id), wherein the segment label of the segment profile is a segment label of the plurality of segment labels as the designated classification (Id) and the portion of the content asset is a portion of the plurality of portions of the content asset as the video segment (Id). 
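To illustrate the classification mechanism the rejection attributes to Dimitrova (per-frame observation symbols scored by one HMM per candidate classification, with the highest-probability classification assigned only when it exceeds a minimum threshold, cf. Column 7, lines 29-42), the following is a minimal sketch. All state counts, probabilities, symbol meanings, and the threshold below are invented toy values, not taken from the reference.

```python
import math

# Toy discrete-HMM classifier sketch (NOT Dimitrova's implementation):
# each candidate class has its own HMM; the segment's per-frame symbols
# are scored by every model and the best-scoring label is assigned,
# subject to a minimum likelihood threshold.

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Standard HMM forward algorithm; returns log P(obs | model)."""
    n_states = len(start_p)
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[prev] * trans_p[prev][s] for prev in range(n_states)) * emit_p[s][o]
            for s in range(n_states)
        ]
    return math.log(sum(alpha))

def classify_segment(obs, models, min_log_likelihood=-50.0):
    """Assign the label of the highest-likelihood HMM, or None when no
    model clears the minimum threshold (cf. Col. 7, lines 35-42)."""
    scores = {label: forward_log_likelihood(obs, *params)
              for label, params in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_log_likelihood else None

# Invented 2-state models over symbols 0 ("no faces") and 1 ("face close-up"):
# (start probabilities, transition matrix, emission matrix).
news = ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.2, 0.8], [0.8, 0.2]])
commercial = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.9, 0.1], [0.1, 0.9]])
models = {"news": news, "commercial": commercial}

segment_symbols = [1, 1, 1, 0, 1, 1]  # mostly face close-ups
print(classify_segment(segment_symbols, models))  # -> news
```

The threshold behavior mirrors the quoted passage: when no model's probability clears the minimum, no classification is assigned.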
With regard to claim 18 Dimitrova further teaches wherein the segment profile as particular classifications, such as weather report, or commercial segment (Dimitrova, Column 5, lines 3-18 “pattern matching …a plot of the location of faces in a sequence of image frames over time demonstrates distinguishable patterns common to particular classifications… high correlation with weather reports… high correlation with a commercial segment…”) further indicates the number of people (Dimitrova, Column 3, line 27 “dialog between two people”) associated with the portion of the content asset as a video stream, e.g. a video broadcast or recording (Dimitrova, Column 4, line 2 “video stream 10”; Column 1 lines 62-65). With regard to claim 21 Dimitrova further teaches sending the metadata as generating a symbol (Dimitrova, Column 7, lines 19-21 “The symbol generator 210 generates the appropriate symbol for each frame of the sequence of frames forming the segment 10' using, for example, the above list of symbols.”; Column 7, lines 47-51 “If the object types include human figures, for example, a symbol representing multiple human figure objects colliding with each other would serve as an effective symbol for distinguishing-segments of certain sports from other sports or from other classification types”) to a user device as transmitting the automatically generated classification information with the content material to the user’s television (Column 1, lines 22-25 “A classification of program material may be provided via a manually created television guide, or by other means, such as an auxiliary signal that is transmitted with the content material.”; Column 1, line 65 – Column 2 line 2 “The classification of each segment within a broadcast facilitates selective viewing, or non-viewing, of particular types of content material, and can also be used to facilitate the classification of a program based on the classification of multiple segments within the program”). 
With regard to claim 22, Dimitrova further teaches wherein the element comprises faces (Column 8, lines 1-3 “FIG. 3 illustrates an example block diagram of an example face tracking system 300 for determining face trajectories in a sequence of image frames.”) and wherein based on the quantity of faces as the average duration of the object trajectories of the faces (Dimitrova, Column 3, lines 60-63; Column 6, lines 9-14; please note this claim element has been construed in light of the 112(b) rejection above as referring to the average quantity) satisfying the threshold (Dimitrova, Column 6, lines 16-17 “a preferred embodiment utilizes the number of each object type trajectories with a duration that exceeds a threshold,”), the segment label as the classification (Dimitrova, Column 6, lines 19-21 “particular features of particular object types can be used to further facilitate the classification process.”; Column 3, line 66 – Column 4, line 2 “The classifier 200 uses the face trajectories 301 of the various segments of the video stream 10 to determine the classification 201 of each segment, or set of segments 202, of the video stream 10”) comprises at least one of an advertisement (Column 7, line 1 “commercial”), an interview (Column 4, lines 21-22 “followed by a reporter conducting on-the-scene interviews.”), or a monologue (Column 6, lines 50-51 “3. Wide close-up (shoulders and above) without text; 4. Close-up (chest and above) without text”). 
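As construed in the 112(b) discussion above, claims 1 and 22 tie the segment label to an averaged face count satisfying a threshold. The following is a hypothetical sketch of that claimed logic only; the label names follow claim 22, but the numeric thresholds are invented for illustration and appear in neither the claims nor the reference.

```python
# Hypothetical sketch of claims 1/22 as construed: count the element
# (faces) per keyframe, average the counts, and select a segment label
# when the average satisfies a threshold. Thresholds are invented.

def label_segment(face_counts_per_keyframe, threshold=1.0):
    avg = sum(face_counts_per_keyframe) / len(face_counts_per_keyframe)
    if avg < threshold:
        return None                 # threshold not satisfied; no label
    if avg >= 3.0:
        return "advertisement"      # crowded frames (toy heuristic)
    if avg >= 2.0:
        return "interview"          # roughly two faces in dialog
    return "monologue"              # a single recurring face

print(label_segment([1, 1, 2, 1]))  # average 1.25 -> monologue
print(label_segment([2, 2, 3, 2]))  # average 2.25 -> interview
```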
With regard to claim 23, Dimitrova further teaches a plurality of elements that comprises the element (Dimitrova, Column 3, lines 60-63 “Other trajectory information, such as the duration of time, or number of frames, that the face appears within the segment are also included in the parameters of each face trajectory 301”; Column 6, lines 9-14 “It has been found that the number of object trajectories for each object type per unit time and their average duration are fairly effective separating features because they represent the "density" of particular objects, such as faces or text, in the segment of the video stream.”), wherein the method further comprises filtering as only utilizing the trajectories with a duration that exceeds the threshold (Dimitrova, Column 6, lines 16-17 “a preferred embodiment utilizes the number of each object type trajectories with a duration that exceeds a threshold”) a first element from the plurality of elements, based on a quantity of keyframes of the plurality of keyframes in which the first element appears (Dimitrova, Column 3, lines 60-63 “Other trajectory information, such as the duration of time, or number of frames, that the face appears within the segment are also included in the parameters of each face trajectory 301”; Column 6, lines 9-14 “It has been found that the number of object trajectories for each object type per unit time and their average duration are fairly effective separating features because they represent the "density" of particular objects, such as faces or text, in the segment of the video stream.”), not satisfying a quantity of keyframes threshold as only utilizing the trajectories with a duration that exceeds the threshold (Dimitrova, Column 6, lines 16-17 “a preferred embodiment utilizes the number of each object type trajectories with a duration that exceeds a threshold”). 
With regard to claim 24, Dimitrova further teaches filtering as only utilizing the trajectories with a duration that exceeds the threshold (Dimitrova, Column 6, lines 16-17 “a preferred embodiment utilizes the number of each object type trajectories with a duration that exceeds a threshold”) a first inanimate object from the plurality of disparate inanimate objects as objects, such as vehicle figure objects (Dimitrova, Column 3, line 9 “an "other object" tracker 500”; Column 3, lines 3-6 “As will be evident to one of ordinary skill in the art, the principles presented herein are applicable to other object types, such as human figure objects, animal figure objects, vehicle figure objects, hand (gesture) objects, and so”), based on a quantity of keyframes of the plurality of keyframes in which the first inanimate object appears (Dimitrova, Column 3, lines 60-63 “Other trajectory information, such as the duration of time, or number of frames, that the face appears within the segment are also included in the parameters of each face trajectory 301”; Column 6, lines 9-14 “It has been found that the number of object trajectories for each object type per unit time and their average duration are fairly effective separating features because they represent the "density" of particular objects, such as faces or text, in the segment of the video stream.”), not satisfying a quantity of keyframes threshold as only utilizing the trajectories with a duration that exceeds the threshold (Dimitrova, Column 6, lines 16-17 “a preferred embodiment utilizes the number of each object type trajectories with a duration that exceeds a threshold”). 
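The filtering mapped to claims 23 and 24 (keeping only trajectories whose duration, i.e. the number of frames in which the object appears, exceeds a threshold, then summarizing the survivors per object type, cf. Column 6, lines 9-19) can be sketched as follows. The data values, field names, and threshold are invented for illustration.

```python
# Sketch of trajectory filtering per the claims 23-24 mapping: discard
# trajectories whose duration does not exceed a threshold, then compute
# per-type counts and average durations as separating features.

def filter_and_summarize(trajectories, min_frames=3):
    kept = [t for t in trajectories if t["frames"] > min_frames]
    by_type = {}
    for t in kept:
        by_type.setdefault(t["type"], []).append(t["frames"])
    return {
        obj_type: {"count": len(durs), "avg_duration": sum(durs) / len(durs)}
        for obj_type, durs in by_type.items()
    }

trajectories = [
    {"type": "face", "frames": 10},
    {"type": "face", "frames": 2},   # filtered out: duration too short
    {"type": "text", "frames": 6},
]
print(filter_and_summarize(trajectories))
# -> {'face': {'count': 1, 'avg_duration': 10.0}, 'text': {'count': 1, 'avg_duration': 6.0}}
```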
With regard to claim 25, Dimitrova further teaches wherein determining, based on the plurality of keyframes and the image classifier (Dimitrova, Column 3, line 65 – Column 4, line 2 “The classifier 200 uses the face trajectories 301 of the various segments of the video stream 10 to determine the classification 201 of each segment, or sets of segments 202, of the video stream 10.”; Column 3, lines 10-19 “For ease of reference and understanding, because face tracking and text tracking serve as the paradigm for tracking other objects, the "other object" tracker 500 and corresponding "other" trajectories 501 are not discussed further herein, their function and embodiment being evident to one of ordinary skill in the art in light of the detail presentation below of the function and embodiment of the face 300 and text 400 tracking systems, and corresponding face 301 and text 401 trajectories”), the second plurality of disparate inanimate objects when the tracked element is an object (Dimitrova, Column 3, line 9; Column 3, lines 3-6), e.g. the set of characterizations for the categories (Dimitrova, Column 6, lines 45-65) from the portion of the content asset comprises determining, based on the plurality of the keyframes, an aggregate number as the average duration of the object trajectories (Dimitrova, Column 3, lines 60-63; Column 6, lines 9-14) of each of the second plurality of disparate inanimate objects in the plurality of keyframes as the object trajectories within the unit of time in the segment of the video (Column 6, lines 9-19) for the portion of the content asset as the segment of the video (Id). Claims 3 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Dimitrova in view of Soni [2018/0089203]. 
With regard to claims 3 and 19, Dimitrova further teaches wherein determining a keyframe of the plurality of keyframes is based on at least one of: a color (Dimitrova, Column 3, lines 64-65 “characteristics associated with each face, such as color, size, and so on”), or a quantity of changes between a plurality of frames of the content asset as significant changes (Dimitrova, Column 9, lines 9-11 “MPEG and other digital encodings of video information use differential encoding, wherein a subsequent frame is encoded based on the difference from a prior frame.”; Column 9, lines 11-19). Dimitrova does not explicitly teach a color histogram. Soni teaches a color histogram for a frame of the content asset (Soni, ¶45 “content-based methods to identify key frames of video content. For example, the media system can determine content features (e.g., objects… colors, etc.) included (e.g., depicted) in the frames”; ¶46 “histograms similarity”), or a quantity of changes between a plurality of frames of the content asset (Soni, ¶46 “can determine to cluster one or more frames based on whether the frames shar one or more content features (e.g., items depicted within each frame… can identify key frames of the media object by comparing non-adjacent frames using inter-frames entropy, histograms similarity, or wavelets, selecting fames having maximum ration of objects to background… and/or any combination thereof”). It would have been obvious to one of ordinary skill in the art to which said subject matter pertains at the time the invention was filed to have implemented the feature tracking taught by Dimitrova (Column 3, lines 7-65) using the feature detection techniques taught by Soni as it yields the predictable results of providing a known means of identifying the content features. Within the proposed combination the system may leverage the machine learning techniques used to analyze the content to detect and recognize the features of the content (Soni, ¶49-¶50). 
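The keyframe-identification technique attributed to Soni (selecting key frames by comparing frame histograms for similarity) might be sketched as below. This is an assumption-laden toy, not Soni's actual algorithm: frames are modeled as flat lists of grayscale pixel values, and the coarse histogram, L1 distance, and change threshold are all invented.

```python
# Toy sketch of histogram-similarity keyframe selection: emit a frame as
# a keyframe whenever its color histogram differs enough from the last
# keyframe's histogram. All parameters are illustrative assumptions.

def histogram(frame, bins=4, max_val=256):
    """Coarse normalized intensity histogram of a flat pixel list."""
    hist = [0] * bins
    for px in frame:
        hist[px * bins // max_val] += 1
    total = len(frame)
    return [h / total for h in hist]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def select_keyframes(frames, change_threshold=0.5):
    keyframes = [0]                 # first frame is always a keyframe
    last = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        h = histogram(frame)
        if l1_distance(h, last) >= change_threshold:
            keyframes.append(i)
            last = h
    return keyframes

dark = [10] * 8     # uniformly dark frame
bright = [240] * 8  # uniformly bright frame
print(select_keyframes([dark, dark, bright, bright, dark]))  # -> [0, 2, 4]
```

The same skeleton accommodates the other comparisons named in Soni's ¶46 (e.g. inter-frame entropy) by swapping the distance function.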
Response to Arguments Applicant's arguments filed January 30, 2026 have been fully considered but they are not persuasive. With regard to claim 1, applicant argues (Argument A.i.) that Dimitrova does not teach determining a plurality of keyframes and instead teaches evaluating every frame of a video stream. In response, the plurality of frames being analyzed by Dimitrova is not the entirety of the video as applicant has argued. The plurality of frames are multiple frames of a video segment (Column 2, lines 3-7), which are themselves parts of a video sequence (Dimitrova, Column 2, lines 13-14 “…a preferred embodiment of this invention to classify each segment of a video sequence.”), which are themselves distinct parts of a video stream (Column 3, lines 20-21 “The video segmented 110 in the example processor 100 identifies distinct sequences of a video stream 10 to facilitate the processing and classification process.”). This means that Dimitrova explicitly details that the frames being analyzed are a plurality of frames that make up a single logical segment that can be used to classify that particular sequence of frames (Column 3, lines 42-48 “Note also that a segment, or sequence of image frames, need not be a contiguous sequence of image frames. For example, for ease of processing or other efficiency, a sequence of image frames forming a segment or program segment may exclude those frames classified as commercial, so hat[sic] the non-commercial frames can be processed and classified as a single logical segment.”). Each individual segment of the set of segments of the video stream is classified separately (Column 4, lines 1-2 “classification 201 of each segment, or sets of segments 202, of the video stream 10.”) using a plurality of frames of the distinct segment. Furthermore, the claims do not bound the scope of ‘a plurality of keyframes’ beyond the plain meaning of the language itself. 
Applicant argues that the processing of ‘every frame in a portion of the video stream’ does not read on the claim language but does not provide any reasoning or rationale as to how this subset of frames, forming a specific portion of the video stream (e.g. not all the frames of the video stream), is distinct from ‘a plurality of keyframes’. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Neither the instant claims nor the instant specification provides any suggestion on what the bounds of a ‘keyframe’ include. Applicant’s arguments simply state that there is a distinction without providing any reasoning or rationale to enable one of ordinary skill in the art to be able to identify the intended distinction. One of ordinary skill in the art would recognize the frames of the video segment as taught by Dimitrova as ‘a plurality of keyframes’ as they are frames of the video segment which represent logical segments such as commercials (Column 3, lines 44-48), news events or interviews (Column 4, lines 19-22). This is in line with how the term ‘plurality of keyframes’ is used within the instant specification (See Paragraph [0051] which recites “To determine the plurality of keyframes, a plurality of scenes (e.g., shots, etc.) of the segment of the content asset may be determined”; Paragraph [0053] which describes the keyframes being advertisements; Paragraph [0054] which describes the keyframes being of interview segments). It is suggested that applicant amend the claims to better define the scope of the term ‘keyframes’ should there be an intended distinction. Based on the above reasoning and rationale, the applied prior art reads on the claim language. With regard to claim 1, applicant argues (Argument A.ii.) 
that Dimitrova does not teach determining an average quantity of the element in the plurality of keyframes or that the determination is based on the quantity of the element in each keyframe. Applicant asserts that instead Dimitrova only teaches determining average durations of trajectories. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant summarizes the prior art and states that the prior art does not teach the claim language without providing any explication, reasoning, or rationale. The distinction applicant sees between the prior art and the instant claims is unclear. Applicant has provided no details regarding how the ‘average quantity’ is calculated. Within the instant specification the term ‘average’ is recited along with alternative means of aggregation, including median, logarithmic function, or other aggregation functions (Original Specification, Paragraph [0074]). Within this Paragraph, it is detailed that the system may aggregate the occurrence of one object within the keyframes, which one of ordinary skill in the art would recognize as being substantially similar to the average duration calculation of the prior art. Applicant has provided no suggestion regarding the intended scope of the phrase “average quantity of the element in the plurality of keyframes” in either the specification, claims, or arguments. One of ordinary skill in the art is left to the plain meaning of the term within the context of analyzing video frames. It is suggested that applicant amend the claims to clarify the scope of the claimed device. For the sake of clarity, Dimitrova explicitly teaches tracking the number of objects that appear in each individual frame, as well as the duration of the objects. 
One of ordinary skill in the art would recognize the ‘duration’ as being equivalent to how many frames the object appears in. Therefore, when the system tracks the ‘duration’ of the objects, the system is tracking the quantity of that object in the set of frames. The ‘average duration’ therefore tracks the average quantity of that object within the particular set of frames. Dimitrova explicitly teaches that the average duration represents the ‘density’ of the particular objects within the segment (Column 6, lines 9-14). One of ordinary skill in the art would recognize the average duration, e.g. the ‘density’ of the object, as being the average quantity of the object in the keyframes. Applicant has provided no reasoning or rationale to clarify the scope of the claim language and instead merely asserts that it is distinct from the prior art. Based on the above reasoning and rationale, the applied art reads on the claim language. With regard to claim 1, applicant argues (A.iii) that comparing each individual duration for each trajectory to a threshold is not an ‘average quantity of the element’. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant summarizes the prior art and states that the prior art does not teach the claim language without providing any explication, reasoning, or rationale. The distinction applicant sees between the prior art and the instant claims is unclear. Applicant appears to have an intended meaning for the phrase “average quantity of elements in the plurality of keyframes” that is not clearly articulated in the instant specification, claims, or arguments. It is suggested that the claims be amended to clarify the scope of the claim language. 
For the sake of clarity, when discussing the determination of the number of each object type trajectories with a duration that exceeds a threshold, the full quote, with context, of the prior art reads: “It has been found that the number of object trajectories for each object type per unit time and their average duration are fairly effective separating features because they represent the "density" of particular objects, such as faces or text, in the segment of the video stream. Additionally, it has been found that trajectories of long duration usually convey more important content information in video, and a preferred embodiment utilizes the number of each object type trajectories with a duration that exceeds a threshold, and their respective average duration as an effective separating feature”. When read in context, it is clear that the system is using the average duration to separate the features, e.g. the average duration is clearly envisioned as being used to make the determination. Dimitrova clearly acknowledges that the average duration (e.g. the density of the object) is an effective feature to be used to separate the features. One of ordinary skill in the art would recognize the determination that the duration (e.g. the average duration) exceeds the threshold as being the calculation that the system uses to identify the ‘separate features’ that are being used to classify the object. Based on the above reasoning and rationale, the prior art reads on the claim language. With regard to claim 1, applicant argues (A.iv.) that the symbols generated by Dimitrova do not facilitate navigation to the portion of the content asset. Specifically, applicant argues that Dimitrova does not teach that the category symbols inserted in the frame are used by the user to facilitate selection of specific portions of the program. In response, Dimitrova explicitly teaches that the user can select specific portions of the program for viewing. 
One of ordinary skill in the art would recognize that this means that the user is specifically selecting the segment that they wish to view (or skip). Dimitrova explicitly details that the specific content being viewed/skipped is, for example, ‘commercial segments’, ‘news programs’ or ‘non-news materials’. One of ordinary skill in the art would recognize this to mean that the user is able to view/skip content based on the categorization of that content, which within the device is denoted by the categorization symbol that was inserted into the segment. The user would not be able to navigate to the specific ‘news program’ that they wish to view, while specifically skipping the ‘commercial segments’, if they are not provided with the categorization symbols that the system has added to the video. Dimitrova details that this can only be done if there is a clear identification of the start and end of each commercial break (Column 1, lines 35-36), and one of ordinary skill in the art would recognize that, within the device taught by Dimitrova, this is done by the symbols that categorize each frame in the segment. The entire purpose of the device taught by Dimitrova is to classify the programs to facilitate video retrieval based on said classification. The problem that Dimitrova is attempting to solve is that the material does not include the detailed classification information (Column 1, lines 25-40). The purpose of the invention is to enable the users to “locate select segments of programs for viewing” (Column 1, line 59) and it facilitates this using automatic classification (Column 1, lines 62-65). Based on the above reasoning and rationale, the applied prior art reads on the claim language. With regard to claim 10, applicant argues (Argument B.i) that the prior art does not teach “determining.. 
.based on a quantity of matches between the first plurality of disparate inanimate objects and a second plurality of disparate inanimate objects satisfying a threshold, a segment profile indicating a category of segment in the content asset”. Applicant asserts that Dimitrova instead teaches comparing each individual duration for each trajectory to a threshold duration, and is silent to “a quantity of matches between the first plurality of disparate inanimate objects and a second plurality of disparate inanimate objects satisfying a threshold”. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant describes aspects of the prior art, quotes the claim language, and states that the prior art is silent regarding the claim language. Applicant has provided no reasoning or rationale to support their statements. Within the claim mapping, the ‘second plurality of disparate inanimate objects’ has been mapped to the patterns common to particular classifications. The patterns common to particular classifications are a set of disparate inanimate objects, and thus read on the claim element. Neither the claim language nor the specification places any restriction beyond the plain meaning of the language for the claimed “second plurality of disparate inanimate objects”. Within the prior art, this set of common patterns (e.g. the ‘second plurality of disparate inanimate objects’) is matched with the objects located in the frames, e.g. the faces in the frame. When the match is above a threshold quantity, e.g. when the number of differences is not above the threshold, they are determined to satisfy the threshold, and the label is applied. 
Applicant has provided no reasoning or rationale regarding the claim mapping put forth by the office, and has not addressed the claim mapping in their arguments. Applicant merely asserts that the prior art is silent without addressing the mapping put forth by the office. It is suggested that the claims be amended to clarify the scope of this claim element. With regard to claim 16 (arguments C.i and C.ii), applicant again describes the prior art, then quotes claim language and makes the statement that the prior art fails to teach the quoted claim language. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant has not addressed the claim mapping put forth, which details that the metadata was mapped to the symbols. The symbols are determined based on the evaluation of the classification, which determines a quantity of matches as differences exceeding the threshold number of differences allowed when determining matching content. One of ordinary skill in the art would recognize that the system taught by Dimitrova is comparing the objects, e.g. the “first disparate inanimate objects”, with the common patterns (e.g. the ‘second plurality of disparate inanimate objects’) to identify if they match or not. A match is determined when the number of differences between the objects satisfies a threshold. The objects in question are faces, e.g. ‘people’. The system uses an HMM which has a classification term associated with the ‘common patterns’. The ‘common patterns’ within the model are associated with ‘keywords’ that represent the classification (e.g. terms like news, commercial, sitcom, soap, weather, sports-news, market-news, political news (Column 6, line 67 - Column 7, line 1)). When the system matches the object (e.g. 
the face) with the common patterns (e.g. the second disparate object), the HMM assigns the ‘keyword’ that represents the classification type to the object being classified. Within the prior art the system is then able to generate the ‘symbol’ (e.g. metadata) which is added to the segment to designate the classification. Applicant’s arguments do not address the claim mapping put forth. The claim language recites the matching at a high level, merely detailing the elements being matched without providing any detail regarding the operation itself. It is suggested that the claims be amended to clarify what the ‘keywords’ are and how they are used during the classification instead of merely reciting their use during the generation of the label. Based on the above reasoning, the applied art reads on the claim language. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA WILLIS whose telephone number is (571)270-7691. 
The examiner can normally be reached Monday-Friday 8am-2pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AMANDA L WILLIS/ Primary Examiner, Art Unit 2156

Prosecution Timeline

Nov 20, 2018
Application Filed
Jun 11, 2020
Non-Final Rejection — §102, §103, §112
Dec 17, 2020
Response Filed
Jan 12, 2021
Final Rejection — §102, §103, §112
Jul 15, 2021
Request for Continued Examination
Jul 16, 2021
Response after Non-Final Action
Jul 22, 2021
Non-Final Rejection — §102, §103, §112
Dec 09, 2021
Interview Requested
Dec 17, 2021
Applicant Interview (Telephonic)
Dec 17, 2021
Examiner Interview Summary
Dec 27, 2021
Response Filed
Feb 23, 2022
Non-Final Rejection — §102, §103, §112
May 18, 2022
Interview Requested
Jun 10, 2022
Examiner Interview Summary
Jun 10, 2022
Applicant Interview (Telephonic)
Jun 27, 2022
Response Filed
Jul 15, 2022
Final Rejection — §102, §103, §112
Jan 23, 2023
Request for Continued Examination
Jan 25, 2023
Response after Non-Final Action
Feb 09, 2023
Non-Final Rejection — §102, §103, §112
Aug 14, 2023
Response Filed
Oct 16, 2023
Final Rejection — §102, §103, §112
Jan 19, 2024
Request for Continued Examination
Jan 22, 2024
Response after Non-Final Action
Mar 18, 2024
Non-Final Rejection — §102, §103, §112
Jun 10, 2024
Interview Requested
Jun 18, 2024
Applicant Interview (Telephonic)
Jun 18, 2024
Examiner Interview Summary
Aug 26, 2024
Response Filed
Nov 21, 2024
Final Rejection — §102, §103, §112
Jan 27, 2025
Response after Non-Final Action
Feb 26, 2025
Notice of Allowance
Feb 26, 2025
Response after Non-Final Action
Mar 05, 2025
Response after Non-Final Action
May 27, 2025
Response after Non-Final Action
Jun 27, 2025
Response after Non-Final Action
Jul 02, 2025
Response after Non-Final Action
Sep 02, 2025
Request for Continued Examination
Sep 08, 2025
Response after Non-Final Action
Sep 26, 2025
Non-Final Rejection — §102, §103, §112
Jan 30, 2026
Response Filed
Feb 24, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602380
SUBSUMPTION OF VIEWS AND SUBQUERIES
2y 5m to grant Granted Apr 14, 2026
Patent 12585675
HYBRID POSITIONAL POSTING LISTS
2y 5m to grant Granted Mar 24, 2026
Patent 12579206
AUTOMATIC ARTICLE ENRICHMENT BY SOCIAL MEDIA TRENDS
2y 5m to grant Granted Mar 17, 2026
Patent 12461960
SYSTEMS AND METHODS FOR MACHINE LEARNING-BASED CLASSIFICATION AND GOVERNANCE OF UNSTRUCTURED DATA USING CURATED VIRTUAL QUEUES
2y 5m to grant Granted Nov 04, 2025
Patent 12443613
REDUCING PROBABILISTIC FILTER QUERY LATENCY
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

12-13
Expected OA Rounds
36%
Grant Probability
62%
With Interview (+26.6%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 345 resolved cases by this examiner. Grant probability derived from career allow rate.
