Prosecution Insights
Last updated: April 19, 2026
Application No. 18/029,510

Techniques and Apparatuses that Implement Camera Manager Systems Capable of Generating Frame Suggestions from a Set of Frames

Final Rejection (§102, §103)
Filed
Mar 30, 2023
Examiner
SALEH, ZAID MUHAMMAD
Art Unit
2668
Tech Center
2600 — Communications
Assignee
Google LLC
OA Round
2 (Final)
Grant Probability: 65% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (28 granted / 43 resolved; +3.1% vs TC avg; above average)
Interview Lift: +48.4% (strong), based on resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 30 currently pending
Career History: 73 total applications across all art units
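The career allow rate above is simple arithmetic over the examiner's resolved cases. A quick sketch (the function name is illustrative, not part of any tool shown here):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# The examiner's career numbers from the panel above: 28 granted of 43 resolved.
rate = allow_rate(28, 43)
print(f"{rate:.1f}%")  # prints 65.1%, shown rounded to 65% in the panel
```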

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
"vs TC avg" figures are compared against a Tech Center average estimate. Based on career data from 43 resolved cases.

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1 – 16 remain pending. Claims 1, 2, 3, 5, 6, 9, and 14 – 16 are amended.

Response to Arguments

Applicant's arguments filed November 03, 2025 with respect to claims 1 – 16 have been fully considered, but they are not persuasive. The amendment of November 03, 2025 overcomes the claim objection regarding claims 1 – 16 from the last Office Action.

Response to Remarks

Applicant argues that Townsend is silent on the limitations below. Examiner respectfully disagrees for the reasons provided below.

In the Remarks (p. 9) regarding claim 1, applicants assert: "The frame numbers used in Townsend are 'annotation data' used to 'determine a time (e.g., video frames) and .. [associate it] with the moments/video clips' [see Abstract]. Frame numbers and time 'annotation data' do not teach 'a time diversity score based on a time feature difference between time-related features of frames of the set of frames and time-related features of the first frame' as recited in amended claim 1."

Examiner respectfully disagrees because Townsend in [0051] discloses a timestamp and a period of time, which equate to a time feature. Additionally, Townsend in [0079] discloses associating different priority metrics with an object over time, which implies a time feature difference between time-related features of frames. Differences in priority metrics across the timeline directly reflect differences between frames based on the time-related feature.

Summary of Citations (Townsend)

Paragraph [0051]: "the server(s) 112 may analyze a video frame 310 and generate annotation data 312, which may include time (e.g., a timestamp, a period of time, etc.)".
Paragraph [0079]: "In some examples, the server(s) 112 may associate different priority metrics with an object over time, such as when a face is obscured, hidden and/or turned away from the image capture device 110".

In the Remarks (p. 10) regarding claim 1, applicants assert: "Townsend discloses 'As illustrated in Fig. 5A, the server(s) may store annotation data in an annotation database 510. The annotation database may include time, location, motion, faces, humans, scenes, audio, landmarks, objects, pets, directional data, etc.' (See [col 11, line 64 - col 12, line 5]). 'Stor[ing] annotation data' of faces is not sufficient to teach 'calculating a facial diversity score based on a facial feature difference between facial-related features of the frames of the set of frames and facial-related features of the first frame' as recited in amended claim 1."

Examiner respectfully disagrees because Townsend in [Column 9, Lines 30 – 48] discloses identifying faces and facial expressions, such as smiling, across different frames and annotating them. Additionally, in [Column 12, Lines 26 – 27] Townsend discloses that the priority metric (diversity score) is calculated based on the annotated data.

Summary of Citations (Townsend)

[Column 9, Lines 30 – 48]: "analyze a video frame 310 and generate annotation data 312, which may include time (e.g., a timestamp, a period of time, etc.), ... faces (existence, identification, if smiling, etc.), humans (e.g., head and shoulders), ... and/or directional data (e.g., position of faces, audio, landmarks, objects, pets, etc. within the video frame). In some examples, the annotation data may indicate an area within (e.g., x and y pixel coordinates) the video data that is of interest".

[Column 12, Lines 26 – 27]: "The server(s) 112 may determine the priority metric (e.g., interesting score) using the annotation data".

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 7, 10, and 12 – 16 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty et al., US Patent Application Publication No. US-20160070963-A1 (hereinafter Chakraborty), in view of Townsend, US Patent No. US-9620168-B1 (hereinafter Townsend).

Regarding claim 1, Chakraborty discloses a method performed by a computing device comprising: performing a frame score generation process to calculate a frame diversity score (Chakraborty in [0037] discloses, "In the exemplary embodiment illustrated in FIG. 2B, frame-scoring logic 235 includes both coverage scoring logic 230 and diversity scoring logic 240"); calculating a diversity score (in [0037], Chakraborty discloses calculating the diversity score of the frame); calculating an aesthetic diversity score based on an aesthetic feature difference between scene-related features of the frames of the set of frames and scene-related features of the first frame (in [0037], Chakraborty discloses determining a diversity score, and in [0036] it discloses object detection (aesthetic) across different frames); and calculating the frame diversity score for the frames of the set of frames relative to the first frame based on the aesthetic diversity (Chakraborty in [0037] discloses calculating the diversity score of frames and features (disclosed in [0036]); moreover, [0036] discloses object detection (aesthetic)).

Chakraborty does not disclose the limitations as further recited in the claim.
Townsend discloses receiving a stream of image data defining a first frame and a set of frames not including the first frame (Townsend in [Column 10, Lines 35 – 37] discloses, "As illustrated in FIG. 4, the server(s) 112 may receive (410) video data and may optionally receive (412) existing annotation data associated with the video data"; in Fig. 5A, Townsend discloses determining the first frame and the frames not including the first frame); the frame score generation process comprising: calculating a time diversity score based on a time feature difference between time-related features of frames of the set of frames and time-related features of the first frame (Townsend in [Column 9, Lines 32 – 34] discloses a timestamp and a period of time, which equate to a time feature; additionally, Townsend in [Column 17, Lines 9 – 12] discloses associating different priority metrics with an object over time, which implies a time feature difference between time-related features of frames; differences in priority metrics across the timeline directly reflect differences between frames based on the time-related feature); the facial diversity and time diversity (Townsend, Fig. 5A); calculating a facial diversity score based on a facial feature difference between facial-related features of the frames of the set of frames and facial-related features of the first frame (Townsend in [Column 9, Lines 30 – 48] discloses identifying faces and facial expressions, such as smiling, across different frames and annotating them.
Additionally, in [Column 12, Lines 26 – 27], Townsend discloses that the priority metric (diversity score) is calculated based on the annotated data); and determining, using the frame diversity score, whether to include the first frame as part of an image object representing suggested frames of the stream of image data (Townsend in [Column 3, Line 52 – Column 4, Line 4] discloses, "the server(s) 112 may group ten candidate video clips together based on the similarity score and may select three candidate video clips having the highest priority metric as the first video clips to increase a diversity between the first video clips", wherein selecting based on the priority to increase diversity implies determining whether the first frame is part of an image object representing suggested frames of the stream of image data).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Townsend into the system of Chakraborty, because it would allow the system to suggest frames that are not only distinct in facial expression but also captured at different times.

Summary of Citations (Townsend)

[Column 3, Line 52 – Column 4, Line 4]: "the server(s) 112 may select any of the candidate video clips having a peak priority metric value ... In some examples, the server(s) 112 may select the first video clips based on priority metrics using a variable threshold .... Thus, the server(s) 112 may group the candidate video clips based on the similarity scores and may select a desired number of candidate video clips from each group based on a highest priority metric. For example, the server(s) 112 may group ten candidate video clips together based on the similarity score and may select three candidate video clips having the highest priority metric as the first video clips to increase a diversity between the first video clips".
[Column 9, Lines 32 – 34]: "the server(s) 112 may analyze a video frame 310 and generate annotation data 312, which may include time (e.g., a timestamp, a period of time, etc.)".

[Column 10, Lines 35 – 37]: "As illustrated in FIG. 4, the server(s) 112 may receive (410) video data and may optionally receive (412) existing annotation data associated with the video data".

[Column 12, Lines 43 – 46]: "As illustrated in FIG. 5B, the annotation database 512 includes Frame 1, Frame 2, Frame 3, Frame 10, Frame 11, Frame 30 and Summary Data associated with the overall video clip".

[Column 17, Lines 9 – 12]: "In some examples, the server(s) 112 may associate different priority metrics with an object over time, such as when a face is obscured, hidden and/or turned away from the image capture device 110".

Summary of Citations (Chakraborty)

Paragraph [0036]: "the feature vector may include features determined using any object detection technique known in the art. In the exemplary embodiment, frame feature extractor 229 is to generate a feature vector comprising histograms of oriented gradient (HOG) features".

Paragraph [0037]: "In the exemplary embodiment illustrated in FIG. 2B, frame-scoring logic 235 includes both coverage scoring logic 230 and diversity scoring logic 240".

Regarding claims 7, 10, and 12 – 14, the grounds of rejection from the last Office Action with respect to Chakraborty in view of Townsend apply here.

Regarding claim 15, apparatus claim 15 corresponds to method claim 1. Therefore, the rejection analysis and motivation to combine for claim 1 are applicable to claim 15.

Regarding claim 16, claim 16 is a computer-readable storage medium claim that corresponds to method claim 1. Therefore, the rejection analysis of claim 1 applies to claim 16. See also [0087].

Summary of Citations (Chakraborty)

Paragraph [0087]: "In one or more third embodiment, a computer-readable storage media has instructions stored...".
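As mapped above, claim 1 describes a per-frame diversity-scoring pipeline: compute time, facial, and aesthetic diversity scores between a first frame and the remaining frames, combine them into a frame diversity score, and use that score to decide whether the frame joins the suggested set. A minimal illustrative sketch of that process follows; the feature representations, the mean-absolute-difference metric, the equal-weight sum, and the threshold are all assumptions, since the claim only requires that the frame diversity score be "based on" the component scores:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float       # time-related feature (seconds)
    face_vec: list[float]  # facial-related feature vector (hypothetical encoding)
    scene_vec: list[float] # scene/aesthetic feature vector (hypothetical encoding)

def _mean_abs_diff(a: list[float], b: list[float]) -> float:
    # Average element-wise distance between two equal-length feature vectors.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def frame_diversity_score(first: Frame, others: list[Frame]) -> float:
    # Time diversity: average timestamp distance to the other frames.
    time_div = sum(abs(f.timestamp - first.timestamp) for f in others) / len(others)
    # Facial diversity: average facial-feature distance to the other frames.
    face_div = sum(_mean_abs_diff(f.face_vec, first.face_vec) for f in others) / len(others)
    # Aesthetic diversity: average scene-feature distance to the other frames.
    scene_div = sum(_mean_abs_diff(f.scene_vec, first.scene_vec) for f in others) / len(others)
    # Equal weighting is an assumption; the claim does not specify the combination.
    return time_div + face_div + scene_div

def include_in_suggestions(first: Frame, others: list[Frame], threshold: float) -> bool:
    # Keep the first frame only if it is diverse enough from the rest of the set.
    return frame_diversity_score(first, others) >= threshold

# Example: a candidate 2 s away with a changed face and an unchanged scene
# scores 2.0 (time) + 1.0 (facial) + 0.0 (aesthetic) = 3.0.
first = Frame(0.0, [0.0, 0.0], [1.0, 1.0])
others = [Frame(2.0, [1.0, 1.0], [1.0, 1.0])]
print(frame_diversity_score(first, others))  # prints 3.0
```

The point of contention in the rejection is exactly the first two helpers: whether Townsend's timestamp annotations and time-varying priority metrics teach `time_div`, and whether its face/expression annotations teach `face_div`.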
Claims 2 – 6, 8, 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty in view of Townsend, and further in view of Jiang, US Patent No. US-10134440-B2 (hereinafter Jiang). Chakraborty in view of Townsend fails to teach the limitations as recited in claims 2 – 6, 8, 9 and 11, respectively. However, Jiang does. The grounds of rejection and motivation to combine from the last Office Action with respect to Jiang apply here.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAID MUHAMMAD SALEH, whose telephone number is (703) 756-1684. The examiner can normally be reached M-F, 8 am - 5 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZAID MUHAMMAD SALEH/
Examiner, Art Unit 2668
01/22/2025

/VU LE/
Supervisory Patent Examiner, Art Unit 2668

Prosecution Timeline

Mar 30, 2023
Application Filed
May 29, 2025
Non-Final Rejection — §102, §103
Nov 03, 2025
Response Filed
Jan 22, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602944
AUTHENTICATION OF DENDRITIC STRUCTURES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586501
DISPLAY DEVICE, DISPLAY METHOD, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586396
INFORMATION PROCESSING APPARATUS AND SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12562535
METHOD FOR DETECTING UNDESIRED CONNECTION ON PRINTED CIRCUIT BOARD
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12555344
METHOD AND APPARATUS FOR IMPROVING VIDEO TARGET DETECTION PERFORMANCE IN SURVEILLANCE EDGE COMPUTING
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 99% (+48.4%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
