Prosecution Insights
Last updated: April 19, 2026
Application No. 18/364,856

MEDIA ANNOTATION WITH PRODUCT SOURCE LINKING

Final Rejection §103
Filed: Aug 03, 2023
Examiner: BROUGHTON, KATHLEEN M
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 83%, above average (219 granted / 263 resolved; +21.3% vs TC avg)
Interview Lift: +8.3%, a moderate lift, measured across resolved cases with interview
Typical Timeline: 2y 7m average prosecution, with 34 applications currently pending
Career History: 297 total applications across all art units

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 263 resolved cases
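The per-statute rates and their "vs TC avg" deltas can be cross-checked with plain arithmetic; a minimal sketch using only the figures shown above:

```python
# Examiner's statute-specific rates and reported deltas vs the Tech Center average.
rates  = {"§101": 10.9, "§103": 51.2, "§102": 24.1, "§112": 11.4}
deltas = {"§101": -29.1, "§103": 11.2, "§102": -15.9, "§112": -28.6}

# Implied Tech Center baseline per statute: rate minus delta.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # each statute implies the same 40.0 baseline
```

Each statute's rate minus its delta comes out to 40.0, which suggests the dashboard measures every delta against a single Tech Center baseline estimate of about 40%.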

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Receipt is acknowledged of claim amendments with associated arguments/remarks, received January 26, 2026. Claims 1-20 are pending, with amendments to claims 1, 10, and 17.

Response to Arguments

Applicant's arguments (see Remarks, pp. 7-8, filed 01/26/2026) with respect to the rejections of claims 1-2, 4-6, 8-11, 13-15, 17-18, and 20 under 35 U.S.C. § 102(a)(1) have been fully considered and, in light of the associated amendment, are persuasive. The rejection has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made as being unpatentable over Lewis et al. (US 2014/0244660) in view of Shihadah et al. (US 2013/0282532).

Applicant's arguments (see Remarks, pp. 7-8, filed 01/26/2026) with respect to the rejections of claims 3, 7, 12, 16, and 19 under 35 U.S.C. § 103 have been fully considered and, in light of the associated amendment, are persuasive. The rejection has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made as being unpatentable over Lewis et al. (US 2014/0244660) in view of Shihadah et al. (US 2013/0282532) and Huang et al. (US 2010/0260426).

Information Disclosure Statement

The information disclosure statement (IDS) submitted on January 26, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-6, 8-11, 13-15, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lewis et al. (US 2014/0244660, cited in Non-Final 10/27/2025) in view of Shihadah et al. (US 2013/0282532, disclosed in IDS 01/26/2026).
Regarding Claim 1, Lewis et al. teach a method (implementation of architecture 100, including a method of ranking and scoring media content sources; Figs. 1-5 and ¶ [0020], [0023]-[0024], [0029], [0055]) comprising:

identifying a set of media items accessible to users of a platform (client machines 102A-N subscribe to one or more media content sources 122 (platform) with media items associated with media content sources 122; Fig. 1 and ¶ [0023]), wherein each of the set of media items comprises a reference to one or more of a set of objects (media items may each be associated with media content sources 122 and objects (objects interpreted under BRI as the annotated button and popup); Fig. 1 and ¶ [0026]-[0027], [0042]);

responsive to a request from a client device associated with a user of the platform for access to a media item of the set of media items (client machines 102A-N may request to consume (play or view) media items, and content server 126 sends the media item to the client machine 102A-N for access, which is analyzed by media source recommender 124; Figs. 1, 2 and ¶ [0023]-[0024], [0027]-[0028]), determining one or more objects depicted within content of the media item that satisfy one or more criteria based on consumption data associated with the set of media items (records of user accounts, including media items 250 and content such as content sources, user interests, and subject matter, are monitored by media item logger 215, and similar media items 250 are determined based on a scoring criterion by media item analyzer 210 and media source ranker 220, including objects associated with an annotation; Figs. 1, 2 and ¶ [0024]-[0026], [0032]-[0035], [0042]), wherein the consumption data indicates information about a consumption of the set of media items by other users of the platform over a time period (media item analyzer 210 includes media item analysis to compare access logs 270 of different users based on a threshold amount of media consumption (interpreted such that the threshold amount of media consumption represents the broadly claimed "time period"); Figs. 1, 2 and ¶ [0024], [0037]);

obtaining a source indicator associated with the determined one or more objects (an object may be identified in media (video) as a media item and associated with a media content source; Figs. 1, 2 and ¶ [0024]-[0026], [0041]-[0042]); and

updating a user interface (UI) of the client device (user interface 230 may provide media content sources 255 to display to the user based on a ranking; Figs. 1, 2 and ¶ [0028], [0053]) to include, with the media item, an annotation of the one or more objects to indicate that the one or more objects are associated with the source indicator (annotations associated with the media items are correlated to a media content source, and the annotation may generate a call-to-action that invites a user to activate the annotation to perform an action, such as opening a new web page via the media source recommender 124, based on media source ranker 220; Figs. 1, 2 and ¶ [0024]-[0026], [0041]-[0042], [0049]).

Lewis et al. do not explicitly teach that the one or more objects are depicted within content of the media. Lewis et al. teach that an object is displayed on the media content based on an annotation referenced by the video at a particular point and initiates an action when activated (¶ [0042]). Shihadah et al. is analogous art pertinent to the technological problem addressed in the current application and teaches the one or more objects depicted within content of the media (frames of video are extracted 602, 604 with real objects depicted in the media content (¶ [0095]), in which logos, object identifiers, and text identified at 606A, 606B, 606C are associated with the frame and may be matched 610 to additional content for the user 614; Fig. 6 and ¶ [0089]-[0091]).
It would have been obvious to one of ordinary skill in the art to substitute the teachings of Lewis et al. with those of Shihadah et al., including the one or more objects depicted within content of the media. By using objects depicted within content of the media, marketers and users may quickly and efficiently identify objects for product or service offers, resulting in further consumer options being offered effectively and efficiently, as recognized by Shihadah et al. (¶ [0007]-[0009]).

Regarding Claim 2, Lewis et al. in view of Shihadah et al. teach the method of claim 1 (as described above), wherein the request is received from the client device during a current time period (Lewis et al.: client machines 102A-N may request to consume media sources 122 during a logged-in time (the current time interpreted to occur while consuming media content); Figs. 1, 2 and ¶ [0027]), and wherein the time period indicated by the consumption data corresponds to at least one of (interpreted such that a single time from the following list of times is required, based on the conjunction "or"; see Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875, 69 USPQ2d 1865, 1868 (Fed. Cir. 2004)) the current time period, a prior time period that is prior to the current time period, or a future time period (Lewis et al.: access logs 270 of different users based on a threshold amount of media consumption (interpreted such that the threshold amount of media consumption represents the broadly claimed "time period", which would be understood to be a prior time period that is prior to the current time period of different users); Figs. 1, 2 and ¶ [0024], [0037]).

Regarding Claim 4, Lewis et al. in view of Shihadah et al. teach the method of claim 1 (as described above), wherein the set of media items comprises at least one of a video item or an audio item (Lewis et al.: media items include video files and audio files; Fig. 1 and ¶ [0026]).
Regarding Claim 5, Lewis et al. in view of Shihadah et al. teach the method of claim 1 (as described above), wherein the consumption data associated with the set of media items comprises information about an action by the at least one of the other users with respect to the set of media items during the time period (Lewis et al.: media item analyzer 210 includes media item analysis to compare access logs 270 of different users based on a threshold amount of media consumption (interpreted such that the threshold amount of media consumption represents the broadly claimed "time period"); Figs. 1, 2 and ¶ [0024], [0037]), the action comprising one or more of (interpreted such that a single action from the following list of actions is required, based on the conjunction "or"; see Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875, 69 USPQ2d 1865, 1868 (Fed. Cir. 2004)) a re-watching action, a pausing action, a rewinding action, a fast-forwarding action, or a zoom action (Lewis et al.: a user action can include any type of interaction, including playing, saving, rating, sharing, pausing, rewinding, viewing, commenting on, and forwarding a media item; Fig. 2 and ¶ [0034]).

Regarding Claim 6, Lewis et al. in view of Shihadah et al. teach the method of claim 1 (as described above), further comprising: performing digital image processing on a media item of the set of media items (Lewis et al.: annotations may be embedded in media items (annotating the media is a digital image processing technique); Figs. 1, 2 and ¶ [0041]-[0042]) and recognizing the one or more of the set of objects based on the digital image processing (Lewis et al.: an object may be identified based on the annotation in the media items and media content sources; Figs. 1, 2 and ¶ [0035], [0041]-[0043]).
Regarding Claim 8, Lewis et al. in view of Shihadah et al. teach the method of claim 1 (as described above), wherein obtaining the source indicator associated with the determined one or more objects comprises: determining a plurality of sources that are associated with the one or more objects (Lewis et al.: multiple media content sources may be identified based on the object annotation; Fig. 2 and ¶ [0042]-[0044], [0046]); and selecting a source from the plurality of sources based on contextual data associated with the client device, wherein the source indicator corresponds to the selected source (Lewis et al.: an annotation may redirect a user, on a given user (client) device 102A-N, based on the user's actions, activities, and preferences (contextual data), to a particular media content source based on the media item analyzer 210 of associated objects and annotations and the associated ranking from ranker 220, for the user to select a recommended media content source; Fig. 2 and ¶ [0031], [0042]-[0044], [0053]).

Regarding Claim 9, Lewis et al. in view of Shihadah et al. teach the method of claim 8 (as described above), wherein the contextual data comprises at least one of a geographic location of the client device, a source preference associated with the client device, or an availability of the one or more objects (Lewis et al.: context data may be based on a user's preferences relevant to the user on the user (client) device 102A-N; Figs. 1, 2 and ¶ [0023], [0031]-[0034]).

Regarding Claim 10, Lewis et al. teach a system (computing device 600; Fig. 6 and ¶ [0071]) comprising: a memory (memory 604; Fig. 6 and ¶ [0072]); and a processing device coupled to the memory (processing device 602 coupled to memory 604; Fig. 6 and ¶ [0073], [0075]), the processing device to perform operations (instructions 626 executed on processing device 602; Fig. 6 and ¶ [0075]) comprising steps identical to claim 1 (as described above).
Regarding Claim 11, Lewis et al. in view of Shihadah et al. teach the system of claim 10 (as described above), wherein the further limitations are identical to claim 2 (as described above).

Regarding Claim 13, Lewis et al. in view of Shihadah et al. teach the system of claim 10 (as described above), wherein the further limitations are identical to claim 4 (as described above).

Regarding Claim 14, Lewis et al. in view of Shihadah et al. teach the system of claim 10 (as described above), wherein the further limitations are identical to claim 5 (as described above).

Regarding Claim 15, Lewis et al. in view of Shihadah et al. teach the system of claim 10 (as described above), wherein the further limitations are identical to claim 6 (as described above).

Regarding Claim 17, Lewis et al. teach a non-transitory computer readable storage medium comprising instructions (memory 604 may store instructions 626; Fig. 6 and ¶ [0075]) that, when executed by a processing device, cause the processing device to perform operations (instructions 626 executed on processing device 602; Fig. 6 and ¶ [0075]) comprising steps identical to claim 1 (as described above).

Regarding Claim 18, Lewis et al. in view of Shihadah et al. teach the non-transitory computer readable storage medium of claim 17 (as described above), wherein the further limitations are identical to claim 2 (as described above).

Regarding Claim 20, Lewis et al. in view of Shihadah et al. teach the non-transitory computer readable storage medium of claim 17 (as described above), wherein the further limitations are identical to claim 4 (as described above).

Claims 3, 7, 12, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lewis et al. (US 2014/0244660) in view of Shihadah et al. (US 2013/0282532) and Huang et al. (US 2010/0260426).

Regarding Claim 3, Lewis et al. in view of Shihadah et al. teach the method of claim 1 (as described above).
Lewis et al. in view of Shihadah et al. do not teach wherein the UI of the client device is updated to include the annotation in response to a detection of a user selection of the one or more objects via the UI. Huang et al. is analogous art pertinent to the technological problem addressed in this application and teaches wherein the UI of the client device is updated to include the annotation in response to a detection of a user selection of the one or more objects via the UI (a user annotation of objects identified by the user with the user input interface of the user mobile device 130 may be superimposed over the image 100; Figs. 1, 4, 6A, 6B and ¶ [0031], [0060]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Lewis et al. in view of Shihadah et al. with Huang et al., including wherein the UI of the client device is updated to include the annotation in response to a detection of a user selection of the one or more objects via the UI. By allowing a user to identify objects with annotations, the remote server may focus the visual search scope on the annotations of interest, thereby improving image recognition accuracy, speed, and efficiency and tailoring the information content to the user, as recognized by Huang et al. (¶ [0004], [0011]).

Regarding Claim 7, Lewis et al. in view of Shihadah et al. teach the method of claim 1 (as described above). Lewis et al. in view of Shihadah et al. do not teach wherein updating the UI of the client device to include, with the media item, the annotation comprises: updating the UI to include an emphasis of a region of an image frame of the media item that depicts the one or more objects, wherein the emphasis comprises at least one of an outline, a highlight, a color altering, or a brightening of the region of the image frame.
Huang et al. is analogous art pertinent to the technological problem addressed in this application and teaches wherein updating the UI of the client device to include, with the media item, the annotation comprises: updating the UI to include an emphasis of a region of an image frame of the media item that depicts the one or more objects, wherein the emphasis comprises at least one of (interpreted such that a single emphasis from the following list is required, based on the conjunction "or"; see Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875, 69 USPQ2d 1865, 1868 (Fed. Cir. 2004)) an outline, a highlight, a color altering, or a brightening of the region of the image frame (indicators that outline a pattern, for example, are superimposed over the image on the detected objects that recognize characteristics of the objects (step 420), and additional methods highlight the object(s), such as patterns, boxes, and bulls-eyes 610-620; Figs. 1, 4, 6A, 6B and ¶ [0060]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Lewis et al. in view of Shihadah et al. with Huang et al., including wherein updating the UI of the client device to include, with the media item, the annotation comprises: updating the UI to include an emphasis of a region of an image frame of the media item that depicts the one or more objects, wherein the emphasis comprises at least one of an outline, a highlight, a color altering, or a brightening of the region of the image frame. By highlighting annotated objects of interest in images, the user may easily visualize objects in a query image and the communication bandwidth required to perform the query search is decreased, resulting in enhanced image recognition accuracy, speed, and efficiency and information content tailored to the user, as recognized by Huang et al. (¶ [0004], [0011]).
Regarding Claim 12, Lewis et al. in view of Shihadah et al. teach the system of claim 10 (as described above), wherein the further limitations are identical to claim 3 (as described above).

Regarding Claim 16, Lewis et al. in view of Shihadah et al. teach the system of claim 10 (as described above), wherein the further limitations are identical to claim 7 (as described above).

Regarding Claim 19, Lewis et al. in view of Shihadah et al. teach the non-transitory computer readable storage medium of claim 17 (as described above), wherein the further limitations are identical to claim 3 (as described above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Scott-Green et al. (US 2022/0027624, application 16/627,991, cited in Non-Final Rejection 10/27/2025), by the same inventors and applicant, discloses annotating and source linking objects displayed in an image, with claim limitations focused on selecting objects based on viewership data and associating the selected object with a source indicator. Grossman et al. (US 2018/0121470, cited in Non-Final Rejection 10/27/2025) teach a system and method for acquiring and sharing annotations of objects identified in images, with annotations stored in a database and linked to pre-defined object data. Van Zwol et al. (US 2018/0047064, cited in Non-Final Rejection 10/27/2025) teach a system and method for contextual media enrichment that presents items of media objects identified in an image and searched for similar object identifiers over the internet.

Applicant's submission of an information disclosure statement under 37 CFR 1.97(c) with the timing fee set forth in 37 CFR 1.17(p) on 01/26/2026 prompted the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 609.04(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON, whose telephone number is (571) 270-7380. The examiner can normally be reached Monday-Friday, 8:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHLEEN M BROUGHTON/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Aug 03, 2023: Application Filed
Oct 23, 2025: Non-Final Rejection — §103
Jan 15, 2026: Applicant Interview (Telephonic)
Jan 15, 2026: Examiner Interview Summary
Jan 26, 2026: Response Filed
Mar 27, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602915: FEATURE FUSION FOR NEAR FIELD AND FAR FIELD IMAGES FOR VEHICLE APPLICATIONS (2y 5m to grant; granted Apr 14, 2026)
Patent 12597233: SYSTEM AND METHOD FOR TRAINING A MACHINE LEARNING MODEL (2y 5m to grant; granted Apr 07, 2026)
Patent 12586203: IMAGE CUTTING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12567227: METHOD AND SYSTEM FOR UNSUPERVISED DEEP REPRESENTATION LEARNING BASED ON IMAGE TRANSLATION (2y 5m to grant; granted Mar 03, 2026)
Patent 12565240: METHOD AND SYSTEM FOR GRAPH NEURAL NETWORK BASED PEDESTRIAN ACTION PREDICTION IN AUTONOMOUS DRIVING SYSTEMS (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 92% (+8.3%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 263 resolved cases by this examiner. Grant probability derived from career allow rate.
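The projection figures above follow from simple arithmetic on the examiner's career counts; a minimal sketch (the tool's exact model is not disclosed, so this only reproduces the stated rounding):

```python
# Career counts reported for this examiner: 219 granted out of 263 resolved cases.
granted, resolved = 219, 263

allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.0%}")      # 83%

# "With Interview" adds the reported +8.3% interview lift to the base rate.
with_interview = allow_rate + 0.083
print(f"With interview:    {with_interview:.0%}")  # 92%
```

The 83% headline is just the career allow rate (219/263 ≈ 83.3%), and the 92% interview figure is that base rate plus the reported +8.3% lift, rounded to whole percents.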
