Prosecution Insights
Last updated: April 17, 2026
Application No. 18/172,122

INTERACTIVE PURCHASING OF PRODUCTS DISPLAYED IN VIDEO

Status: Final Rejection (§103, §DP)
Filed: Feb 21, 2023
Examiner: FLYNN, RANDY A
Art Unit: 2424
Tech Center: 2400 (Computer Networks)
Assignee: unknown
OA Round: 4 (Final)

Predictions:
Grant probability: 65% (favorable)
Expected OA rounds: 5-6
Estimated time to grant: 3y 1m
Grant probability with interview: 82%

Examiner Intelligence

Career allowance rate: 65% (391 granted / 602 resolved), +7.0% vs Tech Center average (above average)
Interview lift: +16.6% (allowance rate among resolved cases with an interview vs without; a strong lift)
Typical timeline: 3y 1m average prosecution; 33 applications currently pending
Career history: 635 total applications across all art units
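The interview lift reported above is an allowance-rate delta between resolved cases with and without an examiner interview. A sketch of the arithmetic; the per-bucket counts below are hypothetical, since only the 391 granted / 602 resolved totals appear on this page:

```python
# Hypothetical split of the 602 resolved cases into interview /
# no-interview buckets; only the totals (391 granted / 602 resolved,
# ~65% career rate) come from this page -- the per-bucket counts do not.
resolved_with, granted_with = 150, 115
resolved_without, granted_without = 452, 276

rate_with = granted_with / resolved_with          # allowance rate after an interview
rate_without = granted_without / resolved_without  # allowance rate with no interview

lift = rate_with - rate_without                    # the "interview lift" metric
print(f"with: {rate_with:.1%}  without: {rate_without:.1%}  lift: {lift:+.1%}")
```

With these assumed bucket counts the totals still reconcile to the page's career figures (391 granted, 602 resolved), which is the only constraint the real data imposes here.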

Statute-Specific Performance

§101
6.8%
-33.2% vs TC avg
§103
60.5%
+20.5% vs TC avg
§102
11.3%
-28.7% vs TC avg
§112
7.7%
-32.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 602 resolved cases

Office Action

Grounds: §103, §DP

DETAILED ACTION

Notice re: Pre-AIA or AIA Status

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to the claims have been considered but are moot because the arguments do not apply to the new reference(s) and/or citations being used in the current rejection.

Examiner's Note

Applicant is advised that allowable subject matter has been indicated in related Application 17/012,226. Applicant is encouraged to incorporate similar allowable content into the current application's claims to move prosecution toward allowance, but is cautioned not to repeat allowable subject matter in a manner that could lead to a double patenting rejection. This is only a note and suggestion by the Examiner; any amendments made by Applicant will be searched thoroughly before a final indication on allowability is made.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-8, and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor et al., US 2018/0152764 in view of Li, US 2012/0284105 and further in view of McDevitt, US 2018/0288448.

Regarding claim 1, Taylor discloses a system for generating and providing on-demand interactive content to an audience, the system comprising: a content server including a processor and a memory, the memory hosting a content database (Fig. 2, elements 212 and 215), a product database (Fig. 2, element 236), and an advertising database (Fig. 2, element 248), the memory further storing instructions which, when executed, cause the content server to: receive video content, the video content including a plurality of scenes and a plurality of objects appearing in the plurality of scenes (the live video stream shows one or more hosts/guests discussing a sequence of items/objects that are featured within the program segments; page 2, paragraphs 18 and 22, and Fig. 3B); ingest the video content into the content database (the video segments which correspond to segments of the live video stream are served to client devices from server; page 3, paragraph 35, and Fig. 2, element 203); process the video content to automatically identify an object of the plurality of objects with a product having an associated product record in the product database without a requirement of prior product identification by an individual (an automated image recognition system may recognize the item within the live video stream and extract an approximate graphical position, i.e. automated without prior identification; page 8, paragraph 81, and Fig. 2, and wherein based on the automated image recognition and other information, the server can determine, i.e. identify, a version, i.e. product record, of the item, i.e. not using prior identification; page 8, paragraph 83); define one or more content enrichment actions to be performed during playback of the content, the one or more content enrichment actions including generating an overlay providing product details of the product based on information about the product in the product database, the overlay being displayed in association with the content during playback of a portion of the content during which the object appears (the Video Player Ad-Serving Interface Definition (VPAID) provides an application programming interface (API) for serving advertisements in conjunction with playback of digital video content; page 1, paragraph 15, and wherein user selection of the controls may pause the live video stream; page 5, paragraph 47, and generates data encoding a selectable graphical overlay with respect to the featured item; page 2, paragraph 19, and pages 5-6, paragraph 54, and Fig. 7); an application server communicatively connected to the content server and configured to execute instructions which cause the application server to perform (shopping application; Fig. 2, element 221, and with server(s); page 2, paragraph 26): receive a request from an interactive user device for playback of the video content (market items in connection with prerecorded video when the video is started; page 1, paragraph 15, and responsive to request; page 6, paragraph 61); in response to the request, obtaining content, product information, and advertisement information from the content server (receiving in response to request; Fig. 4, element 427, and page 6, paragraph 64, and wherein with corresponding advertising as well; page 3, paragraph 33); and generate a content feed to the interactive user device, wherein the content feed provides playback of the video content and allows interaction with the content feed during playback of the video content based on the one or more content enrichment actions (the items are in connection with prerecorded video; page 1, paragraph 15, and page 3, paragraph 34, and the interactive shopping interface application may then generate segment metadata, Fig. 2, indicating items featured; page 6, paragraph 62, and wherein the content access application receives a user selection of a selectable item component rendered in an interactive shopping interface; page 7, paragraph 73, and Fig. 3C, and page 8, paragraph 85, and Fig. 3D, and page 6, paragraphs 57-58).

Taylor does not explicitly disclose wherein a wired or wireless transmitter is associated with each of a plurality of objects; and process video content to automatically associate an object with a product having an associated product record in a database.

In a related art, Li does disclose wherein a wired or wireless transmitter is associated with each of a plurality of objects (detection/recognition of object via attached NFC tag, i.e. wireless transceiver; page 9, paragraph 126, and page 19, paragraph 210, and page 25, paragraph 260).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Taylor and Li by allowing transmitters to be associated with objects appearing in captured content, in order to provide an improved system and method for an object of interest that automatically enables the display of one or more retailers offering the object of interest at a desirable price, one or more qualifying offers, one or more qualifying rewards, one or more related and/or competitive objects, and/or a function whose selection can execute the purchase of the object of interest (Li; page 1, paragraph 4).

Taylor in view of Li does not explicitly disclose process video content to automatically associate an object with a product having an associated product record in a database.

In a related art, McDevitt does disclose process video content to automatically associate an object with a product having an associated product record in database, without a requirement of prior product identification by an individual (system can match items in content with those stored in a reference database, i.e. matching without prior identification; page 2, paragraphs 11-12, and once item is matched, system can then automatically determine and merge, i.e. associate, additional information for particular items which are part of an information database; pages 2-3, paragraphs 16-18, and wherein data can include product information such as pricing, sizing, description, etc.; page 2, paragraph 16, and page 4, paragraph 29).
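The claim 1 mapping above turns on automatically matching an on-screen object to a product record, with no prior human tagging, and defining an overlay enrichment action for the frames where the object appears. A minimal sketch of that pipeline; every name, feature vector, and price here is a hypothetical illustration, not drawn from Taylor, Li, or McDevitt:

```python
from dataclasses import dataclass

@dataclass
class ProductRecord:
    sku: str
    name: str
    price: float
    feature: tuple  # stand-in for a real image-recognition feature vector

# Hypothetical product database keyed by SKU.
PRODUCTS = {
    "SKU-1": ProductRecord("SKU-1", "Leather jacket", 129.99, (0.9, 0.1)),
    "SKU-2": ProductRecord("SKU-2", "Desk lamp", 34.50, (0.1, 0.8)),
}

def identify(object_feature, products):
    """Match a detected object's features to the nearest product record:
    automated identification with no prior manual tagging."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(products.values(), key=lambda p: dist(p.feature, object_feature))

def enrichment_action(frame_range, object_feature):
    """Build one content-enrichment action: an overlay with product details,
    active only during the frames in which the object appears."""
    product = identify(object_feature, PRODUCTS)
    return {
        "frames": frame_range,  # (first_frame, last_frame)
        "sku": product.sku,
        "overlay": f"{product.name} - ${product.price}",
    }

action = enrichment_action((120, 480), (0.85, 0.15))
print(action)
```

The nearest-feature match stands in for whatever recognition backend a real system would use; the point is only that identification and overlay definition happen without a person pre-labelling the object.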
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Taylor, Li, and McDevitt by allowing automatic association of identified products/items with other specific information, in order to provide an improved system and method for recognizing individual items or sets of items in source content and accessing information relating to the recognized items that can then be requested by or automatically pushed to the end user in order to facilitate additional interaction related to the recognized item (McDevitt; page 1, paragraph 3).

Regarding claim 2, Taylor in view of Li and McDevitt discloses at least one content enrichment action includes presenting to a user an option to purchase the product associated with the object from a retailer (Taylor; an interactive action may include adding the item to a shopping cart, initiating an order or purchase of the item; page 2, paragraphs 20 and 22, and Fig. 3C).

Regarding claim 4, Taylor in view of Li and McDevitt discloses the at least one content enrichment action includes presenting to the user a link to the product associated with the object, the link being to an online retail website (Taylor; the advertising system may link to the electronic commerce system to provide advertising to be included within item detail pages, generate network content such as web pages; page 3, paragraph 33).
Regarding claim 5, Taylor in view of Li and McDevitt discloses the plurality of objects each is associated with at least one of a product and an advertisement (Taylor; user interface shows an example of an interactive shopping interface rendered as an overlay; page 5, paragraph 52, wherein the items may include other items from the item catalog that are related to items that are featured or discussed; page 5, paragraph 53, wherein the items are related with products and advertisements; page 6, paragraph 55, and page 3, paragraph 33, and McDevitt; data can include product information such as pricing, sizing, description, etc.; page 2, paragraph 16, and page 4, paragraph 29).

Regarding claim 6, Taylor in view of Li and McDevitt discloses the product details are included in product information and the overlay comprises a tag displayed in association with the object, the overlay being defined within content enrichment data that is generated during processing of the video content and which defines the one or more content enrichment actions (Taylor; the player interface may include various player controls that may allow a viewer to jump to an earlier point in the live video stream, pause the live video stream, stop; page 2, paragraph 18, and rendered an overlay on top of a portion of the video stream; page 2, paragraph 19, and Fig. 3C, and the items may include other items from the item catalog that are related to items that are featured or discussed and each item can be represented by a selectable item component or indicia, which in this case may be an item thumbnail image; page 5, paragraph 53, and McDevitt; data can include product information such as pricing, sizing, description, etc.; page 2, paragraph 16, and page 4, paragraph 29).
Regarding claim 7, Taylor in view of Li and McDevitt discloses the content server is further configured to define one or more advertisements to be displayed in association with the content during playback of at least a portion of the content (Taylor; user selection of the controls may pause the live video stream, cause an interactive shopping interface to be rendered, such as advertisement; page 5, paragraph 47, wherein the advertising system may link to the electronic commerce system to provide advertising to be included within item detail pages, the advertising system provides advertising to be injected into the shopping interface; page 3, paragraph 33).

Regarding claim 8, Taylor in view of Li and McDevitt discloses at least one interactive user device communicatively connected to the content server and the application server, the at least one interactive user device having software thereon capable of playback of the content feed (Taylor; the player interface may include various player controls that may allow a viewer to jump to an earlier point in the live video stream, pause the live video stream, stop the live video stream; page 2, paragraph 18).

Regarding claim 10, Taylor in view of Li and McDevitt discloses the at least one interactive user device is selected from the group of devices consisting of: a television, a set-top box, a phone, a tablet, and a portable computing system (Taylor; client devices may comprise, a computer system may be embodied in the form of a desktop computer, a laptop computer, personal digital assistants, cellular telephones, smartphones, set-top boxes, music players, web pads, tablet computer systems, game consoles, electronic book readers; page 4, paragraph 38).
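The "content enrichment data" construct running through these claims binds each tag to the frame interval during which its object appears, so the player knows what to overlay at any point in playback. A hedged sketch of that lookup; the field names and URLs are invented for illustration, not taken from any cited reference:

```python
# Hypothetical content-enrichment data: each tag is bound to the frame
# interval during which its object appears on screen (URLs invented).
ENRICHMENT = [
    {"tag": "jacket", "start": 100, "end": 250, "link": "https://retailer.example/jacket"},
    {"tag": "lamp",   "start": 300, "end": 420, "link": "https://retailer.example/lamp"},
]

def active_tags(frame_index, enrichment=ENRICHMENT):
    """Return the tags the player should overlay on the given frame."""
    return [e["tag"] for e in enrichment if e["start"] <= frame_index <= e["end"]]

print(active_tags(120))  # the jacket's interval covers frame 120
print(active_tags(500))  # no tagged object on screen at this frame
```

Selecting an active tag would then follow its link to the retailer, which is the purchase-option behavior the claim 2/claim 4 mappings describe.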
Regarding claim 11, Taylor in view of Li and McDevitt discloses the at least one interactive user device comprises a touch screen device, and wherein, in response to user selection of a tag associated with an object displayed during playback of the video content and displayed as one of the one or more content enrichment actions, the user is presented with an option to purchase the product associated with the object (Taylor; the controls may be touchscreen input; page 5, paragraph 47, and again an interactive action may include adding the item to a shopping cart, initiating an order or purchase of the item; page 2, paragraphs 20 and 22, and Fig. 3C).

Claims 16-21 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor et al., US 2018/0152764 in view of Incorvia, US 2012/0167146 and further in view of McDevitt, US 2018/0288448.

Regarding claim 16, Taylor discloses a method of interacting with enriched video content, comprising: receiving a request for playback of a first video content (responsive to request for accessing content, including live video stream; page 6, paragraph 61), wherein the first video content includes an object (the live video stream shows one or more hosts/guests discussing a sequence of items that are featured within the program; page 2, paragraphs 18 and 22, and Fig. 3B); obtaining the first video content and content enrichment data (obtaining responsive to requesting; page 6, paragraphs 61-64); providing enriched video content by a content development tool, the enriched video content including the first video content and a tag automatically associated with the object, the tag identifying a product having an associated product record without requiring prior product identification (the items are in connection with prerecorded video; page 1, paragraph 15, and page 3, paragraph 34, and the interactive shopping interface application may then generate segment metadata, Fig. 2, indicating items featured; page 6, paragraph 62, and wherein the content access application receives a user selection of a selectable item component rendered in an interactive shopping interface; page 7, paragraph 73, and Fig. 3C, and page 8, paragraph 85, and Fig. 3D, and page 6, paragraphs 57-58, and an automated image recognition system may recognize the item within the live video stream and extract an approximate graphical position, i.e. automated without prior identification; page 8, paragraph 81, and Fig. 2, and with stored data, i.e. record, about the products; page 3, paragraph 34, and wherein based on the automated image recognition and other information, the server can determine, i.e. identify, a version, i.e. product record, of the item, i.e. not using prior identification; page 8, paragraph 83, and can explicitly include tags; page 5, paragraph 53, and page 6, paragraph 55); receiving a definition of an interaction with the tag (the content access application receives a user selection of a selectable item component rendered in an interactive shopping interface; page 7, paragraph 73, and Fig. 3C, and page 8, paragraph 85, and Fig. 3D, and page 6, paragraphs 57-58); and automatically generating a content enrichment action based on the interaction, wherein a content feed provides playback of the first video content and allows the interaction with the content feed during playback of a plurality of frames of the first video content based on the content enrichment action to initiate an action with respect to the product (the items are in connection with prerecorded video; page 1, paragraph 15, and page 3, paragraph 34 and Fig. 2, indicating items featured; page 6, paragraph 62, and wherein the content access application receives a user selection of a selectable item component rendered in an interactive shopping interface; page 7, paragraph 73, and Fig. 3C, and page 8, paragraph 85, and Fig. 3D, and page 6, paragraphs 57-58, and wherein selection/interaction in shopping/interactive interface can initiate ordering/purchasing; page 2, paragraph 22, and page 4, paragraph 42).

While Taylor does also disclose product data for multiple objects is to be shown, and displaying product data for the multiple objects (product data corresponding to the multiple items shown; Fig. 3D, and page 6, paragraphs 57-59), as well as product data (Fig. 3D, and page 6, paragraphs 57-59), Taylor does not explicitly disclose a product having an associated product record and being associated with an identified object; receiving user-configurable display settings that specify whether data is to be displayed simultaneously or individually; and according to the user-configurable display settings, displaying the data during playback of the first video content, and wherein data is automatically displayed for all objects present on screen when playback is paused.

In a related art, Incorvia does disclose receiving user-configurable display settings that specify whether data is to be displayed simultaneously or individually (during playback, display of selectable objects in the video is dependent on user settings, wherein if user specifies that the objects, i.e. interpreted as all, are to be displayed, they will be simultaneously displayed; page 5, paragraph 39, and Fig. 3, elements 48, and wherein with displayed selectable object identification information, i.e. product information; page 6, paragraph 41); and according to the user-configurable display settings, displaying the data during playback of the first video content (during playback, display of selectable objects in the video dependent on the user settings, wherein if user specifies that the objects, i.e. interpreted as all, are to be displayed, they will be simultaneously displayed; page 5, paragraph 39, and Fig. 3, elements 48, and wherein with displayed selectable object identification information, i.e. product information; page 6, paragraph 41), and wherein data is automatically displayed for all objects present on screen when playback is paused (when video is paused, selectable objects, i.e. again interpreted as all, may then be displayed for the video; page 5, paragraph 39, and Fig. 3, elements 48, and again with displayed selectable object identification information, i.e. product information; page 6, paragraph 41).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Taylor and Incorvia, by allowing user settings to be utilized when displaying information about items appearing in a video, in order to provide an improved system and method for enabling users to obtain information about objects within video media being rendered, purchase the objects, and thereby expand e-commerce and advertising for the video (Incorvia; page 1, paragraph 7).

Taylor in view of Incorvia does not explicitly disclose a product having an associated product record and being associated with an identified object.

In a related art, McDevitt does disclose a tag identifying a product having an associated product record and being associated with an identified object without requiring prior product identification (system can match items in content with those stored in a reference database, i.e. matching without prior identification; page 2, paragraphs 11-12, and once item is matched, system can then automatically determine and merge, i.e. associate, additional information for particular items which are part of an information database, i.e. metadata/tagged information which is associated; pages 2-3, paragraphs 16-18, and page 3, paragraph 21, and data can include product information such as pricing, sizing, description, etc.; page 2, paragraph 16, and page 4, paragraph 29).
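The Incorvia mapping above hinges on two behaviors: user-configurable display settings (show product data for all on-screen objects simultaneously, or one object at a time) and an all-objects-on-pause rule. A hedged sketch of that selection logic; the settings schema is an assumption for illustration, not Incorvia's:

```python
def tags_to_display(on_screen_tags, settings, paused):
    """Choose which object tags to show during playback.

    settings["mode"]: "simultaneous" shows every on-screen object at once;
    "individual" shows one at a time. Pausing playback shows data for all
    objects currently on screen regardless of mode (per the mapping above).
    The settings schema here is hypothetical.
    """
    if paused or settings.get("mode") == "simultaneous":
        return list(on_screen_tags)
    return on_screen_tags[:1]  # "individual": the first/selected object only

tags = ["jacket", "lamp", "watch"]
print(tags_to_display(tags, {"mode": "individual"}, paused=False))
print(tags_to_display(tags, {"mode": "individual"}, paused=True))
```

Pausing overrides the mode setting, which mirrors the claim language that data "is automatically displayed for all objects present on screen when playback is paused."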
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Taylor, Incorvia, and McDevitt by allowing automatic association of identified products/items with other specific information, in order to provide an improved system and method for recognizing individual items or sets of items in source content and accessing information relating to the recognized items that can then be requested by or automatically pushed to the end user in order to facilitate additional interaction related to the recognized item (McDevitt; page 1, paragraph 3).

Regarding claim 17, Taylor in view of Incorvia and McDevitt discloses the plurality of frames is configured to be editable by the content development tool to include the content enrichment data during a specified amount of time (Taylor; editing, i.e. generating and adding during a particular time period with the operations being performed; page 8, paragraph 82).

Regarding claim 18, Taylor in view of Incorvia and McDevitt discloses automatically generating the content enrichment action includes displaying the tag configured to be displayed by an interactive user device (Taylor; the player interface may include various player controls that may allow a viewer to jump to an earlier point in the live video stream, pause the live video stream, stop; page 2, paragraph 18, and rendered an overlay on top of a portion of the video stream; page 2, paragraph 19, and Fig. 3C, and the items may include other items from the item catalog that are related to items that are featured or discussed and each item can be represented by a selectable item component or indicia, which in this case may be an item thumbnail image; page 5, paragraph 53, and can explicitly include tags; page 5, paragraph 53, and page 6, paragraph 55).
Regarding claim 19, Taylor in view of Incorvia and McDevitt discloses the tag is displayed in a subset of the plurality of frames defined in the content enrichment data (Taylor; about the items currently being discussed within a segment, i.e. only for a subset of segments/frames; page 2, paragraph 19, and page 5, paragraph 53).

Regarding claim 20, Taylor in view of Incorvia and McDevitt discloses the content enrichment action includes presenting to a user an option to purchase a product associated with the object from a retailer (Taylor; an interactive action may include adding the item to a shopping cart, initiating an order or purchase of the item; page 2, paragraphs 20 and 22, and Figs. 3C and 3D).

Regarding claim 21, Taylor in view of Incorvia and McDevitt discloses the content enrichment action includes presenting to the user a link to the product associated with the object, the link being to an online retail website (Taylor; links to and including ecommerce/online retailers; page 3, paragraph 33, and page 6, paragraph 64).

Claims 22-24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor et al., US 2018/0152764 in view of Jouhikainen et al., US 2017/0372165 and further in view of McDevitt, US 2018/0288448.

Regarding claim 22, Taylor discloses a method for identification of objects within video data (recognize an object/item within segment(s)/frame(s) of a live video stream; page 8, paragraph 81, and Fig. 2), comprising: receiving video data from a video database (received live video stream shows one or more hosts/guests discussing a sequence of items/objects that are featured within the program segments; page 2, paragraphs 18 and 22, and Fig. 3B, and wherein client receives responsive to request for accessing content, including live video stream from source, i.e. database; page 6, paragraph 61); processing a plurality of frames of the video data (an automated image recognition system may recognize the item within segment(s) of the live video stream; page 8, paragraph 81, and Fig. 2), wherein: identifying a frame in which an object appears (recognize the item within segment(s)/frame(s) of the live video stream; page 8, paragraph 81, and Fig. 2); processing with a particular layout (determining location information in relation to coordinates, i.e. a particular layout; page 8, paragraph 81); automatically identifying an object at an object location in the frame where the object appears without requiring prior product identification by an individual (an automated image recognition system may recognize the item within the live video stream and extract an approximate graphical position, i.e. automated without prior identification; page 8, paragraph 81, and Fig. 2, and wherein based on the automated image recognition and other information, the server can determine, i.e. identify, a version, i.e. product record, of the item; page 8, paragraph 83); using the object location determined via the particular layout to use the product data with the object within the object location within at least one frame of the plurality of frames (based on location of object, selectable tag/object can be presented with corresponding product data; Fig. 3D, and page 6, paragraphs 57-58, and again in relation to coordinates, i.e. a particular layout; page 8, paragraph 81); and recording the object location in the product database, and storing product information related to the object in the product database (with stored data about the products and configurations, i.e. layout/positions; page 3, paragraph 34).
Taylor does not explicitly disclose receiving a plurality of photographs from a product database; processing video with a grid layout; the grid layout; an object subject to the plurality of photographs appears; and automatically associating an object with product data, wherein automatically associating the object with product data includes using an object location to associate the product data with the object.

In a related art, Jouhikainen does disclose receiving a plurality of photographs from a product database (stored and retrieved image block(s), i.e. photograph(s), page 6, paragraph 58); processing video with a grid layout (can partition into image blocks, i.e. grid format; page 9, paragraph 90, and Fig. 4A, element 410); the grid layout (partitioned into image blocks, i.e. grid format; page 9, paragraph 90, and Fig. 4A, element 410); and an object subject to the plurality of photographs appears (image matching by comparing to stored and retrieved image block(s), i.e. photograph(s), page 6, paragraph 58, and wherein can also determine object location; page 9, paragraph 87).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Taylor and Jouhikainen by allowing image matching techniques to be used for identifying products appearing in video content, in order to provide an improved system and method for automated recognition of objects that are present in video and image content (Jouhikainen; page 1, paragraph 8).

Taylor in view of Jouhikainen does not explicitly disclose automatically associating an object with product data, wherein automatically associating the object with product data includes using an object location to associate the product data with the object.
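The grid-layout localization attributed to Jouhikainen, partitioning a frame into blocks and recording which cell an object falls in, can be sketched as a coordinate-to-cell mapping; the grid size and coordinates here are illustrative assumptions, not values from the reference:

```python
def grid_cell(x, y, frame_w, frame_h, rows=3, cols=3):
    """Map an object's pixel position to the cell of a rows x cols grid
    overlaid on the frame: a coarse, recordable object location."""
    col = min(int(x * cols / frame_w), cols - 1)
    row = min(int(y * rows / frame_h), rows - 1)
    return row, col

# An object centered at (960, 200) in a 1920x1080 frame lands in the
# top-middle cell of a 3x3 grid; the (row, col) pair is the kind of
# value a product database entry could record as the object location.
print(grid_cell(960, 200, 1920, 1080))
```

The `min(..., cols - 1)` clamps keep edge pixels (exactly on the right or bottom border) inside the last cell rather than indexing one past the grid.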
In a related art, McDevitt does disclose automatically associating an object with product data without requiring prior product identification by an individual, wherein automatically associating the object with product data includes using an object location to associate the product data with the object (system can match items in content with those stored in a reference database, i.e. matching without prior identification; page 2, paragraphs 11-12, and once item is matched, system can then automatically determine and merge, i.e. associate, additional information for particular items which is part of an information database, i.e. metadata/tagged information which is associated; pages 2-3, paragraphs 16-18, and page 3, paragraph 21, and wherein can also be based on identified location information with the other data; page 2, paragraph 13, and data can include product information such as pricing, sizing, description, etc.; page 2, paragraph 16, and page 4, paragraph 29).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Taylor, Jouhikainen, and McDevitt by allowing automatic association of identified products/items with other specific information, in order to provide an improved system and method for recognizing individual items or sets of items in source content and accessing information relating to the recognized items that can then be requested by or automatically pushed to the end user in order to facilitate additional interaction related to the recognized item (McDevitt; page 1, paragraph 3).

Regarding claim 23, Taylor in view of Jouhikainen and McDevitt discloses processing a plurality of frames of the video data with a grid layout further includes: separating the grid layout into a plurality of grids (Jouhikainen; partition into image blocks including regions, i.e. plurality of grids; page 9, paragraphs 87 and 90, and Fig. 4A, element 410); setting forth the object within one or more portions of the plurality of grids (Jouhikainen; object(s) initial location(s) based on the grids/regions; page 9, paragraph 87); overlapping the object by a single portion of the plurality of grids (Jouhikainen; location of an object in a region of another object, i.e. overlapping region; page 9, paragraph 87); and identifying the object location according to the single portion in the product database (Jouhikainen; can be used to determine specific object location; page 9, paragraph 87, and Taylor; with stored data about the products and configurations, i.e. layout/positions; page 3, paragraph 34, and extract an approximate graphical position; page 8, paragraph 81, and McDevitt; based on determined location information; page 2, paragraph 13).

Regarding claim 24, Taylor in view of Jouhikainen and McDevitt discloses the product information including at least one of a product heading, a description of the product, and a product cost (Taylor; with stored data about the products and configurations, i.e. layout/positions; page 3, paragraph 34, including at least title, price, and other item/product information; page 6, paragraphs 59 and 64).

Regarding claim 26, Taylor in view of Jouhikainen and McDevitt discloses using automatic recognition of the object within one or more portions of the video data on a frame-by-frame basis (Taylor; an automated image recognition system may recognize the item within the live video stream and extract an approximate graphical position; page 8, paragraph 81, and Fig. 2, and about the items currently being discussed within a segment, i.e. on a segment/frame-by-segment/frame basis; page 2, paragraph 19, and page 5, paragraph 53, and Jouhikainen; based on particular frame selection rate, i.e. frame-by-frame; page 3, paragraph 40, and McDevitt; with image recognition techniques; page 2, paragraph 11).

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Taylor et al., US 2018/0152764 in view of Jouhikainen et al., US 2017/0372165 and McDevitt, US 2018/0288448, and further in view of Li, US 2012/0284105.

Regarding claim 25, Taylor in view of Jouhikainen and McDevitt discloses all the claimed limitations of claim 22, as well as automatically identify the object with the at least one frame of the plurality of frames in the video data without a requirement of prior object identification by an individual (Taylor; an automated image recognition system may recognize the item within the live video stream and extract an approximate graphical position, i.e. automated without prior identification; page 8, paragraph 81, and Fig. 2, and wherein based on the automated image recognition and other information, the server can determine, i.e. identify, a version, i.e. product record, of the item; page 8, paragraph 83).

Taylor in view of Jouhikainen and McDevitt does not explicitly disclose an electronic transmitting tag is configured to be linked to a video camera.

In a related art, Li does disclose an electronic transmitting tag is configured to be linked to a video camera (detection/recognition of object via attached NFC tag, i.e. wireless transceiver; page 9, paragraph 126, and page 19, paragraph 210, and page 25, paragraph 260, and wireless tag associated with image transceiver, i.e. camera, that is capturing/processing; page 25, paragraph 262, and pages 24-25, paragraph 260, and Fig. 5, element 05100).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Taylor, Jouhikainen, McDevitt, and Li by allowing transmitters to be associated with objects appearing in captured content, in order to provide an improved system and method for an object of interest that automatically enables the display of one or more retailers offering the object of interest at a desirable price, one or more qualifying offers, one or more qualifying rewards, one or more related and/or competitive objects, and/or a function whose selection can execute the purchase of the object of interest (Li; page 1, paragraph 4).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RANDY A FLYNN, whose telephone number is (571) 270-5680. The examiner can normally be reached Monday - Thursday, 6:00am - 3:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BENJAMIN BRUCKART, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RANDY A FLYNN/
Primary Examiner, Art Unit 2424
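The grid-layout technique at issue in claim 23 (separating a frame into a grid, placing an object within cells of that grid, and using the single overlapped cell as the key into a product database) can be sketched in a few lines. This is an illustrative reconstruction only; the grid size, bounding-box format, function name, and sample database below are our assumptions, not taken from Taylor, Jouhikainen, or McDevitt.

```python
# Sketch of grid-based object localization: a frame is separated into a grid
# of cells, an object's bounding box is placed within those cells, and the
# single cell containing its center serves as the object's location key.
# All names and dimensions here are hypothetical.

def cell_for_object(frame_w, frame_h, bbox, rows=4, cols=4):
    """Return the (row, col) grid cell containing the bbox center."""
    x, y, w, h = bbox                      # bounding box in pixels
    cx, cy = x + w / 2, y + h / 2          # center point of the object
    col = min(int(cx * cols / frame_w), cols - 1)
    row = min(int(cy * rows / frame_h), rows - 1)
    return (row, col)

# Hypothetical product database keyed by grid cell, standing in for
# "identifying the object location according to the single portion
# in the product database".
product_db = {(1, 2): {"title": "Lamp", "price": 39.99}}

cell = cell_for_object(1920, 1080, bbox=(1000, 300, 200, 150))
record = product_db.get(cell)  # product info for the cell, if any
```

In this sketch a 1920x1080 frame with a 4x4 grid maps the sample bounding box to cell (1, 2), whose database entry carries the product heading and cost recited in claim 24.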

Prosecution Timeline

Feb 21, 2023
Application Filed
Dec 14, 2023
Non-Final Rejection — §103, §DP
Jun 19, 2024
Response Filed
Aug 01, 2024
Final Rejection — §103, §DP
Feb 05, 2025
Request for Continued Examination
Feb 07, 2025
Response after Non-Final Action
Apr 02, 2025
Non-Final Rejection — §103, §DP
Sep 08, 2025
Response Filed
Oct 23, 2025
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598362
METHOD FOR SIGNALING HAPTIC INFORMATION FOR DASH SELECTION PROCESS BY USING INITIALIZATION SEGMENTS
2y 5m to grant Granted Apr 07, 2026
Patent 12587697
PROGRAM GENERATION AND BROADCASTING METHOD, DEVICE AND SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12574600
USER INTERFACES FOR INTERACTING WITH CHANNELS THAT PROVIDE CONTENT THAT PLAYS IN A MEDIA BROWSING APPLICATION
2y 5m to grant Granted Mar 10, 2026
Patent 12568252
DISPLAY METHOD AND APPARATUS FOR EVENT LIVESTREAMING, DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Patent 12568257
CONTENT INSERTION USING QUALITY SCORES FOR VIDEO
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
65%
Grant Probability
82%
With Interview (+16.6%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 602 resolved cases by this examiner. Grant probability derived from career allow rate.
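The headline projections are simple ratios of the career counts shown above, and the quick check below (variable names are ours) reproduces them from the stated figures: 391 grants out of 602 resolved cases, plus the reported +16.6-point interview lift.

```python
# Reproduce the projection figures from the examiner's stated career counts.
granted, resolved = 391, 602
allow_rate = granted / resolved        # career allow rate
with_interview = allow_rate + 0.166    # add the reported interview lift

print(f"grant probability: {allow_rate:.0%}")      # grant probability: 65%
print(f"with interview:    {with_interview:.0%}")  # with interview:    82%
```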
