Prosecution Insights
Last updated: April 19, 2026
Application No. 18/219,931

Automated and Dynamic Provisioning of Synchronized Product-Related Prompts Within Media Content

Non-Final OA (§102, §103)
Filed
Jul 10, 2023
Examiner
EL-BATHY, MOHAMED N
Art Unit
3624
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Disney Enterprises Inc.
OA Round
3 (Non-Final)
Grant Probability: 30% (At Risk)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 10m
With Interview: 64%

Examiner Intelligence

Career Allow Rate: 30% (71 granted / 235 resolved) • -21.8% vs TC avg • grants only 30% of cases
Interview Lift: +33.3% on resolved cases with interview (strong lift)
Avg Prosecution: 3y 10m typical timeline • 53 currently pending
Total Applications: 288 across all art units (career history)
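The headline numbers in this panel reconcile with simple arithmetic. A minimal sketch, assuming the +33.3% interview lift is additive in percentage points on top of the career allow rate (the panel does not state this explicitly):

```python
# Figures from the Examiner Intelligence panel above.
granted, resolved = 71, 235

# Career allow rate: 71 / 235 is roughly 30.2%, displayed as 30%.
allow_rate = granted / resolved * 100
print(f"allow rate: {allow_rate:.1f}%")             # allow rate: 30.2%

# Assuming the +33.3% interview lift is additive in percentage
# points, the with-interview figure reproduces the 64% shown.
with_interview = allow_rate + 33.3
print(f"with interview: {round(with_interview)}%")  # with interview: 64%

# The -21.8% delta implies a Tech Center average allow rate near 52%.
tc_avg = allow_rate + 21.8
print(f"implied TC average: {tc_avg:.1f}%")         # implied TC average: 52.0%
```

The with-interview figure is a rounded sum, not an independent statistic, so it inherits the uncertainty of the underlying allow rate.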

Statute-Specific Performance

§101: 37.8% (-2.2% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 235 resolved cases
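The four deltas are internally consistent: subtracting each delta from the examiner's rate yields the same implied Tech Center average of 40.0% for every statute, which suggests a single baseline estimate is used across rows. A quick sanity check, with the figures copied from the panel above:

```python
# (examiner allowance rate %, delta vs TC avg %) per statute, from the panel.
stats = {
    "101": (37.8, -2.2),
    "103": (45.5, +5.5),
    "102": (10.6, -29.4),
    "112": (4.9, -35.1),
}

# The TC average implied by each row is the rate minus its delta;
# every statute reconciles to the same 40.0% baseline.
for statute, (rate, delta) in stats.items():
    implied_tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC avg {implied_tc_avg}%")
```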

Office Action

§102 §103
Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

DETAILED ACTION

The following Non-Final Office action is in response to the submission in application 18/219,931 filed on 12/17/2025.

Status of Claims

Claims 1-20 are currently pending and have been rejected as follows.

Response to Amendments

Rejections under 35 USC 101 are withdrawn. New rejections under 35 USC 102(a)(1) are issued below.

Response to Arguments

Applicant's 35 USC 101 arguments and amendments have been fully considered and they are persuasive to overcome the rejection. In particular, see applicant's remarks filed 12/17/2025, p. 7-16.

Applicant's prior art arguments and amendments have been fully considered but they are moot in light of the newly cited portions of the Taylor reference.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 4-8, 10, and 13-18 are rejected under 35 USC 102(a)(1) as being anticipated by Taylor et al., US 2018/0152764 A1, hereinafter Taylor.
As per Claims 1, 11:

A system comprising: a computing platform having a hardware processor, an input unit including a plurality of sensors including a camera and a microphone, and a system memory storing a software code; the hardware processor configured to execute the software code to perform the following actions in an automated process: / A method for use by a system including a computing platform having a hardware processor, an input unit including a plurality of sensors including a camera and a microphone, and a system memory storing a software code, the method comprising:

(Taylor figs. 2, 8; [0038] "The client devices 206 may also include one or more capture devices 261a . . . 261N such as image cameras, video cameras, microphones, three-dimensional video capture devices, and other capture devices;" [0084]; [0085] "Stored in the memory 806 are both data and several components that are executable by the processor 803. In particular, stored in the memory 806 and executable by the processor 803 are the live video source 215, the mixer 216, the plurality of video encoders 218, the interactive shopping interface application 221, the live video stream management application 224, the media server 227, the electronic commerce system 230, the advertising system 232, and potentially other applications"; note the computing platform including processor-executed software code in an automated process)

receive an activation input from a user;

(Taylor [0040] "To begin, a user launches a content access application 263 and accesses a uniform resource locator (URL) associated with a live video stream with an interactive shopping interface"; note the user launching the application)

generate, using the camera and the microphone of the input unit in response to receiving the activation input, media content including a performance by one of the user or a performer;

(Taylor fig. 2; [0029] "The video mixer 216 may combine the output of the live video source 215 with one or more live video feeds originating in client devices 206. For example, the video mixer 216 may combine a video feed of a program host with a video feed of a customer;" [0038] "The client devices 206 may also include one or more capture devices 261a . . . 261N such as image cameras, video cameras, microphones, three-dimensional video capture devices, and other capture devices"; noting the camera and the microphone of the client devices; [0040] "A live video stream 103 (FIG. 1) begins playing via a player interface of the content access application 263. The live video stream 103 depicts one or more hosts discussing a sequence of items"; note the host)

identify, using one or more of the plurality of sensors while generating the media content, a product referenced by the one of the user or the performer;

(Taylor [0041] "during the production of the live video stream 103, the hosts or producers of the live video stream 103 may create a sequence of items corresponding to the items to be discussed or featured. As the live video stream 103 progresses, the hosts or producers may select via a user interface which items are being discussed;" [0053] "Associated with each segment is a list of items that were featured or discussed in the particular segment. In some cases, the items may include items that were shown but not discussed or explicitly featured (e.g., a coffee mug sitting on the show host's desk, or an article of clothing worn by a host but not explicitly mentioned, etc.);" [0081] "the interactive shopping interface application 221 receives an indication that an item is featured in a live video stream"; note the identification of the referenced product during production of the media content using one of the plurality of sensors)

detect timing information corresponding to one or more references to the product during the performance, the timing information including at least one of a timestamp or a video frame number;

(Taylor [0035] "The segment metadata 245 corresponds to a sequence of items featured during a video segment 242 or discussed by one or more hosts during the video segment 242;" [0062] "The interactive shopping interface application 221 may then generate segment metadata 245 (FIG. 2) indicating items featured or discussed in the video segment 242;" [0070] "the content access application 263 may receive an indication that item X is discussed at 2:34 pm and 0 seconds"; note the detecting of timing information for product references during the livestream, including a timestamp)

dynamically obtain up-to-date marketing data for the product;

(Taylor [0034] "The item data 236 may include titles, descriptions, weights, prices, quantities available, export restrictions, customer reviews, customer ratings, images, videos, version information, availability information, shipping information, and/or other data;" [0064] "The user interface code 257 may request additional item data 236 (e.g., title, price, image, ordering links, etc.) in order to render selectable item components or other item information in a user interface. In box 430, the media server 227 sends the requested item information to the client device 206."; note the request for and obtaining of current item data)

generate, using the up-to-date marketing data and the detected timing information, metadata for use in providing one or more product-related prompts synchronized with the one or more references to the product made by the one of the user or the performer during the performance; and

(Taylor [0031] "the interactive shopping interface application 221 may determine items featured in the live video stream and then generate various metadata to be sent to the client devices 206. The metadata instructs the client devices 206 to render user interface components that facilitate an interactive shopping experience;" [0035] "the video segment manifests 239 may encode the segment metadata 245 associated with the particular video segment 242;" [0062] "the segment metadata 245 may be included to provide a time-based association of items with the video segment 242;" [0057] "In the user interface 300d, a selectable graphical overlay 312a is rendered on top of the live video stream 103. In this example, the selectable graphical overlay 312 corresponds to a rectangle that is superimposed over a graphical position of a corresponding item in the frame of the live video stream 103. The item here is a necklace, and the rectangle is shown relative to the necklace being worn by a person in the live video stream 103"; note the time-based segment metadata and overlay related to the referenced product)

output the media content accompanied by the metadata.

(Taylor [0063] "the media server 227 may push the video segment manifest 239 to the client device 206 … The media server 227 sends the encoded video segment 242 to the client device 206"; note the media content and metadata output to the client device)

As per Claims 2, 12:

at least one of broadcast, stream, or live-stream the media content to a plurality of consumers; wherein the one or more product-related prompts enable any of the plurality of consumers to trigger an interaction associated with the product while consuming the media content.

(Taylor [0040] "A live video stream 103 (FIG. 1) begins playing via a player interface of the content access application 263. The live video stream 103 depicts one or more hosts discussing a sequence of items;" fig. 3B; [0048] "The chat interface 306 may facilitate communication among multiple users watching the live video stream 103 and may also include interaction with hosts, producers, interview guests, or other users associated with the live video stream 103."; note the plurality of users watching the livestream and interacting via the overlay/prompt associated with the product)

As per Claims 3, 13:

wherein the system comprises one of a smartphone or a tablet computer of the user.

(Taylor [0038] "The client devices 206 may comprise, for example, a processor-based system such as a computer system. Such a computer system may be embodied in the form of a desktop computer, a laptop computer, personal digital assistants, cellular telephones, smartphones")

As per Claims 4, 14:

wherein the one or more product-related prompts comprise at least one of a visual overlay or an audio prompt.

(Taylor fig. 3B, noting the visual overlay; [0043] "A host or producer, through broadcaster tool interfaces, may cause selectable graphical overlays to be rendered over the live video stream 103. Selection of the selectable graphical overlays may cause an interactive function to be performed. Hosts or producers may also cause item information to be pushed to the client devices 206 within chat interfaces")

As per Claims 6, 16:

detect one or more locations relative to the one of the user or the performer designated by the one of the user or the performer during the performance;

(Taylor [0081] "the interactive shopping interface application 221 determines a graphical position of the item in one or more frames of the live video stream 103. The graphical position may correspond to the relative position of the item within a window or a screen … a producer or host user may supply coordinates corresponding to an approximate graphical position of the item")

wherein synchronizing the product-related prompt with the one or more references to the product is based on the one or more locations relative to the one of the user or the performer.

(Taylor [0082] "the interactive shopping interface application 221 generates data encoding a selectable graphical overlay with respect to the featured item … the data encoding the selectable graphical overlay may be sent within the segment metadata 245")

As per Claims 8, 18:

wherein the one of the user or the performer holds the product during the performance, and wherein the one or more locations relative to the one of the user or the performer, designated by the one of the user or the performer, is detected based on how the product is held by the one of the user or the performer.

(Taylor fig. 3D; [0057] "the selectable graphical overlay 312 corresponds to a rectangle that is superimposed over a graphical position of a corresponding item in the frame of the live video stream 103. The item here is a necklace, and the rectangle is shown relative to the necklace being worn by a person in the live video stream 103")

As per Claims 9, 19:

wherein identifying the product comprises performing object recognition of the product.

(Taylor [0081] "an automated image recognition system may recognize the item within the live video stream 103")

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5, 7, 15, and 17 are rejected under 35 USC 103 as being unpatentable over the teachings of Taylor in view of Gupta et al., US 2022/0115043 A1, hereinafter Gupta.

As per Claims 5, 15:

Taylor does not explicitly teach the following; Gupta, however, in the analogous art of product marketing, teaches:

wherein identifying the product is based on speech identifying the product by the one of the user or the performer.

(Gupta [0023] "a review video is processed by extracting an audio component from the video and converting speech in the audio component to text … Topics are identified in the text based on keywords"; note the topic identification based on speech)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Taylor to include product identification based on speech in view of Gupta, in an effort to improve a buyer's shopping experience (see Gupta ¶ [0022] & MPEP 2143G).

As per Claims 7, 17:

wherein the one or more locations relative to the one of the user or the performer is/are detected based on at least one of (i) a gesture by the one of the user or the performer, (ii) a three-dimensional (3D) hand position of the one of the user or the performer, or (iii) speech by the one of the user or the performer.

(Gupta [0066] "a review video is processed by extracting an audio component from the video and converting speech in the audio component to text. The text is also timestamped to identify a time in the review video at which each word or sentence occurs. Topics are identified in the text based on keywords … The computing device 900 may be equipped with depth cameras … for gesture detection and recognition"; noting the gesture recognition via depth cameras)

The motivation/rationale to combine Taylor with Gupta persists.

Claims 10 and 20 are rejected under 35 USC 103 as being unpatentable over the teachings of Taylor in view of Lampert et al., US 2015/0302474 A1, hereinafter Lampert.

As per Claims 10, 20:

Taylor does not explicitly teach the following; Lampert, however, in the analogous art of product marketing, teaches:

wherein the product is configured for wireless communication, and wherein identifying the product is further based on a communication signal received from the product.

(Lampert [0054] "The products may be displayed in a way that permits the consumer to capture product information using, by way of example and not limitation … wireless interface circuitry (e.g., Bluetooth, Near Field Communication (NFC), or other electromagnetic signal protocol) supported by their personal electronic device")

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Taylor to include product identification based on wireless communication with the product in view of Lampert, in an effort to enhance the customer shopping experience (see Lampert ¶ [0041] & MPEP 2143G).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 2024/0265396 A1; WO 2012/125940 A1; Liu et al., Beyond Shopping: The Motivations and Experience of Live Stream Shopping Viewers, 2021.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED EL-BATHY, whose telephone number is (571) 270-5847. The examiner can normally be reached M-F 8AM-4:30PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, PATRICIA MUNSON, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMED N EL-BATHY/
Primary Examiner, Art Unit 3624

Prosecution Timeline

Jul 10, 2023
Application Filed
Jul 22, 2025
Non-Final Rejection — §102, §103
Oct 09, 2025
Response Filed
Oct 23, 2025
Final Rejection — §102, §103
Dec 17, 2025
Response after Non-Final Action
Jan 21, 2026
Request for Continued Examination
Feb 20, 2026
Response after Non-Final Action
Mar 04, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12586005: CLIENT CREATION OF CONDITIONAL SEGMENTS • 2y 5m to grant • Granted Mar 24, 2026
Patent 12265803: AUTOMATIC IMPROVEMENT OF SOFTWARE APPLICATIONS • 2y 5m to grant • Granted Apr 01, 2025
Patent 12205057: ASSIGNING SENTRY DUTY TASKS TO OFF-DUTY FIRST RESPONDERS • 2y 5m to grant • Granted Jan 21, 2025
Patent 12197966: SYSTEMS AND METHODS FOR MULTIUSER DATA CONCURRENCY AND DATA OBJECT ASSIGNMENT • 2y 5m to grant • Granted Jan 14, 2025
Patent 12165161: EVALUATING ONLINE ACTIVITY TO IDENTIFY TRANSITIONS ALONG A PURCHASE CYCLE • 2y 5m to grant • Granted Dec 10, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 30%
With Interview: 64% (+33.3%)
Median Time to Grant: 3y 10m
PTA Risk: High
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
