Prosecution Insights
Last updated: April 19, 2026
Application No. 18/509,783

SCENE AWARE SEARCHING

Non-Final OA — §102, §112
Filed: Nov 15, 2023
Examiner: WOO, ISAAC M
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adeia Media Holdings LLC
OA Round: 5 (Non-Final)
Grant Probability: 91% (Favorable)
OA Rounds: 5-6
To Grant: 2y 6m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 91% (1162 granted / 1271 resolved; +36.4% vs TC avg), above average
Interview Lift: +6.2% among resolved cases with interview (moderate, roughly +6% lift)
Typical Timeline: 2y 6m avg prosecution; 26 currently pending
Career History: 1297 total applications across all art units

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§102: 71.4% (+31.4% vs TC avg)
§103: 3.8% (-36.2% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 1271 resolved cases

Office Action

§102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 13, 2025 has been entered. Applicant has amended claims 21-22, 30-32, and 40-42. Claims 22-28, 30-38 and 40-42 are pending. This action is in response to the application filed on November 13, 2025.

Claim Interpretation; Broadest Reasonable Interpretation

During patent examination, the pending claims must be "given their broadest reasonable interpretation consistent with the specification." The Federal Circuit's en banc decision in Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) expressly recognized that the USPTO employs the "broadest reasonable interpretation" standard: the Patent and Trademark Office ("PTO") determines the scope of claims in patent applications not solely on the basis of the claim language, but upon giving claims their broadest reasonable construction "in light of the specification as it would be interpreted by one of ordinary skill in the art."

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 22-28, 30-38 and 40-42 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. See MPEP 2173.05(e), Lack of Antecedent Basis: a claim is indefinite when it contains words or phrases whose meaning is unclear. Claims 21 and 31 recite "…. the one or more objects in the frame of the video stream …. ". The term "the one or more objects in the frame of the video stream" is introduced without first defining what constitutes the "one or more objects in the frame of the video stream"; the phrase is not grounded in previously introduced elements, which renders the scope of the claimed invention indefinite. Examiner Note: Applicant is advised to amend the claims to resolve the § 112 rejection set forth above.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 21, 23-28, 30-31, 33-38 and 40-42 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Teichman et al. (US 20180246964 A1).

With respect to claims 21 and 31, Teichman et al. teaches:

- receiving, at an artificial intelligence (AI) engine, a search query from a user device, wherein the search query is received while a video stream is displayed on the user device (FIG. 2, FIG. 4; [0002] query video frames that relate to the query, the video frames include the detected object, providing the video frames to the user; [0020] using the streamed video data, servicing speech queries; [0087] select in a video frame or in a sequence of video frames displayed on the owner's smartphone);
- determining, by analyzing the video stream in real-time in response to the search query received while the video stream is displayed on the user device, whether the search query is related to the video stream ([0051] in real-time, as the video data are streamed to the video archive, by analyzing the stored video streams; [0043] video frames of the stored videos may be tagged with the recognized action "entering through front door"; action tags serve database queries that are directed toward an action);
- in response to determining that the search query is related to the video stream (claim 11: identifying, in site-specific data of a metadata archive of the monitoring system database, tags that relate to the query, wherein the tags label occurrences of at least one selected from a group consisting of the object and an action involving the object, wherein the tags identify the video frames that relate to the query; and retrieving the video frames that relate to the query from a video archive of the monitoring system):
- determining context related to the search query, wherein the determining context comprises generating at least one of a keyword or an audio signal based on analysis of an audio stream associated with the video stream ([0002] query video frames that relate to the query, the video frames include the detected object, providing the video frames to the user; [0065] in Step 504, content fragment extracted from the text, filtering by querying the monitoring system database to obtain keyword matching; [0051] in real-time, as the video data are streamed to the video archive, e.g., at the time when objects are detected by the monitoring system by analyzing the video streams; [0027] audio signal captured by the input device (202), any kind of spoken user input);
- identifying, by the AI engine, one or more matches in a database based on the context, wherein the one or more matches comprise information related to the one or more objects in the frame of the video stream ([0058] In Step 406, the monitoring system database is accessed using the query; if the query includes a question to be answered based on content of the monitoring system database, a query result, i.e., an answer to the question, is generated and returned to the user in Step 408A; e.g., a scenario in which a user submits the question "Who was in the living room today?", the monitoring system is queried for any moving object that was identified as a person, and the querying may be performed by analyzing the moving object); and
- displaying search results based on the identified one or more matches (Step 408A: in a scenario in which a user submits the search "Who was in the living room today?", the monitoring system displays the moving object identified as a person as the search result).

With respect to claims 23 and 33, Teichman et al. teaches receiving, at the AI engine, the video stream and information associated with the video stream, wherein the information associated with the video stream comprises at least one of an audio stream, frame stream, closed captioning stream, and metadata ([0033] the video data may be accompanied by depth data and audio data).

With respect to claims 24 and 34, Teichman et al. teaches analyzing a combination of textual, audio, and touch input ([0028] the input device further includes a speech-to-text conversion engine (204) that is configured to convert the recorded audio signal, e.g., the spoken user input, to text; the speech-to-text conversion engine may convert the recorded spoken user input to text in the form of a string).

With respect to claims 25 and 35, Teichman et al. teaches that the matches are entries in one or more data lakes of a database ([0003] monitoring system database, video frames that relate to the database query).

With respect to claims 26 and 36, Teichman et al. teaches receiving feedback relating to an accuracy of the one or more matches ([0080] in Step 606, the monitoring system database may be updated to permanently store the newly resolved filtering intent).

With respect to claims 27 and 37, Teichman et al. teaches updating the one or more matches based on the feedback ([0080] in Step 606, the monitoring system database may be updated to permanently store the newly resolved filtering intent; the dog's name "Lucky" may be stored in the moving object definition for the dog, and/or a new static object definition may be generated for the front door; thus, future queries that include the name "Lucky" and/or the term "front door" can be directly processed without requiring a clarification request).

With respect to claims 28 and 38, Teichman et al. teaches updating the AI engine based on the feedback ([0080] in Step 606, the monitoring system database may be updated to permanently store the newly resolved filtering intent; the dog's name "Lucky" may be stored in the moving object definition for the dog, and/or a new static object definition may be generated for the front door; thus, future queries that include the name "Lucky" and/or the term "front door" can be directly processed without requiring a clarification request).

With respect to claims 30 and 40, Teichman et al. teaches that the search query is related to identifying a location of the one or more objects in the frame of the video stream ([0048] objects may thus be defined in the camera-specific data (352) based on their geometry, location, texture or any other feature that enables the detection).

With respect to claims 41 and 42, Teichman et al. teaches analyzing, via the AI engine, an audio stream or a segment of the audio stream associated with the video stream to determine at least one of a keyword or an audio signal associated with the one or more objects in the frame of the video stream, wherein the one or more matches are identified based at least in part on the at least one of a keyword or an audio signal ([0065] keyword matching, regular expressions, recurrent neural networks, long short-term memories).

Allowable Subject Matter

Claims 22 and 32 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISAAC M WOO, whose telephone number is (571) 272-4043. The examiner can normally be reached 9:00 to 5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ISAAC M WOO/
Primary Examiner, Art Unit 2163

Prosecution Timeline

Nov 15, 2023: Application Filed
Sep 12, 2024: Non-Final Rejection — §102, §112
Dec 11, 2024: Response Filed
Dec 28, 2024: Final Rejection — §102, §112
Apr 02, 2025: Request for Continued Examination
Apr 09, 2025: Response after Non-Final Action
Apr 17, 2025: Non-Final Rejection — §102, §112
Jul 22, 2025: Response Filed
Aug 11, 2025: Final Rejection — §102, §112
Nov 13, 2025: Request for Continued Examination
Nov 20, 2025: Response after Non-Final Action
Jan 29, 2026: Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596726
METHOD AND SYSTEM FOR EFFICIENT SEGMENTATION FOR FORECASTING
2y 5m to grant · Granted Apr 07, 2026
Patent 12591941
MEDIA MANAGEMENT SYSTEM
2y 5m to grant · Granted Mar 31, 2026
Patent 12585552
PROTECTION, RECOVERY, AND MIGRATION OF DATABASES-AS-A-SERVICE (DBAAS) AND/OR SERVERLESS DATABASE MANAGEMENT SYSTEMS (DBMS) IN CLOUD AND MULTI-CLOUD
2y 5m to grant · Granted Mar 24, 2026
Patent 12585674
METADATA TAG AUTO-APPLICATION TO POSTED ENTRIES
2y 5m to grant · Granted Mar 24, 2026
Patent 12579167
DISTRIBUTED GRAPH-BASED CLUSTERING
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 91%
With Interview: 98% (+6.2%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 1271 resolved cases by this examiner. Grant probability derived from career allow rate.
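The headline projections are simple arithmetic on the examiner's career record. A minimal sketch, assuming the figures shown on this page (1162 grants out of 1271 resolved cases, and a +6.2 percentage-point interview lift), reproduces the 91% and 98% numbers:

```python
# Examiner career data as shown on this page (assumed inputs).
granted = 1162
resolved = 1271

# Career allow rate is the baseline grant probability.
allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.0%}")   # 91%

# The interview lift is reported as +6.2 percentage points,
# added to the baseline to get the "with interview" figure.
interview_lift = 0.062
print(f"With interview: {allow_rate + interview_lift:.0%}")   # 98%
```

Note that treating the lift as a flat additive adjustment is this dashboard's apparent convention; the underlying model may weight it differently.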
