Prosecution Insights
Last updated: April 19, 2026
Application No. 18/736,011

System and Methods to Cover the Continuum of Real-time Decision-Making using a Distributed AI-Driven Search Engine on Visual Internet-of-Things

Final Rejection — §101, §103
Filed
Jun 06, 2024
Examiner
HASAN, SYED HAROON
Art Unit
2154
Tech Center
2100 — Computer Architecture & Software
Assignee
BOARD OF REGENTS OF THE UNIVERSITY OF TEXAS SYSTEM
OA Round
2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 82% — above average (597 granted / 732 resolved; +26.6% vs TC avg)
Interview Lift: +15.5% (strong; resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline)
Currently Pending: 39
Total Applications: 771 (career history, across all art units)

Statute-Specific Performance

§101: 18.3% (-21.7% vs TC avg)
§103: 34.8% (-5.2% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 21.1% (-18.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 732 resolved cases

Office Action

§101 §103
DETAILED ACTION

Case Status

This office action is in response to the remarks and amendments of 22 October 2025. Claims 1-13 have been examined.

Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20220383662, par. 3: “…reidentification is to recognize individuals tracked over a set of distributed non-overlapping cameras with different viewpoints and camera poses and the variability of image capture conditions…”

US 10814815, col. 13, lines 45-55: a deep learning model in the cloud and another deep learning model in camera video analytics.

US 20180103348, pars. 29-39: multiple surveillance zones with sensors that send detected metadata to a cloud-based person tracking system.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-13 are directed to one of the eligible categories of subject matter.

With respect to independent claims 1 and 13, the extracting, combining, and updating limitations cover performance of the limitations manually and/or in the mind (a mental-processes abstract idea). The capturing, storing, and providing limitations are recited at a high level of generality and do not add meaningful limitations to the abstract idea; they are directed to insignificant extra-solution activities. The claims as a whole merely describe how to generally “apply” the exception in a computer environment using generic computer functions or components (such as the expressly claimed language of using deep learning algorithms).
Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.

With respect to dependent claims 2-5, the zones, geotags, and information are recited at a high level of generality and do not add meaningful limitations to the abstract idea. The claims as a whole merely describe how to generally “apply” the exception in a computer environment using generic computer functions or components. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.

With respect to dependent claims 6-12, the querying, correlating, matching, reidentifying, generating, and updating limitations cover performance of the limitations manually and/or in the mind (a mental-processes abstract idea). No additional elements are recited, so the claims do not provide a practical application and are not considered to be significantly more. The claims are not eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-7 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Aghdasi et al., Pub. No. US 20160357762 A1 (hereinafter Aghdasi), in view of Beach et al., Patent No. US 11165954 B1 (hereinafter Beach), and Kwon et al., Patent No. US 11941870 B1 (hereinafter Kwon).
As per claim 1, Aghdasi discloses A method, comprising: capturing, by a plurality of geographically distributed embedded-AI (artificial intelligence) cameras, a set of streaming videos (par. 34, cameras 102a-n); extracting, at the cameras, metadata information from the set of streaming videos using […] algorithms on the cameras to create local metadata (par. 34) […]; storing the local metadata in local edge device metadata caches corresponding to the cameras (par. 28 discloses that the cameras collect (i.e., store) metadata; additionally, the cameras operate a video analytics process to generate metadata (i.e., creating and storing local metadata)); providing the local metadata to one or more edge-cloud servers (pars. 34-36, metadata is provided to gateway 52 (i.e., edge-cloud server)); extracting, at the edge-cloud servers, additional metadata using […] algorithms on the edge-cloud servers (pars. 35-36, wherein gateway 52 (i.e., edge-cloud server) extracts/adds more metadata and/or modifies existing camera-generated metadata); combining the additional metadata with the local metadata to create global metadata (see rejection of previous limitation); storing the global metadata in an edge-cloud metadata cache (see citations above; note that at least pars. 34 and 36 state that gateway 52 (i.e., edge-cloud server) stores the metadata); providing the global metadata from the edge-cloud metadata cache to a cloud server (see citations above; note that at least pars. 33-37 and 40 disclose that metadata is sent from gateway 52 (i.e., edge-cloud server) to cloud computing server 62 (i.e., cloud server)); extracting cloud metadata from the global metadata using […] algorithms on the cloud server (at least pars. 45-50 and 54-55 disclose defining new types of metadata, values thereof, indexes, etc. based on gateway 52 and camera-provided metadata (i.e., extracting)); updating the global metadata based on the cloud metadata (see rejection of previous limitation, wherein the metadata (i.e., global metadata) provided by gateway 52 to cloud computing server 62 is updated by the new types of metadata, values thereof, indexes, etc.); and storing the updated global metadata in a query database (see at least pars. 45-48).

Aghdasi does not explicitly disclose using deep learning algorithms. However, Beach, in the related field of endeavor of video surveillance, discloses using deep learning algorithms (Beach, col. 2, last full par.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Beach would have allowed Aghdasi to implement deep-learning algorithms on multiple sites/devices because “Training an optimized site-specific [deep-learning] detection model for a surveillance device (e.g., a camera), can reduce costs of operating the surveillance device while maintaining a desired performance level of the surveillance solution. Awareness of scenes, objects, subjects, and events for a particular surveillance device can be utilized to generate training data to train detection models that are lightweight and require less processing power from cloud-based servers.”

Aghdasi does not explicitly disclose wherein the local metadata includes an adjacency matrix representing relationships between objects in the set of streaming videos and actions of the objects. However, Kwon, in the related field of endeavor of image analysis, discloses this limitation in at least col. 8, lines 18-60 (matrices that describe graph structures (i.e., adjacency matrices) including relationships between objects and actions). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Kwon would have allowed Aghdasi to use matrices to capture and process image information, including objects and actions that correspond to spatial-temporal scene graphs, for a variety of applications that use object/action recognition, as stated in Kwon, col. 10, line 50 to col. 11, line 6.

As per claim 13, it includes the same or similar subject matter as claim 1 and is therefore likewise rejected. See Aghdasi pars. 29 and 59-61 for the memories and processors.

As per claim 2, Aghdasi as modified discloses The method of claim 1, wherein the edge-cloud servers are geographically distributed across one or more zones (pars. 30, 44).

As per claim 3, Aghdasi as modified discloses The method of claim 2, wherein the edge-cloud servers are geotagged according to geographical locations of the edge-cloud servers (pars. 3-6, 22, 23, 30, 31).

As per claim 4, Aghdasi as modified discloses The method of claim 1, wherein the local metadata includes information describing the objects and attributes in the set of streaming videos (par. 34).

As per claim 5, Aghdasi as modified discloses The method of claim 1, wherein the global metadata includes information describing the objects and attributes in the set of streaming videos based on the local metadata (pars. 35-39).

As per claim 6, Aghdasi as modified discloses The method of claim 1, further comprising implementing human-level querying of the query database for geographically distributed queries based on identification of specific objects and attributes of interest in the set of streaming videos (pars. 30, 35, 42, 43, 51, 55, 57).
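The adjacency-matrix limitation at the center of the Kwon combination (local metadata encoding relationships between objects in the videos and their actions) can be pictured concretely. Below is a minimal sketch; the node labels and the object-performs-action edge convention are illustrative assumptions, not an encoding prescribed by the claims or by Kwon's cited columns:

```python
import numpy as np

# Hypothetical nodes for one video segment: detected objects and actions.
# Labels and layout are illustrative only.
nodes = ["person_1", "car_3", "walking", "entering"]
idx = {name: i for i, name in enumerate(nodes)}

# Adjacency matrix: a 1 at (i, j) records a relationship,
# here read as "node i performs / participates in node j".
A = np.zeros((len(nodes), len(nodes)), dtype=int)

def relate(src, dst):
    A[idx[src], idx[dst]] = 1

relate("person_1", "walking")   # person_1 is walking
relate("person_1", "entering")  # person_1 is entering
relate("car_3", "entering")     # car_3 is entering

# Edge-device "local metadata" could carry such a matrix alongside
# object attributes, to be merged upstream into global metadata.
print(A)
```

A graph encoded this way is what makes "relationships between objects and actions" machine-queryable rather than a free-text description, which is why the limitation matters to both the §101 and §103 disputes.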
As per claim 7, Aghdasi as modified discloses The method of claim 6, further comprising implementing the querying by video stream content correlation through metadata matching and reidentification at the edge-cloud metadata cache (see rejection of claim 6, including pars. 26, 34, 36, 39, 40).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Aghdasi in view of Beach and Kwon, and further in view of Maheshwari et al., Pub. No. US 20220391433 A1 (hereinafter Maheshwari).

As per claim 8, Aghdasi as modified discloses the method of claim 6. The combination does not expressly disclose this limitation; however, Maheshwari, in the related field of endeavor of computer vision, discloses further comprising implementing a classification algorithm to generate a hierarchical knowledge-graph representation of one or more images in the set of streaming videos (Maheshwari, pars. 6, 46-50). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Maheshwari would have allowed the combination to implement “Scene graphs 405 [to] encapsulate the constituent objects and their relationships, and encode object attributes and spatial information. Scene graphs 405 can be applied to multiple downstream applications (for example, visual question answering, scene classification, image manipulation and visual relationship detection)” (Maheshwari, par. 50).

Claims 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Aghdasi in view of Beach and Kwon, and further in view of Schei et al., Pub. No. US 20230128577 A1 (hereinafter Schei).

As per claim 9, Aghdasi as modified discloses The method of claim 1.
The combination does not expressly disclose this limitation; however, Schei, in the related field of endeavor of computer vision, discloses further comprising implementing updating for the local metadata on the local edge device metadata caches based on distance correlations between the objects and entries in the local edge device metadata caches (Schei, pars. 75, 81-84 disclose updating identifier information for a track moment (metadata) based on cosine distances with other stored face data objects/entries; see rejection of claim 1 for local metadata, devices, caches, and corresponding limitations). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Schei would have allowed the combination to perform “a continuous re-identification process that integrates the face matching steps within a face tracker such as the simple online and real time tracking with a deep association metric algorithm … to track each detection over time by performing a series of updates to the identifiers” (Schei, par. 75).

As per claim 10, Aghdasi as modified by Beach and Kwon, and further in view of Schei, discloses the method of claim 9, wherein the local metadata includes local identifiers and global identifiers for the objects and attributes in the local metadata, and wherein the updating includes updating the local identifiers (see rejection of claim 9 and rationale to combine).

As per claim 11, Aghdasi as modified discloses The method of claim 1. The combination does not expressly disclose this limitation; however, Schei, in the related field of endeavor of computer vision, discloses further comprising implementing updating for the global metadata on the edge-cloud metadata cache based on distance correlations between entries in the local edge device metadata caches and entries in the edge-cloud metadata cache (Schei, pars. 75, 81-84 disclose updating identifier information for a track moment (metadata) based on cosine distances with other stored face data objects/entries; see rejection of claim 1 for global metadata, devices, caches, and corresponding limitations). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Schei would have allowed the combination to perform “a continuous re-identification process that integrates the face matching steps within a face tracker such as the simple online and real time tracking with a deep association metric algorithm … to track each detection over time by performing a series of updates to the identifiers” (Schei, par. 75).

As per claim 12, Aghdasi as modified by Beach and Kwon, and further in view of Schei, discloses The method of claim 11, wherein the global metadata includes local identifiers and global identifiers for the objects and attributes in the global metadata, and wherein the updating includes updating the global identifiers (see rejection of claim 11 and rationale to combine).

Response to Arguments

Applicant's arguments filed 22 October 2025 have been fully considered. With respect to the 35 USC 101 rejection, the remarks present that “For example, the various features relating to 'extracting, at the cameras, metadata information from the set of streaming videos using deep learning algorithms on the cameras to create local metadata, wherein the local metadata includes an adjacency matrix representing relationships between objects in the set of streaming videos and actions of the objects' recite substantially more than an abstract idea.” Examiner respectfully disagrees. The amendment adding an adjacency matrix does not amount to significantly more. The limitation merely refines how information is represented or organized.
The claim still recites generic data extraction and structuring using conventional algorithms without a specific improvement to computer functionality or another technical field. With respect to the prior art rejection, Kwon et al., Patent No. US 11941870 B1, has been applied in response to claim amendments.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED HASAN, whose telephone number is (571) 270-5008. The examiner can normally be reached M-F, 8 am - 5 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at (571) 270-5626.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SYED H HASAN/
Primary Examiner, Art Unit 2154
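The Schei rationale applied against claims 9-12 turns on updating cached identifiers by distance correlation, specifically cosine distances between stored feature entries. That general mechanism can be sketched as follows; the embeddings, threshold, and cache layout here are illustrative assumptions, not details taken from Schei or the application:

```python
import numpy as np

def cosine_distance(a, b):
    """1 minus the cosine similarity between two feature vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def update_cache(cache, track_id, embedding, threshold=0.3):
    """Re-identify a detection: if its embedding is close to a cached
    entry, reuse that entry's identifier and refresh the stored feature;
    otherwise register a new identifier. `cache` maps id -> embedding.
    The 0.3 threshold is an illustrative choice."""
    for ident, stored in cache.items():
        if cosine_distance(embedding, stored) < threshold:
            cache[ident] = embedding
            return ident
    cache[track_id] = embedding
    return track_id

# Hypothetical usage: two near-identical detections collapse to one identity.
cache = {}
e1 = np.array([1.0, 0.0, 0.2])
e2 = np.array([0.9, 0.05, 0.21])   # close to e1 -> same identity
e3 = np.array([0.0, 1.0, 0.0])     # far from e1 -> new identity
assert update_cache(cache, "obj_A", e1) == "obj_A"
assert update_cache(cache, "obj_B", e2) == "obj_A"
assert update_cache(cache, "obj_C", e3) == "obj_C"
```

This is the shape of the "series of updates to the identifiers" the examiner quotes from Schei par. 75: matching replaces a tentative local identifier with an established one, which is also the distinction claims 10 and 12 draw between local and global identifiers.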

Prosecution Timeline

Jun 06, 2024
Application Filed
Apr 17, 2025
Non-Final Rejection — §101, §103
Oct 22, 2025
Response Filed
Nov 05, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602423
REAL-TIME NORMALIZATION OF RAW ENTERPRISE DATA FROM DISPARATE SOURCES
2y 5m to grant • Granted Apr 14, 2026
Patent 12591662
SECURITY MARKER INJECTION FOR LARGE LANGUAGE MODELS
2y 5m to grant • Granted Mar 31, 2026
Patent 12566589
SYSTEM AND METHOD FOR DETERMINING DATA FEED SOURCES FOR INTERACTIVE AUTOMATED CODE GENERATION AND MODIFICATION
2y 5m to grant • Granted Mar 03, 2026
Patent 12561352
OPTIMIZING PUBLICATION AND SUBSCRIPTION EXPRESSIVENESS
2y 5m to grant • Granted Feb 24, 2026
Patent 12554759
RECOMMENDATION GENERATION USING USER INPUT
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview (+15.5%): 97%
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 732 resolved cases by this examiner. Grant probability derived from career allow rate.
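The headline figures can be reproduced from the raw counts: the 82% grant probability is simply the career allow rate (597 of 732 resolved cases), and the with-interview figure adds the observed +15.5% lift. A quick check, assuming the dashboard combines them additively:

```python
granted, resolved = 597, 732
allow_rate = granted / resolved    # career allow rate, ~0.8156
interview_lift = 0.155             # observed lift in cases with interview

print(f"Grant probability: {allow_rate:.0%}")                   # 82%
print(f"With interview:    {allow_rate + interview_lift:.0%}")  # 97%
```

A simple additive lift is the most natural reading of the displayed numbers; it should be treated as a rough projection, since the interviewed subset may not be representative of this application.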
