Prosecution Insights
Last updated: April 19, 2026
Application No. 18/352,974

SYSTEMS AND METHODS FOR OBJECT IDENTIFICATION AND ANALYSIS

Status: Non-Final OA (§103)
Filed: Jul 14, 2023
Examiner: ISLAM, MEHRAZUL NMN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Included Health Inc.
OA Round: 3 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 58% (29 granted / 50 resolved; -4.0% vs TC avg)
Interview Lift: +28.3% (strong; among resolved cases with interview)
Avg Prosecution: 3y 4m (typical timeline)
Currently Pending: 46
Total Applications: 96 (across all art units)

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 68.6% (+28.6% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Tech Center average is an estimate • Based on career data from 50 resolved cases
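As a sanity check on the figures above, the implied Tech Center baseline can be recovered from each pair of numbers, assuming the listed delta is simply the examiner's rate minus the TC average (an assumption about how the dashboard computes the delta, not something the page states):

```python
# Recover the implied Tech Center (TC) average per statute from the table
# above, assuming delta = examiner rate - TC average. That relationship is
# an assumption for illustration; the dashboard does not define the delta.
examiner_rates = {"101": 9.2, "103": 68.6, "102": 4.1, "112": 15.2}
deltas_vs_tc = {"101": -30.8, "103": 28.6, "102": -35.9, "112": -24.8}

tc_averages = {
    statute: round(examiner_rates[statute] - deltas_vs_tc[statute], 1)
    for statute in examiner_rates
}
```

Under that assumption, every statute's implied TC average works out to 40.0%, which suggests the deltas are all taken against a single Tech Center figure rather than per-statute baselines.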

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/24/2026 has been entered.

Status of Claims

Claims 1-20 are pending. Claims 1, 8, 11 and 18 are amended.

Response to Arguments

Applicant's amendment of independent Claims 1 and 11, which has altered the scope of the claims of the instant application, has necessitated the new ground(s) of rejection presented in this Office action. Accordingly, in response to Applicant's arguments that are directed to the amended portions of the claims, new analyses have been presented below, which render Applicant's arguments moot.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8, 10-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gillani (US 2024/0371163 A1), in view of Yu (US 2020/0401811 A1) and in further view of Pribble et al. (US 10,339,374 B1).

Regarding claim 1, Gillani teaches, A system comprising: at least one memory storing instructions; the system configured to execute the instructions to cause the system to perform operations (Gillani, ¶0106: "a system… include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out") for automatically identifying and analyzing objects, (Gillani, ¶0040: "image recognition tasks such as object detection") the operations comprising: receiving input data that comprises image data (Gillani, ¶0104: "a set of video frames 110 is received by the sensor") having a plurality of objects; identifying, from the image data, an object of interest of the plurality of objects; (Gillani, ¶0094: "Object tracking can help to identify and track specific objects or regions of interest within the video frames").
However, Gillani does not explicitly teach, identifying key frame data from the image data based on the identified object of interest, the key frame data comprising one or more frames of the image data; analyzing, from the key frame data, the identified object of interest using one or more machine learning models to obtain output analysis data, wherein the analysis comprises registering first partial object data corresponding to a first partial view of the object, registering second partial object data corresponding to a second partial view of the object, and synthesizing the partial object data into complete object data; and performing output generation by transmitting the output analysis data to a device or database.

In an analogous field of endeavor, Yu teaches, identifying key frame data from the image data based on the identified object of interest, the key frame data comprising one or more frames of the image data; (Yu, ¶0004: "identifying, from the one or more sampled frames, a reference frame of video data, the reference frame including a target object that is identified using an identification model") analyzing, from the key frame data, the identified object of interest (Yu, ¶0062: "identify a target object in a frame using an identification model") using one or more machine learning models (Yu, ¶0062: "The identification model may be or include a machine learning model") to obtain output analysis data, (Yu, ¶0076: "analysis on identifying frames or the time period in which a target object appears in the video").

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gillani using the teachings of Yu to introduce identifying and transmitting visible information from a target object in a key frame.
A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of identifying visible text information and transmitting it to a communication device. Therefore, it would have been obvious to combine the analogous arts Gillani and Yu to obtain the above-described limitations in claim 1.

However, the combination of Gillani and Yu does not explicitly teach, wherein the analysis comprises registering first partial object data corresponding to a first partial view of the object, registering second partial object data corresponding to a second partial view of the object, and synthesizing the partial object data into complete object data; and performing output generation by transmitting the output analysis data to a device or database.

In another analogous field of endeavor, Pribble teaches, wherein the analysis comprises registering first partial object data (Pribble, col. 1, lines 22-23: "analyzing the image to identify a first part of the object") corresponding to a first partial view of the object, (Pribble, col. 4, lines 20-21: "detecting a part of the object in a field of view of the camera") registering second partial object data (Pribble, col. 1, lines 29-30: "detect a second part of the object") corresponding to a second partial view of the object, (Pribble, col. 13, lines 11-12: "the second part of the object in a field of view of a camera") and synthesizing the partial object data into complete object data; (Pribble, col. 1, lines 31-34: "combining first image data associated with the first part of the object and second image data associated with the second part of the object to generate object data") and performing output generation (Pribble, col. 11, lines 16-17: "a component that provides output information from device") by transmitting the output analysis data to a device or database. (Pribble, col. 10, lines 1-2: "transmit information to user device 210").
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gillani in view of Yu using the teachings of Pribble to introduce combining partial object data from partial views. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically identifying an object based on the combined data. Therefore, it would have been obvious to combine the analogous arts Gillani, Yu and Pribble to obtain the invention in claim 1.

Regarding claim 2, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, wherein the image data comprises a sequence of images corresponding to video data. (Gillani, ¶0025: "analyzing the sequence of frames in the video").

Regarding claim 3, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, wherein receiving input data further comprises acquiring data and normalizing data. (Gillani, ¶0026: "detect images… preprocessed and prepared frame by frame (and sometimes pixel by pixel within each frame) by transforming it into an acceptable format").

Regarding claim 4, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, wherein identifying one or more objects associated with the input data comprises detecting, tracking, or classifying the one or more objects. (Gillani, ¶0094: "Object tracking can help to identify and track specific objects").

Regarding claim 5, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, wherein generating the key frame data is based on one or more confidence metrics associated with the one or more objects. (Gillani, ¶0024: "decoder 140 may receive as input the embeddings and the decoder 140 may output the various possible labels for the video clip 110, along with a confidence score for each of the labels").
Regarding claim 6, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, wherein analyzing the identified objects further comprises language processing or context analysis based on the key frame data. (Gillani, ¶0037: "Embeddings are commonly used in machine learning and natural language processing tasks to represent words, sentences, or other types of data").

Regarding claim 7, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, wherein analyzing the identified objects comprises extracting or parsing text data from the key frame data. (Gillani, ¶0007: "the system has the ability to extract and recognize text from images").

Regarding claim 8, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, wherein analyzing the identified objects further comprises registering first text data corresponding to the first partial view, registering second text data corresponding to the second partial view, (Pribble, col. 3, lines 56-58: "an object, such as a document (e.g., a document that indicates identification information") and synthesizing the first text data and the second text data with the partial object data. (Pribble, col. 1, lines 31-34: "combining first image data associated with the first part of the object and second image data associated with the second part of the object to generate object data"). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gillani in view of Yu and in further view of Pribble using the additional teachings of Pribble to introduce combining partial objects containing text. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically identifying text information from combined object data.
Therefore, it would have been obvious to combine the analogous arts Gillani, Yu and Pribble to obtain the invention in claim 8.

Regarding claim 10, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, further comprising tagging the one or more analyzed objects or registering the one or more analyzed objects. (Gillani, ¶0024: "output the various possible labels for the video clip 110, along with a confidence score for each of the labels").

Regarding claim 11, it recites a method with steps corresponding to the elements of the system recited in claim 1. Therefore, the recited steps of method claim 11 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 1. Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 1, apply to this claim.

Regarding claim 12, it recites a method with steps corresponding to the elements of the system recited in claim 2. Therefore, the recited steps of method claim 12 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 2. Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 1, apply to this claim.

Regarding claim 13, it recites a method with steps corresponding to the elements of the system recited in claim 3. Therefore, the recited steps of method claim 13 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 3. Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 1, apply to this claim.

Regarding claim 14, it recites a method with steps corresponding to the elements of the system recited in claim 4. Therefore, the recited steps of method claim 14 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 4.
Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 1, apply to this claim.

Regarding claim 15, it recites a method with steps corresponding to the elements of the system recited in claim 5. Therefore, the recited steps of method claim 15 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 5. Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 1, apply to this claim.

Regarding claim 16, it recites a method with steps corresponding to the elements of the system recited in claim 6. Therefore, the recited steps of method claim 16 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 6. Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 1, apply to this claim.

Regarding claim 17, it recites a method with steps corresponding to the elements of the system recited in claim 7. Therefore, the recited steps of method claim 17 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 7. Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 1, apply to this claim.

Regarding claim 18, it recites a method with steps corresponding to the elements of the system recited in claim 8. Therefore, the recited steps of method claim 18 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 8. Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 8, apply to this claim.

Regarding claim 20, it recites a method with steps corresponding to the elements of the system recited in claim 10.
Therefore, the recited steps of method claim 20 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 10. Additionally, the rationale and motivation to combine Gillani, Yu and Pribble presented in rejection of claim 1, apply to this claim.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gillani (US 2024/0371163 A1), in view of Yu (US 2020/0401811 A1), in further view of Pribble et al. (US 10,339,374 B1), in still further view of Rahmi et al. (US 2024/0135701 A1) and in yet further view of Nishimura et al. (US 2021/0092280 A1).

Regarding claim 9, Gillani in view of Yu and in further view of Pribble teaches, The system of claim 1, further comprising. However, the combination of Gillani, Yu and Pribble does not explicitly teach, iteratively executing second operations until a threshold value has been reached to generate an optimal object validation score, wherein the second operations comprise: analyzing the one or more objects using the one or more machine learning models; validating the one or more objects; updating an object validation score based on the validating of the one or more objects; and refining the one or more machine learning models based on the validation of the one or more objects.
In an analogous field of endeavor, Rahmi teaches, iteratively executing second operations until a threshold value has been reached to generate an optimal object validation score, (Rahmi, ¶0035: "capture a second frame at a second time, third frame at a third time, fourth frame at a fourth time, and so on until the identification information yields a confidence score above the predetermined threshold"; a score above threshold is interpreted as an optimal score) wherein the second operations comprise: analyzing the one or more objects using the one or more machine learning models; validating the one or more objects; (Rahmi, ¶0035: "the appearance information of one or more body regions of person 420 may be calculated via one or more machine-learning models to have a confidence score above a predetermined threshold") updating an object validation score based on the validating of the one or more objects; (Rahmi, ¶0070: "computing systems may update, using the one or more machine-learning models, the confidence score based on one or more additional appearance information detected within additional frames").

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gillani in view of Yu and in further view of Pribble using the teachings of Rahmi to introduce iteratively performing an object detection operation. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of achieving better detection accuracy. Therefore, it would have been obvious to combine the analogous arts Gillani, Yu, Pribble and Rahmi to obtain the above-described limitations of claim 9.

However, the combination of Gillani, Yu, Pribble and Rahmi does not explicitly teach, and refining the one or more machine learning models based on the validation of the one or more objects.
In an analogous field of endeavor, Nishimura teaches, and refining the one or more machine learning models based on the validation of the one or more objects. (Nishimura, ¶0063: "a set of neural weights of the neural network model 306 may be updated based on an output of the neural network model 306 for the detection of the first object 308 in the acquired imaging information").

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gillani in view of Yu, in further view of Pribble and in still further view of Rahmi using the teachings of Nishimura to introduce refining an object detection model. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of achieving better detection accuracy. Therefore, it would have been obvious to combine the analogous arts Gillani, Yu, Pribble, Rahmi and Nishimura to obtain the invention in claim 9.

Regarding claim 19, it recites a method with steps corresponding to the elements of the system recited in claim 9. Therefore, the recited steps of method claim 19 are mapped to the proposed combination in the same manner as the corresponding elements in system claim 9. Additionally, the rationale and motivation to combine Gillani, Yu, Pribble, Rahmi and Nishimura presented in rejection of claim 9, apply to this claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571) 270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Saini Amandeep, can be reached on (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEHRAZUL ISLAM/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
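The claim-1 feature set that the rejection assembles across the references, key-frame selection around an identified object of interest (mapped to Yu) plus registration and synthesis of two partial views into complete object data (mapped to Pribble), can be sketched as below. This is a hypothetical illustration of the claimed steps only; every function, class, and field name is invented for the sketch and appears in none of the cited references.

```python
# Hypothetical sketch of the claim-1 pipeline as characterized in the
# rejection: identify an object of interest, keep only the key frames that
# contain it, register a partial view from each side, and synthesize the
# partial data into one complete record.
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    objects: list  # labels detected in this frame (illustrative)

def select_key_frames(frames, object_of_interest):
    """Key-frame identification: keep frames showing the object of interest."""
    return [f for f in frames if object_of_interest in f.objects]

def synthesize(first_partial, second_partial):
    """Synthesize two registered partial views into complete object data."""
    return {**first_partial, **second_partial}

frames = [Frame(0, ["car"]), Frame(1, ["card_front"]), Frame(2, ["card_back"])]
key_frames = select_key_frames(frames, "card_front")  # only Frame 1 survives

# Register a partial view from each side, then merge into one record,
# which would then be transmitted to a device or database.
complete = synthesize({"front_text": "JANE DOE"}, {"back_text": "ID 12345"})
```

The toy data stands in for whatever a real detector and OCR stage would emit; the point is only the shape of the steps the claim recites.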

Prosecution Timeline

Jul 14, 2023: Application Filed
Jul 14, 2025: Non-Final Rejection — §103
Oct 07, 2025: Applicant Interview (Telephonic)
Oct 07, 2025: Examiner Interview Summary
Oct 15, 2025: Response Filed
Dec 13, 2025: Final Rejection — §103
Feb 24, 2026: Response after Non-Final Action
Mar 16, 2026: Request for Continued Examination
Mar 18, 2026: Non-Final Rejection — §103
Mar 18, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602808: METHOD FOR INSPECTING AN OBJECT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592075: REMOTE SENSING FOR INTELLIGENT VEGETATION TRIM PREDICTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579695: Method of Generating Target Image Data, Electrical Device and Non-Transitory Computer Readable Medium (granted Mar 17, 2026; 2y 5m to grant)
Patent 12524900: METHOD FOR IMPROVING ESTIMATION OF LEAF AREA INDEX IN EARLY GROWTH STAGE OF WHEAT BASED ON RED-EDGE BAND OF SENTINEL-2 SATELLITE IMAGE (granted Jan 13, 2026; 2y 5m to grant)
Patent 12489964: PATH PLANNING (granted Dec 02, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 86% (+28.3%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 50 resolved cases by this examiner. Grant probability derived from career allow rate.
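How the headline projection figures appear to fit together, assuming the grant probability is simply the career allow rate (29 granted of 50 resolved) and the with-interview figure adds the +28.3 percentage-point lift on top of it; the page implies but never states this arithmetic:

```python
# Illustrative arithmetic behind the projection figures, under the
# assumption that grant probability = career allow rate and the
# with-interview figure = allow rate + the 28.3-point interview lift.
granted, resolved = 29, 50
interview_lift = 28.3  # percentage points, from the examiner stats above

grant_probability = round(100 * granted / resolved)         # 29/50 -> 58
with_interview = round(grant_probability + interview_lift)  # 58 + 28.3 -> 86
```

Both rounded values match the dashboard's 58% and 86%, which is consistent with (though does not prove) that derivation.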
