Prosecution Insights
Last updated: April 19, 2026
Application No. 18/119,127

ANALYTIC PIPELINE FOR OBJECT IDENTIFICATION AND DISAMBIGUATION

Final Rejection (§103, §112)
Filed: Mar 08, 2023
Examiner: HON, MING Y
Art Unit: 2666
Tech Center: 2600 (Communications)
Assignee: The MITRE Corporation
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82% (above average; 624 granted / 760 resolved; +20.1% vs TC avg)
Interview Lift: +13.8% across resolved cases with an interview (moderate lift)
Typical Timeline: 2y 9m average prosecution; 23 applications currently pending
Career History: 783 total applications across all art units

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Comparison baseline: Tech Center average estimate • Based on career data from 760 resolved cases

Office Action

§103 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

Applicant's amendment filed on October 2, 2025 is acknowledged. Currently Claims 1-25 and 28-29 are pending. Claims 28-29 are new. Applicant's arguments with respect independent claims 1, 10 and 21 have been considered but are moot in view of the new ground(s) of rejection. Amended claims 1, 10 and 21 results in a different scope than that of the originally presented Claims 1, 10 and 21 respectively.

On page 17 of the applicant's remarks, the applicant states, "The Office Action cites paragraph [0017] of Desai as teaching a plurality of supporting evidence from one or more evidence sources. Paragraph [0017] of Desai describes capturing one or more aerial images. The aerial images of Desai are not supplemental to image data; rather, they are images themselves. Desai only describes analyzing the aerial images and does not disclose any data sources that are supplemental to the aerial images" The examiner asserts that the claims do not differentiate that the supporting evidence cannot be the images itself. Paragraph [0067] in the applicant's published application states, "The supporting evidence can include one or more photographs and/or satellite images from a public or commercial source, as discussed above" Therefore the examiner's interpretation is consistent with the applicant's definition. The newly amended claim limitation "wherein the one or more evidence sources are supplemental to the image data" The examiner interprets that the evidence sources is where the source of the image is from as in an evidence source is a camera that generates the image.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-25 and 28-29 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 10 and 21 contain the limitation, "receiving a plurality of supporting evidence from one or more evidence sources, wherein the one or more evidence sources are supplemental to the image data." Support was provided by the applicant in the published application, Paragraph [0063]. However, the paragraph cited only provided examples of what the supporting evidence could be "In one or more examples, the supporting evidence can include, in addition to or alternatively, supplemental data from one or more alternative data sources" It implies that the image data is supplemental to evidence sources but does not imply that evidence sources are supplemental to the image data. Appropriate corrections are required. The dependent claims do not alleviate the issues and are also rejected under 35 U.S.C. 112(a).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-25 are rejected under 35 U.S.C. 103 as being unpatentable over Desai et al. US2022/0366167 hereinafter referred to as Desai in view of Lin et al. US2021/0271707 hereinafter referred to as Lin.

As per Claim 1, Desai teaches the method for identifying objects of interest from image data, the method comprising: receiving a plurality of supporting evidence from one or more evidence sources; wherein the one or more evidence sources are supplemental to the image data; (Desai, Paragraph [0017], "The aerial imaging device 110 captures one or more aerial images of the geographic area 115. For example, the aerial imaging device 110 may capture a static image at a particular time or capture a sequence of images over a period of time" The imaging device is supplemental to the image data that it captured.) segmenting one or more potential objects of interest from the one or more selected candidate images, wherein segmenting the one or more potential objects of interest from the one or more selected candidate images comprises applying one or more segmentation analytics to the one or more selected candidate images to identify the one or more potential objects of interest; (Desai, Paragraph [0062], "The aircraft classification system 120 receives 510 an aerial image of a geographic area that includes one or more aircrafts. The aerial image may be received from an aerial imaging device (e.g., the aerial imaging device 110). The aircraft classification system 120 inputs 520 the aerial image into a machine learning model (e.g., the machine learning model 210).
For example, the aerial image may be provided (e.g., via the bus 408) to the machine learning model from the aerial image data store 130. The aircraft classification system 120 receives 530 an output for each aircraft (e.g., the output image 330 with a bounding polygon corresponding to each aircraft, a classification for each aircraft, and a plurality of keypoints for each aircraft) from the machine learning model”) selecting at least one classifier from a plurality of classifiers based on the segmentation of the one or more potential objects of interest; determining, using the at least one classifier, determining whether each of the one or more segmented potential objects of interest is an object of interest; determining an object type for each identified object of interest; and determining whether each identified object of interest is a specific known object of interest. (Desai, Paragraph [0031], [0063], “The aircraft classification system 120 identifies 560, based on the comparison, a known set of geometric measurements from the plurality of known sets of geometric measurements. The sub-classification determination engine 230 may identify the known set of geometric measurements due to the geometric measurements of the known set being within a predetermined threshold of the geometric measurements of the set of geometric measurements. The identified known set is mapped by a database (e.g., the aircraft data store 150) to a sub-classification. The aircraft classification system 120 outputs 570 the sub-classification. The sub-classification may be displayed (e.g., via a graphics display 410) to a user of the aircraft classification system 120”) particular geolocation at a particular time; (Desai, [0022], “In addition to transmitting the captured aerial image(s), the aerial imaging device 110 transmits the location information for the geographic area 115 captured in the aerial image(s) to the aircraft classification system 120 for storage in the aerial image data store 130. In some embodiments, the aerial imaging device 110 may transmit metadata regarding itself to the aerial image data store 130, such as a time of image captured (e.g., local time or universal time/GMT), a geospatial and/or geographical location of device 110”) Desai does not explicitly teach identifying an indicator from the plurality of supporting evidence that indicates an object of interest may be located in a particular geolocation; selecting one or more candidate images from a plurality of digital images based on the indicator; Lin teaches identifying an indicator from the plurality of supporting evidence that indicates an object of interest may be located in a particular geolocation; selecting one or more candidate images from a plurality of digital images based on the indicator; (Lin, Paragraph [0078], “identifies a subset of the set of images (e.g., candidate image results) associated with the text input of the search query. For instance, the joint embedding model 112 can use the text input to determine textual information associated with the search query. In this example, the joint embedding model 112 obtains textual information that provides contextual information used to retrieve candidate images, including images 510-528 based on the contextual information… In addition, the joint embedding model 112 may utilized the geotag to generate more accurate search results. 
For instance, the joint embedding model 112 may select a subset of a subset of candidate image results (e.g., image results or images 510-528) based on a presence of the geotag”) Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Lin into Desai because by utilizing means to select subset of images to process will reduce the number of images for the algorithm need to process and provide the user with relevant images for object identification. Therefore it would have been obvious to one of ordinary skill to combine the two references to obtain the invention in Claim 1. As per Claim 2, Desai in view of Lin teaches the method of claim 1, wherein determining whether each of the one or more segmented potential objects of interest is an object of interest comprises applying one or more object detection analytics comprising one or more object detection classifiers to the one or more selected candidate images. (Desai, Paragraph [0062]-[0063], [0031], “For example, the aerial image may be provided (e.g., via the bus 408) to the machine learning model from the aerial image data store 130. The aircraft classification system 120 receives 530 an output for each aircraft (e.g., the output image 330 with a bounding polygon corresponding to each aircraft, a classification for each aircraft, and a plurality of keypoints for each aircraft) from the machine learning model” and “The identified known set is mapped by a database (e.g., the aircraft data store 150) to a sub-classification. The aircraft classification system 120 outputs 570 the sub-classification”) The rationale applied to the rejection of claim 1 has been incorporated herein. As per Claim 3, Desai in view of Lin teaches the method of claim 2, wherein the one or more object detection analytics identify one or more environmental characteristics in the one or more selected candidate images, and determining whether each of the one or more segmented potential objects of interest is an object of interest comprises: selecting one or more scene classifiers from a plurality of scene classifiers based on the identified one or more environmental characteristics; and applying one or more scene analytics comprising the one or more selected scene classifiers to the one or more selected candidate images. (Lin, Paragraph [0078], “For instance, the joint embedding model 112 can determine contextual information from the textual information that provides a geospatial relationship associated with more relevant candidate image results that include content tags or image feature vectors associate with Prague, Czech Republic. For instance, the on the contextual information. For instance, the joint may retrieve candidate image results based on a specific geolocation associated with the astronomical clock tower in Prague, Czech Republic (e.g., a geotag corresponding to the location of the astronomical clock tower in Prague, Czech Republic). In addition, the joint embedding model 112 may utilized the geotag to generate more accurate search results. For instance, the joint embedding model 112 may select a subset of a subset of candidate image results (e.g., image results or images 510-528) based on a presence of the geotag” and Desai, Figure 5, S530-S570, Paragraph [0062]-[0063]) The rationale applied to the rejection of claim 2 has been incorporated herein. 
As per Claim 4, Desai in view of Lin teaches the method of claim 2, wherein determining the object type for each identified object of interest comprises: selecting one or more object type classifiers from a plurality of object type classifiers; and applying one or more object type analytics comprising the one or more selected object type classifiers to the one or more selected candidate images. (Desai, Figure 5, S530-S570, Paragraph [0062]-[0063], [0031], “For example, the aerial image may be provided (e.g., via the bus 408) to the machine learning model from the aerial image data store 130. The aircraft classification system 120 receives 530 an output for each aircraft (e.g., the output image 330 with a bounding polygon corresponding to each aircraft, a classification for each aircraft, and a plurality of keypoints for each aircraft) from the machine learning model” and “The identified known set is mapped by a database (e.g., the aircraft data store 150) to a sub-classification. The aircraft classification system 120 outputs 570 the sub-classification”) The rationale applied to the rejection of claim 2 has been incorporated herein. As per Claim 5, Desai in view of Lin teaches the method of claim 4, wherein the one or more object type classifiers are selected based on the results of applying the one or more object detection analytics. (Desai, Figure 5, S530-S570, Paragraph [0062]-[0063], [0031], “For example, the aerial image may be provided (e.g., via the bus 408) to the machine learning model from the aerial image data store 130. The aircraft classification system 120 receives 530 an output for each aircraft (e.g., the output image 330 with a bounding polygon corresponding to each aircraft, a classification for each aircraft, and a plurality of keypoints for each aircraft) from the machine learning model” and “The identified known set is mapped by a database (e.g., the aircraft data store 150) to a sub-classification. The aircraft classification system 120 outputs 570 the sub-classification”) The rationale applied to the rejection of claim 4 has been incorporated herein. As per Claim 6, Desai in view of Lin teaches the method of claim 4, wherein determining whether each identified object of interest is a specific known object of interest comprises: selecting one or more known object classifiers from a plurality of known object classifiers; and applying one or more known object analytics comprising the one or more selected known object classifiers to the one or more selected candidate images. (Desai, Figure 5, S530-S570, Paragraph [0062]-[0063], [0031], “For example, the aerial image may be provided (e.g., via the bus 408) to the machine learning model from the aerial image data store 130. The aircraft classification system 120 receives 530 an output for each aircraft (e.g., the output image 330 with a bounding polygon corresponding to each aircraft, a classification for each aircraft, and a plurality of keypoints for each aircraft) from the machine learning model” and “The identified known set is mapped by a database (e.g., the aircraft data store 150) to a sub-classification. The aircraft classification system 120 outputs 570 the sub-classification”) The rationale applied to the rejection of claim 4 has been incorporated herein. As per Claim 7, Desai in view of Lin teaches the method of claim 6, wherein the one or more known object classifiers are selected based on the results of applying the one or more object type analytics. 
(Desai, Figure 5, S530-S570, Paragraph [0062]-[0063], [0031], “For example, the aerial image may be provided (e.g., via the bus 408) to the machine learning model from the aerial image data store 130. The aircraft classification system 120 receives 530 an output for each aircraft (e.g., the output image 330 with a bounding polygon corresponding to each aircraft, a classification for each aircraft, and a plurality of keypoints for each aircraft) from the machine learning model” and “The identified known set is mapped by a database (e.g., the aircraft data store 150) to a sub-classification. The aircraft classification system 120 outputs 570 the sub-classification”) The rationale applied to the rejection of claim 6 has been incorporated herein. As per Claim 8, Desai in view of Lin teaches the method of claim 1, comprising generating assessment data based on the indicator and embedding the assessment data as metadata in one or more selected candidate images. (Lin, Paragraph [0078], “identifies a subset of the set of images (e.g., candidate image results) associated with the text input of the search query. For instance, the joint embedding model 112 can use the text input to determine textual information associated with the search query. In this example, the joint embedding model 112 obtains textual information that provides contextual information used to retrieve candidate images, including images 510-528 based on the contextual information… In addition, the joint embedding model 112 may utilized the geotag to generate more accurate search results. For instance, the joint embedding model 112 may select a subset of a subset of candidate image results (e.g., image results or images 510-528) based on a presence of the geotag” and Desai, paragraph [0022], use of metadata. ) The rationale applied to the rejection of claim 1 has been incorporated herein. As per Claim 9, Desai in view of Lin teaches the method of claim 1, comprising determining one or more status indicators about the one or more identified objects of interest and embedding the one or more status indicators as metadata that accompanies the one or more selected candidate images. (Desai, [0022], “In addition to transmitting the captured aerial image(s), the aerial imaging device 110 transmits the location information for the geographic area 115 captured in the aerial image(s) to the aircraft classification system 120 for storage in the aerial image data store 130. In some embodiments, the aerial imaging device 110 may transmit metadata regarding itself to the aerial image data store 130, such as a time of image captured (e.g., local time or universal time/GMT), a geospatial and/or geographical location of device 110”) The rationale applied to the rejection of claim 1 has been incorporated herein. As per Claim 10, Claim 10 claims a system for performing the method as claimed in Claim 1. Claim 10 additionally claims a memory; one or more processors (Desai, Paragraph [0006]) Therefore the rejection and rationale are analogous to that made in Claim 1. As per Claim 11-18, Claims 11-18 claims the same limitation as Claims 2-9 and are dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to that made in Claims 2-9. As per Claim 19, Claim 19 claims a non-transitory computer readable medium storing one or more programs(Desai, Paragraph [0006]) for executing the method of Claim 1. Therefore the rejection and rationale are analogous to that made in Claim 1. 
As per Claim 20-25, Claims 20-25 claims the same limitation as Claims 2-7 and are dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to that made in Claims 2-7.

Claims 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Desai et al. US2022/0366167 hereinafter referred to as Desai in view of Lin et al. US2021/0271707 hereinafter referred to as Lin as applied to Claims 1 and 10 respectively and further in view of Choi et al. US2021/0117774 hereinafter referred to as Choi.

As per Claim 28, Desai in view of Lin teaches the method of claim 1, comprising, prior to selecting the one or more candidate images from the plurality of digital images, Desai in view of Lin does not explicitly teach controlling, based on the indicator, at least one surveillance system to obtain the plurality of digital images. Choi teaches controlling, based on the indicator, at least one surveillance system to obtain the plurality of digital images. (Choi, Paragraph [0038], "aircraft such as unmanned aerial vehicles (UAVs) or other remotely piloted vehicles, autonomous airborne vehicles or the like, may carry cameras for capturing aerial images for inspection, survey and surveillance in which the ANN classifier may be applied to detect and classify objects depicted in the images") Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Choi into Desai in view of Lin because by utilizing the aerial device of Desai for use of surveillance as disclosed in Choi will provide additional application/uses for the aerial device of Desai. Therefore it would have been obvious to one of ordinary skill to combine the three references to obtain the invention in Claim 28.

As per Claim 29, Claim 29 claims the same limitation as Claim 28 and are dependent on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to that made in Claim 28.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MING HON whose telephone number is (571)270-5245. The examiner can normally be reached M-F 9am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell can be reached on 571-270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MING Y HON/
Primary Examiner, Art Unit 2666
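The claim 1 limitations mapped in the rejection above recite a staged analytic flow: evidence-driven candidate-image selection, segmentation, segmentation-based classifier selection, and successive object / object-type / known-object determinations. The sketch below is for orientation only; every name and data shape is hypothetical and does not represent the applicant's implementation or code from any cited reference.

```python
# Illustrative sketch of the staged flow recited in claim 1 (hypothetical names
# and data shapes; not the applicant's or any cited reference's implementation).
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Geo = Tuple[float, float]

@dataclass
class Indicator:
    geolocation: Geo   # where supporting evidence suggests an object may be located
    timestamp: str     # the particular time associated with that evidence

def identify_indicators(evidence: List[dict]) -> List[Indicator]:
    # Supporting evidence is supplemental to the image data (e.g., reports, signals).
    return [Indicator(e["geo"], e.get("time", "")) for e in evidence if "geo" in e]

def select_candidates(images: List[dict], ind: Indicator) -> List[dict]:
    # Select candidate images whose coverage includes the indicated geolocation.
    return [img for img in images if ind.geolocation in img["coverage"]]

def run_pipeline(
    images: List[dict],
    evidence: List[dict],
    segment: Callable[[dict], List[dict]],            # segmentation analytic
    classifiers: Dict[str, Callable[[dict], dict]],   # keyed by segment "kind"
) -> List[dict]:
    results = []
    for ind in identify_indicators(evidence):
        for img in select_candidates(images, ind):
            for seg in segment(img):
                # Classifier selection is driven by the segmentation output.
                classify = classifiers.get(seg["kind"])
                if classify is None:
                    continue
                verdict = classify(seg)  # object-of-interest decision, object type,
                if verdict["is_object_of_interest"]:  # and known-object match
                    results.append({**verdict,
                                    "geolocation": ind.geolocation,
                                    "time": ind.timestamp})
    return results
```

In a real system each stage would be a trained model or analytic; the point of the sketch is only the ordering of selections that the claim language recites, which is what the examiner maps onto Desai, Lin, and Choi.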

Prosecution Timeline

Mar 08, 2023
Application Filed
Apr 30, 2025
Non-Final Rejection — §103, §112
Jul 24, 2025
Applicant Interview (Telephonic)
Jul 24, 2025
Examiner Interview Summary
Oct 02, 2025
Response Filed
Jan 16, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602904
METHOD AND ELECTRONIC DEVICE FOR RECOGNIZING OBJECT BASED ON MASK UPDATES
2y 5m to grant Granted Apr 14, 2026
Patent 12567244
METHOD AND APPARATUS FOR FUSING MULTI-SENSOR DATA
2y 5m to grant Granted Mar 03, 2026
Patent 12555240
BRUCH'S MEMBRANE SEGMENTATION IN OCT VOLUME
2y 5m to grant Granted Feb 17, 2026
Patent 12555411
Facial Emotion Recognition System
2y 5m to grant Granted Feb 17, 2026
Patent 12536838
PATCH-BASED ADVERSARIAL ATTACK DETECTION AND MITIGATION
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 96% (+13.8%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 760 resolved cases by this examiner. Grant probability derived from career allow rate.
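The projection figures trace back to the examiner statistics shown earlier. A minimal sketch of the implied arithmetic follows, assuming a simple ratio for the allow rate and an additive interview adjustment; neither derivation is stated explicitly by the tool.

```python
# Assumed derivation of the headline projection figures (not documented by the tool).
granted, resolved = 624, 760
allow_rate = granted / resolved               # 624/760 = 0.821 -> shown as 82%
interview_lift = 0.138                        # +13.8% lift reported with interviews
with_interview = allow_rate + interview_lift  # 0.959 -> shown as 96%
print(f"base {allow_rate:.1%}, with interview {with_interview:.1%}")
```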
