Prosecution Insights
Last updated: April 19, 2026
Application No. 17/460,638

OBJECT SEGMENTATION

Final Rejection — §102, §103, §112
Filed: Aug 30, 2021
Examiner: DEPALMA, CAROLINE ELIZABETH
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Ford Global Technologies LLC
OA Round: 2 (Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (37 granted / 42 resolved) — +26.1% vs TC avg, above average
Interview Lift: +15.6% across resolved cases with interview — strong
Typical Timeline: 2y 11m avg prosecution (16 applications currently pending)
Career History: 58 total applications across all art units
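
These headline figures are straightforward ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming the page computes allow rate as granted/resolved and reports the TC comparison in percentage points (the roughly 62% baseline is inferred from the delta, not published here):

```python
# Career allow rate as shown above: granted / resolved.
granted, resolved = 37, 42
allow_rate = granted / resolved                    # 0.881 -> displayed as 88%

# The "+26.1% vs TC avg" figure read as a percentage-point delta implies a
# Tech Center baseline of roughly 62% (assumption: same allow-rate metric).
implied_tc_avg = allow_rate - 0.261                # ~0.620

print(f"career allow rate: {allow_rate:.1%}")      # 88.1%
print(f"implied TC average: {implied_tc_avg:.1%}") # 62.0%
```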

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 29.9% (-10.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 26.7% (-13.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 42 resolved cases.
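Each delta above is the examiner's rate minus a Tech Center baseline. Backing that baseline out of the published numbers is one line of arithmetic, and all four statutes imply the same 40.0% estimate, which suggests a single uniform TC figure rather than per-statute baselines. A short sketch (values copied from above; the uniform-baseline reading is an inference):

```python
# Per-statute rates and published deltas, in percent. Backing the Tech Center
# average out of each pair gives the same 40.0% in all four cases.
rates  = {"§101": 18.4, "§103": 29.9, "§102": 20.5, "§112": 26.7}
deltas = {"§101": -21.6, "§103": -10.1, "§102": -19.5, "§112": -13.3}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]   # e.g. 18.4 - (-21.6) = 40.0
    print(f"{statute}: {rate:.1f}% ({deltas[statute]:+.1f} pp vs TC avg {tc_avg:.1f}%)")
```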

Office Action

Statutes: §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-20 were pending. Claims 1, 5, 7, 8, 10, 14, 18, 20 have been amended. Claims 4, 6, 17, 19 have been canceled. New claims 21-24 have been added. Thus, claims 1-3, 5, 7-16, 18, 20-24 are currently pending, including independent claims 1, 14.

Specification

The objection to the specification is withdrawn in light of the remarks and amendments filed 01/23/2026.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3, 5, 7-16, 18, 20-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "the combined first sensor data and the second sensor data" in lines 8-9. There is insufficient antecedent basis for this limitation in the claim. Examiner suggests amending to recite "the combined camera sensor data and radar sensor data". Similar reasoning applies to claim 14. Claims 2-3, 5, 7-13, 15-16, 18, 20-24 are dependent on claims 1 or 14 and are thus similarly rejected.

Claim 1 recites the term "respective locations in an environment" in line 4 and line 7. It is unclear and confusing as to whether the applicant intends the claim to recite: (a) the "respective locations" of line 4, where this term refers to two distinct locations, one of which is captured by the camera sensor data and the other of which is captured by the radar sensor data; or (b) the "respective locations" of line 7, where this term refers to as many locations as there are pixels in the combined image, wherein each pixel corresponds to one location. The examiner interprets amended claim 1 according to option (b) above, wherein each pixel corresponds to a "respective location" in the combined image, and each pixel includes an object label and a hazard probability for that location. Examiner suggests amending lines 4-5 of amended claim 1 to instead recite "input camera sensor data and radar sensor data capturing an environment".

Additionally, amended claim 1 currently recites "an environment" in both line 5 and line 7. It is unclear and confusing as to whether the environment of line 5 is the same environment as in line 7. Examiner suggests amending the element of line 7 to recite "the environment" to explicitly refer to the same environment as is stated in line 5.

Additionally, amended claim 1 recites "the hazard probabilities" in line 13. It is unclear and confusing as to whether the element of line 13 refers to the element "hazard probabilities" of line 11 or to the element of line 7 wherein each pixel includes "a hazard probability". Examiner interprets amended claim 1 such that "the hazard probabilities" of line 13 refers to the element "hazard probabilities" of line 11.
Examiner suggests amending for improved clarity. Similar reasoning applies to claim 14. Claims 2-3, 5, 7-13, 15-16, 18, 20-24 are dependent on claims 1 or 14 and are thus similarly rejected.

Claim 21 recites the limitation "the system of claim 1". There is insufficient antecedent basis for this limitation in the claim. Examiner suggests amending to recite "the computer of claim 1". Similar reasoning applies to claim 22.

Claim Rejections - 35 USC § 102

The rejections under 35 U.S.C. 102 are withdrawn in light of the remarks and amendments filed 01/23/2026.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 11-16, 18, 21, 23 are rejected under 35 U.S.C. 103 as being unpatentable over Van Heukelom (US 20200174481 A1) in view of Laugier (C. Laugier et al., "Probabilistic Analysis of Dynamic Scenes and Collision Risks Assessment to Improve Driving Safety," IEEE Intelligent Transportation Systems Magazine, vol. 3, no. 4, pp. 4-19, Winter 2011, doi: 10.1109/MITS.2011.942779).

Regarding claim 1, Van Heukelom discloses a computer, comprising: a processor; and a memory, the memory including instructions executable by the processor (Fig. 7, [0074]-[0075] computing device including processor and memory; [0113] the processor executing instructions) to: input camera sensor data and radar sensor data for respective locations in an environment to a deep neural network that outputs a combined image ([0026], [0119] camera sensor and radar sensor data are captured of a surrounding environment; [0084], [0087] the data is input into the machine learning model (i.e. deep learning CNN) and a map including the data (i.e. combined image) is output; [0099], [0101] deep learning algorithm (e.g. convolutional neural network) as the machine learning model); determine, in the deep neural network based on the combined first sensor data and the second sensor data, a segmentation map from the combined sensor data that includes labeled segments (Fig. 7: perception component 722 within memory 718; [0099], [0101] the components of memory 718 may be implemented as a deep learning algorithm (e.g. convolutional neural network) as the machine learning model; [0077] perception component 722 performs segmentation; Fig. 2, [0042] maps of a segmented object and labeled segments), wherein the labeled segments include (a) pixels corresponding to objects in the combined sensor data, (b) hazard probabilities for respective labeled segments included in the segmentation map (Fig. 2, [0041]-[0042] the location of the object (i.e. pixels corresponding to the object) is labeled in the map in Fig. 2; [0043]-[0045] the probability distribution, including region probability 214 (likelihood of collision between the object and the vehicle, see [0038]) (i.e. hazard probability of the object), is included in the labeled segment); and output the segmentation map and the hazard probabilities ([0085] the maps including probabilities are output by the model).

Van Heukelom fails to disclose pixels that each include an object label and a hazard probability for one of the respective locations in an environment. Laugier, in a related system from the same field of continuous assessment of collision risk based on fusion of on-board sensor data (Section I.C, first paragraph), discloses pixels that each include an object label and a hazard probability for one of the respective locations in an environment (Fig. 3; Section II.B, first paragraph: stereoscopic sensor data is aligned and pixels are classified as either obstacles or road surface (i.e. data from two sensors is combined and pixels include object label information at those locations); subheading "Occupancy Grid in u-Disparity," final two paragraphs: pixels of the u-disparity map are assigned probabilities of there being obstacles (i.e. hazard probabilities) at those locations in the environment). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Laugier with Van Heukelom and include pixels that each include an object label and a hazard probability for one of the respective locations in an environment, as disclosed by Laugier, as part of a computer for outputting a segmentation map including hazard probabilities, as disclosed by Van Heukelom, for the purpose of aiding drivers and/or vehicles in safely dealing with complex traffic scenarios and improving the safety of car travel (see Laugier: Section V, first paragraph; Section I.A, last paragraph).

Regarding claim 2, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above. Van Heukelom further discloses the instructions including further instructions to operate a vehicle based on the segmentation map and the hazard probabilities (Fig. 8, [0129] at step 820 the vehicle is controlled (i.e. operated) based on the probability displayed in the map (see [0127])).

Regarding claim 3, Van Heukelom in view of Laugier discloses the computer of claim 2 as applied above. Van Heukelom further discloses the instructions including further instructions to operate the vehicle by controlling one or more of vehicle powertrain, vehicle brakes, and vehicle steering ([0129] operating the vehicle includes controlling the steering, braking, and/or acceleration).

Regarding claim 5, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above. Van Heukelom further discloses wherein the image data includes red, green, and blue pixels arranged in a rectangular array of image pixels ([0103] image data is gathered by an RGB (red, green, blue) camera).

Regarding claim 11, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above.
Van Heukelom further discloses wherein the deep neural network is trained based on ground truth segmentation maps and ground truth hazard probabilities ([0112] machine learning model is trained by ground truth information including maps of object trajectories (i.e. maps which include a segmented object and its trajectory) and probabilities; [0101] deep learning algorithm (e.g. convolutional neural network) as the machine learning model).

Regarding claim 12, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above. Van Heukelom further discloses wherein the hazard probabilities are grouped into two or more levels ([0035] the hazard probabilities are grouped into levels (e.g. colors such as white, light gray, dark gray) based on the relative probabilities).

Regarding claim 13, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above. Van Heukelom further discloses wherein the objects in the combined sensor data include pedestrians, vehicles, roadways, buildings, and foliage ([0077] objects can be a car (i.e. vehicle), pedestrian, road surface (i.e. roadway), building, tree (i.e. foliage)).

Regarding claim 14, Van Heukelom in view of Laugier discloses everything claimed as applied above (see rejection of claim 1).

Regarding claim 15, Van Heukelom in view of Laugier discloses everything claimed as applied above (see rejection of claim 2).

Regarding claim 16, Van Heukelom in view of Laugier discloses everything claimed as applied above (see rejection of claim 3).

Regarding claim 18, Van Heukelom in view of Laugier discloses everything claimed as applied above (see rejection of claim 5).

Regarding claim 21, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above. Van Heukelom further discloses wherein pixel locations included in the segmentation map correspond to same locations in the environment as pixel locations included in the camera sensor data and pixel locations included in the radar sensor data ([0091] segmentation maps (i.e. comprising pixels) are merged such that they correspond to the same locations in the surrounding environment as one another; [0084], [0119] the maps are made based on the sensor data which is combined (i.e. the sensor data must also correspond to the locations depicted in the maps)).

Regarding claim 23, Van Heukelom in view of Laugier discloses everything claimed as applied above (see rejection of claim 21).

Claims 7-8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Van Heukelom (US 20200174481 A1) in view of Laugier (C. Laugier et al., "Probabilistic Analysis of Dynamic Scenes and Collision Risks Assessment to Improve Driving Safety," IEEE Intelligent Transportation Systems Magazine, vol. 3, no. 4, pp. 4-19, Winter 2011, doi: 10.1109/MITS.2011.942779) in further view of Nobis (F. Nobis, M. Geisslinger, M. Weber, J. Betz and M. Lienkamp, "A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection," 2019 Sensor Data Fusion: Trends, Solutions, Applications (SDF), Bonn, Germany, 2019, pp. 1-7, doi: 10.1109/SDF.2019.8916629).

Regarding claim 7, Van Heukelom in view of Laugier discloses the computer of claim 6 as applied above. Van Heukelom fails to disclose wherein the radar data includes azimuth angle, distance, and radar cross-section arranged in a rectangular array of radar pixels.
Nobis, in a related system from the same field of invention of object detection using deep learning (see Abstract), discloses wherein the radar data includes azimuth angle, distance, and radar cross-section arranged in a rectangular array of radar pixels (pg. 2, III. Radar data preprocessing: radar data including azimuth angle, distance, and radar cross-section; the radar data is stored as pixel values of an image (i.e. a rectangular array of radar pixels)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Nobis with Van Heukelom and include azimuth angle, distance, and radar cross-section in radar data, as disclosed by Nobis, as part of a computer implementing instructions to create a map including object segmentation and hazard probabilities, as disclosed by Van Heukelom, for the purposes of improved reliability and accuracy of an object detection and segmentation process combining data from multiple different sensors (see Nobis: pg. 1, Introduction).

Regarding claim 8, Van Heukelom in view of Laugier discloses the computer of claim 6 as applied above. Van Heukelom fails to disclose wherein the radar data includes a plurality of radar scans acquired at different times and combined by compensating for motion. Nobis, in a related system from the same field of invention of object detection using deep learning (see Abstract), discloses wherein the radar data includes a plurality of radar scans acquired at different times and combined by compensating for motion (pg. 3, left column: "we increase the density of radar data by jointly fusing the last 13 radar cycles (around 1 s) to our data format. Ego-motion is compensated for this projection method" (i.e. 13 radar scans at different timepoints, totaling around 1 second of time, are acquired and combined, including compensating for the motion of the sensor)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Nobis with Van Heukelom and include a plurality of radar scans acquired at different times and combined by compensating for motion, as disclosed by Nobis, as part of a computer implementing instructions to create a map including object segmentation and hazard probabilities, as disclosed by Van Heukelom, for the purposes of improved reliability and accuracy of an object detection and segmentation process combining data from multiple different sensors (see Nobis: pg. 1, Introduction).

Regarding claim 20, Van Heukelom in view of Laugier and Nobis discloses everything claimed as applied above (see rejection of claim 7).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Van Heukelom (US 20200174481 A1) in view of Laugier (C. Laugier et al., "Probabilistic Analysis of Dynamic Scenes and Collision Risks Assessment to Improve Driving Safety," IEEE Intelligent Transportation Systems Magazine, vol. 3, no. 4, pp. 4-19, Winter 2011, doi: 10.1109/MITS.2011.942779) in further view of Zhu (Aichun Zhu, Sai Zhang, Yaoying Huang, Fangqiang Hu, Ran Cui, Gang Hua, "Exploring hard joints mining via hourglass-based generative adversarial network for human pose estimation," AIP Advances, vol. 9, no. 3, 035321, March 2019).

Regarding claim 9, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above.
Van Heukelom further discloses wherein the deep neural network is a convolutional neural network including convolutional layers ([0101] the deep learning algorithm may be a convolutional neural network (i.e. including convolutional layers)). Van Heukelom fails to disclose the CNN including max pooling layers and upsampling layers arranged in an hourglass configuration. Zhu, in a related system from the same field of endeavor of image processing using neural networks (see I. Introduction, A. Motivation), discloses a CNN that includes convolutional layers, max pooling layers, and upsampling layers arranged in an hourglass configuration (Fig. 2, pgs. 3-4, II. Hard Joint Mining via Hourglass-Based Generative Adversarial Network: CNN including convolution layers, max pooling, upsampling, and an hourglass structure). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Zhu with Van Heukelom and implement a CNN which includes convolution, max pooling, upsampling, and an hourglass configuration, as disclosed by Zhu, as part of a computer implementing instructions to create a map including object segmentation and hazard probabilities, as disclosed by Van Heukelom, for the purposes of improving the accuracy of the network outputs (see Zhu, pgs. 2-3, A. Motivation).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Van Heukelom (US 20200174481 A1) in view of Laugier (C. Laugier et al., "Probabilistic Analysis of Dynamic Scenes and Collision Risks Assessment to Improve Driving Safety," IEEE Intelligent Transportation Systems Magazine, vol. 3, no. 4, pp. 4-19, Winter 2011, doi: 10.1109/MITS.2011.942779) in further view of Wang-180 (US 20200082180 A1).

Regarding claim 10, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above. Van Heukelom fails to disclose wherein the first sensor data and the second sensor data are combined based on a camera calibration matrix. Wang-180, in a related system from the same field of endeavor of object detection from a vehicle sensor (see Abstract), discloses wherein the first sensor data and the second sensor data are combined based on a camera calibration matrix ([0005] multiple cameras (i.e. sensors); Fig. 21, [0071] images input to camera calibration matrices to get combined output data). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang-180 with Van Heukelom and combine first and second sensor data based on a camera calibration matrix, as disclosed by Wang-180, as part of a computer implementing instructions to create a map including object segmentation and hazard probabilities, as disclosed by Van Heukelom, for the purposes of improving the robustness and safety level of an autonomous driving system (see Wang-180, [0005]-[0006]).

Allowable Subject Matter

Claims 22 and 24 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 22, Van Heukelom in view of Laugier discloses the computer of claim 1 as applied above.
However, neither Van Heukelom nor any obvious combination of the closest known prior art discloses wherein the hazard probabilities are assigned to an object based on the size of an image segment and a radar cross-section. Similar reasoning applies to claim 24.

Response to Arguments

Applicant's arguments filed 01/23/2026 have been fully considered but they are not persuasive.

Applicant asserts on page 9 that "The detailed and informative representation of the environment disclosed by Van Heukelom does not specify what fusing or combining means, much less does Van Heukelom provide any teaching or suggestion concerning data included in pixels". Examiner disagrees. Van Heukelom states, as indicated above, that data from sensors such as camera and radar sensors can be combined ([0119]) and that data is input to a neural network ([0087]) to output a new image which includes information from the sensor data. Amended claim 1 states to "input camera sensor data and radar sensor data…to a deep neural network that outputs a combined image" but does not claim further details as to what the combining of the data from the two sensors entails. Thus, Van Heukelom discloses everything claimed with respect to the above limitation of amended claim 1. Amended claim 1 is rejected under 35 U.S.C. 103 over Van Heukelom in view of Laugier as applied above.

Applicant further asserts on page 9 that "neither Van Heukelom nor Wang, whether separately or in combination, teaches or suggests that 'the first sensor is an image sensor, the second sensor is a radar sensor and first sensor data, and second sensor data are combined into one image so that each pixel from the first sensor and the second sensor correspond to the same location in an environment'". Examiner disagrees. As stated above, Van Heukelom discloses data from multiple sensors that may be fused or combined, where the sensors may be radar sensors and camera sensors ([0026], [0119]). As further stated above, Van Heukelom discloses wherein the data are combined such that each pixel from each sensor corresponds to the same locations in the environment, in that Van Heukelom discloses creating an aligned map of the surrounding environment using data from the multiple sensors; thus the sensor data must comprise information about the same locations in the environment ([0091], [0084], [0119]).

Applicant further asserts on page 9 that "with reference to new claims 21 and 23, neither Van Heukelom nor Wang teach or suggest combining first and second sensor data into one image so that each pixel from the first sensor and the second sensor correspond to a same location in an environment". Examiner disagrees. As stated above, Van Heukelom discloses wherein the data are combined such that each pixel from each sensor corresponds to the same locations in the environment, in that Van Heukelom discloses creating an aligned map of the surrounding environment using data from the multiple sensors; thus the sensor data must comprise information about the same locations in the environment ([0091], [0084], [0119]). Thus, Van Heukelom discloses everything claimed with respect to the above limitation of claims 21 and 23. Claims 21 and 23 are rejected under 35 U.S.C. 103 over Van Heukelom in view of Laugier as applied above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Byrne (J. Byrne and C. J. Taylor, "Expansion segmentation for visual collision detection and estimation," 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 2009, pp. 875-882, doi: 10.1109/ROBOT.2009.5152487) discloses collision detection and estimation based on classifying pixels as either collision or non-collision (i.e. hazard probability, object label) based on sensor data.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE DEPALMA, whose telephone number is (571) 270-0769. The examiner can normally be reached Mon-Thurs, 9:00am-4:00pm Eastern Time. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROLINE E. DEPALMA/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Aug 30, 2021 — Application Filed
Oct 20, 2025 — Non-Final Rejection (§102, §103, §112)
Jan 14, 2026 — Interview Requested
Jan 21, 2026 — Applicant Interview (Telephonic)
Jan 21, 2026 — Examiner Interview Summary
Jan 23, 2026 — Response Filed
Feb 26, 2026 — Final Rejection (§102, §103, §112)
Apr 15, 2026 — Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602777 — APPARATUS AND METHOD FOR QUANTITATIVE ASSESSMENT OF MEDICAL IMAGES FOR DIAGNOSIS OF CHRONIC OBSTRUCTIVE PULMONARY DISEASE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12586409 — DETECTING EMOTIONAL STATE OF A USER BASED ON FACIAL APPEARANCE AND VISUAL PERCEPTION INFORMATION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586246 — SYSTEM AND METHOD FOR VICARIOUS CALIBRATION OF OPTICAL DATA FROM SATELLITE SENSORS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573046 — METHODS AND SYSTEMS FOR ANALYZING BRAIN LESIONS FOR THE DIAGNOSIS OF MULTIPLE SCLEROSIS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567226 — METHOD AND DEVICE OF ACQUIRING FEATURE INFORMATION OF DETECTED OBJECT, APPARATUS AND MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview: 99% (+15.6%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 42 resolved cases by this examiner. Grant probability derived from career allow rate.
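
The with-interview projection follows from the interview-lift figure if the lift is read as a percentage-point gap between subgroup allow rates; the subgroup counts themselves aren't shown on this page, so the weighting below is inferred. A sketch under those assumptions:

```python
# Published figures (assumed semantics: the lift is a percentage-point gap
# between the with-interview and without-interview subgroup allow rates).
overall_rate   = 37 / 42     # 88% career allow rate
with_interview = 0.99        # displayed "With Interview" projection
interview_lift = 0.156       # +15.6 percentage points

# Implied without-interview rate:
without_interview = with_interview - interview_lift           # ~83.4%

# Consistency check: treating the overall rate as a case-weighted mix of the
# two subgroups implies roughly 30% of resolved cases involved an interview.
share_interviewed = (overall_rate - without_interview) / interview_lift
print(f"implied without-interview rate: {without_interview:.1%}")   # 83.4%
print(f"implied interviewed share:      {share_interviewed:.0%}")   # ~30%
```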
