Prosecution Insights
Last updated: April 19, 2026
Application No. 18/609,469

SEMANTIC GUIDED SCENE FLOW ESTIMATION

Status: Non-Final OA (§101, §103)
Filed: Mar 19, 2024
Examiner: STREGE, JOHN B
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average); 929 granted / 1072 resolved; +24.7% vs TC avg
Interview Lift: +14.2% (moderate), measured on resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 22 applications currently pending
Career History: 1094 total applications across all art units

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 1072 resolved cases.

Office Action

§101 §103
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 25 is rejected under 35 U.S.C. 101 because the claim is directed to a computer-readable medium, which does not exclude transitory signals. The examiner suggests amending the claim to recite a non-transitory computer-readable medium.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 12-15, and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Drews et al., DE 102022208714 (hereinafter "Drews"; machine translation relied upon for the rejection), in view of Abbeloos et al., US 2024/0144487 (hereinafter "Abbeloos").
Drews discloses a method for scene flow estimation comprising: receiving multimodal data having at least a first modality and a second modality, wherein the multimodal data represents a plurality of points in a scene (see page 3: a perception layer for aggregating scene-specific sensor data from at least two different sensor modalities; these modalities include lidar, camera, and radar); extracting a first set of features from the first modality and extracting a second set of features from the second modality (see the above-cited section: for each sensor modality, a separate feature extractor generates a scene-specific feature map); and projecting the first set of features and the second set of features into a shared latent space to generate a first latent representation of the first set of features and a second latent representation of the second set of features (see the above-cited section: a fusion layer fuses the latent features of at least two different sensor modalities into a common representation space of the scene; by combining the first and second features, it generates the first and second latent representations).

While Drews discloses fusing the latent features of the two different sensor modalities into a common representation and analyzing the scene based on the fused latent features, Drews does not explicitly detail what type of analysis is carried out in the semantic analysis of the scene. Drews does suggest that the semantic scene analysis can involve tracking (see page 6), but does not go into specifics regarding the tracking.
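For readers less familiar with the claimed architecture, the pipeline the rejection describes (a separate feature extractor per sensor modality, followed by projection into a shared latent space where the modalities can be fused) can be sketched in plain Python. All weights, dimensions, values, and names below are made up for illustration and do not come from the application or the cited references:

```python
# Toy sketch of the claimed pipeline: per-modality feature extraction,
# projection into a shared latent space, and a simple fusion step.

def extract_features(samples, weights):
    """Toy per-modality extractor: one linear layer (row-by-row dot products)."""
    return [sum(w * x for w, x in zip(row, samples)) for row in weights]

def project_to_latent(features, projection):
    """Project modality-specific features into a shared latent space."""
    return [sum(p * f for p, f in zip(row, features)) for row in projection]

# Two modalities, e.g. camera intensities and lidar ranges (made-up values).
camera = [0.2, 0.5, 0.9]
lidar = [12.0, 7.5, 3.1]

# Separate extractors per modality, each with its own (fixed, illustrative) weights.
cam_feats = extract_features(camera, [[1.0, 0.0, -1.0], [0.5, 0.5, 0.5]])
lid_feats = extract_features(lidar, [[0.1, 0.1, 0.1], [0.0, 1.0, -1.0]])

# Both modalities are projected into the same 2-D latent space, so the
# resulting vectors are comparable and can be fused (here, an element-wise mean).
shared_proj = [[1.0, 0.5], [-0.5, 1.0]]
z_cam = project_to_latent(cam_feats, shared_proj)
z_lid = project_to_latent(lid_feats, shared_proj)
fused = [(a + b) / 2 for a, b in zip(z_cam, z_lid)]
```

In a real system the extractors would be learned networks and the fusion would be more elaborate; the point of the sketch is only the shape of the data flow: two modality-specific feature sets ending up as vectors in one common representation space.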
Thus Drews does not explicitly disclose estimating a flow of the plurality of points of the scene based on one or more relationships between the first set of features and the second set of features, comprising using a model trained to learn the one or more relationships between the first set of features and the second set of features based on the first latent representation and the second latent representation.

Abbeloos discloses a scene feature correspondence model which provides an estimate of movement of a scene feature captured with a first and a second imaging device by applying a Lucas-Kanade flow algorithm (see paragraph 0041 and figure 2).

Drews and Abbeloos are analogous art because they are from the same field of endeavor of scene feature analysis. Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Drews's scene analysis of the fused latent features with Abbeloos's estimation of the flow of points between the features. The motivation would be to track the movement of features in the scene, as suggested by Drews, using the specific algorithm for doing so taught by Abbeloos.
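The rejection maps the flow-estimation limitation onto Abbeloos's use of a Lucas-Kanade flow algorithm. As a rough, self-contained illustration of what a single Lucas-Kanade step computes (the images, window size, and function names below are illustrative and taken from neither reference), the method solves a small least-squares system built from spatial and temporal image gradients over a local window:

```python
def lucas_kanade_window(I0, I1, x, y, r=1):
    """Estimate the flow (u, v) at pixel (x, y) from a (2r+1)^2 window by
    solving the 2x2 least-squares system of the Lucas-Kanade method."""
    sxx = sxy = syy = sxt = syt = 0.0
    for j in range(y - r, y + r + 1):
        for i in range(x - r, x + r + 1):
            ix = (I0[j][i + 1] - I0[j][i - 1]) / 2.0  # central-difference x gradient
            iy = (I0[j + 1][i] - I0[j - 1][i]) / 2.0  # central-difference y gradient
            it = I1[j][i] - I0[j][i]                  # temporal difference
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    # Solve [sxx sxy; sxy syy] [u; v] = -[sxt; syt] by Cramer's rule.
    det = sxx * syy - sxy * sxy  # structure-tensor determinant (must be nonzero)
    u = (sxy * syt - syy * sxt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v

# Synthetic pair: I0(x, y) = x*y, and the whole pattern shifts one pixel
# to the right, so I1(x, y) = (x - 1)*y. The true flow is (1, 0).
N = 8
I0 = [[float(i * j) for i in range(N)] for j in range(N)]
I1 = [[float((i - 1) * j) for i in range(N)] for j in range(N)]

u, v = lucas_kanade_window(I0, I1, 3, 3)  # recovers approximately (1.0, 0.0)
```

This is only the single-window, single-scale core of the method; practical implementations (including, presumably, the one Abbeloos applies) add pyramidal coarse-to-fine refinement and corner selection so the 2x2 system stays well conditioned.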
Regarding claim 2, Drews discloses defining a surface of each object depicted in the scene using a local coordinate system corresponding to each object (see page 6: the system allows for the detection of objects and object recognition as well as segmentation based on the input feature map; see the cited section from page 3 above, which reads on defining a surface in the scene using a coordinate system).

Regarding claim 3, Drews discloses wherein the first modality comprises image data and the second modality comprises LiDAR point cloud data, wherein extracting the first set of features from the first modality comprises extracting one or more semantic priors providing semantic information about the scene, and wherein extracting the second set of features from the second modality comprises extracting geometric features capturing a 3D structure and layout of the scene (see the above-cited section from page 3: fusion of lidar, cameras, and radars for long-range 3D object detection).

Regarding claim 12, Drews discloses operating an Advanced Driver Assistance System (ADAS) based on the 3D object recognition carried out (see the section from page 5), and, as discussed above, Abbeloos discloses the estimation of flow of objects.

Claims 13-15 are analyzed similarly to claims 1-3. Claim 24 is analyzed similarly to claim 12. Claim 25 is analyzed similarly to claim 1.

Allowable Subject Matter

Claims 4-11 and 16-23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the attached 892 notice of references cited.
Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN B STREGE, whose telephone number is (571) 272-7457. The examiner can normally be reached M-F 9-5 (PST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN B STREGE/ Primary Examiner, Art Unit 2669

Prosecution Timeline

Mar 19, 2024
Application Filed
Jan 20, 2026
Non-Final Rejection — §101, §103
Apr 08, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597234
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR VERIFYING CLASSIFICATION RESULT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592056
MACHINE LEARNING AND COMPUTER VISION SOLUTIONS TO SEAMLESS VEHICLE IDENTIFICATION AND ENVIRONMENTAL TRACKING THEREFOR
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12591951
SINGLE IMAGE SUPER-RESOLUTION PROCESSING METHOD AND SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586339
METHODS AND SYSTEMS FOR VIDEO PROCESSING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12555112
WEARABLE AUTHENTICATION SYSTEM AND RING DEVICE
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+14.2%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 1072 resolved cases by this examiner. Grant probability derived from career allow rate.
