Prosecution Insights
Last updated: April 19, 2026
Application No. 18/610,000

ADAPTIVE PERCEPTION MODELS USING SENSOR IMAGING TENSOR

Non-Final OA (§102, §103)
Filed: Mar 19, 2024
Examiner: WU, PAYSUN
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 64% of resolved cases (59 granted / 92 resolved; +12.1% vs TC avg)
Interview Lift: a strong +17.2% (allowance rate for resolved cases with interview vs. without)
Avg Prosecution: 3y 0m typical timeline; 29 currently pending
Total Applications: 121 across all art units (career history)

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Tech Center comparisons are estimates • Based on career data from 92 resolved cases

Office Action

§102, §103
DETAILED ACTION

This is the first Office action on the merits and is responsive to the papers filed 03/19/2024. Claims 1-20 are currently pending and examined below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 7-10 and 16-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Smolyanskiy et al. (US 20210150230 A1; hereinafter Smolyanskiy).

Regarding claim 1, Smolyanskiy discloses: A method (Fig. 4) for processing image data and sensor data (Fig. 4: sensor data 402) in an autonomous vehicle ([0046] vehicle), the method comprising: receiving image data and sensor data generated by one or more sensors of an autonomous vehicle ([0043] the sensors 401 may include one or more sensor(s) 401 of an ego-object or ego-actor—such as LiDAR sensor(s) 1664 of the autonomous vehicle 1600 of FIGS. 16A-16D—and the sensors 401 may be used to generate sensor data 402 representing objects in the 3D environment around the ego-object; [0045] the sensor data 402 may include raw sensor data, LiDAR point cloud data, and/or reflection data processed into some other format. For example, reflection data may be combined with position and orientation data (e.g., from GNSS and IMU sensors) to form a point cloud representing detected reflections from the environment; [0045] the sensor data 402 may additionally or alternatively include sensor data from other sensors, such as RADAR data (e.g., RADAR point clouds), image data (e.g., RGB images from one or more cameras mounted around an ego-actor), and/or other types; [0050] different sensors 401 (whether the same type or a different type of sensor) may be used to generate image data (e.g., LiDAR range image, camera images, etc.) having the same (e.g., perspective) view of the environment in a common image space); encoding the sensor data into a multichannel sensor imaging tensor to generate encoded sensor data ([0047] The sensor data 402 may be accumulated 510 (which may include transforming to a single coordinate system), ego-motion-compensated 520, and/or encoded 530 into a suitable representation such as a projection image (e.g., a LiDAR range image) and/or a tensor, for example, with multiple channels storing different reflection characteristics; [0050] image data from different sensors 401 or sensor modalities may be stored in separate channels of a tensor); providing the image data and the encoded sensor data to an autonomous driving system trained to control the autonomous vehicle (Fig. 4: sensor data 402/input data 406 → machine learning model 408 → drive stack 422; [0046] the projection image (e.g., the LiDAR range image) and/or other reflection data may be stored and/or encoded into a suitable representation (e.g., the input data 406), which may serve as the input into the machine learning model(s) 408; [0082] the object detections 416 may be used by one or more layers of the autonomous driving software stack 422 (alternatively referred to herein as “drive stack 422”)); and executing, by the autonomous driving system, one or more operations for controlling the autonomous vehicle based at least in part on the image data and the encoded sensor data ([0085] The world model may be used to help inform planning component(s) 428, control component(s) 430, obstacle avoidance component(s) 432, and/or actuation component(s) 434 of the drive stack 422; [0098] the vehicle 1600 may use this information (e.g., as the edges, or rails of the paths) to navigate, plan, or otherwise perform one or more operations (e.g. lane keeping, lane changing, merging, splitting, etc.) within the environment.).

Regarding claim 7, Smolyanskiy discloses: wherein the autonomous driving system is trained to infer relationships between the image data and the sensor data ([0050] different sensors 401 (whether the same type or a different type of sensor) may be used to generate image data (e.g., LiDAR range image, camera images, etc.) having the same (e.g., perspective) view of the environment in a common image space, and image data from different sensors 401 or sensor modalities may be stored in separate channels of a tensor).

Regarding claim 8, Smolyanskiy discloses: wherein the sensor data comprises at least one of: exposure time data (implicit [0140] rolling shutters, global shutters, another type of shutter, or a combination thereof), ISO/gain data, lens aperture data (implicit from [0144] wide-view cameras 1670, long-range camera(s) 1698 (e.g., a long-view stereo camera pair) may be used for depth-based object detection; [0203] The RADAR sensor(s) 1660 may include different configurations, such as long range with narrow field of view, short range with wide field of view, short range side coverage, etc), focal length data (implicit from [0144] wide-view cameras 1670, long-range camera(s) 1698 (e.g., a long-view stereo camera pair) may be used for depth-based object detection; [0203] The RADAR sensor(s) 1660 may include different configurations, such as long range with narrow field of view, short range with wide field of view, short range side coverage, etc), focus distance data (implicit from [0144] wide-view cameras 1670, long-range camera(s) 1698 (e.g., a long-view stereo camera pair) may be used for depth-based object detection; [0203] The RADAR sensor(s) 1660 may include different configurations, such as long range with narrow field of view, short range with wide field of view, short range side coverage, etc), pixel size data and bit depth data.

Regarding claim 9, Smolyanskiy discloses: wherein the sensor data comprises video capture data ([0141] One or more of the camera(s) (e.g., all of the cameras) may record and provide image data (e.g., video) simultaneously).

Regarding claim 10, Smolyanskiy discloses: An apparatus (Fig. 4: object detection system; [0239] FIG. 17 is a block diagram of an example computing device(s) 1700 suitable for use in implementing some embodiments of the present disclosure) configured to process image data and sensor data (Fig. 4: sensor data 402) in an autonomous vehicle ([0046] vehicle), the apparatus comprising: a memory (Fig. 17: memory 1704); and one or more processors (Fig. 17: CPU 1706) implemented in circuitry and in communication with the memory ([0241] the CPU 1706 may be directly connected to the memory 1704), the one or more processors configured to: receive image data and sensor data generated by one or more sensors of an autonomous vehicle ([0043] the sensors 401 may include one or more sensor(s) 401 of an ego-object or ego-actor—such as LiDAR sensor(s) 1664 of the autonomous vehicle 1600 of FIGS. 16A-16D—and the sensors 401 may be used to generate sensor data 402 representing objects in the 3D environment around the ego-object; [0045] the sensor data 402 may include raw sensor data, LiDAR point cloud data, and/or reflection data processed into some other format. For example, reflection data may be combined with position and orientation data (e.g., from GNSS and IMU sensors) to form a point cloud representing detected reflections from the environment; [0045] the sensor data 402 may additionally or alternatively include sensor data from other sensors, such as RADAR data (e.g., RADAR point clouds), image data (e.g., RGB images from one or more cameras mounted around an ego-actor), and/or other types; [0050] different sensors 401 (whether the same type or a different type of sensor) may be used to generate image data (e.g., LiDAR range image, camera images, etc.) having the same (e.g., perspective) view of the environment in a common image space); encode the sensor data into a multichannel sensor imaging tensor to generate encoded sensor data ([0047] The sensor data 402 may be accumulated 510 (which may include transforming to a single coordinate system), ego-motion-compensated 520, and/or encoded 530 into a suitable representation such as a projection image (e.g., a LiDAR range image) and/or a tensor, for example, with multiple channels storing different reflection characteristics; [0050] image data from different sensors 401 or sensor modalities may be stored in separate channels of a tensor); provide the image data and the encoded sensor data to an autonomous driving system trained to control the autonomous vehicle (Fig. 4: sensor data 402/input data 406 → machine learning model 408 → drive stack 422; [0046] the projection image (e.g., the LiDAR range image) and/or other reflection data may be stored and/or encoded into a suitable representation (e.g., the input data 406), which may serve as the input into the machine learning model(s) 408; [0082] the object detections 416 may be used by one or more layers of the autonomous driving software stack 422 (alternatively referred to herein as “drive stack 422”)); and execute, by the autonomous driving system, one or more operations for controlling the autonomous vehicle based at least in part on the image data and the encoded sensor data ([0085] The world model may be used to help inform planning component(s) 428, control component(s) 430, obstacle avoidance component(s) 432, and/or actuation component(s) 434 of the drive stack 422; [0098] the vehicle 1600 may use this information (e.g., as the edges, or rails of the paths) to navigate, plan, or otherwise perform one or more operations (e.g. lane keeping, lane changing, merging, splitting, etc.) within the environment.).
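The anticipation theory above leans on Smolyanskiy's practice of rendering each sensor modality into a common image space and storing it in its own channel of a tensor. As a rough illustration of that encoding step only (Python/NumPy; the array shapes and channel choices are assumptions for illustration, not taken from the application or from Smolyanskiy):

    import numpy as np

    # Illustrative only: a LiDAR range image, a LiDAR intensity image, and a camera
    # image assumed to be rendered/resampled into the same H x W perspective view.
    H, W = 64, 128
    lidar_range = np.random.rand(H, W)
    lidar_intensity = np.random.rand(H, W)
    camera_gray = np.random.rand(H, W)

    # Store each modality in its own channel of a single H x W x C tensor,
    # echoing the "separate channels of a tensor" language quoted above.
    sensor_tensor = np.stack([lidar_range, lidar_intensity, camera_gray], axis=-1)
    print(sensor_tensor.shape)  # (64, 128, 3)

Each modality occupies one channel of a single H x W x C array, which is the kind of structure the claim term "multichannel sensor imaging tensor" appears to describe.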
Regarding claim 16, Smolyanskiy discloses: wherein the autonomous driving system is trained to infer relationships between the image data and the sensor data ([0050] different sensors 401 (whether the same type or a different type of sensor) may be used to generate image data (e.g., LiDAR range image, camera images, etc.) having the same (e.g., perspective) view of the environment in a common image space, and image data from different sensors 401 or sensor modalities may be stored in separate channels of a tensor).

Regarding claim 17, Smolyanskiy discloses: wherein the sensor data comprises at least one of: exposure time data (implicit [0140] rolling shutters, global shutters, another type of shutter, or a combination thereof), ISO/gain data, lens aperture data (implicit from [0144] wide-view cameras 1670, long-range camera(s) 1698 (e.g., a long-view stereo camera pair) may be used for depth-based object detection; [0203] The RADAR sensor(s) 1660 may include different configurations, such as long range with narrow field of view, short range with wide field of view, short range side coverage, etc), focal length data (implicit from [0144] wide-view cameras 1670, long-range camera(s) 1698 (e.g., a long-view stereo camera pair) may be used for depth-based object detection; [0203] The RADAR sensor(s) 1660 may include different configurations, such as long range with narrow field of view, short range with wide field of view, short range side coverage, etc), focus distance data (implicit from [0144] wide-view cameras 1670, long-range camera(s) 1698 (e.g., a long-view stereo camera pair) may be used for depth-based object detection; [0203] The RADAR sensor(s) 1660 may include different configurations, such as long range with narrow field of view, short range with wide field of view, short range side coverage, etc), pixel size data and bit depth data.

Regarding claim 18, Smolyanskiy discloses: wherein the sensor data comprises video capture data ([0141] One or more of the camera(s) (e.g., all of the cameras) may record and provide image data (e.g., video) simultaneously).

Regarding claim 19, Smolyanskiy discloses: A non-transitory computer-readable storage medium (Fig. 17: memory 1704) storing instructions that, when executed, cause one or more processors (Fig. 17: CPU 1706) of a device (Fig. 4: object detection system; [0239] FIG. 17 is a block diagram of an example computing device(s) 1700 suitable for use in implementing some embodiments of the present disclosure) configured to process image data and sensor data (Fig. 4: sensor data 402) in an autonomous vehicle ([0046] vehicle) to: receive image data and sensor data generated by one or more sensors of an autonomous vehicle ([0043] the sensors 401 may include one or more sensor(s) 401 of an ego-object or ego-actor—such as LiDAR sensor(s) 1664 of the autonomous vehicle 1600 of FIGS. 16A-16D—and the sensors 401 may be used to generate sensor data 402 representing objects in the 3D environment around the ego-object; [0045] the sensor data 402 may include raw sensor data, LiDAR point cloud data, and/or reflection data processed into some other format. For example, reflection data may be combined with position and orientation data (e.g., from GNSS and IMU sensors) to form a point cloud representing detected reflections from the environment; [0045] the sensor data 402 may additionally or alternatively include sensor data from other sensors, such as RADAR data (e.g., RADAR point clouds), image data (e.g., RGB images from one or more cameras mounted around an ego-actor), and/or other types; [0050] different sensors 401 (whether the same type or a different type of sensor) may be used to generate image data (e.g., LiDAR range image, camera images, etc.) having the same (e.g., perspective) view of the environment in a common image space); encode the sensor data into a multichannel sensor imaging tensor to generate encoded sensor data ([0047] The sensor data 402 may be accumulated 510 (which may include transforming to a single coordinate system), ego-motion-compensated 520, and/or encoded 530 into a suitable representation such as a projection image (e.g., a LiDAR range image) and/or a tensor, for example, with multiple channels storing different reflection characteristics; [0050] image data from different sensors 401 or sensor modalities may be stored in separate channels of a tensor); provide the image data and the encoded sensor data to an autonomous driving system trained to control the autonomous vehicle (Fig. 4: sensor data 402/input data 406 → machine learning model 408 → drive stack 422; [0046] the projection image (e.g., the LiDAR range image) and/or other reflection data may be stored and/or encoded into a suitable representation (e.g., the input data 406), which may serve as the input into the machine learning model(s) 408; [0082] the object detections 416 may be used by one or more layers of the autonomous driving software stack 422 (alternatively referred to herein as “drive stack 422”)); and execute, by the autonomous driving system, one or more operations for controlling the autonomous vehicle based at least in part on the image data and the encoded sensor data ([0085] The world model may be used to help inform planning component(s) 428, control component(s) 430, obstacle avoidance component(s) 432, and/or actuation component(s) 434 of the drive stack 422; [0098] the vehicle 1600 may use this information (e.g., as the edges, or rails of the paths) to navigate, plan, or otherwise perform one or more operations (e.g. lane keeping, lane changing, merging, splitting, etc.) within the environment.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3, 11-12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Smolyanskiy in view of Liu et al. (US 20210073575 A1; hereinafter Liu).
Regarding claim 2, Smolyanskiy does not specifically disclose: further comprising: compressing the encoded sensor data prior to providing the encoded sensor data to the autonomous driving system to generate compressed encoded sensor data. However, Liu discloses: further comprising: compressing the encoded sensor data prior to providing the encoded sensor data to the autonomous driving system to generate compressed encoded sensor data ([0060] video compression and encoding; [0079] without compression, G has a huge dimension (e.g., n²×n²), too large for a CNN to directly learn; [0106] configured to autonomous vehicle platforms). Smolyanskiy and Liu are considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Liu’s image processing for the advantage of video compression which results in data reduction and faster process (Liu’s [0079]).

Regarding claim 3, Smolyanskiy does not specifically disclose: wherein compressing the sensor data comprises projecting the encoded sensor data onto a lower-dimensional subspace. However, Liu discloses: wherein compressing the sensor data comprises projecting the encoded sensor data onto a lower-dimensional subspace ([0078] significantly reducing the output dimensions of G; [0079] without compression, G has a huge dimension (e.g., n²×n²), too large for a CNN to directly learn; [0106] autonomous vehicle platforms). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Liu’s image processing for the advantage of reducing output dimension which results in data reduction and faster process (Liu’s [0079]).

Regarding claim 11, Smolyanskiy does not specifically disclose: wherein the one or more processors are further configured to: compress the encoded sensor data prior to providing the encoded sensor data to the autonomous driving system to generate compressed encoded sensor data. However, Liu discloses: wherein the one or more processors are further configured to: compress the encoded sensor data prior to providing the encoded sensor data to the autonomous driving system to generate compressed encoded sensor data ([0060] video compression and encoding; [0079] without compression, G has a huge dimension (e.g., n²×n²), too large for a CNN to directly learn; [0106] configured to autonomous vehicle platforms). Smolyanskiy and Liu are considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Liu’s image processing for the advantage of video compression which results in data reduction and faster process (Liu’s [0079]).

Regarding claim 12, Smolyanskiy does not specifically disclose: wherein to compress the sensor data, the one or more processors are further configured to: project the encoded sensor data onto a lower-dimensional subspace. However, Liu discloses: wherein to compress the sensor data, the one or more processors are further configured to: project the encoded sensor data onto a lower-dimensional subspace ([0078] significantly reducing the output dimensions of G; [0079] without compression, G has a huge dimension (e.g., n²×n²), too large for a CNN to directly learn; [0106] autonomous vehicle platforms). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Liu’s image processing for the advantage of reducing output dimension which results in data reduction and faster process (Liu’s [0079]).

Regarding claim 20, Smolyanskiy does not specifically disclose: wherein the instructions further cause the one or more processors to: compress the encoded sensor data prior to providing the encoded sensor data to the autonomous driving system to generate compressed encoded sensor data. However, Liu discloses: wherein the instructions further cause the one or more processors to: compress the encoded sensor data prior to providing the encoded sensor data to the autonomous driving system to generate compressed encoded sensor data ([0060] video compression and encoding; [0079] without compression, G has a huge dimension (e.g., n²×n²), too large for a CNN to directly learn; [0106] configured to autonomous vehicle platforms). Smolyanskiy and Liu are considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Liu’s image processing for the advantage of video compression which results in data reduction and faster process (Liu’s [0079]).
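Claims 3 and 12 tie the compression to projecting the encoded sensor data onto a lower-dimensional subspace. A minimal sketch of one generic way such a projection can work (a PCA-style linear projection; the sizes, data, and method are assumptions for illustration and are not asserted to be the application's or Liu's actual technique):

    import numpy as np

    # Generic PCA-style projection onto a lower-dimensional subspace.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))            # 500 encoded sensor vectors, 64 dims each

    X_mean = X.mean(axis=0)
    X_centered = X - X_mean
    # SVD of the centered data; rows of Vt form an orthonormal basis ordered by variance.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)

    k = 8                                     # target subspace dimension
    basis = Vt[:k]                            # (k, 64) projection matrix
    X_compressed = X_centered @ basis.T       # (500, k) lower-dimensional representation
    X_approx = X_compressed @ basis + X_mean  # approximate reconstruction
    print(X_compressed.shape, X_approx.shape) # (500, 8) (500, 64)

The compressed representation keeps only the k coordinates along the retained basis directions, which is the sense in which a linear projection reduces dimensionality.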
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Smolyanskiy, in view of Liu and in view of Rodrigues et al. (US 20180089855 A1; hereinafter Rodrigues).

Regarding claim 4, Smolyanskiy as modified does not specifically disclose: wherein the compressed encoded sensor data comprises compressed Camera Response Function (CRF) data. However, Rodrigues discloses: wherein the compressed sensor data comprises compressed Camera Response Function (CRF) data ([0005] cameras are no exception with such non-linearity being used intentionally for compressing the spectral range of the sensors. The camera response function (CRF) models in part this non-linear relation by describing how (physically meaningful) incoming light is mapped to quantized image brightness values). Rodrigues is analogous to the claimed invention because it pertains to the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing as currently modified to further incorporate Rodrigues’ image processing for the advantage of learning non-linear relationship using CRF which results in contribution in color standardization across cameras (Rodrigues’ [0005]).

Regarding claim 13, Smolyanskiy as modified does not specifically disclose: wherein the compressed encoded sensor data comprises compressed Camera Response Function (CRF) data. However, Rodrigues discloses: wherein the compressed sensor data comprises compressed Camera Response Function (CRF) data ([0005] cameras are no exception with such non-linearity being used intentionally for compressing the spectral range of the sensors. The camera response function (CRF) models in part this non-linear relation by describing how (physically meaningful) incoming light is mapped to quantized image brightness values). Rodrigues is analogous to the claimed invention because it pertains to the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing as currently modified to further incorporate Rodrigues’ image processing for the advantage of learning non-linear relationship using CRF which results in contribution in color standardization across cameras (Rodrigues’ [0005]).
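Rodrigues' cited paragraph [0005] characterizes the camera response function (CRF) as a deliberately non-linear mapping from incoming light to quantized image brightness values. A minimal sketch of that idea, assuming a simple gamma-style CRF rather than whatever model Rodrigues or the application actually uses:

    import numpy as np

    # Assumed gamma-style CRF for illustration: scene radiance in [0, 1] is
    # compressed non-linearly, quantized to 8-bit brightness, and approximately
    # inverted back to linear radiance.
    GAMMA = 1.0 / 2.2

    def apply_crf(radiance):
        encoded = np.power(np.clip(radiance, 0.0, 1.0), GAMMA)
        return np.round(255.0 * encoded).astype(np.uint8)

    def invert_crf(brightness):
        return np.power(brightness.astype(np.float64) / 255.0, 1.0 / GAMMA)

    radiance = np.linspace(0.0, 1.0, 5)
    digital = apply_crf(radiance)          # quantized, non-linearly spaced brightness values
    recovered = invert_crf(digital)        # approximately the original radiance
    print(digital, recovered)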
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Smolyanskiy in view of Rosewarne et al. (US 20170127090 A1; hereinafter Rosewarne).

Regarding claim 5, Smolyanskiy does not specifically disclose: wherein encoding the sensor data comprises encoding a plurality of distortion coefficients into separate channels within the multichannel sensor imaging tensor. However, Rosewarne discloses: wherein encoding the sensor data comprises encoding a plurality of distortion coefficients into separate channels within the multichannel sensor imaging tensor ([0128] the rate-distortion criterion considers the rate and distortion for the luma colour channel and thus the encoding decision is made based on characteristics of the luma channel). Smolyanskiy and Rosewarne are considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Rosewarne’s image processing for the advantage of encoding channel distortion which results in factoring in channel characteristics and less distorted video data.

Regarding claim 14, Smolyanskiy does not specifically disclose: wherein to encode the sensor data, the one or more processors are further configured to: encode a plurality of distortion coefficients into separate channels within the multichannel sensor imaging tensor. However, Rosewarne discloses: wherein to encode the sensor data, the one or more processors are further configured to: encode a plurality of distortion coefficients into separate channels within the multichannel sensor imaging tensor ([0128] the rate-distortion criterion considers the rate and distortion for the luma colour channel and thus the encoding decision is made based on characteristics of the luma channel). Smolyanskiy and Rosewarne are considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Rosewarne’s image processing for the advantage of encoding channel distortion which results in factoring in channel characteristics and less distorted video data.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Smolyanskiy in view of Yajko (US 20150262383 A1).

Regarding claim 6, Smolyanskiy does not specifically disclose: wherein the sensor data comprises spectral response data. However, Yajko discloses: wherein the sensor data comprises spectral response data ([0039] A spectral response with respect to a light source is determined from the enhanced image). Smolyanskiy and Yajko are considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Yajko’s image processing for the advantage of determining spectral response from images which results in better discernment of color variation in images when the naked eye is limited (Yajko’s [0003]).

Regarding claim 15, Smolyanskiy does not specifically disclose: wherein the sensor data comprises spectral response data. However, Yajko discloses: wherein the sensor data comprises spectral response data ([0039] A spectral response with respect to a light source is determined from the enhanced image). Smolyanskiy and Yajko are considered to be analogous to the claimed invention because they are in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Smolyanskiy’s image processing to further incorporate Yajko’s image processing for the advantage of determining spectral response from images which results in better discernment of color variation in images when the naked eye is limited (Yajko’s [0003]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAYSUN WU whose telephone number is (571)272-1528. The examiner can normally be reached Monday-Friday 8AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry, can be reached on (571)272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAYSUN WU/
Examiner, Art Unit 3665

/DONALD J WALLACE/
Primary Examiner, Art Unit 3665

Prosecution Timeline

Mar 19, 2024: Application Filed
Nov 15, 2025: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12576881
METHOD FOR POLYNOMIAL BASED PREDICTIONS OF EGO VEHICLES AND ADVERSARIAL AGENTS
2y 5m to grant • Granted Mar 17, 2026
Patent 12559129
CAPTURING AND SIMULATING RADAR DATA FOR AUTONOMOUS DRIVING SYSTEMS
2y 5m to grant • Granted Feb 24, 2026
Patent 12545288
DEPOT BEHAVIORS FOR AUTONOMOUS VEHICLES
2y 5m to grant • Granted Feb 10, 2026
Patent 12529574
DISPLAY CONTROL DEVICE AND DISPLAY CONTROL METHOD
2y 5m to grant • Granted Jan 20, 2026
Patent 12509119
SERVICE AREA MAPS FOR AUTONOMOUS VEHICLES
2y 5m to grant • Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 81% (+17.2%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 92 resolved cases by this examiner. Grant probability derived from career allow rate.
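As a sanity check on how those figures fit together (the tool's exact methodology is not stated, so this is an assumed, simplified derivation): 59 granted of 92 resolved cases gives a career allow rate of about 64%, which is used directly as the baseline grant probability, and adding the +17.2-point interview lift yields roughly 81% with an interview.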
