Prosecution Insights
Last updated: April 19, 2026
Application No. 18/583,473

SYNCHRONIZING CAMERA, LIDAR AND RADAR FOR OBJECT DETECTION USING RADAR-GUIDED SCENE FLOW ESTIMATION AND ADAPTIVE ATTENTION

Non-Final OA (§103, §112)
Filed: Feb 21, 2024
Examiner: HENSON, BRANDON JAMES
Art Unit: 3648
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 69%, above average (38 granted / 55 resolved; +17.1% vs TC avg)
Interview Lift: +27.2% in resolved cases with interview (strong)
Typical Timeline: 3y 3m average prosecution; 61 applications currently pending
Career History: 116 total applications across all art units

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 53.1% (+13.1% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 21.1% (-18.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 55 resolved cases.
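
One way to read the "vs TC avg" deltas: if each delta is the examiner's rate minus the Tech Center average for that statute (an assumption; the dashboard does not define the metric), the implied TC baseline can be backed out from the table, and it comes to roughly 40% for every statute listed. A minimal sketch of that arithmetic, using only the figures shown above:

```python
# Assumption: "vs TC avg" = examiner rate minus Tech Center average,
# in percentage points, so the TC average can be recovered by subtraction.
examiner = {"§101": 3.4, "§103": 53.1, "§102": 21.6, "§112": 21.1}   # %
delta = {"§101": -36.6, "§103": 13.1, "§102": -18.4, "§112": -18.9}  # points

for statute, rate in examiner.items():
    tc_avg = rate - delta[statute]
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
# Each statute backs out to about a 40.0% Tech Center average estimate.
```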

Office Action

§103 §112
DETAILED ACTION

Status of Claims

Claims 1-24 are currently pending and have been examined in this application. This NON-FINAL communication is the first action on the merits. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims

The claims are objected to because of the following informalities: [Claim 6] Typographical error, “perform encoding [[to]] on the camera data to generate camera feature data;”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 10 recites “range azimuth information”. It is unclear what this information is and the term does not seem to have support in the instant specification. The examiner has interpreted the limitation as “range information and azimuth information”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9, 12, 14-15, 17-24 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi (US 20220066456) in view of Moloney (US 20200158514).

Regarding Claims 1, 23-24, Ebrahimi teaches the following limitations:

A device comprising: (Ebrahimi - [0249] In embodiments, the MCU reads data from sensors such as obstacle sensors or IR transmitters and receivers on the robot or a dock or a remote device,)

a processing system that includes processor circuitry and memory circuitry that stores code and is coupled with the processor circuitry, the processing system configured to cause the device to: (Ebrahimi - [1508] The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine readable medium.)

(Claim 23) A method comprising: (Ebrahimi - [0006] Some aspects include a method for operating a robot, including: capturing, by at least one image sensor disposed on the robot, images of a workspace;)

(Claim 24) A non-transitory computer-readable medium stores instructions that, when executed by a processor, cause the processor to perform operations comprising: (Ebrahimi - [1508])

receive point cloud data for two or more frames from a radar device; (Ebrahimi - [0342] Examples of different types of data that may be bundled include any of GPS data, IMU data, SFM data, laser range finder data, depth data, optical tracker data, odometer data, radar data, sonar data, etc. Bundling data is an iterative process that may be implemented locally or globally. For SFM, the process solves a non-linear least squares problem by determining a vector x that minimizes a cost function, x = argmin∥y−F(x)∥². The vector x may be multidimensional. [0362] In some embodiments, the processor generates a 3D model of the environment using captured sensor data. In some embodiments, the process of generating a 3D model based on point cloud data captured with a LIDAR or other device (e.g., depth camera) comprises obtaining a point cloud, optimization, triangulation, and optimization (decimation). For instance, in a first step of the process, the cloud is optimized and duplicate or unwanted points are removed. Then, in a second step, a triangulated 3D model is generated by connecting each nearby three points to form a face. These faces form a high poly count model. In a third step, the model is optimized for easier storing, viewing, and further manipulation. Optimizing the model may be done by combining small faces (i.e., triangles) to larger faces using a given variation threshold. [0422] For example, measurements collected by a distance sensor may indicate a change in distance measurement to a perimeter or obstacle, while measurements by a camera may indicate a change between two captured frames. While the two types of sensing differ, they may both be used to correct one another for movement.)

generate scene flow parameter data based on the point cloud data; (Ebrahimi - [0362])

generate voxel position adjustment data based on the scene flow parameter data; (Ebrahimi - [0362], [0248] Each set of features corresponding to the various objects may be tracked as they evolve with time using iterative closest point algorithm or other algorithms. In embodiments, depth awareness creates more value and accuracy to for the system as a whole. Prior to elaborating further on the techniques and methods used in associating feature maps with geometric coordinates, the system of the robot is described. [0346] For example, a 3D LIDAR and a camera or a depth camera and a camera, the data of which may be combined. For instance, a depth measurement may be associated with a pixel of an image captured by a camera. Ebrahimi does not explicitly teach “voxel”.)

generate feature concatenation information associated with two or more sensors based on the voxel position adjustment data and feature information associated with the two or more sensors; (Ebrahimi - [0346], [0328] In some embodiments, a displacement matrix measured by an IMU or odometer may be used as a kernel and convolved with an input image to produce a feature map comprising depth values that are expected for certain points. For example, a distance to corner may be determined, which may be used in localizing the robot. Although the point range finding sensor has fixed relations with the camera, pixel x₁′, y₁′ is not necessarily the same as pixel x₁, y₁. With iteration of t, to t′, to t″ and finally to tⁿ we have n number of states. Ebrahimi does not explicitly teach “voxel concatenation”.)

perform feature detection and tracking based on the feature concatenation information to generate tracking information for one or more objects; and (Ebrahimi - [0248], [0346], [0362])

output the tracking information. (Ebrahimi - [0248], [0342], [0387] the processor uses the function to assign a vector of features to class ωᵢ if ƒᵢ(x) > ƒⱼ(x) for all j≠i. In one example the complex function ƒ(x) receives inputs x₁, x₂, …, xₙ of features and outputs the classes ωᵢ, ωⱼ, ωₖ, ωₗ, … to which the vectors of features are assigned.)

Ebrahimi does not explicitly teach the following limitations, however Moloney, in the same field of endeavor, teaches: voxel concatenation (Moloney – [0083] For instance, a synthetic voxel may be generated using a 2D buffer of lists, where each entry of the list stores the depth information of a polygon rendered at that pixel. For instance, a model can be rendered using an orthographic viewpoint (e.g., top-down). For example, every (x, y) provided in an example buffer may represent the column at (x, y) in a corresponding voxel volume (e.g., from (x,y,0) to (x,y,4095)). Each column may then be rendered from the information as 3D scanlines using the information in each list. [0180] The values 5160a-b may be concatenated to form a 6-bit address 5215, which may be provided to a lookup table circuit 5205. As shown in table 5210, each of the possible 6-bit combinations (built from values 5160a-c) may correspond to one of 64 voxels (and corresponding entry bits) at a particular level of detail. This relatively small lookup table 5205 may be used to quickly return a 64-bit bitmask 5220 that corresponds with the voxel identified (e.g., using the example circuitry of FIG. 51) as being intersected by (or containing a segment of) the particular ray.)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the pixel and depth information of Ebrahimi with the voxel concatenation of Moloney in order to render 3D information (Moloney – [0083]).

Regarding Claim 2, Ebrahimi further teaches: wherein the processing system configured to cause the device to output the tracking information includes to: (Ebrahimi – [0248], [0387], [1508]) provide the tracking information to an autonomous driving system; or (Ebrahimi - [1274] In embodiments, collective artificial intelligence technology (CAIT) may be applied to various types of robots, such as robot vacuums, personal passenger pods with or without a chassis, and an autonomous car.) transmit a transmission based on the tracking information. (Ebrahimi - [0248], [0006] the map is transmitted to an application of a communication device previously paired with the robot; and the application is configured to display the map on a screen of the communication device.)

Regarding Claim 3, Ebrahimi further teaches: wherein the tracking information accounts for motion of the device and for motion of the one or more objects, and (Ebrahimi – [0248], [0422]) wherein the voxel position adjustment data corresponds to object motion correction information for the one or more objects. (Ebrahimi – [0346], [0422])

Regarding Claim 4, Ebrahimi further teaches: wherein the two or more sensors include a camera device and a LiDAR device. (Ebrahimi – [0346], [0362])

Regarding Claim 5, Ebrahimi further teaches: wherein the two or more sensors include a camera device, a LiDAR device, and the radar device. (Ebrahimi - [0342], [0346], [0362])

Regarding Claim 6, Ebrahimi further teaches: wherein the processing system is further configured to cause the device to: receive camera data from a camera device; (Ebrahimi - [0238] a plurality of sensors… camera… The processor may, for example, receive and process data from internal or external sensors, execute commands based on data received, control motors such as wheel motors, map the environment, localize the robot, determine division of the environment into zones, and determine movement paths.) perform encoding to on the camera data generate camera feature data; (Ebrahimi – [0346], [0310] Some embodiments may merge various types of data into a data structure, clean the data, extract the converged data, encode the data to automatic encoders,) receive LiDAR data from a LiDAR device; (Ebrahimi – [0346]) perform encoding on the LiDAR data to generate LiDAR feature data; and (Ebrahimi – [0310], [0346]) perform encoding on the point cloud data from the radar device to generate radar feature data, and (Ebrahimi – [0310], [0346], [0362]) wherein the processing system configured to cause the device to generate the feature concatenation information includes to: (Ebrahimi – [0238], [0248], [0346]) perform feature concatenation with spatio-temporal condition attention to generate the feature concatenation information based on the camera feature data, the LiDAR feature data, the radar feature data, and the voxel position adjustment data. (Ebrahimi – [0238], [0248], [0346], [0362], [0422])

Regarding Claim 7, Ebrahimi further teaches: wherein the processing system configured to cause the device to perform the feature concatenation with the spatio-temporal condition attention to generate the feature concatenation information includes to: (Ebrahimi – [0238], [0248], [0346], [0362], [0422]) adjust a voxel position of voxels in one or more of the camera feature data, the LiDAR feature data, and the radar feature data based on the voxel position adjustment data to account for motion of objects in the scene flow, (Ebrahimi – [0248], [0346], [0362], [0422]) wherein the feature concatenation information is generated based on the adjusted voxel position of the voxels. (Ebrahimi – [0248], [0346], [0362], [0422])

Regarding Claim 8, Ebrahimi further teaches: wherein the processing system configured to cause the device to perform the feature concatenation with the spatio-temporal condition attention to generate the feature concatenation information further includes to: (Ebrahimi – [0238], [0248], [0346], [0362], [0422]) identify corresponding voxels for a particular object in two or more of the camera feature data, the LiDAR feature data, and the radar feature data based on the adjusted position of the voxels; and (Ebrahimi – [0248], [0346], [0362], [0422]) associate the identified corresponding voxels for the particular object in two or more of the camera feature data, the LiDAR feature data, and the radar feature data to combine identified corresponding voxels from different timestamps into a single timestamp, (Ebrahimi – [0248], [0346], [0362], [0422], [0317] In another example of a neural network, images are captured from cameras positioned at different locations on the robot and are provided to a first layer (layer 1) of the network, in addition to data from other sensors such as IMU, odometry, timestamp etc.) wherein the feature concatenation information is generated based on the combined voxels. (Ebrahimi – [0346])

Regarding Claim 9, Ebrahimi further teaches: wherein the processing system configured to cause the device to adjust the feature concatenation information based on the voxel position adjustment data includes to: (Ebrahimi – [0238], [0248], [0346], [0362], [0422]) adjust a three dimensional position of one or more voxels of the feature concatenation information based on the voxel position adjustment data. (Ebrahimi – [0248], [0346], [0362], [0422])

Regarding Claim 12, Ebrahimi further teaches: wherein the processing system configured to cause the device to generate the voxel position adjustment data based on the scene flow parameter data includes to: (Ebrahimi – [0238], [0248], [0346], [0362], [0422]) generate scene flow parameters based on the point cloud data using a Radar Oriented Flow Estimation (ROFE) module and (Ebrahimi - [0342], [0362], [0422], [0590] In some embodiments, several components may exist separately, such as an image sensor, imaging module, depth module, depth sensor, etc. and data from the different the components may be combined in an appropriate data structure.) a Static Flow Refinement (SFR) module; and (Ebrahimi - [0362], [0422], [0590], [0495] In some embodiments, the processor may identify static or dynamic obstacles within a captured image. In some embodiments, the processor may use different characteristics to identify a static or dynamic obstacle.) generate the voxel position adjustment data based on the scene flow parameters. (Ebrahimi - [0346], [0362], [0422])

Regarding Claim 14, Ebrahimi further teaches: wherein the ROFE module is configured to estimate a course scene flow based on voxel information from the point cloud and generates initial radar scene flow estimate information, (Ebrahimi - [0342], [0362], [0422], [0590]) wherein the SFR module is configured to refine the course scene flow to generate a final scene flow based on radial relative velocity (RRV) information from the point cloud and generates rigid radar scene flow information, and (Ebrahimi - [0362], [0422], [0495], [0590]) wherein the scene flow parameter data is generated based on the initial radar scene flow estimate information and the rigid radar scene flow information. (Ebrahimi - [0362], [0422], [0495])

Regarding Claim 15, Ebrahimi further teaches: wherein the ROFE module is configured to: (Ebrahimi - [0342], [0362], [0590]) generate local and global features from the point cloud data; (Ebrahimi – [0342], [0362]) generate correlated feature information based on the local and global features; (Ebrahimi – [0342], [0362]) generate grouped feature information based on the local and global features and the correlated feature information; and (Ebrahimi – [0342], [0362]) generate the initial radar scene flow estimate information based on the grouped feature information, (Ebrahimi – [0342], [0362]) wherein the scene flow parameter data is generated based on the initial radar scene flow estimate information. (Ebrahimi – [0342], [0362])

Regarding Claim 17, Ebrahimi further teaches: wherein the processing system configured to cause the device to adjust the feature concatenation information based on the voxel position adjustment data includes to: (Ebrahimi – [0248], [0346], [0362], [0422]) adjust a three dimensional position of one or more voxels of the feature concatenation information based on the voxel position adjustment data. (Ebrahimi – [0248], [0346], [0362], [0422])

Regarding Claim 18, Ebrahimi further teaches: wherein the processing system configured to cause the device to perform feature detection and tracking based on the feature concatenation information to generate the tracking information includes to: (Ebrahimi – [0248], [0346], [0362], [0422]) perform feature decoding on adjusted voxel positions of the feature concatenation information to determine decoded feature data; (Ebrahimi - [0362], [0422], [0590], [1362]) identify features based on the decoded feature data; and (Ebrahimi - [0362], [0422], [0590], [1362]) track the identified features based on the decoded feature data over the two or more frames. (Ebrahimi – [0248], [0362], [0422], [0590], [1362])

Regarding Claim 19, Ebrahimi further teaches: wherein the processing system configured to cause the device to generate the feature concatenation information includes to: (Ebrahimi – [0248], [0346], [0362], [0422]) perform feature concatenation with spatio-temporal condition attention to generate the feature concatenation information based on camera feature data, LiDAR feature data, radar feature data, and the voxel position adjustment data. (Ebrahimi – [0238], [0248], [0346], [0362], [0422])

Regarding Claim 20, Ebrahimi further teaches: wherein the processing system configured to cause the device to perform the feature concatenation with the spatio-temporal condition attention to generate the feature concatenation information includes to: (Ebrahimi – [0238], [0248], [0346], [0362], [0422]) combine features of the camera feature data, the LiDAR feature data, and the radar feature data, based on the voxel position adjustment data to generate fused features of the feature concatenation information. (Ebrahimi – [0238], [0248], [0346], [0362], [0422])

Regarding Claim 21, Ebrahimi further teaches: wherein the fused features of the feature concatenation information are generated based on spatial information from LiDAR feature data, semantic information from the camera feature data, and motion information from the scene flow parameter data. (Ebrahimi – [0238], [0248], [0346], [0362], [0422])

Regarding Claim 22, Ebrahimi further teaches: wherein the features of the camera feature data, the LiDAR feature data, and the radar feature data are combined and refined over multiple frames to generate the fused features, (Ebrahimi – [0238], [0248], [0346], [0362], [0422]) the multiple frames including the two or more frames. (Ebrahimi – [0362], [0422])

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi (US 20220066456) in view of Moloney (US 20200158514) as applied to Claims 1, 23-24 above, and further in view of Amir (US 20220179062).

Regarding Claim 10, Ebrahimi further teaches: wherein the point cloud data corresponds to a radar point cloud with range doppler information and range azimuth information. (Ebrahimi – [0362], [0422], [0446] A magnetic map may be created in advance with magnetic field magnitudes, magnetic field inclination, and magnetic field azimuth with horizontal and vertical components. Ebrahimi does not explicitly teach “Doppler information”.)

Ebrahimi does not explicitly teach the following limitations, however Liu, in the same field of endeavor, teaches: Doppler information (Amir – [0048] Each point in the point cloud may be defined by a 3-dimensional spatial position from which a radar reflection was received, and defining a peak reflection value, and a doppler value from that spatial position. Thus, a measurement received from a radar-reflective object may be defined by a single point, or a cluster of points from different positions on the object, depending on its size.)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the point cloud measurements of Ebrahimi with the Doppler measurements of Amir in order to further define a point cloud measurement by deriving radial velocity information from a Doppler signal (Amir – [0028], [0048]).

Regarding Claim 11, Ebrahimi further teaches: wherein each point in the point cloud data contains three-dimensional (3D) positional information and 3D feature information, (Ebrahimi – [0362]) wherein 3D feature information includes radial relative velocity (RRV) information, (Ebrahimi – [0362], [0969] In some embodiments, the Galilean Group transformation is three dimensional and there are ten parameters used in relating vectors X and X′. There are three rotation angles, three space displacements, three velocity components and one time component, with the three rotation matrices)

Ebrahimi does not explicitly teach the following limitations, however Amir, in the same field of endeavor, teaches: radar cross section (RCS) information, (Amir – [0034] Each radar measurement magnitude, is in some embodiments, a radar cross section magnitude, which compensates for distance between the radar device and the location of reflection. In other embodiments, each radar measurement magnitude is the proportional to, or otherwise correlated with, the absolute value of the received radar reflection (i.e. without distance compensation). For example, each radar measurement magnitude may be correlated with an amount of reflected radar energy,) power measurement information, or a combination thereof. (Amir – [0034])

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the point cloud measurements of Ebrahimi with the RCS and reflected energy measurements of Amir in order to obtain Doppler information (Amir – [0034], [0048]).

Claims 13, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi (US 20220066456) in view of Moloney (US 20200158514) as applied to Claims 1, 23-24 above, and further in view of Liu (CN 115542340).

Regarding Claim 13, Ebrahimi further teaches: wherein the ROFE module includes a multi-scale encoder, (Ebrahimi - [0342], [0362], [0422], [0590]) a cost volume layer, and (Ebrahimi - [0342], [0590]) a flow decoder, and (Ebrahimi - [0362], [0422], [0590], [1362] In some embodiments, transmitted signals may be modulated over the airwaves and the receiving end may decode this chip sequence back to the originally transmitted data.) wherein the SFR module includes a static mask generator and a Kabsch refiner. (Ebrahimi - [0362], [0422], [0495], [0590] Ebrahimi does not explicitly teach “static mask or Kabsch algorithm”.)

Ebrahimi does not explicitly teach the following limitations, however Liu, in the same field of endeavor, teaches: static mask generator and a Kabsch refiner (Liu – [Claim 1] extracting static point clouds in a scene by a solid laser radar background filtering algorithm, extracting low-position point clouds lower than a certain height in the static point clouds, obtaining ground points in the low-position point clouds by a point cloud ground extraction algorithm, rotating the static point clouds to a state that a ground normal vector is a z-axis direction vector according to extracted ground parameters, and filtering the ground points to obtain a detection space;… calculating an optimal rotation displacement matrix by adopting a kabsch algorithm to obtain the pose of the source point cloud in the target point cloud;)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the algorithm and modules of Ebrahimi with the static point cloud filtering and the Kabsch algorithm of Liu in order to obtain the pose of the source point cloud in the target point cloud (Liu – [Claim 1]).

Regarding Claim 16, Ebrahimi further teaches: wherein the SFR module is configured to: (Ebrahimi - [0362], [0422], [0495], [0590]) generate, by the static mask generator, a static mask; (Ebrahimi - [0362], [0422], [0495], [0590]) determine static points of point clouds for the two or more frames based on the static mask and the point cloud data; and (Ebrahimi - [0362], [0422], [0495]) generate, by the Kabsch refiner, a transformation matrix based on the static points and on a differentiable Kabsch algorithm; and (Ebrahimi - [0362], [0422], [0246] In some embodiments, the distances and geometries between components of the robot may be stored in one or more transformation matrices. Ebrahimi does not explicitly teach “Kabsch algorithm”.) derive the rigid radar scene flow information from the transformation matrix, (Ebrahimi – [0246], [0362], [0422], [0495]) wherein the scene flow parameter data is generated based on the rigid radar scene flow information. (Ebrahimi – [0246], [0362], [0422], [0495])

Ebrahimi does not explicitly teach the following limitations, however Liu, in the same field of endeavor, teaches: static mask generator and a Kabsch refiner (Liu – [Claim 1] extracting static point clouds in a scene by a solid laser radar background filtering algorithm, extracting low-position point clouds lower than a certain height in the static point clouds, obtaining ground points in the low-position point clouds by a point cloud ground extraction algorithm, rotating the static point clouds to a state that a ground normal vector is a z-axis direction vector according to extracted ground parameters, and filtering the ground points to obtain a detection space;… calculating an optimal rotation displacement matrix by adopting a kabsch algorithm to obtain the pose of the source point cloud in the target point cloud;)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the algorithm and modules of Ebrahimi with the static point cloud filtering and the Kabsch algorithm of Liu in order to obtain the pose of the source point cloud in the target point cloud (Liu – [Claim 1]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure or directed to the state of art is listed on the enclosed PTO-892. The following is a brief description for relevant prior art that was cited but not applied: Shomron (US 20240152725) describes apparatuses, systems, and techniques to perform matrix computations associated with computing output of a neural network. Chandler (US 20220277515) describes a computer-implemented method of modelling a common structure component, the method comprising, in a modelling computer system: receiving a plurality of captured frames, each frame comprising a set of 3D structure points, in which at least a portion of a common structure component is captured.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON JAMES HENSON whose telephone number is (703)756-1841. The examiner can normally be reached Monday-Friday 9:00 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Hodge, can be reached at 571-272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRANDON JAMES HENSON/Examiner, Art Unit 3645 /ROBERT W HODGE/Supervisory Patent Examiner, Art Unit 3645
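
The claim 13 and 16 rejections hinge on the claimed SFR module's static mask generator and "Kabsch refiner," which the examiner maps to Liu's use of the Kabsch algorithm to compute an optimal rotation and translation between two point clouds. For context, below is a minimal NumPy sketch of the classical Kabsch alignment step applied to corresponding static radar points from two frames. The function name, array shapes, and demo values are illustrative assumptions, not taken from the application or the cited references, and the claims recite a differentiable variant that this sketch does not implement.

```python
import numpy as np

def kabsch_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Estimate the rotation R and translation t that best map src -> dst
    (both (N, 3) arrays of corresponding static points), minimizing
    sum ||R @ src_i + t - dst_i||^2 via the Kabsch algorithm."""
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_centroid).T @ (dst - dst_centroid)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t

if __name__ == "__main__":
    # Build a known rotation about z and a translation, then recover them.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(50, 3))
    theta = 0.3
    true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    true_t = np.array([1.0, -2.0, 0.5])
    moved = pts @ true_R.T + true_t
    R, t = kabsch_rigid_transform(pts, moved)
    # Per-point displacement induced by the rigid transform; in the claim 16
    # language this is the rigid scene flow derived from the transformation.
    rigid_flow = pts @ R.T + t - pts
    print(np.allclose(R, true_R), np.allclose(t, true_t))
```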

Prosecution Timeline

Feb 21, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601830
METHOD AND APPARATUS FOR OBTAINING LOCATION INFORMATION USING RANGING BLOCK AND RANGING ROUNDS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12584996
HARDWARE GENERATION OF 3D DMA CONFIGURATIONS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12566242
RADIO FREQUENCY APPARATUS AND METHOD FOR ASSEMBLING RADIO FREQUENCY APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566258
SYSTEM AND METHOD OF FULLY POLARIMETRIC PULSED RADAR
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12560700
METHOD AND DEVICE FOR DETERMINING AT LEAST ONE ARTICULATION ANGLE OF A VEHICLE COMBINATION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 96% (+27.2%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 55 resolved cases by this examiner. Grant probability derived from career allow rate.
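
The 96% "with interview" projection is consistent with simply adding the +27.2 percentage-point interview lift to the 69% grant probability and rounding, capped at 100%. That additive model is an assumption inferred from the displayed figures, not a documented formula; a minimal sketch:

```python
# Assumption: the "with interview" probability is the career allow rate
# plus the interview lift, in percentage points, capped at 100.
base_allow_rate = 69.0   # % career allow rate (grant probability)
interview_lift = 27.2    # points, from resolved cases with interview

with_interview = min(base_allow_rate + interview_lift, 100.0)
print(f"{with_interview:.1f}%")  # 96.2%, displayed as 96%
```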
