Prosecution Insights
Last updated: April 19, 2026
Application No. 18/050,323

OBJECT RECOGNITION DEVICE

Non-Final OA (§103, §112)
Filed: Oct 27, 2022
Examiner: CLOUSER, BENJAMIN WADE
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: DENSO CORPORATION
OA Round: 1 (Non-Final)

Grant Probability: 36% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 36% (5 granted / 14 resolved; -16.3% vs TC avg)
Interview Lift: +75.0% for resolved cases with an interview vs. without
Typical Timeline: 4y 0m average prosecution; 39 applications currently pending
Career History: 53 total applications across all art units
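The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how such statistics can be derived, assuming a per-case record of outcome and interview status (the field names and the sample history below are invented; the sample matches only the 5-granted / 14-resolved figure, not the +75% lift):

```python
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool      # resolved as a grant?
    interviewed: bool  # was an examiner interview held?

def allow_rate(cases):
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Relative lift in allow rate for interviewed vs. non-interviewed cases."""
    with_iv = [c for c in cases if c.interviewed]
    without = [c for c in cases if not c.interviewed]
    return allow_rate(with_iv) / allow_rate(without) - 1.0

# Invented 14-case history; the interviewed/non-interviewed split (and hence
# the lift this sample produces) is made up for illustration only.
cases = ([Case(granted=True, interviewed=True)] * 4
         + [Case(granted=False, interviewed=True)]
         + [Case(granted=True, interviewed=False)]
         + [Case(granted=False, interviewed=False)] * 8)
print(f"career allow rate: {allow_rate(cases):.0%}")  # → 36%
print(f"interview lift:    {interview_lift(cases):+.0%}")
```

The page's +75.0% figure would come from the same kind of ratio computed over the examiner's actual interviewed and non-interviewed cases.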

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)

Comparisons use a Tech Center average estimate; based on career data from 14 resolved cases.
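Assuming the "vs TC avg" deltas are percentage-point differences, the implied Tech Center baseline can be recovered directly from the panel above:

```python
# (rate, delta vs Tech Center average), both in percentage points,
# copied from the statute-specific panel above
stats = {
    "101": (0.9, -39.1),
    "103": (58.5, +18.5),
    "102": (27.1, -12.9),
    "112": (13.6, -26.4),
}
for statute, (rate, delta) in stats.items():
    print(f"§{statute}: implied TC average = {rate - delta:.1f}%")
# Every row implies the same 40.0% Tech Center baseline.
```

That consistency suggests all four deltas are measured against a single Tech Center average rather than per-statute baselines.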

Office Action

§103, §112
DETAILED ACTION

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/11/2023 was considered by the examiner.

Claim Rejections - 35 USC § 112

Claim 2 recites the limitation "the discontinuity index" at Page 24, Line 29. There is insufficient antecedent basis for this limitation in the claim.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Yang (US 2018/0189578 A1) in view of Browning (US 2018/0005407 A1) and in view of Thorsen (US 2021/0303956 A1).

Regarding Claim 1, Yang discloses an object recognition device including: a detection point acquisition unit configured to acquire detection points in a plurality of orientations by using a sensor ([0065]: "The vehicle sensors 105 comprise a camera, a light detection and ranging sensor (LIDAR), a global positioning system (GPS) navigation system, an inertial measurement unit (IMU), and others"); and an object recognition unit configured to recognize an object by using at least some of the detection points acquired by the detection point acquisition unit ([0070]: "The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling, and so on."), the object recognition device comprising: a detection point number calculation unit configured to calculate a detection point number defined below for each of the plurality of detection points ([0132]: "the HD map system maximizes the number of points contained inside of the 3D box by iteratively computing the number of points contained within the box at different 3D positions."; the points are counted); an exclusion unit configured to exclude the detection point the detection point number of which is a threshold value or smaller, from the detection points acquired by the detection point acquisition unit ([0122]: "If there is enough information (at least 3 non-collinear points) on the sign then the HD map system has enough data and can continue."), wherein the object recognition unit is configured to recognize the object by using the detection point that is acquired by the detection point number calculation unit and has not been excluded by the exclusion unit ([0070]: "The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling, and so on. The perception module 210 processes the sensor data 230 to populate data structures storing the sensor data and provides the information to the prediction module 215."), the detection point number: the number of the detection points present in a predetermined region including the detection point for which the detection point number is calculated ([0122]: "If there is enough information (at least 3 non-collinear points) on the sign then the HD map system has enough data and can continue.").

Yang does not teach, but Browning does teach, a stereoscopic point determination unit configured to determine whether each of the plurality of detection points satisfies a stereoscopic point condition defined below ([0151]: "the point selection rules 1055 can exclude or de-prioritize imagelets which depict non-vertical surfaces, such as rooflines or horizontal surfaces"); the stereoscopic point condition: another detection point having a different height is present in a predetermined region including the detection point for which whether to satisfy the stereoscopic point condition is determined; and an exclusion unit configured to exclude the detection point that does not satisfy the stereoscopic point condition, from the detection points acquired by the detection point acquisition unit ([0151]: "the point selection rules 1055 can exclude or de-prioritize imagelets which depict non-vertical surfaces, such as rooflines or horizontal surfaces"; the excluded surfaces here are non-vertical, horizontal surfaces like rooflines, which is equivalent to requiring at least one point with a different height for inclusion in the point cloud data set).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang with the teaching of Browning to exclude non-vertical, horizontal surfaces like rooflines. Browning notes that contextual information can render horizontal surfaces in images particularly unreliable, e.g., "snow and debris can affect the appearance of such surfaces." Thus, excluding these features and associated points can result in more reliable retrievals of objects.

Yang does not teach, but Thorsen does teach, a discontinuity index calculation unit configured to calculate a discontinuity index that becomes higher as the number of missing detection points defined below increases, for each of the plurality of detection points; and an exclusion unit configured to exclude the detection point the discontinuity index of which is a threshold value or higher, from the detection points acquired by the detection point acquisition unit ([0077]: "In order to do so, a local surfel map may be built for static objects for LIDAR sensor data using free space constraints such that if an area of space is ever identified as empty, that area is assumed to always be empty. A ray can then be cast from the vehicle to the location of the bounding box to determine whether there are any intervening objects, or rather, dynamic or static occlusions. Labels with high occlusion ratios may then be removed or discarded. In other words, if the ray intersects with another object before the location of the bounding box, the camera images at the second point in time can be removed or discarded."), the missing detection point: the detection point present in the orientation between the orientation of a first detection point and the orientation of a second detection point and outside a predetermined reference region, the first and second detection points being any two of the detection points present in the reference region including the detection point for which the discontinuity index is calculated ([0020]; [0077]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang with the teaching of Thorsen to remove data which is heavily occluded by intervening objects. It is well known in the art that image processing becomes more expensive as the amount of occlusion increases, so simply discarding heavily occluded regions can result in significant savings of time and processing power.

Regarding Claim 4, which depends from rejected Claim 1, Yang further discloses a cluster formation unit configured to form a cluster by using the detection point that is acquired by the detection point acquisition unit and has not been excluded by the exclusion unit ([0157]: "The clustering module 2825 groups neighboring points into lane line clusters which are stored within the lane line cluster store 2830. The segment center analysis module 2835 simplifies stored lane line clusters by removing outlier points from the cluster and draws a center line through the remaining points."); and an invalidation unit that invalidates the detection point included in the cluster corresponding to a predetermined invalid condition ([0167]: "When grouping neighboring points into clusters, a lane line cluster 3210 originating at the first point may include the second, third, and fourth point, but not the fifth point because it is a distance from the first point greater than a threshold distance."; [0176]), wherein the object recognition unit is configured to recognize the object by using the detection point that is acquired by the detection point acquisition unit, and has not been excluded by the exclusion unit and not been invalidated by the invalidation unit ([0136]: "The identified angled traffic sign 1060 is unobstructed in the first image 1000 and is square in shape when viewed in a planar view."; [0137]: "The HD map system 110 applies a convolutional neural network model to the identified portion of the image 1120 which in return identifies text on the portion of the image 1120.").

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Browning and in view of Kulkarni (US 2019/0266779 A1).
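The "detection point number" limitation mapped above amounts to a neighborhood-density cutoff: count the points in a region around each point and drop points whose count is at or below a threshold. A minimal sketch of that idea in 2D (the circular region, radius, and threshold values are illustrative, not from the claim):

```python
import math

def detection_point_number(points, i, radius):
    """Number of detection points inside the region (here, a circle of
    `radius`) centered on point i, including point i itself."""
    xi, yi = points[i]
    return sum(1 for (x, y) in points
               if math.hypot(x - xi, y - yi) <= radius)

def exclude_sparse(points, radius, threshold):
    """Keep only points whose detection point number exceeds the threshold,
    mirroring the claim's 'threshold value or smaller' exclusion."""
    return [p for i, p in enumerate(points)
            if detection_point_number(points, i, radius) > threshold]

# A tight cluster survives; an isolated stray point is excluded.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
print(exclude_sparse(pts, radius=0.5, threshold=1))
# → [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
```

The naive pairwise count is O(n²); a real implementation would typically use a spatial index, but the filtering logic is the same.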
Regarding Claim 2, Yang discloses an object recognition device including: a detection point acquisition unit configured to acquire detection points in a plurality of orientations by using a sensor ([0065]: "The vehicle sensors 105 comprise a camera, a light detection and ranging sensor (LIDAR), a global positioning system (GPS) navigation system, an inertial measurement unit (IMU), and others"); and an object recognition unit configured to recognize an object by using at least some of the detection points acquired by the detection point acquisition unit ([0070]: "The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling, and so on."), the object recognition device comprising: a detection point number calculation unit configured to calculate a detection point number defined below for each of the plurality of detection points ([0132]: "the HD map system maximizes the number of points contained inside of the 3D box by iteratively computing the number of points contained within the box at different 3D positions."; the points are counted); an exclusion unit configured to exclude the detection point the detection point number of which is a threshold value or smaller, from the detection points acquired by the detection point acquisition unit ([0122]: "If there is enough information (at least 3 non-collinear points) on the sign then the HD map system has enough data and can continue."), wherein the object recognition unit is configured to recognize the object by using the detection point that is acquired by the detection point number calculation unit and has not been excluded by the exclusion unit ([0070]: "The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling, and so on. The perception module 210 processes the sensor data 230 to populate data structures storing the sensor data and provides the information to the prediction module 215."), the detection point number: the number of the detection points present in a predetermined region including the detection point for which the detection point number is calculated ([0122]: "If there is enough information (at least 3 non-collinear points) on the sign then the HD map system has enough data and can continue.").

Yang does not teach, but Browning does teach, a stereoscopic point determination unit configured to determine whether each of the plurality of detection points satisfies a stereoscopic point condition defined below ([0151]: "the point selection rules 1055 can exclude or de-prioritize imagelets which depict non-vertical surfaces, such as rooflines or horizontal surfaces"); the stereoscopic point condition: another detection point having a different height is present in a predetermined region including the detection point for which whether to satisfy the stereoscopic point condition is determined; and an exclusion unit configured to exclude the detection point that does not satisfy the stereoscopic point condition, from the detection points acquired by the detection point acquisition unit ([0151]: "the point selection rules 1055 can exclude or de-prioritize imagelets which depict non-vertical surfaces, such as rooflines or horizontal surfaces"; the excluded surfaces here are non-vertical, horizontal surfaces like rooflines, which is equivalent to requiring at least one point with a different height for inclusion in the point cloud data set).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang with the teaching of Browning to exclude non-vertical, horizontal surfaces like rooflines. Browning notes that contextual information can render horizontal surfaces in images particularly unreliable, e.g., "snow and debris can affect the appearance of such surfaces." Thus, excluding these features and associated points can result in more reliable retrievals of objects.

Yang does not teach, but Kulkarni does teach, an irregularity index calculation unit configured to calculate an irregularity index defined below for each of the plurality of detection points; the irregularity index: an index that is higher as irregularity of a position of the detection point present in a predetermined region including the detection point for which the irregularity index is calculated is higher; and an exclusion unit configured to exclude the detection point that does not satisfy the stereoscopic point condition, from the detection points acquired by the detection point acquisition unit ([0070]: "In other aspects, a 3×3 point grid can be used to determine a neighborhood of points. Proceeding to a step 720, points on the polar depth map are analyzed and a best fit plane is determined for the point. The system attempts a best fit model for points that are within a specified distance from the point being analyzed, for example, 3 or 5 points in each direction."; the line of best fit necessarily includes a metric such as the variance (used in least squares fitting) or the coefficient of determination, which is then used to determine whether to keep or reject points).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang with the teaching of Kulkarni to exclude points based on the irregularity of their neighborhood of points. This type of outlier removal is computationally inexpensive compared to identifying objects in the presence of outliers, and therefore can result in savings of power and computation time, as well as more robust results.
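The examiner reads Kulkarni's best-fit-plane step as supplying an "irregularity index". One way to realize that reading is to use the residual of a local least-squares plane fit as the index; points whose neighborhoods fit a plane poorly score high and could be excluded. A sketch under that assumption (the window contents and thresholds below are invented for illustration):

```python
import numpy as np

def irregularity_index(neighborhood):
    """RMS residual of a least-squares plane fit z = a*x + b*y + c over a
    point's neighborhood; higher means a more irregular local surface."""
    pts = np.asarray(neighborhood, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

# Points lying on a perfect plane have (near-)zero irregularity...
flat = [(x, y, 2 * x + 3 * y + 1) for x in range(3) for y in range(3)]
print(irregularity_index(flat) < 1e-6)   # True
# ...while a neighborhood with an outlier scores high and could be rejected.
bumpy = flat[:-1] + [(2.0, 2.0, 50.0)]
print(irregularity_index(bumpy) > 1.0)   # True
```

The variance used in least-squares fitting, mentioned in the rejection, is the square of this RMS residual up to the choice of normalization.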
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Browning, in view of Thorsen, and in view of Kulkarni.

Regarding Claim 3, Yang discloses an object recognition device including: a detection point acquisition unit configured to acquire detection points in a plurality of orientations by using a sensor ([0065]: "The vehicle sensors 105 comprise a camera, a light detection and ranging sensor (LIDAR), a global positioning system (GPS) navigation system, an inertial measurement unit (IMU), and others"); and an object recognition unit configured to recognize an object by using at least some of the detection points acquired by the detection point acquisition unit ([0070]: "The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling, and so on."), the object recognition device comprising: a detection point number calculation unit configured to calculate a detection point number defined below for each of the plurality of detection points ([0132]: "the HD map system maximizes the number of points contained inside of the 3D box by iteratively computing the number of points contained within the box at different 3D positions."; the points are counted); an exclusion unit configured to exclude the detection point the detection point number of which is a threshold value or smaller, from the detection points acquired by the detection point acquisition unit ([0122]: "If there is enough information (at least 3 non-collinear points) on the sign then the HD map system has enough data and can continue."), wherein the object recognition unit is configured to recognize the object by using the detection point that is acquired by the detection point number calculation unit and has not been excluded by the exclusion unit ([0070]: "The perception module 210 uses the sensor data to determine what objects are around the vehicle, the details of the road on which the vehicle is travelling, and so on. The perception module 210 processes the sensor data 230 to populate data structures storing the sensor data and provides the information to the prediction module 215."), the detection point number: the number of the detection points present in a predetermined region including the detection point for which the detection point number is calculated ([0122]: "If there is enough information (at least 3 non-collinear points) on the sign then the HD map system has enough data and can continue.").

Yang does not teach, but Browning does teach, a stereoscopic point determination unit configured to determine whether each of the plurality of detection points satisfies a stereoscopic point condition defined below ([0151]: "the point selection rules 1055 can exclude or de-prioritize imagelets which depict non-vertical surfaces, such as rooflines or horizontal surfaces"); the stereoscopic point condition: another detection point having a different height is present in a predetermined region including the detection point for which whether to satisfy the stereoscopic point condition is determined; and an exclusion unit configured to exclude the detection point that does not satisfy the stereoscopic point condition, from the detection points acquired by the detection point acquisition unit ([0151]: "the point selection rules 1055 can exclude or de-prioritize imagelets which depict non-vertical surfaces, such as rooflines or horizontal surfaces"; the excluded surfaces here are non-vertical, horizontal surfaces like rooflines, which is equivalent to requiring at least one point with a different height for inclusion in the point cloud data set).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang with the teaching of Browning to exclude non-vertical, horizontal surfaces like rooflines. Browning notes that contextual information can render horizontal surfaces in images particularly unreliable, e.g., "snow and debris can affect the appearance of such surfaces." Thus, excluding these features and associated points can result in more reliable retrievals of objects.

Yang does not teach, but Thorsen does teach, a discontinuity index calculation unit configured to calculate a discontinuity index that becomes higher as the number of missing detection points defined below increases, for each of the plurality of detection points; and an exclusion unit configured to exclude the detection point the discontinuity index of which is a threshold value or higher, from the detection points acquired by the detection point acquisition unit ([0077]: "In order to do so, a local surfel map may be built for static objects for LIDAR sensor data using free space constraints such that if an area of space is ever identified as empty, that area is assumed to always be empty. A ray can then be cast from the vehicle to the location of the bounding box to determine whether there are any intervening objects, or rather, dynamic or static occlusions. Labels with high occlusion ratios may then be removed or discarded. In other words, if the ray intersects with another object before the location of the bounding box, the camera images at the second point in time can be removed or discarded."), the missing detection point: the detection point present in the orientation between the orientation of a first detection point and the orientation of a second detection point and outside a predetermined reference region, the first and second detection points being any two of the detection points present in the reference region including the detection point for which the discontinuity index is calculated ([0020]; [0077]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang with the teaching of Thorsen to remove data which is heavily occluded by intervening objects. It is well known in the art that image processing becomes more expensive as the amount of occlusion increases, so simply discarding heavily occluded regions can result in significant savings of time and processing power.

Yang does not teach, but Kulkarni does teach, an irregularity index calculation unit configured to calculate an irregularity index defined below for each of the plurality of detection points; the irregularity index: an index that is higher as irregularity of a position of the detection point present in a predetermined region including the detection point for which the irregularity index is calculated is higher; and an exclusion unit configured to exclude the detection point that does not satisfy the stereoscopic point condition, from the detection points acquired by the detection point acquisition unit ([0070]: "In other aspects, a 3×3 point grid can be used to determine a neighborhood of points. Proceeding to a step 720, points on the polar depth map are analyzed and a best fit plane is determined for the point. The system attempts a best fit model for points that are within a specified distance from the point being analyzed, for example, 3 or 5 points in each direction."; the line of best fit necessarily includes a metric such as the variance (used in least squares fitting) or the coefficient of determination, which is then used to determine whether to keep or reject points).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Yang with the teaching of Kulkarni to exclude points based on the irregularity of their neighborhood of points. This type of outlier removal is computationally inexpensive compared to identifying objects in the presence of outliers, and therefore can result in savings of power and computation time, as well as more robust results.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Schöler et al. (Schöler, F., Behley, J., Steinhage, V., Schulz, D. and Cremers, A.B., 2011, May. Person tracking in three-dimensional laser range data with explicit occlusion adaption. In 2011 IEEE International Conference on Robotics and Automation (pp. 1297-1303). IEEE.) disclose a tracking method whose occlusion ratio divides the number of occluded voxels by the number of voxels in a bounding box. Zhang et al. (Zhang, Shanxin, Cheng Wang, Ming Cheng, and Jonathan Li. "Automated visibility field evaluation of traffic sign based on 3D lidar point clouds." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 42 (2019): 1185-1190.) calculate an occlusion ratio and occlusion degree. Habib et al. (Habib, Ayman F., Yu-Chuan Chang, and Dong Cheon Lee. "Occlusion-based methodology for the classification of LiDAR data." Photogrammetric Engineering & Remote Sensing 75, no. 6 (2009): 703-712.) disclose an occlusion-based methodology for the classification of LiDAR data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN WADE CLOUSER, whose telephone number is (571) 272-0378. The examiner can normally be reached M-F 7:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ISAM ALSOMIRI, can be reached at (571) 272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/B.W.C./
Examiner, Art Unit 3645

/ISAM A ALSOMIRI/
Supervisory Patent Examiner, Art Unit 3645
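The occlusion-ratio idea running through the Thorsen mapping and the cited Schöler and Zhang references (occluded voxels divided by total voxels in a bounding box, with high-ratio labels discarded) can be sketched in a few lines; the 0.5 cutoff and the boolean voxel representation here are illustrative, not from any of the references:

```python
def occlusion_ratio(voxels):
    """Fraction of voxels in a label's bounding box flagged as occluded."""
    return sum(voxels.values()) / len(voxels)

def keep_label(voxels, max_ratio=0.5):
    """Discard labels with a high occlusion ratio, as in the Thorsen mapping
    ('labels with high occlusion ratios may then be removed or discarded')."""
    return occlusion_ratio(voxels) <= max_ratio

# A 2x2x2 bounding box of voxels keyed by (i, j, k); True = occluded.
box = {(i, j, k): i == 0 for i in range(2) for j in range(2) for k in range(2)}
print(occlusion_ratio(box))  # → 0.5 (half the voxels are occluded)
print(keep_label(box))       # → True (at the cutoff, the label is kept)
```

In a real pipeline the occluded flags would come from ray casting against the surfel map, as the quoted Thorsen passage describes; only the discard rule is shown here.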

Prosecution Timeline

Oct 27, 2022
Application Filed
Mar 21, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12541026: COHERENT LIDAR IMAGING SYSTEM (granted Feb 03, 2026; 2y 5m to grant)
Patent 12535581: DISTANCE MEASURING DEVICE AND DISTANCE MEASURING METHOD (granted Jan 27, 2026; 2y 5m to grant)
Patent 12504520: APPARATUS, PROCESSING CIRCUITRY AND METHOD FOR MEASURING DISTANCE FROM DIRECT TIME OF FLIGHT SENSOR ARRAY TO AN OBJECT (granted Dec 23, 2025; 2y 5m to grant)
Patent 12474568: SYSTEM AND METHOD FOR COHERENT APERTURE OF STEERED EMITTERS (granted Nov 18, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 36%
With Interview: 99% (+75.0%)
Median Time to Grant: 4y 0m
PTA Risk: Low

Based on 14 resolved cases by this examiner. Grant probability is derived from the career allow rate.
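Since the grant probability is stated to be derived from the career allow rate, the 36% figure follows directly from the 5-granted / 14-resolved record shown earlier (assuming standard rounding to whole percent):

```python
granted, resolved = 5, 14  # the examiner's career record shown above
grant_probability = granted / resolved
print(f"grant probability: {grant_probability:.0%}")  # → 36%
```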
