Prosecution Insights
Last updated: April 19, 2026
Application No. 18/616,396

ROAD DEFECT LEVEL PREDICTION

Status: Non-Final OA (§103)
Filed: Mar 26, 2024
Examiner: XIAO, DI
Art Unit: 2178
Tech Center: 2100 (Computer Architecture & Software)
Assignee: NEC Laboratories America Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (above average; 463 granted / 600 resolved; +22.2% vs TC avg)
Interview Lift: +21.7% among resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 24 applications currently pending
Career History: 624 total applications across all art units

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 600 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This action is responsive to the following communications: Application filed on March 26, 2024, and Drawings filed on March 26, 2024.

2. Claims 1-20 are pending in this case. Claims 1, 9, and 17 are independent claims.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-7, 9, 12-15, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sasayama, Pub. No. WO 2022190314 A1, in view of Theverapperuma, Pub. No. US 20220024485 A1.

With regard to claim 1: Sasayama discloses a computer-implemented method for road defect level prediction employing a processor device, comprising:

obtaining a depth map from image data received from input peripherals by employing a vision transformer model (see Fig. 4 for the depth map; the depth estimation unit 12 performs depth estimation on the input image and generates depth estimation information: "The depth estimation unit 12 performs depth estimation on the input image and generates depth estimation information. The depth estimation unit 12 analyzes the captured image using, for example, a model based on a depth estimation algorithm using a convolutional neural network, and estimates the depth of each pixel. A depth estimation algorithm is, for example, CNN-Depth. The depth estimation information indicates the correspondence relationship between the position coordinates and the depth of each pixel of the captured image.");

obtaining a plurality of semantic maps from the image data by employing a semantic segmentation model (Semantic Segmentation using a Fully Convolutional Network (FCN), for example, can be used as the region segmentation technique: "As shown in FIG. 1, the image analysis device 10 includes an area division unit 11, a depth estimation unit 12, and a detection unit 13. The region division unit 11 performs region division (also referred to as region recognition or segmentation) on the above-described image of the surrounding road surface, and generates a region division result in which the image is classified into image regions of a plurality of categories. Semantic Segmentation using a Fully Convolutional Network (FCN), for example, can be used as the region segmentation technique. Semantic segmentation performs class classification (inference) on a pixel-by-pixel basis for an input image, and labels each classified pixel with a category to divide the image into image regions of multiple categories, which are output as segmentation results. The segmentation result is, for example, an image in which each segmented image region is color-coded.") to give segmentation results of road scenes to detect road (see Fig. 3, wherein the system gives segmentation results of road scenes: "FIG. 3 is a diagram showing the result of segmentation generated from the image in FIG. 2. For the sake of explanation, in FIG. 3, the segmentation result is superimposed on the captured image. Also, in FIG. 3, dashed-dotted lines have been added to respectively enclose a plurality of image areas for clarity of each image area. The multiple categories include road categories. In the example of FIG. 3, the input image is divided into two image areas, a road area 1A (check pattern hatched area) and a vehicle area 2A (diagonal hatched area), which are road categories.");

detecting a region of interest (ROI) utilizing the road pixels, and predicting road defect levels by fitting the ROI and the depth map into a road surface model (the system detects uneven levels of roads as the ROI, and selectively detects unevenness present on the road surface within the road area 1A among the plurality of divided categories: "FIG. 4 is a diagram showing depth estimation information generated from the image in FIG. 2. FIG. 4 is a depth map that visualizes the estimated depth information, in which locations with high (deep) depth are drawn in dark colors, and locations with low (shallow) depths are drawn in light colors. For the sake of explanation, in FIG. 3, the depth estimation information is superimposed on the captured image. Also in FIG. 3, dashed lines have been added to enclose areas of similar depth. As shown in FIG. 3, it can be seen that a high-depth area 1B exists within the road shoulder 1b. In this way, a boundary on the depth map occurs where there is a gutter, a curbstone that defines the boundary between the road and the sidewalk, or the like. The detection unit 13 detects unevenness existing on the road surface based on the region division result and the depth estimation information. FIG. 5 is a diagram showing a detection result in which the segmentation result and the depth estimation information are superimposed. For the sake of explanation, in FIG. 3, the region division result and the depth estimation information are superimposed on the captured image. The detection unit 13 detects the high-depth area 1B within the road area 1A shown in FIG. 5. In this way, by using the region division result and the depth estimation information, it is possible to detect unevenness for each image region divided into categories. As a result, for example, when driving on a narrow road with ditches on the left and right sides, it is possible to recognize unevenness on the road surface and prevent derailment into the ditches and contact with curbs. By analyzing images from a plurality of cameras with the image analysis device 10, it is possible to recognize irregularities on the road surface around the vehicle and operate the vehicle more safely. It is preferable that the detection unit 13 selectively detects unevenness present on the road surface within the road area 1A among the plurality of divided categories. As a result, it is possible to selectively detect irregularities on the road on which the vehicle 2 is supposed to enter, and to appropriately alert the driver."); and

outputting the predicted road defect levels on a road map (the user is warned, but only when the defect level is at a specific threshold: "The input/output interface 24 is connected to the display device 30, the input device 31, the sound output device 32, and the like. The display device 30 is an LCD (Liquid Crystal Display) or the like, and displays a captured image of the road surface, the segmentation result processed by the processor 21, and an image corresponding to the depth estimation information. For example, when the distance between the unevenness of the road surface obtained based on the segmentation result and the depth estimation information and the vehicle is equal to or less than a predetermined value, the display device 30 can display a warning to inform the driver of danger.").

Sasayama does not disclose that the segmentation results of road scenes used to detect road are pixel-wise segmentation results that detect road pixels. However, Theverapperuma discloses this aspect (the segmented image can be an RGB-formatted 2D image in which each pixel has been assigned a class of "road" or a class of "non-road"; paragraph 98: "Surface segmentation module 426 is configured to generate, using the extracted features, a segmented image that is divided into different surfaces. The segmented image is a 2D image indicating which areas correspond to potentially drivable surfaces (e.g., road surfaces) and which areas correspond to non-drivable surfaces (e.g., grass, hills, or other terrain). For example, the segmented image can be an RGB formatted 2D image in which each pixel has been assigned a class of "road" or a class of "non-road". Thus, the segmented image can represent the result of performing classification on the extracted features, possibly classification that divides regions in the segmented image into one of two types of surfaces: potentially drivable and non-drivable. In some embodiments, the surface segmentation module 426 is configured to detect additional surface classes, e.g., different types of roads or different non-road surfaces. The surface segmentation module 426 can be implemented as a CNN trained to determine whether a particular set of feature values corresponds to a drivable surface. For instance, the surface segmentation module 426 can be trained with positive examples (e.g., feature values representing road surfaces) and/or negative examples (e.g., feature values representing non-road surfaces). In some embodiments, the CNN implementing the surface segmentation module 426 may employ conditional random fields (CRFs) to estimate the probability of a particular set of feature values corresponding to a drivable surface. CRFs provide a probabilistic framework for labeling and segmenting structured data and are often used for image segmentation.").

It would have been obvious to one of ordinary skill in the art, at the time of filing, to apply Theverapperuma to Sasayama so the system can use pixel-wise segmentation to precisely determine different parts of the image and identify which pixels represent the road, in order to accurately identify the ROI on the road.
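To make the pixel-wise limitation concrete, here is a minimal Python sketch of the two-class ("road" / "non-road") per-pixel decision that Theverapperuma's paragraph 98 describes. The channel ordering and toy inputs are assumptions for illustration; neither cited reference publishes code.

```python
import numpy as np

def road_mask_from_logits(logits: np.ndarray) -> np.ndarray:
    """Turn per-pixel class logits (H, W, 2) into a boolean road mask.

    Assumes channel 0 = "road" and channel 1 = "non-road", mirroring the
    two-class segmented image described in paragraph 98 (an assumption).
    """
    return logits.argmax(axis=-1) == 0  # True wherever "road" wins per pixel

# Toy example: strong "road" evidence in the lower half of a 4x4 image.
logits = np.zeros((4, 4, 2))
logits[2:, :, 0] = 5.0  # lower rows: road
logits[:2, :, 1] = 5.0  # upper rows: non-road
print(road_mask_from_logits(logits))  # lower two rows True, upper two False
```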
With regard to claims 4 and 12: Sasayama and Theverapperuma disclose the computer-implemented method of claim 1, wherein the ROI is selected by employing a ROI detection module (Theverapperuma, paragraph 85: "The object detection performed by the surface identification subsystem 320 does not necessarily involve identifying every object represented in the sensor data. Instead, the surface identification subsystem 320 can be configured to detect only certain objects of interest, including objects that are relevant to determining whether a surface is drivable or not. For example, surface identification subsystem 320 can be configured to detect objects that render an otherwise drivable surface unsuitable for driving on (e.g., buildings, other vehicles, cone markers, poles, pools of liquid, cracks, and the like). An object does not have to pose a hazard in order to indicate that a surface is unsafe for driving. For example, the presence of a pile of soil or debris along an edge of a road and extending from a hillside could indicate that there is a risk of landslides, thereby making the road unsuitable for driving on even though the pile may not be an obstacle to a vehicle traveling along the road. Similarly, deformations or anomalies indicating that a surface is safe for driving can manifest in various, often subtle, ways. For example, a drivable surface could be indicated by the absence or trampling of grass or other plants in certain areas, where the absence or trampling is a result of earlier vehicle travel through those areas. Still other indicators may be specific to the manner in which a particular work site is configured. For instance, in mining sites, berms are typically shortened near road intersections so that the locations of intersections can be identified through detecting berms and where the berms end. Intended as a safety measure, berms are often required by government organizations to be at least half as tall as the wheels of the largest mining machine on-site.") based on the road pixels (Theverapperuma, paragraph 98, quoted above with regard to claim 1) and road distances obtained by utilizing the semantic segmentation model (Theverapperuma, paragraph 5: "In certain embodiments, techniques are described for identifying a drivable surface based on sensor data, where the sensor data includes camera data in combination with LIDAR (Light Detection and Ranging) data and/or radar data. The sensor data is processed through a surface identification subsystem configured to detect various attributes of a physical environment surrounding an autonomous vehicle, including attributes of a drivable surface in the environment. For instance, the surface identification subsystem can include a plurality of modules configured to detect known objects in the environment, estimate the depth (e.g., distance from sensor) of surfaces, segment an image or other representation of the environment into different regions based on object class, and/or perform other processing of sensor data to generate information usable for making a decision as to whether a particular surface is drivable and for estimating the attributes of the particular surface."). It would have been obvious to one of ordinary skill in the art, at the time of filing, to apply Theverapperuma to Sasayama so the system can use pixel-wise segmentation to precisely determine different parts of the image and identify which pixels represent the road, in order to accurately identify the ROI on the road.

With regard to claims 5, 13, and 20: Sasayama and Theverapperuma disclose the computer-implemented method of claim 1, wherein the road surface model predicts road defect levels by calculating a severity of differences of road points that are beyond a road surface threshold (Sasayama: "In addition, depending on the height (depth) of the grooves and steps on the road surface, there may be cases where the passage of vehicles is not hindered. For this reason, the detection unit 13 can detect only locations where the estimated depth is outside the predetermined threshold range as unevenness existing on the road surface. In other words, it is possible to appropriately detect deep grooves into which the vehicle tires can fit and uneven heights that cannot be overcome. Note that the width and length of the groove may be calculated based on the depth estimation information, and unevenness on the road surface may be detected using the calculated size.").
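The claims 5, 13, and 20 mapping turns on grading deviations that fall outside a road surface threshold. A hedged Python sketch of that test follows; the tolerance band and severity cut-offs are invented for illustration, since Sasayama only says the threshold range is "predetermined".

```python
import numpy as np

def defect_severity(residuals_m: np.ndarray, tol_m: float = 0.02) -> np.ndarray:
    """Grade per-point deviations from the road surface (in metres) as 0-3.

    0 = within +/- tol_m (no defect); higher grades for larger deviations,
    covering both grooves below the surface and bumps above it.
    Cut-offs here are illustrative assumptions, not values from the OA.
    """
    dev = np.abs(residuals_m)
    sev = np.zeros(residuals_m.shape, dtype=int)
    sev[dev > tol_m] = 1   # minor unevenness
    sev[dev > 0.05] = 2    # moderate: shallow pothole or step
    sev[dev > 0.10] = 3    # severe: tyre-trapping groove or step
    return sev

print(defect_severity(np.array([0.005, -0.03, 0.07, -0.15])))  # -> [0 1 2 3]
```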
With regard to claims 6 and 14: Sasayama and Theverapperuma disclose the computer-implemented method of claim 5, wherein the road surface model utilizes the depth map and the ROI (Sasayama: "Then, the detection unit 13 detects unevenness existing on the road surface based on the region division result and the depth estimation information (step S13), and outputs the detection result (step S14). The detection result can be displayed, for example, on a display device inside the vehicle. Further, for example, when the distance between the detected unevenness of the road surface and the vehicle is equal to or less than a predetermined value, a warning sound may be output to inform the driver of the danger.") to filter road points up to a certain distance (see Fig. 4, wherein the distance is based on the distance of the road that can fit in the image) to determine the road defect level above or below the road surface threshold (Sasayama, quoted above with regard to claims 5, 13, and 20: only locations where the estimated depth is outside the predetermined threshold range are detected as unevenness existing on the road surface).

With regard to claims 7 and 15: Sasayama and Theverapperuma disclose the computer-implemented method of claim 1, wherein the depth map obtained from the image data is converted to a three-dimensional point cloud to calculate a road surface plane equation (Theverapperuma, paragraph 84: "Surface identification subsystem 320 is configured to receive the pre-processed sensor data from the pre-processing subsystem 310 and to determine which portions of the sensor data correspond to a drivable surface or a class of object. Surface identification subsystem 320 may partition sensor data into segments, where each segment is represented by an enclosed 2D or 3D boundary. For example, segmenting a 2D image captured by a camera may involve generating a border around a group of pixels based on determining that the pixels belong to the same object (e.g., a pole or traffic sign). In the case of a road surface, the segmenting performed by the surface identification subsystem 320 may involve generating a border around a group of pixels along the edges of the road. Segmentation is typically performed concurrently with classification (determining the class of each segment). The process of dividing an input representation into segments of one or more classes is sometimes referred to as semantic segmentation. Semantic segmentation can be viewed as forming a mask by which the input representation is filtered, where the mask comprises shapes that are labeled according to the type of object to which the shape corresponds. LIDAR or radar data (e.g., a 3D point cloud) can also be segmented, for example, by generating a 3D surface (e.g., a geometric mesh) representing the boundaries of an object. Segmentation can be performed algorithmically (e.g., using a software algorithm that performs geometric calculations to generate a surface of polygons as a geometric mesh) or using a machine learning (ML) model trained to infer the boundaries of an object from sensor data."). It would have been obvious to one of ordinary skill in the art, at the time of filing, to apply Theverapperuma to Sasayama so the system can use a three-dimensional point cloud to precisely determine the depth of the road, in order to accurately determine the ROI and whether it is safe to drive through.
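For the claims 7 and 15 mapping, here is a minimal sketch of converting a depth map into a 3-D point cloud and fitting a road surface plane equation. The pinhole intrinsics and SVD plane fit are standard techniques assumed for illustration; paragraph 84 does not prescribe a specific algorithm.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) metric depth map to (N, 3) camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def fit_plane(points: np.ndarray):
    """Least-squares plane through (N, 3) points: returns (unit normal, d)
    such that normal . p + d = 0 for points p on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return normal, -normal @ centroid

# Synthetic check: a camera 1.5 m above a flat road (y = 1.5 in camera frame).
x, z = np.meshgrid(np.linspace(-2, 2, 10), np.linspace(4, 20, 10))
road = np.stack([x.ravel(), np.full(x.size, 1.5), z.ravel()], axis=-1)
normal, d = fit_plane(road)
print(normal, d)                          # ~[0, +/-1, 0] and -/+1.5
residuals = road @ normal + d             # signed distances for the severity test
```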
Claims 9 and 17 are rejected for the same reasons as claim 1.

With regard to claim 18: Sasayama and Theverapperuma disclose the system for road defect level prediction of claim 17, wherein the input peripherals are mounted on a vehicle (Sasayama: "The vehicle is equipped with multiple cameras. For example, the camera may be mounted on the left side of the vehicle body. With this camera, the vicinity of the left front wheel of the vehicle, which is a blind spot for the driver, can be imaged. FIG. 2 is a diagram showing an example of an image of the road surface around the vehicle. In FIG. 2, the captured image shows a part of the left side of the road 1 and the vehicle 2. In this example, the road 1 includes a roadway 1a on which the vehicle 2 travels and a road shoulder 1b, which are demarcated by white lines.").

Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Sasayama, Pub. No. WO 2022190314 A1, in view of Theverapperuma, Pub. No. US 20220024485 A1, and further in view of Zhang, Pub. No. WO 2024031999 A1.

With regard to claims 2 and 10: Sasayama and Theverapperuma do not disclose the computer-implemented method of claim 1, wherein the vision transformer model is a dense prediction transformation model (DPT). However, Zhang discloses this aspect ("For example, the depth estimation model can be obtained by pairing a pre-established initial depth network model with a sample two-dimensional image and an expected depth image corresponding to the sample two-dimensional image; for example, the target depth image of the two-dimensional scene image can be determined based on the depth estimation model that has been trained. For example, the initial deep network model may be a dense prediction vision (dense prediction transformer, DPT) neural network model."). It would have been obvious to one of ordinary skill in the art, at the time of filing, to apply Zhang to Sasayama and Theverapperuma so the system can use a dense prediction transformation model (DPT) in order to accurately determine depth information based on the context of the image.
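Where the claims 2 and 10 mapping relies on a DPT backbone, the idea is easy to try with public tooling. This sketch assumes the Hugging Face transformers depth-estimation pipeline, the public Intel/dpt-large checkpoint, and a hypothetical input file; it is not Zhang's model or code.

```python
from transformers import pipeline
from PIL import Image

# Dense prediction transformer (DPT) for monocular depth estimation.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

image = Image.open("road_scene.jpg")       # hypothetical RGB road image
result = depth_estimator(image)

# result["depth"] is a PIL image of relative depth; result["predicted_depth"]
# is the raw tensor a downstream step would back-project to a point cloud.
result["depth"].save("road_scene_depth.png")
```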
Claims 3, 8, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sasayama, Pub. No. WO 2022190314 A1, in view of Theverapperuma, Pub. No. US 20220024485 A1, and further in view of Zhou, Pub. No. CN 115631205 A.

With regard to claims 3 and 11: Sasayama and Theverapperuma do not disclose the computer-implemented method of claim 1, wherein the semantic segmentation model is a Universal Segmentation (UniSeg) model. However, Zhou discloses this aspect ("Optionally, the image segmentation model training and the image segmentation method can be respectively realized on different servers. Specifically, the first server stores a plurality of data sets, different data sets using different category systems and having different category name sets. The first server uses a plurality of data sets with different category name sets to train the image segmentation model, obtaining a universal image segmentation model suitable for multiple different category system scenes, and deploying the image segmentation model to the second server. The second server provides the image segmentation service to the outside, obtaining the image to be processed and the category name to be selected uploaded by the end-side device, and executing the processing flow of the image segmentation method: inputting the image and the category name into the image segmentation model, extracting the image features of the image through the image segmentation model, mapping the category name to a text embedding vector, and, according to the image features and text embedding vector, determining the position mask of the image and the category information corresponding to the position mask. The second server outputs the position mask of the image and the category information corresponding to the position mask."). It would have been obvious to one of ordinary skill in the art, at the time of filing, to apply Zhou to Sasayama and Theverapperuma so the system can use a Universal Segmentation model to accurately and efficiently segment the road and the ROI from the image.

With regard to claims 8 and 16: Sasayama, Theverapperuma, and Zhou disclose the computer-implemented method of claim 3, wherein the semantic segmentation model generates semantic maps (Sasayama: "As shown in FIG. 1, the image analysis device 10 includes an area division unit 11, a depth estimation unit 12, and a detection unit 13. The region division unit 11 performs region division (also referred to as region recognition or segmentation) on the above-described image of the surrounding road surface, and generates a region division result in which the image is classified into image regions of a plurality of categories. Semantic Segmentation using a Fully Convolutional Network (FCN), for example, can be used as the region segmentation technique. Semantic segmentation performs class classification (inference) on a pixel-by-pixel basis for an input image, and labels each classified pixel with a category to divide the image into image regions of multiple categories, which are output as segmentation results. The segmentation result is, for example, an image in which each segmented image region is color-coded. Note that the vehicle-mounted camera may, for example, capture the driving situation of the vehicle in real time at a predetermined frame rate and input the captured image to the image analysis device 10. The image analysis apparatus 10 acquires an image input at a predetermined frame rate, classifies objects included in the image, and outputs the classified image as a result of segmentation at a predetermined frame rate. FIG. 3 is a diagram showing the result of segmentation generated from the image in FIG. 2. For the sake of explanation, in FIG. 3, the segmentation result is superimposed on the captured image. Also, in FIG. 3, dashed-dotted lines have been added to respectively enclose a plurality of image areas for clarity of each image area. The multiple categories include road categories. In the example of FIG. 3, the input image is divided into two image areas, a road area 1A (check pattern hatched area) and a vehicle area 2A (diagonal hatched area), which are road categories.") that include road scene attributes and road categories that are employed to select the ROI (Sasayama: "In this way, by using the region division result and the depth estimation information, it is possible to detect unevenness for each image region divided into categories. As a result, for example, when driving on a narrow road with ditches on the left and right sides, it is possible to recognize unevenness on the road surface and prevent derailment into the ditches and contact with curbs. By analyzing images from a plurality of cameras with the image analysis device 10, it is possible to recognize irregularities on the road surface around the vehicle and operate the vehicle more safely. It is preferable that the detection unit 13 selectively detects unevenness present on the road surface within the road area 1A among the plurality of divided categories. As a result, it is possible to selectively detect irregularities on the road on which the vehicle 2 is supposed to enter, and to appropriately alert the driver.").
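The claims 8 and 16 mapping combines the semantic map's road category with depth to pick the ROI. A minimal sketch of that selection follows; the label id and the 30 m distance cut-off are invented for illustration.

```python
import numpy as np

ROAD = 7  # hypothetical integer label for the "road" category

def select_roi(semantic_map: np.ndarray, depth_m: np.ndarray,
               max_dist_m: float = 30.0) -> np.ndarray:
    """Boolean ROI mask: road-labelled pixels closer than max_dist_m."""
    return (semantic_map == ROAD) & (depth_m <= max_dist_m)

semantic_map = np.array([[7, 7, 2],
                         [7, 2, 2]])
depth_m = np.array([[10.0, 40.0, 12.0],
                    [ 8.0,  9.0, 50.0]])
print(select_roi(semantic_map, depth_m))
# [[ True False False]
#  [ True False False]]
```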
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Sasayama, Pub. No. WO 2022190314 A1, in view of Theverapperuma, Pub. No. US 20220024485 A1, and further in view of Pasihapaori, Pub. No. GB 2610881 A.

With regard to claim 19: Sasayama and Theverapperuma do not disclose the aspect wherein coordinates of the predicted road defect levels are broadcast to other vehicles when a vehicle implementing the system for road defect level prediction approaches the coordinates. However, Pasihapaori discloses this aspect ("Initially the vehicle is operating in the CRUISE mode, meaning the vehicle drives autonomously or otherwise along a road segment where there is no prior information of any pre-existing road defects, and/or no signal is received from the mobile command hub 404 and/or localised network node 406 telling the vehicle to enter the AWARE state. The vehicle 402 can perform method step 302 of Figure 3 during this time, acting as the first vehicle attempting to identify road defects. In other words, the vehicle scans for road defects in a 'lower priority' mode using the first neural network. Computational and data-transfer resources are therefore prioritised for other core functionalities of the vehicle 402 such as a perception neural network and navigation. Next, either a communication is received by the vehicle 402 indicating there is an existing (identified) road defect up ahead in the road surface, or the vehicle 402 has stored pre-existing knowledge of an upcoming road defect, which may include location coordinates of the road defect. In either case, a control signal is generated to activate the AWARE mode, meaning the additional sensory information is collected by the plurality of cameras 160 (the second input data) and optionally the thermal sensor 170. The second neural network, which includes larger AI models requiring more data bandwidth and computational requirements, is also activated for the limited stretch of the road including the road defect, and takes priority over the operation of the first neural network. In Figure 6 two road defects D1 and D2 are shown. The vehicle 402 observes the road defects in detail using stereoscopic imaging with the plurality of cameras 160, and optionally using the thermal sensor 170. The vehicle 402 relays the information captured (including the second input data), and/or the determination of the dimension made by the second neural network, back to the localised information node and/or mobile command hub. The above described method can be used to perform surveys of road defects in an automated fashion, providing dimension data for the road defects. In some embodiments, the dimension data determined by the second neural network may be communicated to other vehicles passing by that road defect, in order to provide the other vehicles with a more detailed understanding of the road defect, which may lead to, for example, the other vehicles taking avoiding action to avoid the road defect, thus preventing potential damage to the vehicles."). It would have been obvious to one of ordinary skill in the art, at the time of filing, to apply Pasihapaori to Sasayama and Theverapperuma so the road defect information can be communicated to other vehicles to raise awareness about the defects and prevent accidents.
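Claim 19's broadcast step could look like the following sketch. A production system would use a V2X stack (DSRC or C-V2X); the UDP/JSON message, port, and field names here are invented for illustration.

```python
import json
import socket

def broadcast_defect(lat: float, lon: float, severity: int,
                     port: int = 47500) -> None:
    """Send a one-shot road-defect notice to the local broadcast address."""
    msg = json.dumps({"type": "road_defect", "lat": lat, "lon": lon,
                      "severity": severity}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, ("255.255.255.255", port))

broadcast_defect(37.4220, -122.0841, severity=2)  # example coordinates
```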
Pertinent Arts

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Hanko, Pub. No. US 20240218613 A1: With the development of modern road networks, the maintenance and management of pavement has become increasingly prominent. Conventionally, manual detection has been used to evaluate road conditions, such as holes and cracks in roads. In some conventional road condition management approaches, an engineer visually checks the number of cracks and calculates a crack ratio for each portion of road. These approaches suffer from low efficiency and interference from subjective factors, particularly when large road networks are considered.

Tran, Pub. No. US 20240317254 A1: A smart car method for autonomous navigation that creates a 3D model based on outputs of the camera and sensor, accesses a high-definition map database, and generates a trip with travel segments from origin to destination; detects a freeway entrance or an exit lane based on a road marking using a camera and a sensor; if the travel segment passes the freeway entrance or exit, follows the current lane without exiting; and otherwise follows the freeway entrance or exit.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DI XIAO, whose telephone number is (571) 270-1758. The examiner can normally be reached 9 AM-5 PM EST, M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DI XIAO/
Primary Examiner, Art Unit 2178

Prosecution Timeline

Mar 26, 2024: Application Filed
Jan 15, 2026: Non-Final Rejection (§103)
Mar 20, 2026: Interview Requested
Mar 26, 2026: Applicant Interview (Telephonic)
Apr 02, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599341: AUTONOMOUS, CONSENT DRIVEN AND GENERATIVE DEVICE, SYSTEM AND METHOD THAT PROMOTES USER PRIVACY, SELF-KNOWLEDGE AND WELL-BEING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597519: METHODS FOR CHARACTERIZING AND TREATING A CANCER TYPE USING CANCER IMAGES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12588967: PRESENTATION OF PATIENT INFORMATION FOR CARDIAC SHUNTING PROCEDURES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586456: SYSTEMS AND METHODS FOR PROVIDING SECURITY SYSTEM INFORMATION USING AUGMENTED REALITY EFFECTS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579773: DISPLAY APPARATUS AND DISPLAY METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77% (99% with interview, +21.7%)
Median Time to Grant: 3y 4m
PTA Risk: Low

Based on 600 resolved cases by this examiner. Grant probability is derived from the career allow rate.
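Assuming the headline figures are simple ratios of the career counts shown above (the dashboard's exact methodology is not published), the arithmetic works out as follows:

```python
granted, resolved = 463, 600
allow_rate = granted / resolved                 # 0.7717 -> shown as "77%"
interview_lift = 0.217                          # +21.7% lift reported above
with_interview = min(allow_rate + interview_lift, 1.0)   # 0.9887 -> "99%"
print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
```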
