Prosecution Insights
Last updated: April 19, 2026
Application No. 17/864,943

METHOD AND APPARATUS FOR ESTIMATING POSITION OF MOVING OBJECT

Status: Non-Final OA (§103)
Filed: Jul 14, 2022
Examiner: SANTOS, AARRON EDUARDO
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Non-Final)
Grant Probability: 45% (Moderate)
Estimated OA Rounds: 4-5
Estimated Time to Grant: 3y 4m
Grant Probability With Interview: 58%

Examiner Intelligence

Career Allow Rate: 45% (grants 45% of resolved cases; 59 granted / 131 resolved; -7.0% vs TC avg)
Interview Lift: +12.8% (moderate) among resolved cases with interview
Avg Prosecution: 3y 4m (typical timeline)
Career History: 194 total applications across all art units (63 currently pending)

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 131 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01-02-2026 has been entered.

Response to Amendment

Claims 1, 10, and 19 have been amended. No claims have been added. No claims have been cancelled. Claims 1-20 are currently pending. The official correspondence below is a non-final Office action following the RCE.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akbarzadeh (US 11713978 B2) in view of Hirzer (US 10546387 B2).

REGARDING CLAIM 1, Akbarzadeh discloses, generating two-dimensional (2D) feature point information (Akbarzadeh: (Col. 72, Ln. 55-65)) in a landmark-based probability map from a surrounding image (Akbarzadeh: (Col. 18, Ln. 34 - Col. 19, Ln. 3) a stream of 2D camera images ... The base conversion 506 may correspond to the landmarks … the 3D landmark locations may be converted—using base conversion 506—to a map format to generate base layer of the map 504; (Col. 11, Ln. 39-50) a raw output corresponds to a confidences for each point) acquired by a capturing device mounted on a moving object (Akbarzadeh: (Col. 4, Ln. 5-7)); obtaining landmark-based three-dimensional (3D) feature point information (Akbarzadeh: (Col. 18, Ln. 36-38)) from high-definition (HD) map data of a vicinity of the moving object (Akbarzadeh: (Col. 2, Ln. 62-63); (Col. 3, Ln. 15-19); (Col. 13, Ln. 59-64)); converting one of the 2D feature point information of the surrounding image to 3D data (Akbarzadeh: (Col. 8, Ln. 24-27)) or converting the 3D feature point information of the HD map data to 2D (Akbarzadeh: (Col. 28, Ln. 64-65) 2D landmarks generated from the 3D landmarks); determining a first similarity between the 2D feature point information and the 2D data or determining a second similarity between the 3D feature point information and the 3D data (Akbarzadeh: (Col. 28, Ln. 24-35) the 3D landmark locations from multiple maps 504 may be fused together to generate a final representation … the separate map layers may be fused to generate the aggregate map layers … where a first map layer includes data that matches up—within some threshold similarity—to data of another map layer, the matching data may be used (e.g., averaged) to generate a final representation; (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13); see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps) by multiplying summed probabilities corresponding to each landmark determined based on the landmark-based probability map (Akbarzadeh: for some number of iterations (e.g., 100, 1000, 2000, etc.). Once completed for the number of iterations, the layout that had the most agreement may be used ... with respect to FIG. 6F, a cost of the error may be computed for each of the pose links 608 other than the minimum sampled—e.g., according to equation (4), below [Eq. 4: cost = sqrt(∑(u^2/s^2))] (Col. 26, Ln. 50-57); pose graph 650 of FIG. 6G may undergo an optimization process, such as a non-linear optimization process (e.g., a bundle adjustment process) with the goal of minimizing the sum of squared costs of inliers—e.g., using the computed cost function described herein with respect to FIG. 6F (Col. 27, Ln. 13-18)); and estimating a position of the moving object (Akbarzadeh: (Col. 3, Ln. 16-20)).
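For reference, the pose-link cost of equation (4) quoted above (cost = sqrt(∑(u²/s²))) can be sketched in Python. Reading u as a per-dimension residual and s as its standard deviation is an interpretive assumption, and the function names are illustrative, not from the record:

```python
import math

def pose_link_cost(residuals, sigmas):
    # Equation (4) as quoted: cost = sqrt(sum(u^2 / s^2)).
    # u is assumed to be a residual and s its standard deviation.
    return math.sqrt(sum((u * u) / (s * s) for u, s in zip(residuals, sigmas)))

def total_inlier_cost(links):
    # The pose-graph optimization is described as minimizing the
    # sum of squared link costs over inliers; each link is a
    # (residuals, sigmas) pair here.
    return sum(pose_link_cost(u, s) ** 2 for u, s in links)

# Two hypothetical pose links with small residuals
links = [([0.1, -0.2], [0.5, 0.5]), ([0.05, 0.0], [0.5, 0.5])]
total = total_inlier_cost(links)
```

Dividing each residual by its standard deviation makes the cost a Mahalanobis-style distance, so links with noisier measurements are penalized less for the same error.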
The examiner respectfully submits, Akbarzadeh discloses based on the first similarity or the second similarity (Akbarzadeh: ((Col. 21, Ln. 57 - Col. 23, Ln. 4), (Col. 23, Ln. 5 - Col. 25, Ln. 24)) examiner: disclose aggregation of scores/weights (sum) and averages of landmarks in a map; (Col. 32, Ln. 26-31) the fused map to one or more vehicles for use in executing one or more operations. For example, the map data 108 representative of the final fused HD map may be transmitted to one or more vehicle 1500 for localization, path planning, control decisions, and/or other operations; Layering 2D and 3D data (Col. 6, Ln. 22-55; Col. 28, Ln. 23-52; Col. 34, Ln. 65 - Col. 35, Ln. 5; Col. 35, Ln. 39-43; Col. 72, Ln. 62 - Col. 73, Ln. 2) to determine similarities). However, should it be found that Akbarzadeh fails to disclose, based on the first similarity or the second similarity, in the same field of endeavor, Hirzer discloses, based on the first similarity or the second similarity (Hirzer: (Col. 16, Ln. 39-45) The pose probability block 1108 then determines a respective probability that each respective 3D rendering matches or aligns with the segmented image. The pose probability block 1108 combines the respective probabilities (i.e., the column likelihood function) over all of the plurality of regions (i.e., all of the integral columns) to provide a pose probability), for the benefit of selecting a pose from the plurality of poses, such that the 3D rendering corresponding to the selected pose aligns with the segmented image. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Akbarzadeh to include determining a confidence associated with a vehicle pose taught by Hirzer. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to select a pose from the plurality of poses, such that the 3D rendering corresponding to the selected pose aligns with the segmented image.

REGARDING CLAIM 2, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, the 2D feature point information of the surrounding image (Akbarzadeh: (Col. 18, Ln. 35-38)) is obtained according to a landmark (Akbarzadeh: (Col. 12, Ln. 56-57)), based on deep neural network (DNN)-based semantic segmentation (Akbarzadeh: (Col. 10, Ln. 54-58); (Col. 11, Ln. 17-20)).

REGARDING CLAIM 3, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, receiving 3D feature point information on a world domain for a landmark (Akbarzadeh: (Col. 22, Ln. 36-52)) in the vicinity of the moving object (Akbarzadeh: (Col. 22, Ln. 36-52)) from a HD map database based on position information of the moving object (Akbarzadeh: (Col. 22, Ln. 36-52)); and converting the 3D feature point information on the world domain into a local domain for the capturing device (Akbarzadeh: (Col. 18, Ln. 67 - Col. 19, Ln. 5)).

REGARDING CLAIM 4, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, converting the 2D feature point information of the surrounding image to a form of a 3D probability map (Akbarzadeh: (Col. 18, Ln. 25-29); (Col. 22, Ln. 2-8)) based on inverse perspective mapping (Akbarzadeh: (Col. 10, Ln. 49-53); (Col. 19, Ln. 16-20)).

REGARDING CLAIM 5, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, projecting the 3D feature point information of the HD map data onto a 2D probability map (Akbarzadeh: (Col. 22, Ln. 37-41)) obtained from the surrounding image, based on perspective mapping (Akbarzadeh: (Col. 23, Ln. 11-14)).
REGARDING CLAIM 6, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, summing first probabilities of the 2D data corresponding to each landmark (Akbarzadeh: (Col. 18, Ln. 63 - Col. 19, Ln. 1-8) The base conversion 506 may correspond to the landmarks … the 3D landmark locations may be converted—using base conversion 506—to a map format to generate base layer of the map 504(1). In addition to the landmark locations, the base layer 520 may further represent the trajectories or paths (e.g., global or relative) of the vehicle 1500 that generated the mapstream 210(1). When generating the base layer 520, a 1:1 mapping between the aggregated input frames), in the probability map of the 2D feature point information (Akbarzadeh: (Col. 22, Ln. 28-35) a pose from a trajectory of a first section may be known, and the pose of the trajectory from the second section may be sampled with respect to the first section using localization to determine—once the alignment between landmarks is achieved—the relative poses between the two. This process may be repeated for each of the poses of the sections such that pose links between the poses are generated); and calculating the first similarity by multiplying summed first probabilities corresponding to each landmark (Akbarzadeh: (Col. 34, Ln. 65 - Col. 35, Ln. 5); (Col. 35, Ln. 39-43); (Col. 72, Ln. 62 - Col. 73, Ln. 2)), wherein the determining of the second similarity comprises: summing the second probabilities of the 3D feature point information corresponding to each landmark, in a 3D probability map of the 3D data (Akbarzadeh: (Col. 8, Ln. 18-28); (Col. 18, Ln. 34-43)); and calculating the first similarity by multiplying the summed first probabilities corresponding to each landmark (Akbarzadeh: (Col. 51, Ln. 46-63)), wherein the determining of the second similarity comprises: summing the second probabilities of the 3D feature point information corresponding to each landmark, in a 3D probability map of the 3D data (Akbarzadeh: (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities); and calculating the second similarity by multiplying summed second probabilities corresponding to each landmark (Akbarzadeh: the cost for that particular point(s) may be set to a max cost. The geometric cost and the semantic cost may then be used together to determine a final cost for each pose (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities).

Akbarzadeh does not explicitly recite the terminology "multiplying summed". However, Akbarzadeh discloses an accumulation/aggregation of matches and predictions, which, the examiner respectfully submits, is parallel in service and result for a determination over an accumulation of results.
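The "multiplying summed probabilities" limitation at issue in claims 1 and 6 can be illustrated with a small sketch: per landmark class, sum the probability-map values sampled at that landmark's projected feature points, then multiply the per-landmark sums. The dict layout and names here are assumptions for illustration, not structures from the application:

```python
import numpy as np

def landmark_similarity(prob_maps, projected_points):
    # prob_maps: landmark name -> 2D array of per-pixel probabilities
    # projected_points: landmark name -> list of (row, col) pixels
    # where that landmark's feature points land after projection.
    similarity = 1.0
    for landmark, pts in projected_points.items():
        prob_map = prob_maps[landmark]
        # Sum probabilities for this landmark's points...
        summed = sum(prob_map[r, c] for r, c in pts)
        # ...then multiply the summed values across landmarks.
        similarity *= summed
    return similarity
```

The product form means a pose scores high only if every landmark class finds support in the image: a single landmark with near-zero summed probability collapses the whole similarity.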
REGARDING CLAIM 7, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, updating a result of estimating the position of the moving object according to a particle filter or a maximum likelihood (ML) optimization scheme, based on the first similarity or the second similarity (Akbarzadeh: where semantic information—e.g., lane line type, pole, sign type, etc.—does not match for a particular point(s), the cost for that particular point(s) may be set to a max cost. The geometric cost and the semantic cost may then be used together to determine a final cost for each pose (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps).

REGARDING CLAIM 8, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, the moving object is an autonomous vehicle or a vehicle supporting advanced driver-assistance systems (ADAS) (Akbarzadeh: (Col. 5, Ln. 4-6)).

REGARDING CLAIM 9, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, the landmark comprises any one or any combination of a white lane line, a yellow lane line, a crosswalk, a speed bump, a traffic light, and a traffic sign (Akbarzadeh: (Col. 11, Ln. 20-36)).

REGARDING CLAIM 10, Akbarzadeh discloses, generating two-dimensional (2D) feature point information (Akbarzadeh: (Col. 72, Ln. 55-65) one or more sensors of a vehicle, outputs indicative of locations in two-dimensional (2D) image space corresponding to detected landmarks; generating a distance function representation of the detected landmarks based at least in part on the locations; generating a cost space by, for at least two poses of a plurality of poses of the vehicle represented in the cost space: projecting map landmarks corresponding to a map into the 2D image space to generate projected map landmarks) in a landmark-based probability map from a surrounding image (Akbarzadeh: (Col. 18, Ln. 34 - Col. 19, Ln. 3) The camera layer may contain information obtained by executing perception—e.g., via the DNNs 202—on a stream of 2D camera images ... The base conversion 506 may correspond to the landmarks—e.g., lane lines, road boundary lines, signs, poles, trees, other vertical structures or objects, crosswalks, etc.—as determined using perception via the DNN(s) 202. For example, the 3D landmark locations may be converted—using base conversion 506—to a map format to generate base layer 520 (or “camera layer” or “perception layer”) of the map 504) acquired by a capturing device mounted on a moving object (Akbarzadeh: (Col. 4, Ln. 5-7) FIGS. 5E-5F depict example visualizations of registering two base layer map segments using a forward facing camera and a rearward facing camera); obtaining landmark-based three-dimensional (3D) feature point information (Akbarzadeh: (Col. 18, Ln. 36-38)) from high-definition (HD) map data of a vicinity of the moving object (Akbarzadeh: (Col. 2, Ln. 62-63); (Col. 3, Ln. 15-19); (Col. 13, Ln. 59-64)); predicting positions of particles corresponding to candidate positions of the moving object (Akbarzadeh: (Col. 35, Ln. 10-22)); projecting, for each of the positions of the particles, the 3D feature point information onto the probability map obtained from the surrounding image to obtain projected feature point information (Akbarzadeh: (Col. 22, Ln. 37-41); (Col.
22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps); determining, for each of the positions of the particles, a similarity between the projected feature point information and the 2D feature point information (Akbarzadeh: (Col. 28, Ln. 24-35); (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps) by multiplying summed probabilities corresponding to each landmark determined based on the landmark-based probability map (Akbarzadeh: for some number of iterations (e.g., 100, 1000, 2000, etc.). Once completed for the number of iterations, the layout that had the most agreement may be used ... with respect to FIG. 6F, a cost of the error may be computed for each of the pose links 608 other than the minimum sampled—e.g., according to equation (4), below [Eq. 4: cost = sqrt(∑(u^2/s^2))] (Col. 26, Ln. 50-57); pose graph 650 of FIG. 6G may undergo an optimization process, such as a non-linear optimization process (e.g., a bundle adjustment process) with the goal of minimizing the sum of squared costs of inliers—e.g., using the computed cost function described herein with respect to FIG. 6F (Col. 27, Ln. 13-18)); and estimating a position of the moving object (Akbarzadeh: (Col. 3, Ln. 16-20)).

The examiner respectfully submits, Akbarzadeh discloses rearranging the particles based on the similarity (Akbarzadeh: ((Col. 21, Ln. 57 - Col. 23, Ln. 4), (Col. 23, Ln. 5 - Col. 25, Ln. 24)) examiner: disclose aggregation of scores/weights (sum) and averages of landmarks in a map; (Col. 32, Ln. 26-31) the fused map to one or more vehicles for use in executing one or more operations. For example, the map data 108 representative of the final fused HD map may be transmitted to one or more vehicle 1500 for localization, path planning, control decisions, and/or other operations; Layering 2D and 3D data (Col. 6, Ln. 22-55; Col. 28, Ln. 23-52; Col. 34, Ln. 65 - Col. 35, Ln. 5; Col. 35, Ln. 39-43; Col. 72, Ln. 62 - Col. 73, Ln. 2) to determine similarities; The relative pose links are then used to align the maps 504 corresponding to each of the mapstreams 210 such that landmarks and other features—e.g., points clouds, LiDAR image maps, RADAR image maps, etc.—are aligned in a final, aggregate, HD map (Col. 22, Ln. 4-8)). However, should it be found that Akbarzadeh fails to disclose, based on the similarity, in the same field of endeavor, Hirzer discloses, based on the similarity (Hirzer: (Col. 16, Ln. 39-45) The pose probability block 1108 then determines a respective probability that each respective 3D rendering matches or aligns with the segmented image. The pose probability block 1108 combines the respective probabilities (i.e., the column likelihood function) over all of the plurality of regions (i.e., all of the integral columns) to provide a pose probability; (Col. 4, Ln. 3-13) determine a pose of an image capture device at any given time using a 3D tracker, such as visual odometry tracking or simultaneous localization and mapping (SLAM) ... The pose determination based on semantic segmentation of the captured image is considered to be ground truth, and is used to update the 3D tracker), for the benefit of selecting a pose from the plurality of poses, such that the 3D rendering corresponding to the selected pose aligns with the segmented image. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Akbarzadeh to include determining a confidence associated with a vehicle pose and updating segmentations taught by Hirzer. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to select a pose from the plurality of poses, such that the 3D rendering corresponding to the selected pose aligns with the segmented image.

REGARDING CLAIM 11, Akbarzadeh, as modified, remains as applied above to claim 10, and further, Akbarzadeh also discloses, the 2D feature point information (Akbarzadeh: (Col. 18, Ln. 35-38)) is obtained according to a landmark (Akbarzadeh: (Col. 12, Ln. 56-57)), based on deep neural network (DNN)-based semantic segmentation (Akbarzadeh: (Col. 10, Ln. 54-58); (Col. 11, Ln. 17-20)).

REGARDING CLAIM 12, Akbarzadeh, as modified, remains as applied above to claim 10, and further, Akbarzadeh also discloses, receiving 3D feature point information on a world domain for a landmark (Akbarzadeh: (Col. 22, Ln. 36-52)) in the vicinity of the moving object (Akbarzadeh: (Col. 22, Ln. 36-52)) from a HD map database based on position information of the moving object (Akbarzadeh: (Col. 22, Ln. 36-52)); and converting the 3D feature point information on the world domain into a local domain for the capturing device (Akbarzadeh: (Col. 18, Ln. 67 - Col. 19, Ln. 5)).
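Claims 3 and 12 recite converting world-domain 3D feature points into the capturing device's local domain. This is a standard rigid-body transform; the sketch below uses the common convention p_local = Rᵀ(p_world − t), where R and t give the device's pose in the world frame. The references' exact frame conventions may differ:

```python
import numpy as np

def world_to_local(points_world, R_wc, t_wc):
    # points_world: (N, 3) world-frame feature points.
    # R_wc, t_wc: rotation matrix and translation of the capturing
    # device in the world frame (illustrative convention).
    pts = np.asarray(points_world, dtype=float)
    R = np.asarray(R_wc, dtype=float)
    t = np.asarray(t_wc, dtype=float)
    # Invert the pose: subtract the device position, rotate into
    # the device's axes.
    return (R.T @ (pts - t).T).T
```

With the device at t = (1, 2, 3) and no rotation, a world point (2, 2, 3) lands at (1, 0, 0) in the local frame, i.e., one unit along the device's x-axis.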
REGARDING CLAIM 13, Akbarzadeh, as modified, remains as applied above to claim 10, and further, Akbarzadeh also discloses, predicting the positions of the particles (Akbarzadeh: (Col. 35, Ln. 10-22)) based on position information of particles rearranged at a previous point in time and a displacement of the moving object from the previous point in time (Akbarzadeh: [ABS]).

REGARDING CLAIM 14, Akbarzadeh, as modified, remains as applied above to claim 10, and further, Akbarzadeh also discloses, the 3D feature point information is projected onto a 2D probability map (Akbarzadeh: (Col. 22, Ln. 37-41)) obtained from the surrounding image based on perspective mapping (Akbarzadeh: (Col. 23, Ln. 11-14)).

REGARDING CLAIM 15, Akbarzadeh, as modified, remains as applied above to claim 10, and further, Akbarzadeh also discloses, summing probabilities of the projected feature point information corresponding to each landmark (Akbarzadeh: (Col. 18, Ln. 63 - Col. 19, Ln. 1-8)), in the probability map (Akbarzadeh: (Col. 22, Ln. 28-35)); and multiplying summed probabilities corresponding to respective landmarks (Akbarzadeh: (Col. 34, Ln. 65 - Col. 35, Ln. 5); (Col. 35, Ln. 39-43); (Col. 72, Ln. 62 - Col. 73, Ln. 2)). Akbarzadeh does not explicitly recite the terminology "multiplying summed". However, Akbarzadeh discloses an accumulation/aggregation of matches and predictions, which, the examiner respectfully submits, is parallel in service and result for a determination over an accumulation of results.

REGARDING CLAIM 16, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, setting weights for the respective positions of the particles according to the similarity (Akbarzadeh: (Col. 11, Ln. 39-42); (Col. 16, Ln. 25-50)); rearranging the particles according to the weights (Akbarzadeh: (Col. 16, Ln. 25-50); see (Col. 32, Ln. 47 - Col. 33, Ln. 7) for more cost and filtering and updating (rearranging)); and estimating the position of the moving object by calculating a mean value of the rearranged particles (Akbarzadeh: ((Col. 21, Ln. 57 - Col. 23, Ln. 4), (Col. 23, Ln. 5 - Col. 25, Ln. 24)) examiner: disclose aggregation of scores/weights (sum) and averages of landmarks in a map; (Col. 32, Ln. 26-31)).

REGARDING CLAIM 17, Akbarzadeh, as modified, remains as applied above to claim 10, and further, Akbarzadeh also discloses, the moving object is an autonomous vehicle or a vehicle supporting advanced driver-assistance systems (ADAS) (Akbarzadeh: (Col. 5, Ln. 4-6)).

REGARDING CLAIM 18, Akbarzadeh, as modified, remains as applied above to claim 1, and further, Akbarzadeh also discloses, A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the operating method of claim 1 (Akbarzadeh: (Col. 5, Ln. 63-67) Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory).

REGARDING CLAIM 19, Akbarzadeh discloses, a communication module (Akbarzadeh: (Col. 5, Ln. 15-16)) configured to receive high-definition (HD) map data of a vicinity of a moving object (Akbarzadeh: (Col. 6, Ln. 27-30); (Col. 31, Ln. 19-25)) and a surrounding image acquired by a capturing device mounted on the moving object (Akbarzadeh: (Col. 53, Ln. 37-39); (Col. 63, Ln. 1-4)); a memory configured to store computer-executable instructions (Akbarzadeh: (Col. 5, Ln. 63-67) Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software.
For instance, various functions may be carried out by a processor executing instructions stored in memory), the HD map data, and the surrounding image (Akbarzadeh: (Col. 6, Ln. 26-55)); and a processor configured to execute the computer-executable instructions (Akbarzadeh: (Col. 5, Ln. 63-67) Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory) to configure the processor to: generate two-dimensional (2D) feature point information (Akbarzadeh: (Col. 72, Ln. 55-65)) of a landmark-based probability map from the surrounding image (Akbarzadeh: (Col. 18, Ln. 34 - Col. 19, Ln. 3)), obtain landmark-based three-dimensional (3D) feature point information (Akbarzadeh: (Col. 18, Ln. 36-38) converting the (2D and/or 3D) detections or outputs 204 of the DNNs 202 into 3D landmarks and paths) from the HD map data (Akbarzadeh: (Col. 8, Ln. 24-27) the sensor data 102 may be converted—e.g., using data converter 206—from two-dimensional (2D) coordinate space (e.g., image space) to 3D coordinate space, and then included in the mapstream), convert the 2D feature point information of the surrounding image to 3D data (Akbarzadeh: (Col. 8, Ln. 24-27) the sensor data 102 may be converted—e.g., using data converter 206—from two-dimensional (2D) coordinate space (e.g., image space) to 3D coordinate space, and then included in the mapstream) or the 3D feature point information of the HD map data to 2D data (Akbarzadeh: (Col. 28, Ln. 64-65) 2D landmarks generated from the 3D landmarks), determine a first similarity between the 2D feature point information and the 2D data or determine a second similarity between the 3D feature point information and the 3D data (Akbarzadeh: (Col. 28, Ln. 24-35); (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps) by multiplying summed probabilities corresponding to each landmark determined based on the landmark-based probability map (Akbarzadeh: for some number of iterations (e.g., 100, 1000, 2000, etc.). Once completed for the number of iterations, the layout that had the most agreement may be used ... with respect to FIG. 6F, a cost of the error may be computed for each of the pose links 608 other than the minimum sampled—e.g., according to equation (4), below [Eq. 4: cost = sqrt(∑(u^2/s^2))] (Col. 26, Ln. 50-57); pose graph 650 of FIG. 6G may undergo an optimization process, such as a non-linear optimization process (e.g., a bundle adjustment process) with the goal of minimizing the sum of squared costs of inliers—e.g., using the computed cost function described herein with respect to FIG. 6F (Col. 27, Ln. 13-18)); and estimate a position of the moving object (Akbarzadeh: (Col. 3, Ln. 16-20)).

The examiner respectfully submits, Akbarzadeh discloses based on the similarity (Akbarzadeh: ((Col. 21, Ln. 57 - Col. 23, Ln. 4), (Col. 23, Ln. 5 - Col. 25, Ln. 24)) examiner: disclose aggregation of scores/weights (sum) and averages of landmarks in a map; (Col. 32, Ln. 26-31) the fused map to one or more vehicles for use in executing one or more operations. For example, the map data 108 representative of the final fused HD map may be transmitted to one or more vehicle 1500 for localization, path planning, control decisions, and/or other operations).
However, should it be found that Akbarzadeh fails to disclose, based on the similarity, in the same field of endeavor, Hirzer discloses, based on the similarity (Hirzer: (Col. 16, Ln. 39-45) The pose probability block 1108 then determines a respective probability that each respective 3D rendering matches or aligns with the segmented image. The pose probability block 1108 combines the respective probabilities (i.e., the column likelihood function) over all of the plurality of regions (i.e., all of the integral columns) to provide a pose probability), for the benefit of selecting a pose from the plurality of poses, such that the 3D rendering corresponding to the selected pose aligns with the segmented image. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Akbarzadeh to include determining a confidence associated with a vehicle pose taught by Hirzer. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to select a pose from the plurality of poses, such that the 3D rendering corresponding to the selected pose aligns with the segmented image.

REGARDING CLAIM 20, Akbarzadeh, as modified, remains as applied above to claim 19, and further, Akbarzadeh also discloses, sum the probabilities of the 2D data corresponding to each landmark (Akbarzadeh: (Col. 18, Ln. 63 - Col. 19, Ln. 1-8)), in the probability map of the 2D feature point information (Akbarzadeh: (Col. 22, Ln. 28-35); (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps), and calculate the first similarity by multiplying summed first probabilities corresponding to respective landmarks (Akbarzadeh: (Col. 34, Ln. 65 - Col. 35, Ln. 5); (Col. 35, Ln. 39-43); (Col. 72, Ln. 62 - Col. 73, Ln. 2); (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps), sum second probabilities of the 3D feature point information corresponding to each landmark, in the 3D probability map of the 3D data (Akbarzadeh: (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps); and calculate the second similarity by multiplying summed second probabilities corresponding to each landmark (Akbarzadeh: (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col. 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col. 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps).

Alternate or Additional Claim Rejections - 35 USC § 103

Claim(s) 1 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akbarzadeh (US 11713978 B2) in view of Hirzer (US 10546387 B2).

REGARDING CLAIM 1, Akbarzadeh discloses, generating two-dimensional (2D) feature point information (Akbarzadeh: (Col. 72, Ln. 55-65)); obtaining landmark-based three-dimensional (3D) feature point information (Akbarzadeh: (Col. 18, Ln. 36-38)) from high-definition (HD) map data of a vicinity of the moving object (Akbarzadeh: (Col. 2, Ln. 62-63); (Col. 3, Ln. 15-19); (Col. 13, Ln. 59-64)); converting one of the 2D feature point information of the surrounding image to 3D data (Akbarzadeh: (Col. 8, Ln. 24-27)) or converting the 3D feature point information of the HD map data to 2D (Akbarzadeh: (Col. 28, Ln. 64-65) 2D landmarks generated from the 3D landmarks); determining a first similarity between the 2D feature point information and the 2D data or determining a second similarity between the 3D feature point information and the 3D data (Akbarzadeh: (Col. 28, Ln. 24-35) the 3D landmark locations from multiple maps 504 may be fused together to generate a final representation of each lane line, each road boundary, each sign, each pole, etc. Similarly, for LiDAR intensity maps, LiDAR elevation maps, LiDAR distance function images, RADAR distance function images, and/or other layers of the maps 504, the separate map layers may be fused to generate the aggregate map layers.
As such, where a first map layer includes data that matches up—within some threshold similarity—to data of another map layer, the matching data may be used (e.g., averaged) to generate a final representation; where semantic information—e.g., lane line type, pole, sign type, etc.—does not match for a particular point(s), the cost for that particular point(s) may be set to a max cost. The geometric cost and the semantic cost may then be used together to determine a final cost for each pose (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps) by multiplying summed probabilities corresponding to each landmark determined based on the landmark-based probability map (Akbarzadeh: for some number of iterations (e.g., 100, 1000, 2000, etc.). Once completed for the number of iterations, the layout that had the most agreement may be used ... with respect to FIG. 6F, a cost of the error may be computed for each of the pose links 608 other than the minimum sampled—e.g., according to equation (4), below [Eq. 4: cost = sqrt(∑(u^2/s^2))] (Col. 26, Ln. 50-57); pose graph 650 of FIG. 6G may undergo an optimization process, such as a non-linear optimization process (e.g., a bundle adjustment process) with the goal of minimizing the sum of squared costs of inliers—e.g., using the computed cost function described herein with respect to FIG. 6F (Col. 27, Ln. 13-18)); and estimating a position of the moving object (Akbarzadeh: (Col. 3, Ln. 16-20)). 
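For context, the disputed limitation, determining a similarity "by multiplying summed probabilities corresponding to each landmark," can be sketched as a minimal Python example. The data layout here (a mapping from landmark identifiers to per-feature-point match probabilities) is an assumed illustration of the claim language, not the applicant's or the reference's implementation.

```python
import math

def landmark_similarity(prob_map):
    """For each landmark, sum the match probabilities of its feature
    points; then multiply the per-landmark sums into a single
    similarity score, as the claim language recites."""
    return math.prod(sum(probs) for probs in prob_map.values())

# Two landmarks with summed probabilities 0.5 and 0.5: similarity = 0.25
print(landmark_similarity({"lane_line": [0.25, 0.25], "sign": [0.5]}))  # prints 0.25
```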
The examiner respectfully submits, Akbarzadeh discloses in a landmark-based probability map from a surrounding image (Akbarzadeh: (Col. 18, Ln. 34 - Col. 19, Ln. 3); (Col. 11, Ln. 39-50)); based on the first similarity or the second similarity (Akbarzadeh: ((Col. 21, Ln. 57 - Col. 23, Ln. 4), (Col. 23, Ln. 5 - Col. 25, Ln. 24)) examiner: disclose aggregation of scores/weights (sum) and averages of landmarks in a map; (Col. 32, Ln. 26-31) the fused map to one or more vehicles for use in executing one or more operations. For example, the map data 108 representative of the final fused HD map may be transmitted to one or more vehicle 1500 for localization, path planning, control decisions, and/or other operations; Layering 2D and 3D data (Col. 6, Ln. 22-55; Col. 28, Ln. 23-52; Col. 34, Ln. 65 - Col. 35, Ln. 5; Col. 35, Ln. 39-43; Col. 72, Ln. 62 - Col. 73, Ln. 2) to determine similarities). However, should it be found that Akbarzadeh fails to disclose, in a landmark-based probability map from a surrounding image, based on the first similarity or the second similarity, in the same field of endeavor, Hirzer discloses, in a landmark-based probability map from a surrounding image (Hirzer: (Col. 12, Ln. 44-59) FIG. 7F shows an example of a 3D rendering of a 2.5D city model corresponding to the captured image of FIG. 7A, as viewed from a certain pose hypothesis. A plurality of 3D renderings of the scene are generated. Each of the plurality of 3D renderings corresponding to one of a plurality of poses. The plurality of poses are chosen based on the initial pose. The plurality of poses including the initial pose, and the calibration data, position data or motion data are used to determine a pose search space containing the plurality of poses. For example, each of the plurality of 3D renderings can correspond to one of a plurality of pose hypotheses around and including an initial estimated 3D pose. The pose hypothesis sampling block 310 (FIG. 
3) selects the pose hypothesis for which the corresponding rendering optimally fits the semantic segmentation results (i.e., the probability maps). FIGS. 8A and 8B show another example of a captured image (FIG. 8A), and a corresponding segmented image as processed by the CNN or FCN. The individual probability maps for the image of FIG. 8A is omitted); based on the first similarity or the second similarity (Hirzer: (Col. 16, Ln. 39-45) The pose probability block 1108 then determines a respective probability that each respective 3D rendering matches or aligns with the segmented image. The pose probability block 1108 combines the respective probabilities (i.e., the column likelihood function) over all of the plurality of regions (i.e., all of the integral columns) to provide a pose probability), for the benefit of selecting a pose from the plurality of poses, such that the 3D rendering corresponding to the selected pose aligns with the segmented image. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Akbarzadeh to include determining a confidence associated with a vehicle pose taught by Hirzer. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to select a pose from the plurality of poses, such that the 3D rendering corresponding to the selected pose aligns with the segmented image. Claim(s) 1 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akbarzadeh (US 11713978 B2) in view of Yang (US 20180189578 A1). REGARDING CLAIM 1, Akbarzadeh discloses, generating two-dimensional (2D) feature point information (Akbarzadeh: (Col. 72, Ln. 55-65)) acquired by a capturing device mounted on a moving object (Akbarzadeh: (Col. 4, Ln. 5-7)); obtaining landmark-based three-dimensional (3D) feature point information (Akbarzadeh: (Col. 18, Ln. 
36-38)) from high-definition (HD) map data of a vicinity of the moving object (Akbarzadeh: (Col. 2, Ln. 62-63); (Col. 3, Ln. 15-19); (Col. 13, Ln. 59-64)); converting one of the 2D feature point information of the surrounding image to 3D data (Akbarzadeh: (Col. 8, Ln. 24-27)) or converting the 3D feature point information of the HD map data to 2D (Akbarzadeh: (Col. 28, Ln. 64-65) 2D landmarks generated from the 3D landmarks); determining a first similarity between the 2D feature point information and the 2D data or determining a second similarity between the 3D feature point information and the 3D data (Akbarzadeh: (Col. 28, Ln. 24-35); (Col. 22, Ln. 61-66); see at least (Col. 23, Ln. 26 - Col 24, Ln. 2; Col. 28, Ln. 23-52; Col. 29, Ln. 7 - Col. 30, Ln. 13) for comparing points, lines, and dots for matching and similarities, aggregated matching points, values of matches for 2D and 3D multi-layered maps, this includes first, second, third, and beyond similarities; see at least (Col. 6, Ln. 22-55; Col. 23, Ln. 26 - Col 24) for first and second maps, points, etc., and matching and similarities; see at least (Col. 2, Ln. 64-67) for aggregating data and creating new maps) by multiplying summed probabilities corresponding to each landmark determined based on the landmark-based probability map (Akbarzadeh: for some number of iterations (e.g., 100, 1000, 2000, etc.). Once completed for the number of iterations, the layout that had the most agreement may be used ... with respect to FIG. 6F, a cost of the error may be computed for each of the pose links 608 other than the minimum sampled—e.g., according to equation (4), below [Eq. 4: cost = sqrt(∑(u^2/s^2))] (Col. 26, Ln. 50-57); pose graph 650 of FIG. 6G may undergo an optimization process, such as a non-linear optimization process (e.g., a bundle adjustment process) with the goal of minimizing the sum of squared costs of inliers—e.g., using the computed cost function described herein with respect to FIG. 6F (Col. 27, Ln. 
13-18)); and estimating a position of the moving object (Akbarzadeh: (Col. 3, Ln. 16-20)). The examiner respectfully submits, Akbarzadeh discloses in a landmark-based probability map from a surrounding image (Akbarzadeh: (Col. 18, Ln. 34 - Col. 19, Ln. 3); (Col. 11, Ln. 39-50)) based on the first similarity or the second similarity (Akbarzadeh: ((Col. 21, Ln. 57 - Col. 23, Ln. 4), (Col. 23, Ln. 5 - Col. 25, Ln. 24)) examiner: disclose aggregation of scores/weights (sum) and averages of landmarks in a map; (Col. 32, Ln. 26-31) the fused map to one or more vehicles for use in executing one or more operations. For example, the map data 108 representative of the final fused HD map may be transmitted to one or more vehicle 1500 for localization, path planning, control decisions, and/or other operations; Layering 2D and 3D data (Col. 6, Ln. 22-55; Col. 28, Ln. 23-52; Col. 34, Ln. 65 - Col. 35, Ln. 5; Col. 35, Ln. 39-43; Col. 72, Ln. 62 - Col. 73, Ln. 2) to determine similarities). However, should it be found that Akbarzadeh fails to disclose, in a landmark-based probability map from a surrounding image, based on the first similarity or the second similarity, in the same field of endeavor, Yang discloses, in a landmark-based probability map from a surrounding image (Yang: [0094] The occupancy map 530 comprises spatial 3-dimensional (3D) representation of the road and all physical objects around the road. The data stored in an occupancy map 530 is also referred to herein as occupancy grid data. The 3D representation may be associated with a confidence score indicative of a likelihood of the object existing at the location; also see at least [0157-0165] for probability maps and matching corresponding coordinates for 2D and 3D maps and images), based on the first similarity or the second similarity (Yang: [0094] The occupancy map 530 comprises spatial 3-dimensional (3D) representation of the road and all physical objects around the road. 
The data stored in an occupancy map 530 is also referred to herein as occupancy grid data. The 3D representation may be associated with a confidence score indicative of a likelihood of the object existing at the location; also see at least [0157-0165] for probability maps and matching corresponding coordinates for 2D and 3D maps and images), for the benefit of validating high definition maps that allow autonomous vehicles to safely navigate through their environments. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Akbarzadeh to include determining a confidence associated with a vehicle pose taught by Yang. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to validate high definition maps that allow autonomous vehicles to safely navigate through their environments. Response to Arguments Applicant's arguments, beginning on page 9, filed 01-02-2026, have been fully considered but they are not persuasive. The applicant, to the examiner’s best understanding, has contended the prior art of Akbarzadeh (US 11713978 B2) fails to disclose “by multiplying summed probabilities corresponding to each landmark determined based on the landmark-based probability map”. However, as cited above, Akbarzadeh discloses “for some number of iterations (e.g., 100, 1000, 2000, etc.). Once completed for the number of iterations, the layout that had the most agreement may be used ... with respect to FIG. 6F, a cost of the error may be computed for each of the pose links 608 other than the minimum sampled—e.g., according to equation (4), below [Eq. 4: cost = sqrt(∑(u^2/s^2))] (Col. 26, Ln. 50-57); pose graph 650 of FIG. 
6G may undergo an optimization process, such as a non-linear optimization process (e.g., a bundle adjustment process) with the goal of minimizing the sum of squared costs of inliers—e.g., using the computed cost function described herein with respect to FIG. 6F (Col. 27, Ln. 13-18))”. Further, applying any mathematical formula, including that of the claimed invention, would have been an obvious design choice for one of ordinary skill in the art because it facilitates known mathematical means for deriving maximum accuracy, as shown by Akbarzadeh. Since the invention failed to provide novel or unexpected results from the usage of said claimed formula, use of any mathematical means, including that of the claimed invention, would be an obvious matter of design choice well within the scope of customary practices for one of ordinary skill in the art. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Gausebeck (US 20190026958 A1) Youmans (US 20200324898 A1) Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARRON SANTOS whose telephone number is (571)272-5288. The examiner can normally be reached Monday - Friday: 8:00am - 4:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANGELA ORTIZ, can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.S./Examiner, Art Unit 3663 /ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663

Prosecution Timeline

Jul 14, 2022: Application Filed
Nov 02, 2024: Non-Final Rejection — §103
Jan 07, 2025: Interview Requested
Jan 31, 2025: Response Filed
Feb 11, 2025: Interview Requested
Jun 03, 2025: Non-Final Rejection — §103
Aug 12, 2025: Response Filed
Sep 30, 2025: Final Rejection — §103
Dec 23, 2025: Interview Requested
Dec 30, 2025: Applicant Interview (Telephonic)
Dec 30, 2025: Examiner Interview Summary
Jan 02, 2026: Request for Continued Examination
Feb 12, 2026: Response after Non-Final Action
Mar 03, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12482356: TRANSPORT MANAGEMENT DEVICE, TRANSPORT MANAGEMENT METHOD, AND TRANSPORT SYSTEM (granted Nov 25, 2025; 2y 5m to grant)
Patent 12454311: STEER-BY-WIRE STEERING DEVICE AND METHOD FOR CONTROLLING THE SAME (granted Oct 28, 2025; 2y 5m to grant)
Patent 12428170: METHODS AND APPARATUS FOR AUTOMATIC DRONE RESUPPLY OF A PRODUCT TO AN INDIVIDUAL BASED ON GPS LOCATION, WITHOUT HUMAN INTERVENTION (granted Sep 30, 2025; 2y 5m to grant)
Patent 12427974: MULTIPLE MODE BODY SWING COLLISION AVOIDANCE SYSTEM AND METHOD (granted Sep 30, 2025; 2y 5m to grant)
Patent 12372360: Methods and Systems for Generating Alternative Routes (granted Jul 29, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 45%
Grant Probability with Interview: 58% (+12.8%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 131 resolved cases by this examiner. Grant probability derived from career allow rate.
