Prosecution Insights
Last updated: April 19, 2026
Application No. 18/178,720

SYSTEMS AND METHODS FOR DETECTING AND LABELING A COLLIDABILITY OF ONE OR MORE OBSTACLES ALONG TRAJECTORIES OF AUTONOMOUS VEHICLES

Final Rejection §103
Filed: Mar 06, 2023
Examiner: FABER, DAVID
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Kodiak Robotics Inc.
OA Round: 4 (Final)
Grant Probability: 52% (Moderate)
OA Rounds: 5-6
To Grant: 4y 8m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 52% (grants 52% of resolved cases; 274 granted / 531 resolved; -3.4% vs TC avg)
Interview Lift: +36.7% among resolved cases with interview (strong)
Avg Prosecution: 4y 8m typical timeline; 41 currently pending
Total Applications: 572 across all art units (career history)
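As a quick arithmetic check, the headline percentages above can be reproduced from the raw counts; a minimal sketch (the assumption that the interview lift is additive in percentage points is ours, not stated by the dashboard):

```python
# Reproduce the headline examiner statistics from the raw counts above.
granted = 274
resolved = 531

career_allow_rate = granted / resolved * 100  # percent
print(f"Career allow rate: {career_allow_rate:.1f}%")  # ~51.6%, shown as 52%

# The dashboard reports a +36.7 percentage-point interview lift; assuming
# the lift is additive on the career rate, the with-interview figure is:
interview_lift = 36.7
with_interview = career_allow_rate + interview_lift
print(f"With interview: {with_interview:.0f}%")  # ~88%, matching the panel
```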

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 531 resolved cases
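A worked check on the panel above: each statute's allowance rate minus its "vs TC avg" delta implies the Tech Center average the black line represents, and all four pairs imply the same value:

```python
# Implied Tech Center average from each statute's rate and delta above:
# tc_avg = examiner_rate - delta_vs_tc_avg
stats = {
    "101": (14.1, -25.9),
    "103": (48.4, +8.4),
    "102": (11.7, -28.3),
    "112": (18.0, -22.0),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"Section {statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
# Every statute implies the same TC average of 40.0%.
```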

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to the amendment and the Information Disclosure Statement filed on 3 February 2026. This office action is made Final. Claims 1, 8, and 15 have been amended. The objection to the specification as presented in the previous office action has been withdrawn as necessitated by the amendment. Claims 1-23 are pending. Claims 1, 8, and 15 are independent claims.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 2/3/26 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The amendment to the specification/abstract filed on 2/3/26 has been accepted and entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 7-10, 14-17 and 20-23 remain rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al ("Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications", IEEE SENSORS JOURNAL, 2020, p4901-4913) (disclosed in IDS filed on 6/24/24) in further view of Chen et al (US 20210192234, 2021).

As per independent claim 1, Zhao discloses a method comprising: generating one or more data points from one or more sensors coupled to a vehicle (p4903; Section III: 3D point clouds generated by lidar which are represented by Cartesian coordinates p = [px, py, pz, pI], which contain a large number of points; p4908; IV. A: dataset consists of synchronized stereo camera images and 3D LIDAR frames captured from an autonomous vehicle; p4909) wherein: the one or more sensors comprise: a Light Detection and Ranging (LiDAR) sensor and a camera, (p4903: Section III) the one or more data points comprise: a LiDAR point cloud generated by the LiDAR sensor; and an image captured by the camera (p4903; Section III; p4908; IV.
A: dataset consists of synchronized stereo camera images and 3D LIDAR frames captured from an autonomous vehicle); a processor (p4908: Section IV); detecting one or more obstacles within the LiDAR point cloud (p4903: lidar point cloud to extract object-region proposals (obstacles); clustering of non-ground obstacles, calculation of the 3D bounding boxes (BBs) of the clustered obstacles; Section III A: Object-Region Proposal Generation Using 3D LIDAR Data); performing a factor query on the image to query at least one factor feature for each of the one or more detected obstacles; (Abstract: regions of interest (ROI) of the proposals are selected and input to a convolutional neural network (CNN) for further object recognition and discloses the functionality to precisely identify the sizes of all the objects; a form of querying at least one factor feature; p4903: extract the features from the corresponding image region and identify the object in the region) and, for each obstacle of the one or more detected obstacles, based on the at least one factor feature, labeling the obstacle with a first label indicating an object type of a plurality of object types. (p4907-4908, p4910: determine object-region proposal/ROI and classifying/categorizing ROI based on stored training labels.)

However, Zhao et al fails to specifically disclose labeling each said obstacle with a second label being assigned a confidence value indicative of whether the obstacle is approved for collision, wherein the second label indicates whether the obstacle is: a thing capable of being approved for collision by the vehicle, when the confidence value is above a threshold value; or a thing that is not capable of being approved for collision by the vehicle, when the confidence value is below the threshold value. (It is noted that the term “confidence value” is not defined in the claim or in applicant’s specification.
In addition, the language does not explain how the confidence value is “indicative of whether the obstacle is approved for collision” other than using a threshold value to indicate the obstacle approval status for colliding. Therefore, the broadest reasonable interpretation is applied.) However, Chen et al discloses identifying objects within a planned path of a vehicle using lidar. (0036-0038, 0063-0064; FIG. 10) Then, Chen determines a classification of an identified object such as the semantic class of the object (e.g. person, tree). (0018, 0027, 0029, 0044, 0059, 0079, 0112) For example, Chen discloses detecting objects like leaves and small branches being identified as contactable/hittable (0017, 0029) while detected objects like large rocks or bikes are identified as not contactable/hittable (0028, 0045, 0068) (indicative of whether the obstacle is approved for collision). Furthermore, 0044 clearly states “the semantic class estimation may include segmenting and/or classifying extracted deep convolutional features into semantic data (e.g., rigidity, hardness, safety risk, risk of position change, class or type, potential direction of travel, etc.).” The semantic data for each semantic class results in a collection of values associated with the semantic class of the object (a form of confidence values). Then, Chen determines if the vehicle is able to pass/proceed through the object based on the semantic class criterion being exceeded or not (of a threshold). (0045, 0061, 0098) In other words, it determines if the object is safe, animate/inanimate, hard/not hard, or rigid/not rigid, etc. (0044, 0059) and may make contact with the object as the vehicle continues its path once it is determined that the set of criteria is exceeded (e.g. safe, inanimate). For example, 0061 discloses the vehicle may compare characteristics of the class, such as rigidity and hardness, to various criteria or thresholds to determine if the object is safe.
(also a form of a confidence value, an amount of safeness). Thus, the vehicle would continue its path passing through the object. Conversely, if the semantic class criterion (of an object) is not met or exceeded, the vehicle is unable to pass through the object (may not contact) and is unable to continue, or must move around the object. (0045-0046, 0061, 0068, 0098) For example, 0045 discloses “If the semantic class does not meet or exceeds the proceed criteria (such as the object is animate, too hard, or rigid, etc.) than the process 700 moves to 728 and the vehicle alerts an operator (such as a remote vehicle operator)”. A skilled artisan would have realized that at least one characteristic of the semantic class of the object (e.g. animate, too hard, rigid) represents a value (a form of a confidence value) that is compared to a set threshold (the semantic class criterion) to determine if the value, associated with a characteristic, meets or exceeds the semantic class criterion. Thus, 0045 discloses that the comparison of these values may not meet the set threshold in some situations. Furthermore, if the semantic class criterion (of an object) is met or exceeded, the vehicle is able to pass through the object (and may contact). (0045, 0061, 0068, 0098) For example, based on 0045, in other situations, the semantic class does meet or exceed the proceed criteria (such as the object is inanimate, not hard, or not rigid, etc.). Thus, Chen discloses classifying the object with classes and sub-classes which include the risk of contact, indicating which objects the vehicle can or cannot hit. (0022-0023, 0027-0028, 0031) Furthermore, Chen also determines if the object is allowed to be contacted based on a criterion exceeding a threshold, or to be avoided if the criterion is below the threshold.
Thus, Chen discloses assigning/labeling whether an object/thing is approved to make contact (approved for colliding) or should be avoided (not approved for colliding) (a form of “labeling each said obstacle with a second label being assigned a confidence value indicative of whether the obstacle is approved for collision…”). It would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to have modified the cited art with the discussed features of Chen et al since it would have provided the benefit of improving the overall safety of autonomous vehicles and their passengers. (0015, 0061)

As per dependent claim 2, Zhao et al discloses wherein the performing the factor query comprises one or more of: performing a color query on the image for each of the one or more detected obstacles; performing a shape query on the image for each of the one or more detected obstacles; and performing a movement query on the image for each of the one or more detected obstacles. (Abstract: discloses the functionality to precisely identify the sizes of all the objects; a form of a shape query; p4907: discloses adjusting the sizes of the boundary rectangles so that the entire object is inside the rectangle)

As per dependent claim 3, based on the rejection of claim 1, and with the rationale along with the motivation to combine incorporated, Chen et al discloses using the processor: for each of the one or more detected obstacles, based on the first label and/or second label of the obstacle, determining one or more vehicle actions for the vehicle to perform; and causing the vehicle to perform the one or more actions; wherein the one or more actions comprises one or more of: increasing a speed of the vehicle; decreasing a speed of the vehicle; stopping the vehicle; and adjusting a trajectory of the vehicle. (0045-0046, 0068: either pass through the object (may contact) or avoid the object (don’t make contact))
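The second-label limitation discussed above reduces to a simple threshold rule: a confidence value above the threshold marks the obstacle as approved for collision, below it as not approved. A minimal sketch of that rule, using hypothetical names (`Obstacle`, `collidable`, the 0.8 threshold) that come from neither the claims nor the cited references:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    object_type: str          # first label, e.g. "vegetation", "pedestrian"
    confidence: float         # the claimed "confidence value"
    collidable: bool = False  # second label (collidability)

def label_collidability(obstacle: Obstacle, threshold: float = 0.8) -> Obstacle:
    # Per the claim language: above the threshold, the obstacle is a thing
    # approved for collision; below it, a thing not approved for collision.
    obstacle.collidable = obstacle.confidence > threshold
    return obstacle

leaf = label_collidability(Obstacle("vegetation", confidence=0.95))
rock = label_collidability(Obstacle("rock", confidence=0.10))
print(leaf.collidable, rock.collidable)  # True False
```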
As per dependent claim 7, Zhao et al discloses for each of the one or more detected obstacles, based on the factor query, determining whether the obstacle is one or more of: a piece of vegetation; a pedestrian; and a vehicle. (Abstract; FIG. 11; p4908: pedestrians, cars)

As per independent claims 8 and 15, claims 8 and 15 recite similar limitations as in claim 1 and are rejected under similar rationale. Furthermore, Zhao et al discloses a vehicle (p4908: Section IV A. 4.1) and a computing device with a processor and memory (p4908: Section IV: a skilled artisan would understand that the combination of an Intel(R) Core(TM) i7-4790 3.6 GHz processor with 64 GB RAM is part of a computing device).

As per dependent claims 9-10, 14, 16-17 and 20, claims 9-10, 14, 16-17 and 20 recite similar limitations as in claims 2-3 and 7 and are rejected under similar rationale.

As per dependent claim 21, based on the rejection of claim 1, and with the rationale along with the motivation incorporated, Chen et al discloses comparing the confidence value to a threshold value. (0045, 0061, 0098: discloses determining whether the semantic class criterion is met or exceeded. A skilled artisan would have realized that at least one characteristic of the semantic class of the object (e.g. animate, too hard, rigid) represents a value (a form of a confidence value) that is compared to a set threshold (the semantic class criterion) to determine if the value, associated with a characteristic, meets or exceeds the semantic class criterion.)
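The factor queries of claim 2 (color, shape, movement) and the type determination of claim 7 (vegetation, pedestrian, vehicle) can be sketched together; the feature names and classification rules below are purely illustrative assumptions, not taken from the claims or the cited references:

```python
# Hedged sketch of a claim 2-style "factor query" feeding a claim 7-style
# type determination. All helper names and thresholds are hypothetical.
def factor_query(obstacle_region: dict) -> dict:
    return {
        "color": obstacle_region.get("dominant_color"),
        "shape": obstacle_region.get("bounding_box_aspect"),
        "movement": obstacle_region.get("frame_to_frame_shift"),
    }

def classify(factors: dict) -> str:
    # Claim 7 enumerates vegetation, pedestrian, and vehicle.
    if factors["color"] == "green" and factors["movement"] == 0:
        return "vegetation"
    if factors["shape"] is not None and factors["shape"] > 2.0:
        return "pedestrian"  # tall, narrow image region
    return "vehicle"

region = {"dominant_color": "green", "bounding_box_aspect": 0.8,
          "frame_to_frame_shift": 0}
print(classify(factor_query(region)))  # vegetation
```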
As per dependent claim 22, based on the rejection of claims 1 and 21, and with the rationale along with the motivation incorporated, Chen et al discloses planning a trajectory of the vehicle by accepting one or more plans that would collide with the one or more obstacles, when the confidence value is above the threshold value. (0045, 0061, 0098: discloses accepting the plan of colliding/contacting the obstacle when the semantic class criterion is exceeded)

As per dependent claim 23, based on the rejection of claims 1 and 21, and with the rationale along with the motivation incorporated, Chen et al discloses planning a trajectory of the vehicle by rejecting one or more plans that would collide with the one or more obstacles, when the confidence value is below the threshold value. (0045-0046, 0061: discloses not accepting the plan of colliding/contacting the obstacle when the semantic class criterion is not met)

Claim(s) 4, 11, 18 remain rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al in further view of Chen et al, in further view of Lee et al (US20210370928, 2021).

As per dependent claim 4, the cited art fails to disclose wherein the one or more actions comprises one or more of: increasing a speed of the vehicle; decreasing a speed of the vehicle; stopping the vehicle; and adjusting a trajectory of the vehicle. However, Lee discloses wherein the one or more actions comprises one or more of: increasing a speed of the vehicle; decreasing a speed of the vehicle; stopping the vehicle; and adjusting a trajectory of the vehicle. (0142: discloses transmitting information about the detected object having a risk of collision to the braking system and the steering system. 0050, 0143-0145: in response to the detection, the braking system may brake the vehicle (0041) or the steering system may perform steering to avoid the collision.
Steering changes the traveling direction of the vehicle (0041), which would change the trajectory of the vehicle.) It would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to have modified the cited art with the discussed features of Lee et al since it would have provided the benefit of assisting driving by improving the distinction between an object having a risk of collision and an object having no risk of collision. (0008)

As per dependent claims 11 and 18, claims 11 and 18 recite similar limitations as in claim 4 and are rejected under similar rationale.

Claim(s) 5-6, 12-13 and 19 remain rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al in further view of Chen et al, in further view of Levy et al (US20240193805, EFD 12/9/2022).

As per dependent claim 5, Zhao et al discloses: generating at least one patch in the LiDAR point cloud indicative of the one or more detected obstacles; (Abstract; p4903: clustering of non-ground obstacles, calculation of the 3D bounding boxes (BBs) of the clustered obstacles; p4907: Section III A(3)) projecting the at least one patch of the LiDAR point cloud into the image to obtain combined data, (p4903: projection of the BBs onto an image; Abstract: 3D LIDAR data to generate accurate object-region proposals.
Then, these candidates are mapped onto the image space) said at least one patch designates a region of the image that is to be analyzed for each of the one or more detected obstacles, and said at least one patch forms a bounding box on the image; (p4903: clustering of non-ground obstacles, calculation of the 3D bounding boxes (BBs) of the clustered obstacles; p4907: generate object-region proposals in an image using 3D LIDAR data; bounding box for each cluster; 2D candidate boundary rectangles are generated from the mapping area of each 3D boundary box in the image) Furthermore, based on the rejection of claim 1, and with the rationale along with the motivation incorporated, Chen et al discloses said at least one patch designates a region of the image that is to be analyzed for determining whether each of the one or more detected obstacles is a thing which is capable of being approved for collision by the vehicle. (0036-0037, 0063-0064; FIG. 10: outputs of LIDAR data are analyzed to determine objects and their class (which includes contactable or not)) However, the cited art fails to specifically disclose cropping the region of the image within the bounding box, forming a cropped image. However, Levy et al discloses cropping the image to the size of the region of the image of the bounding box. (FIG. 1; Abstract; 0021-0022) It would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to have modified the cited art with the discussed features of Levy et al since it would have provided the benefit of achieving fast and accurate detections on small objects in images.

As per dependent claim 6, based on the rejection of claim 5, and with the rationale along with the motivation to combine incorporated, Levy et al discloses resizing the cropped image, forming a resized image, wherein performing the factor query comprises performing the factor query on the resized image. (FIG.
1; Abstract; 0021-0022: cropping the image to the size of the region of the image of the bounding box includes resizing the image to a resized image. Furthermore, 0018 and 0044 disclose that keypoints/features from the contents of the resized image are determined/identified; a form of performing a factor query on the resized image.)

As per dependent claims 12-13 and 19, claims 12-13 and 19 recite similar limitations as in claims 5-6 and are rejected under similar rationale.

Response to Arguments

Applicant's arguments filed 2/3/26 have been fully considered but they are not persuasive. On pages 13-14, in regards to independent claims 1, 8, and 15 rejected under 35 USC 103, Applicant argues that Zhao and Chen do not teach the argued subject matter/limitation(s) “… labeling each said obstacle with a second label being assigned a confidence value, wherein the second label indicates whether the obstacle is: a thing capable of being approved for collision by the vehicle, when the confidence value is above a threshold value; or a thing that is not capable of being approved for collision by the vehicle, when the confidence value is below the threshold value.” Applicant argues that “Chen does not create such a collidability label. Instead, Chen evaluates a set of class-based characteristics (e.g., rigidity, hardness, or safety risk) and uses those characteristics as factors in making a procedural ‘proceed vs. stop’ determination for the vehicle. Chen's characteristics are not framed as labels assigned to obstacles, nor are they used to generate a binary label for approved or not approved for collision. They are merely environmental descriptors used to support downstream heuristic decisions.” Thus, according to the Applicant, Chen does not disclose the required second label, nor the concept of labeling each detected obstacle with an explicit collidability label independent from its object-type classification.
Furthermore, Applicant argues: “The Examiner attempts to equate Chen's ‘semantic class criteria’ with a confidence value. But semantic class criteria (such as hardness or rigidity) are not confidence values, nor are they expressed as probabilistic indicators of whether an obstacle is safe to collide with. They are characteristics of object classes, not confidence scores associated with a discrete binary label. Additionally, while Chen compares class characteristics to heuristic ‘criteria,’ such as hardness or risk level, it does not evaluate a probability-based confidence score, nor does it compare such a score to a threshold used to determine the final approval for collision. There is no indication that Chen's criteria are numeric, let alone probabilistic, nor are they ‘assigned’ as part of a label as the claims require.” Therefore, Applicant argues that Chen does not teach assigning a “confidence value indicative of whether the obstacle is approved for collision,” either.

However, the Examiner disagrees. Based on the arguments provided by the Applicant with respect to the claimed features, the Examiner respectfully submits that the Applicant asserts Chen does not teach the limitations by merely summarizing the Chen reference overall, without pointing to any portion of Chen, and merely concludes that Chen does not teach the limitation. Applicant does not explain how the claim language of the limitation is different from the teachings of Chen, does not describe the differences with any supporting evidence from the specification stating or describing the limitation, and does not explain how Chen is specifically different from Applicant's invention. Applicant merely argues that Chen does not teach the argued limitations without any explanation of how the claimed subject matter is performed.
In other words, Applicant does not argue how Chen fails to teach the argued limitations based on the current claim language, and does not refer to any particular portion of Chen at all. Thus, Applicant's arguments fail to establish that the cited art is silent on, or does not teach, the limitation, since the Applicant does not fully describe the differences with any supporting evidence from Applicant's specification stating or describing the limitations, or how the cited art is specifically different from the invention itself. Therefore, the Applicant did not explicitly state how Applicant's invention is different, other than stating that each reference, alone, does not teach the limitations, to prove that the cited art’s functionality does not equivalently teach the limitation.

In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., confidence scores associated with a discrete binary label; a numeric, probability-based confidence score) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Furthermore, the Examiner respectfully states that the language “labeling each said obstacle with a second label being assigned a confidence value indicative of whether the obstacle is approved for collision” is broad as to a number of elements. The claimed language does not define, limit, or explain what exactly a “confidence value” is. In other words, the language is silent on how exactly to properly interpret the term “confidence value”. Furthermore, Applicant’s specification does not define, limit, or explain what exactly a “confidence value” is. In fact, Applicant’s specification does not even use the term “confidence value”.
Paragraph 0067 of Applicant’s specification is the only paragraph that discloses a confidence of any kind, and it merely states “assigning a confidence”. However, 0067 does not define what a “confidence” is in any way either. It does not explicitly state “confidence scores associated with a discrete binary label” in any way. Neither 0067 nor any other paragraph recites “score(s)”, “probability-based”, or “binary”. It appears “confidence” is merely another word for a “value”, wherein the term is not explained, defined, or limited in any way. Therefore, the broadest reasonable interpretation is applied, and the “confidence value”/“value” could be any value. In addition, the language states that the confidence value is “indicative of whether the obstacle is approved for collision”; however, the language does not explain how the confidence value is indicative of whether the obstacle is approved for collision, other than using a threshold value to indicate the obstacle's approval status for colliding. Therefore, the broadest reasonable interpretation is applied. Thus, the Examiner respectfully states that the “confidence value” alone does not indicate that the obstacle is approved or not approved for collision until it is compared to a threshold value according to the claim. Thus, according to the claimed language, the value needs to be compared to another value to determine if the obstacle is approved for collision or not; the “confidence value” is only indicative of an obstacle being approved (or not approved) for collision AFTER it is compared to a threshold of some form. Furthermore, the language is silent on how an obstacle is labeled with a second label. The language does not explain or clarify what the label itself is, other than that it is applied to the obstacle in some way. Therefore, the broadest reasonable interpretation is applied, and the second label may be an assigned attribute/metadata of the obstacle.
Furthermore, the Examiner refers the Applicant to MPEP 904.01(b), which states “All subject matter that is the equivalent of the subject matter as defined in the claim, even though specifically different from the definition in the claim, must be considered unless expressly excluded by the claimed subject matter.” In other words, while the cited prior art may not explicitly use the same terminology as disclosed in the claim limitations, that does not mean the art does not teach them or cannot be considered to reject Applicant’s claimed invention. Thus, the Examiner submits that what is taught by the references of the cited art is considered functionally equivalent to that which is claimed, as discussed below.

Thus, based on the broadest reasonable interpretation, in light of Applicant’s specification, of the language of the limitations, Zhao teaches the subject matter of performing a factor query on the image to query at least one factor feature for each of the one or more detected obstacles. (Abstract: regions of interest (ROI) of the proposals are selected and input to a convolutional neural network (CNN) for further object recognition and discloses the functionality to precisely identify the sizes of all the objects; a form of querying at least one factor feature; p4903: extract the features from the corresponding image region and identify the object in the region) Furthermore, Zhao teaches the subject matter of, for each obstacle of the one or more detected obstacles, based on the factor feature, labeling the obstacle with a first label indicating an object type of a plurality of object types. (p4907-4908, p4910: determine object-region proposal/ROI and classifying/categorizing ROI based on stored training labels.)
However, Zhao et al fails to specifically disclose labeling each said obstacle with a second label being assigned a confidence value indicative of whether the obstacle is approved for collision, wherein the second label indicates whether the obstacle is: a thing capable of being approved for collision by the vehicle, when the confidence value is above a threshold value; or a thing that is not capable of being approved for collision by the vehicle, when the confidence value is below the threshold value. (It is noted that the term “confidence value” is not defined in the claim or in applicant’s specification. In addition, the language does not explain how the confidence value is “indicative of whether the obstacle is approved for collision” other than using a threshold value to indicate the obstacle approval status for colliding. Therefore, the broadest reasonable interpretation is applied.) However, Chen et al discloses identifying objects within a planned path of a vehicle using lidar. (0036-0038, 0063-0064; FIG. 10) Then, Chen determines a classification of an identified object such as the semantic class of the object (e.g. person, tree). (0018, 0027, 0029, 0044, 0059, 0079, 0112) For example, Chen discloses detecting objects like leaves and small branches being identified as contactable/hittable (0017, 0029) while detected objects like large rocks or bikes are identified as not contactable/hittable (0028, 0045, 0068) (indicative of whether the obstacle is approved for collision). Furthermore, 0044 clearly states “the semantic class estimation may include segmenting and/or classifying extracted deep convolutional features into semantic data (e.g., rigidity, hardness, safety risk, risk of position change, class or type, potential direction of travel, etc.).” The semantic data for each semantic class results in a collection of values associated with the semantic class of the object (a form of confidence values).
Then, Chen determines if the vehicle is able to pass/proceed through the object based on the semantic class criterion being exceeded or not (of a threshold). (0045, 0061, 0098) In other words, it determines if the object is safe, animate/inanimate, hard/not hard, or rigid/not rigid, etc. (0044, 0059) and may make contact with the object as the vehicle continues its path once it is determined that the set of criteria is exceeded (e.g. safe, inanimate). For example, 0061 discloses the vehicle may compare characteristics of the class, such as rigidity and hardness, to various criteria or thresholds to determine if the object is safe (also a form of a confidence value, an amount of safeness). Thus, the vehicle would continue its path passing through the object. Conversely, if the semantic class criterion (of an object) is not met or exceeded, the vehicle is unable to pass through the object (may not contact) and is unable to continue, or must move around the object. (0045-0046, 0061, 0068, 0098) For example, 0045 discloses “If the semantic class does not meet or exceeds the proceed criteria (such as the object is animate, too hard, or rigid, etc.) than the process 700 moves to 728 and the vehicle alerts an operator (such as a remote vehicle operator)”. A skilled artisan would have realized that at least one characteristic of the semantic class of the object (e.g. animate, too hard, rigid) represents a value (a form of a confidence value) that is compared to a set threshold (the semantic class criterion) to determine if the value, associated with a characteristic, meets or exceeds the semantic class criterion. Thus, 0045 discloses that the comparison of these values may not meet the set threshold in some situations. Furthermore, if the semantic class criterion (of an object) is met or exceeded, the vehicle is able to pass through the object (and may contact).
(0045, 0061, 0068, 0098) For example, based on 0045, in other situations, the semantic class does meet or exceed the proceed criteria (such as the object is inanimate, not hard, or not rigid, etc.). Thus, Chen discloses classifying the object with classes and sub-classes which include the risk of contact, indicating which objects the vehicle can or cannot hit. (0022-0023, 0027-0028, 0031) Furthermore, Chen also determines if the object is allowed to be contacted based on a criterion exceeding a threshold, or to be avoided if the criterion is below the threshold. Thus, Chen discloses assigning/labeling whether an object/thing is approved to make contact (approved for colliding) or should be avoided (not approved for colliding) (a form of “labeling each said obstacle with a second label being assigned a confidence value indicative of whether the obstacle is approved for collision…”). It would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to have modified the cited art with the discussed features of Chen et al since it would have provided the benefit of improving the overall safety of autonomous vehicles and their passengers. (0015, 0061) Thus, the combination of Zhao and Chen teaches the argued limitations of claim 1.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. If the Applicant chooses to amend the claims in future filings, the Examiner kindly states that any new limitation(s) added to the claims must be described in the specification in such a way as to reasonably convey possession of the claimed subject matter to one skilled in the relevant art, in order to meet the written description requirement of 35 U.S.C. 112, first paragraph. To help expedite prosecution, promote compact prosecution, and prevent a possible 112(a)/first paragraph rejection, the Examiner respectfully requests that, for each new limitation added to the claims in a future filing, the Applicant cite in the remarks the location within the specification showing support for that limitation. In addition, MPEP 2163.04(I)(B) states that a prima facie case under 112(a)/first paragraph may be established if a claim has been added or amended, the support for the added limitation is not apparent, and applicant has not pointed out where the added limitation is supported. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID FABER, whose telephone number is (571) 272-2751. The examiner can normally be reached Monday - Thursday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. Please refer to MPEP 713.09 for scheduling interviews after the mailing of this office action. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ADAM M QUELER/ Supervisory Patent Examiner, Art Unit 2172 /D.F/ Examiner, Art Unit 2172
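For readers mapping the rejection, the pass/contact decision the examiner reads into Chen (comparing characteristics of an object's semantic class, such as rigidity and hardness, against "proceed" criteria and labeling the object as approved or not approved for contact) can be sketched in a few lines. This is an illustrative sketch only: the function, field names, and threshold values below are hypothetical and do not appear in the Chen reference or the claims.

```python
# Hypothetical "proceed" criteria (thresholds) for class characteristics.
RIGIDITY_MAX = 0.5
HARDNESS_MAX = 0.5

def label_collidability(animate: bool, rigidity: float, hardness: float) -> str:
    """Label an obstacle 'approved' or 'not_approved' for contact,
    mirroring the threshold comparison the examiner attributes to Chen
    (paras. 0044-0045, 0061)."""
    if animate:
        # Animate objects fail the proceed criteria outright.
        return "not_approved"
    if rigidity <= RIGIDITY_MAX and hardness <= HARDNESS_MAX:
        # Every characteristic satisfies the proceed criteria:
        # the vehicle may continue its path and contact the object.
        return "approved"
    # Too hard or too rigid: avoid contact (and, per 0045, alert an operator).
    return "not_approved"

print(label_collidability(animate=False, rigidity=0.2, hardness=0.3))  # approved
print(label_collidability(animate=False, rigidity=0.9, hardness=0.1))  # not_approved
```

The point of contention in the argued limitation is whether this kind of threshold margin amounts to "a confidence value indicative of whether the obstacle is approved for collision"; the sketch shows only the binary labeling the examiner describes.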

Prosecution Timeline

Mar 06, 2023
Application Filed
Dec 16, 2024
Non-Final Rejection — §103
Mar 20, 2025
Response after Non-Final Action
Mar 20, 2025
Response Filed
Apr 02, 2025
Response Filed
Apr 30, 2025
Final Rejection — §103
Jul 18, 2025
Examiner Interview Summary
Jul 18, 2025
Applicant Interview (Telephonic)
Aug 20, 2025
Request for Continued Examination
Aug 26, 2025
Response after Non-Final Action
Oct 24, 2025
Non-Final Rejection — §103
Feb 03, 2026
Response Filed
Feb 27, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12571650
APPARATUS, METHOD, AND COMPUTER PROGRAM FOR UPDATING MAP
2y 5m to grant Granted Mar 10, 2026
Patent 12561512
METHODS AND SYSTEMS FOR PROMPTING LARGE LANGUAGE MODEL TO GENERATE FORMATTED OUTPUT
2y 5m to grant Granted Feb 24, 2026
Patent 12541296
FINANCIAL SERVICE PROVIDING METHOD USING VISUALIZED FINANCIAL RELATIONSHIP CONTENT-BASED UI, FINANCIAL SERVICE PROVIDING APPARATUS FOR PERFORMING SAME, AND RECORDING MEDIUM HAVING SAME RECORDED THEREIN
2y 5m to grant Granted Feb 03, 2026
Patent 12522242
MAP EVALUATION APPARATUS
2y 5m to grant Granted Jan 13, 2026
Patent 12497029
VEHICLE AND CONTROL METHOD THEREOF
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
52%
Grant Probability
88%
With Interview (+36.7%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 531 resolved cases by this examiner. Grant probability derived from career allow rate.
