Prosecution Insights
Last updated: April 19, 2026
Application No. 18/953,984

APPARATUS AND METHOD FOR GENERATING SEMANTIC MAP-BASED ROBOT DRIVING ROUTE PLAN FOR TRANSPORTATION VULNERABLE

Non-Final OA (§101, §103)

Filed: Nov 20, 2024
Examiner: SLOWIK, ELIZABETH J
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 46% (Moderate)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 64%

Examiner Intelligence

Career Allow Rate: 46% (30 granted / 65 resolved; -5.8% vs TC avg)
Interview Lift: +18.3% (strong; measured on resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline); 43 applications currently pending
Total Applications: 108 (career history, across all art units)

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 58.9% (+18.9% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 65 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

This is the first Office action on the merits. Claims 1-20 are currently pending and addressed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement submitted on 11/20/2024 has been received and considered.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“a semantic map generation unit” (claims 1, 2)
“a safety zone generation unit” (claims 1, 3-6)
“a driving route plan generation unit” (claims 1, 7-10)

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification ([0038]: “Terms, such as "... unit", "... er/or", and "module" used in the specification, may mean a unit capable of processing at least one function or operation described in the specification, which may be implemented as hardware or a circuit, software, or a combination of hardware or circuit and software.”) as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 and 11-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Regarding claims 1 and 11, these claims recite, when considered individually or as a whole, an apparatus and method for generating a semantic map-based robot driving route plan. Therefore, claims 1 and 11 are within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 1 recites:

An apparatus for generating a semantic map-based robot driving route plan for transportation vulnerable, the apparatus comprising: a semantic map generation unit configured to generate a semantic map for an area where a driving robot is driving based on real-time location tracking data of the driving robot and semantic data on a surrounding environment; a safety zone generation unit configured to: calculate heights for a plurality of objects recognized while the driving robot is driving on the generated semantic map; and generate a safety zone for a specific object determined to be the transportation vulnerable, among the plurality of objects, based on the calculated heights; and a driving route plan generation unit configured to generate a second driving route plan different from a first driving route plan in real time for the safety zone when the safety zone is generated while the driving robot is driving with the first driving route plan.

The examiner submits that the foregoing bolded limitations constitute a “mental process” because under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind and/or “by a human using a pen and paper.” See MPEP § 2106.04(a)(2)(III). For example, the “calculate heights for a plurality of objects recognized while the driving robot is driving on the generated semantic map” step includes a human using pen and paper to calculate heights for objects based on observed points. The “generate a safety zone for a specific object determined to be the transportation vulnerable, among the plurality of objects, based on the calculated heights” step includes a human operator mentally determining a larger safety area is required around a person that has been identified as a child based on height.
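For concreteness, the height-based vulnerable-object determination and safety-zone generation recited in claim 1 can be sketched as follows. This is a minimal illustration only; the 1.4 m threshold, the zone radii, and all function and variable names are assumptions chosen for exposition, not taken from the application or the cited art:

```python
# Minimal sketch of the claimed height-based safety-zone logic.
# The 1.4 m threshold and all names/values are illustrative assumptions.

VULNERABLE_HEIGHT_M = 1.4  # assumed cutoff, e.g., to flag a child


def object_height(points):
    """Estimate an object's height from its 3D points (z-up convention)."""
    zs = [p[2] for p in points]
    return max(zs) - min(zs)


def generate_safety_zones(objects):
    """Map each recognized object to a safety zone whose radius is
    expanded when the calculated height marks it as vulnerable."""
    zones = {}
    for obj_id, points in objects.items():
        h = object_height(points)
        if h < VULNERABLE_HEIGHT_M:
            zones[obj_id] = {"height": h, "radius_m": 1.5}  # expanded zone
        else:
            zones[obj_id] = {"height": h, "radius_m": 0.5}  # default clearance
    return zones
```

A route planner would then treat each returned radius as an inflated obstacle when replanning.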
Further, the “generate a second driving route plan different from a first driving route plan in real time for the safety zone when the safety zone is generated while the driving robot is driving with the first driving route plan” step includes a human operator mentally determining and/or using pen and paper to determine an alternate route a robot can take in accordance with the identified safety zone. Accordingly, the claim recites at least one abstract idea.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):

An apparatus for generating a semantic map-based robot driving route plan for transportation vulnerable, the apparatus comprising: a semantic map generation unit configured to generate a semantic map for an area where a driving robot is driving based on real-time location tracking data of the driving robot and semantic data on a surrounding environment; a safety zone generation unit configured to: calculate heights for a plurality of objects recognized while the driving robot is driving on the generated semantic map; and generate a safety zone for a specific object determined to be the transportation vulnerable, among the plurality of objects, based on the calculated heights; and a driving route plan generation unit configured to generate a second driving route plan different from a first driving route plan in real time for the safety zone when the safety zone is generated while the driving robot is driving with the first driving route plan.

For the following reasons, the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitation of “generate a semantic map for an area where a driving robot is driving based on real-time location tracking data of the driving robot and semantic data on a surrounding environment,” this limitation recites mere data transmission and data display that is insignificant extra solution activity. See MPEP § 2106.05(g). The independent claims also recite the additional elements of a semantic map generation unit, a safety zone generation unit, and a driving route plan generation unit which are generic computing components merely used as a tool to perform the abstract idea. See MPEP § 2106.05(f). Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually.
For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception. See MPEP 2106.05. Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the Revised Guidance, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to nothing more than insignificant extra solution activity and generic computing components. Therefore, the additional limitations are not a “practical application.” Additionally, it is not “something more” because the limitations include a well-understood, routine, and conventional activity that cannot provide an inventive concept. See MPEP § 2106.05(d), and Deyle et al., U.S. Patent Application Publication No. 2020/0050206 A1 and Matsukawa et al., U.S. Patent Application Publication No. 2009/0043440 A1.
Therefore, these claims are not patent eligible.

101 Analysis – Dependent Claims

Regarding claims 2 and 12, these claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception, when considered individually or as a whole. These claims further define the types of sensors used and the data obtained in the data gathering, which is insignificant extra solution activity. See MPEP § 2106.05(g). Therefore, this is not a “practical application.” Additionally, this is not “something more” because it is a well-understood, routine, and conventional activity that cannot provide an inventive concept. See MPEP § 2106.05(d) and Deyle et al., U.S. Patent Application Publication No. 2020/0050206 A1 and Matsukawa et al., U.S. Patent Application Publication No. 2009/0043440 A1. Therefore, these claims are not patent eligible.

Regarding claims 3 and 13, these claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception, when considered individually or as a whole. These claims recite calculating a height which can be performed by a human mentally or by using pen and paper. The claims also recite data transmission and data processing that is considered insignificant extra solution activity. See MPEP § 2106.05(g). Therefore, this is not a “practical application.” Additionally, this is not “something more” because it is a well-understood, routine, and conventional activity that cannot provide an inventive concept. See MPEP § 2106.05(d) and Deyle et al., U.S. Patent Application Publication No. 2020/0050206 A1, Matsukawa et al., U.S. Patent Application Publication No. 2009/0043440 A1, and Xiao et al., U.S. Patent Application Publication No. 2024/0077875 A1. Therefore, these claims are not patent eligible.
Regarding claims 4-8 and 14-18, these claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception, when considered individually or as a whole. These claims further define the abstract idea by reciting determinations that can be performed by a human mentally and/or using pen and paper. For example, a human operator can observe an object with a small height, change a safety zone area to be larger or smaller depending on the identified object, determine a range in which a collision may occur based on the identified safety zone and height of the object, and determine a route that avoids a collision. Therefore, this is not a “practical application.” Additionally, this is not “something more” because it is a well-understood, routine, and conventional activity that cannot provide an inventive concept. See MPEP § 2106.05(d) and Deyle et al., U.S. Patent Application Publication No. 2020/0050206 A1, Matsukawa et al., U.S. Patent Application Publication No. 2009/0043440 A1, and Xiao et al., U.S. Patent Application Publication No. 2024/0077875 A1. Therefore, these claims are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 9-12, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Deyle et al., U.S. Patent Application Publication No. 2020/0050206 A1 (hereinafter Deyle), in view of Matsukawa et al., U.S. Patent Application Publication No. 2009/0043440 A1 (hereinafter Matsukawa).

Regarding claim 1, Deyle discloses an apparatus for generating a semantic map-based robot driving route plan for transportation vulnerable (see at least Deyle Fig. 1), the apparatus comprising:

a semantic map generation unit configured to generate a semantic map for an area where a driving robot is driving based on real-time location tracking data of the driving robot and semantic data on a surrounding environment (see at least Deyle [0120]: “The semantic mapping system 736 is configured to generate or update a semantic map associated with a location or setting in which the robot 100 is located. For instance, the semantic mapping system can generate a map associated with a patrol route through a building floor as the robot moves through the space.
The location of obstructions, and paths within the building floor can be detected by the scanners 726 and recorded onto the semantic map.”; [0196]: “Likewise, the location of individuals or objects detected by robots, security cameras, or other location-tracking mechanisms (such as GPS tracking devices, mobile phones, or RFID readers) can be updated as the individuals or objects move in real-time.”);

a safety zone generation unit configured to: calculate heights for a plurality of objects recognized while the driving robot is driving on the generated semantic map (see at least Deyle [0141]: “For instance, the robot can capture images or videos of the individuals using the cameras 722, and can perform facial recognition on the captured images or videos. Likewise, the robot can identify a height or size of the individual, or can scan a badge of the individual (for instance, using an RFID reader).”; [0266]: “In some embodiments, the robot 100 can determine a height of detected objects, and can generate a 3D semantic map based on the detected heights.”); and

a driving route plan generation unit configured to generate a second driving route plan different from a first driving route plan in real time for the safety zone when the safety zone is generated while the driving robot is driving with the first driving route plan (see at least Deyle [0275]: “A robot 100 can navigate within an area using a semantic map or a generated floor map, for instance by selecting a route that avoids obstacles (e.g., by a threshold distance), by selecting routes that avoid high-trafficked areas, by selecting routes that maximize the robot's exposure or proximity to high-value assets or other objects, and the like.
In some embodiments, the robot can plan a route through an area (such as a building floor) in advance using the semantic map, or can dynamically adjust a route by querying the semantic map to identify an alternative route to a location (for instance, in the event that a route is blocked or in the event that suspicious activity or a security violation is detected).”).

Deyle fails to expressly disclose generating a safety zone for a specific object determined to be the transportation vulnerable based on the calculated height.

However, Matsukawa teaches generating a safety zone for a specific object determined to be the transportation vulnerable, among the plurality of objects, based on the calculated heights (see at least Matsukawa [0096]: “The height of the person sensed by the sixth sensing unit 516 contributes to expand the virtual obstacle region A if the height of the person is lower than the predetermined height. That is, in the present embodiment, if the sensed person is an adult or a child is determined based on the height. According to a method of determining if the sensed person is an adult or a child based on his/her height, it is expected that the determination between an adult and up to a 10-year old child who moves quickly and thus whose movement is hard to predict can be done precisely to a certain degree. And therefore, it is possible to set the virtual obstacle region A in accordance with the movement of the child if the obstacle is a child.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify the apparatus disclosed by Deyle with Matsukawa with reasonable expectation of success. Matsukawa is directed towards the related field of controlling an autonomous mobile device based on an obstacle.
Therefore, one of ordinary skill in the art would be motivated to modify Deyle with Matsukawa to perform a safe, easy, and smooth evasive action according to obstacle information (see at least Matsukawa [0007]: “The present invention is directed to provide an autonomous mobile device which can take a safe, easy, and smooth evasive action with regard to the movement of a person.”).

Regarding claim 2, Deyle in view of Matsukawa teach all elements of the apparatus according to claim 1 as explained above. Deyle further teaches wherein the semantic map generation unit is further configured to: generate the semantic map based on depth images obtained from a plurality of cameras (see at least Deyle [0211]: “In some embodiments, the robot 100 can use one or more sensors, such as…3D depth cameras”; [0120]: “The location of obstructions, and paths within the building floor can be detected by the scanners 726 and recorded onto the semantic map. Likewise, objects can be detected during the robot's movement (for instance, by the cameras 722), and information describing the detected objects and the location of the detected objects can be included within a semantic map.”), odometry of the driving robot estimated in real time from an IMU sensor (see at least Deyle [0126]: “In embodiments where a robot arm is extended, the robot may reposition or balance itself to compensate for the shift in the center of gravity of the robot, for instance using inertial sensors (such as 3-axis gyroscopes, accelerometers, or magnetometers).”), and a semantic image estimated from RGB images obtained from the plurality of cameras (see at least Deyle [0112]: “In some embodiments, camera pairs can capture 3D video, and in some embodiments, images or video captured by multiple cameras can be stitched together using one or more stitching operations to produce a stitched image or video.
In addition to capturing images or video in the visible light spectrum, the cameras can capture images within the IR spectrum or can capture thermal images.”; images in the visible light spectrum include RGB images).

Regarding claim 9, Deyle in view of Matsukawa teach all elements of the apparatus according to claim 1 as explained above. Matsukawa further teaches wherein the driving route plan generation unit is configured to: accelerate a driving speed of the driving robot while the driving robot is driving with the first driving route plan (see at least Matsukawa [0176]: “With such a configuration, the autonomous mobile device can acquire the information as to the actual conditions of the movement of the moving object from the information relating to the velocity vector and the accelerated velocity vector and further acquire a basic pathway for obtaining the possible pathways of the moving object which are used for setting the virtual obstacle region, thereby enabling the autonomous mobile device to avoid the thus set virtual obstacle region.”; [0122]: “The autonomous mobile device 1 carries out at least one operation of a stop, a deceleration, an acceleration, and a change of direction such that the autonomous mobile device 1 can avoid a collision with the person 2 as a moving object or the stationary obstacle in accordance with a driving control of the collision avoidance control device 200.”), and decelerate the driving speed while the driving robot is driving with the second driving route plan (see at least Matsukawa [0145]: “FIG. 7 illustrates a case example of carrying out a preliminary operation to avoid a collision considering an acceleration of the person 2…Therefore, the autonomous mobile device 1 decelerates in order to avoid the collision.”).

Regarding claim 10, Deyle in view of Matsukawa teach all elements of the apparatus according to claim 9 as explained above.
Matsukawa further teaches wherein the driving route plan generation unit is configured to: variably determine a degree of deceleration of the driving speed of the driving robot in the second driving route plan based on a type of the safety zone that varies based on a height of the specific object (see at least Matsukawa [0145]: “FIG. 7 illustrates a case example of carrying out a preliminary operation to avoid a collision considering an acceleration of the person 2…In view of the above, in FIG. 7, the virtual obstacle region of the person 2 at the time t+1, the time t+2, and the time t+3 with regard to the time t can be represented by Z71, Z72, and Z73, respectively. If the autonomous mobile device 1 keeps moving at a speed of, for example, 1 m/sec., the autonomous mobile device 1 comes close to the person 2 after 2 seconds and they collide with each other immediately thereafter. Therefore, the autonomous mobile device 1 decelerates in order to avoid the collision.”; [0151]: “FIG. 9 illustrates a case example in which the autonomous mobile device 1 preliminary takes a collision avoidance operation with regard to the person 2 by using the height information of the person 2 in addition to the eyes or the face orientation of the person 2 as the attribute information (individual information) of the person 2.”; the virtual obstacle region of the person is based on the height, as evidenced by Matsukawa [0096]).

Regarding claim 11, this claim recites a method performed by the apparatus of claim 1. The combination of Deyle in view of Matsukawa also teaches a method performed by the apparatus of claim 1 as outlined in the rejection to claim 1 above. Therefore, claim 11 is rejected for the same rationale as claim 1.

Regarding claim 12, this claim recites a method performed by the apparatus of claim 2 as explained above. Therefore, claim 12 is rejected for the same rationale as claim 2.
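The speed behavior mapped to claims 9 and 10 above (accelerate under the first route plan, decelerate under the second, with the degree of deceleration varying by safety-zone type) can be sketched as follows. All numeric factors, zone labels, and names are illustrative assumptions, not values from the application or the cited references:

```python
# Sketch of the claimed speed control: accelerate while driving with the
# first route plan; decelerate while driving with the second, with the
# degree of deceleration depending on the safety-zone type. All factors
# and zone labels are assumed for illustration.

ZONE_DECEL_FACTOR = {"child_zone": 0.3, "adult_zone": 0.6}  # assumed factors


def plan_speed(base_speed, on_second_plan, zone_type=None):
    """Return the commanded speed for the currently active route plan."""
    if not on_second_plan:
        return base_speed * 1.2  # accelerate under the first plan (assumed)
    # Decelerate under the second plan; stronger for more vulnerable zones.
    return base_speed * ZONE_DECEL_FACTOR.get(zone_type, 0.5)
```

For example, with a 1.0 m/s base speed the robot would command a higher speed on the first plan and drop furthest for an assumed "child_zone".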
Regarding claim 19, this claim recites a method performed by the apparatus of claim 9 as explained above. Therefore, claim 19 is rejected for the same rationale as claim 9.

Regarding claim 20, this claim recites a method performed by the apparatus of claim 10 as explained above. Therefore, claim 20 is rejected for the same rationale as claim 10.

Claims 3-8 and 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over Deyle in view of Matsukawa, and further in view of Xiao et al., U.S. Patent Application Publication No. 2024/0077875 A1 (hereinafter Xiao).

Regarding claim 3, Deyle in view of Matsukawa teach all elements of the apparatus according to claim 1 as explained above. Matsukawa further teaches wherein the safety zone generation unit is further configured to: calculate a height for a classified human class (see at least Matsukawa [0096]: “In the present embodiment, the control device 501 includes a sixth sensing unit 516 for sensing a height of a person as shown in FIG. 17. The height of the person sensed by the sixth sensing unit 516 contributes to expand the virtual obstacle region A if the height of the person is lower than the predetermined height. That is, in the present embodiment, if the sensed person is an adult or a child is determined based on the height.”).

Deyle in view of Matsukawa fail to expressly disclose receiving a semantic cloud generated from the semantic map and classifying the semantic cloud through clustering.
However, Xiao teaches receiving a semantic cloud generated from the semantic map (see at least Xiao [0054]: “Block 016, the robot 100 establishes a current semantic map according to the current environment image and the current depth image, the current semantic map includes current point cloud information and second object type labels corresponding to the current point cloud information”); and classifying the semantic cloud into an instance unit through clustering (see at least Xiao [0055]: “Block 017, the robot 100 clusters the current point cloud information in the current semantic map according to the second object type labels, identifies each object, and obtains second bounding box of each object, the second bounding box is an independent space corresponding to point clouds with same second object type label after clustering the point clouds”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify the apparatus disclosed by Deyle in view of Matsukawa with Xiao with reasonable expectation of success. Xiao is directed towards the related field of robot positioning. Therefore, one of ordinary skill in the art would be motivated to modify Deyle in view of Matsukawa with Xiao to quickly and accurately determine proper robot positioning (see at least Xiao [0129]: “The above robot 100 can quickly and accurately determine associated node pairs (i.e., associated object pairs) through topology map matching, therefore, the search branch with the highest matching degree can be quickly determined through the number of associated nodes with the largest number of associated node pairs in the current local topology map and the full topology map, and then the current pose of the robot 100 can be determined according to the search branch with the highest matching degree.”).

Regarding claim 4, Deyle in view of Matsukawa and Xiao teach all elements of the apparatus according to claim 3 as explained above.
Matsukawa further teaches wherein the safety zone generation unit is further configured to: determine an object, having a height smaller than a preset specific reference value, as the specific object (see at least Matsukawa [0111]: “When the height of the person sensed by the above-described sixth sensing unit is lower than the predetermined height, a function to expand the virtual obstacle region may be added to the virtual obstacle region setting unit.”).

Regarding claim 5, Deyle in view of Matsukawa and Xiao teach all elements of the apparatus according to claim 4 as explained above. Matsukawa further teaches wherein the safety zone generation unit is further configured to: variably determine a type of the safety zone based on the height of the specific object (see at least Matsukawa [0153]: “When using the height information, the autonomous mobile device 1 may determine that a tall person is an adult, and therefore expands the virtual obstacle region in his/her traveling direction since an adult may walk faster, whereas the autonomous mobile device 1 may determine that a short person is a child, and therefore, expands the virtual obstacle region in a direction orthogonal to his/her traveling direction since a child may suddenly change his/her walking direction to any direction around him/her.”).

Regarding claim 6, Deyle in view of Matsukawa and Xiao teach all elements of the apparatus according to claim 5 as explained above. Matsukawa further teaches wherein the safety zone generation unit is further configured to: generate a first safety zone for a first specific object having a height smaller than a first reference value among specific objects (see at least Matsukawa [0095]: “When the obstacle is a person, the control device may determine whether the obstacle is an adult or a child and may shift the virtual obstacle region A in accordance with the adult and the child.”; [0153]: “When using the height information, the autonomous mobile device 1 may determine that a tall person is an adult, and therefore expands the virtual obstacle region in his/her traveling direction since an adult may walk faster, whereas the autonomous mobile device 1 may determine that a short person is a child, and therefore, expands the virtual obstacle region in a direction orthogonal to his/her traveling direction since a child may suddenly change his/her walking direction to any direction around him/her.”; under broadest reasonable interpretation an adult height is smaller than a reference value for an obstacle that is not detected to be a person); and generate a second safety zone for a second specific object having a height smaller than a second reference value among the specific objects, and wherein the second reference value is smaller than the first reference value (see at least Matsukawa [0096]: “The height of the person sensed by the sixth sensing unit 516 contributes to expand the virtual obstacle region A if the height of the person is lower than the predetermined height. That is, in the present embodiment, if the sensed person is an adult or a child is determined based on the height.”; [0153]: “When using the height information, the autonomous mobile device 1 may determine that a tall person is an adult, and therefore expands the virtual obstacle region in his/her traveling direction since an adult may walk faster, whereas the autonomous mobile device 1 may determine that a short person is a child, and therefore, expands the virtual obstacle region in a direction orthogonal to his/her traveling direction since a child may suddenly change his/her walking direction to any direction around him/her.”).

Regarding claim 7, Deyle in view of Matsukawa and Xiao teach all elements of the apparatus according to claim 5 as explained above. Matsukawa further teaches wherein the driving route plan generation unit is further configured to: set an expected collision range for the safety zone (see at least Matsukawa [0151]: “However, in the present case example, the autonomous mobile device 1, upon generating the possible transfer pathways of the person 2, calculates them based on the information as to the eyes of the person 2 together with the factors relating to the movement of the person 2 to thereby determine the virtual obstacle region of the person 2, resulting in preparing the operation control information necessary for the autonomous mobile device 1 to avoid the collision with the person 2.”); and generate the second driving route plan that detours the expected collision range (see at least Matsukawa [0149]: “In the above-described case example, the autonomous mobile device 1 predicts a plurality of transfer pathways as to the movement of the person 2, and sets its course to a predictable pathway having the least possibility of collision.”; [0180]: “With such a configuration, since the obstacle region generated based on at least one of the subsidiary information among the environment information (map or the like), the information of a person (sight line, face orientation, height, and the like), the event information, and the like is added to the obstacle regions generated based on the transfer information, the obstacle regions are set to the pathways or the moving directions which have a high possibility of collision with the moving object, such that the operation control to avoid the region can be carried out, which further enhances the safeness of the device.”).

Regarding claim 8, Deyle in view of Matsukawa and Xiao teach all elements of the apparatus according to claim 7 as explained above. Matsukawa further teaches wherein the driving route plan generation unit is configured to: variably set the expected collision range according to the type of the safety zone, and wherein a size of the expected collision range is inversely proportional to the height of the specific object (see at least Matsukawa [0185]: “As described above, the individual information may be the height information. Accordingly, the autonomous mobile device can carry out the operation control to properly avoid a collision based on this height information and can set a larger virtual obstacle region for a child than that of an adult considering that it is hard to predict the pathway of a child, namely, can carry out a different avoidance control between a child and an adult when the child or the adult walks past the autonomous mobile device, thereby carrying out a safe and easy operation control of a collision avoidance even for a child according to the difference between the adult and the child without bearing an unnecessarily large detour.”).

Regarding claim 13, this claim recites a method performed by the apparatus of claim 3 as explained above. Therefore, claim 13 is rejected for the same rationale as claim 3.

Regarding claim 14, this claim recites a method performed by the apparatus of claim 4 as explained above. Therefore, claim 14 is rejected for the same rationale as claim 4.
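The height-tiered behavior recited across claims 4-8 (classify by height against reference values, pick a zone type, and make the expected collision range grow as the object gets shorter) can be illustrated with a minimal sketch. The function name, the specific thresholds, and the exact inverse-proportional formula are hypothetical; the claims and Matsukawa's disclosure describe this behavior only qualitatively.

```python
def plan_safety_zone(object_height, first_ref=1.5, second_ref=1.2, base_range=0.5):
    """Illustrative height-tiered safety zoning (hypothetical thresholds, meters).

    Objects shorter than `second_ref` (e.g. children) get the second zone;
    objects at least `second_ref` but shorter than `first_ref` get the first
    zone; taller objects get no safety zone. The expected collision range is
    inversely proportional to height, so the planner detours more widely
    around shorter, less predictable pedestrians.
    """
    if object_height < second_ref:      # second reference value < first reference value
        zone = "second"
    elif object_height < first_ref:
        zone = "first"
    else:
        return None, 0.0                # no safety zone for tall objects
    collision_range = base_range / object_height  # shrinks as height grows
    return zone, collision_range
```

A route planner could then treat the returned range as the radius of the detour region around each detected person, mirroring Matsukawa's larger virtual obstacle region for a child than for an adult.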
Regarding claim 15, this claim recites a method performed by the apparatus of claim 5 as explained above. Therefore, claim 15 is rejected for the same rationale as claim 5.

Regarding claim 16, this claim recites a method performed by the apparatus of claim 6 as explained above. Therefore, claim 16 is rejected for the same rationale as claim 6.

Regarding claim 17, this claim recites a method performed by the apparatus of claim 7 as explained above. Therefore, claim 17 is rejected for the same rationale as claim 7.

Regarding claim 18, this claim recites a method performed by the apparatus of claim 8 as explained above. Therefore, claim 18 is rejected for the same rationale as claim 8.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Katsumata, U.S. Patent Application Publication No. 2025/0370460 A1, directed towards generating a semantic map according to room classification and determining the presence of a person in a room.

Ling et al., U.S. Patent Application Publication No. 2022/0355821 A1, directed towards determining autonomous vehicle distance to objects based on ride comfort.

Hieida et al., U.S. Patent Application Publication No. 2022/0169245 A1, directed towards tracking objects using semantic segmentation.

Kobayashi et al., U.S. Patent Application Publication No. 2021/0101293 A1, directed towards determining a mobile robot route based on moving objects.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIZABETH J SLOWIK whose telephone number is (571)270-5608. The examiner can normally be reached MON - FRI: 0900-1700. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANISS CHAD, can be reached at (571)270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ELIZABETH J SLOWIK/
Examiner, Art Unit 3662

/ANISS CHAD/
Supervisory Patent Examiner, Art Unit 3662

Prosecution Timeline

Nov 20, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583434
METHOD OF CONTROLLING HYBRID ELECTRIC VEHICLE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12559088
Driver Assistance Based on Pose Detection
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12545297
METHODS AND SYSTEMS FOR GENERATING A LONGITUDINAL PLAN FOR AN AUTONOMOUS VEHICLE BASED ON BEHAVIOR OF UNCERTAIN ROAD USERS
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12535318
DETERMINING SCANNER ERROR
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12499763
Reporting Road Event Data and Sharing with Other Vehicles
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
46%
Grant Probability
64%
With Interview (+18.3%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 65 resolved cases by this examiner. Grant probability derived from career allow rate.
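Assuming the derivation the footnote describes (an assumption inferred from the figures shown, not documented methodology), the headline numbers reproduce as simple arithmetic:

```python
# Figures from the examiner card above; the additive interview-lift model is an assumption.
granted, resolved = 30, 65
allow_rate = granted / resolved               # career allow rate, displayed as 46%
interview_lift = 0.183                        # +18.3 percentage points with an interview
with_interview = allow_rate + interview_lift  # displayed as 64%
```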
