Prosecution Insights
Last updated: April 19, 2026
Application No. 18/928,441

APPARATUS AND METHOD FOR VISION CONTROL OF WEARABLE ROBOT

Status: Non-Final OA (§103)
Filed: Oct 28, 2024
Examiner: KATZ, DYLAN MICHAEL
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average), based on 242 granted of 279 resolved applications (+34.7% vs TC avg)
Interview Lift: +20.8% higher allowance for resolved cases with an interview (strong)
Typical Timeline: 2y 7m average prosecution; 45 applications currently pending
Career History: 324 total applications across all art units

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 279 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Talebi et al. (US 20240315910, hereinafter Talebi) in view of Huang et al. (CN 116872179, hereinafter Huang).

Regarding Claim 1, Talebi teaches:

A vision control apparatus of a wearable robot (see at least "For instance, compute device 230 may be configured to determine terrain information associated with the environment of the exoskeleton device 200 to enable footstep planning and/or balance estimation, as described in more detail below. Although compute device 230 is shown in FIG. 2 as being located external to exoskeleton device 200" in par. 0033), comprising:

a transceiver configured to communicate with at least one controller of the wearable robot (see at least "After performing footstep planning in process 328, one or more gait parameters output from the footstep planning process may be provided to a controller of exoskeleton device 330. For instance, information about the step size, step height, and/or step timing may be provided to exoskeleton device 330 via communications interface 332." in par. 0044);

at least one processor (see at least "The one or more images captured by the perception system 210 may be provided to compute device 230 that includes one or more processors configured to process the image(s)." in par. 0033); and

memory storing instructions that, when executed by the at least one processor, are configured to cause the vision control apparatus (see at least "The data storage 904 may exist as various types of storage media, such as a memory. For example, the data storage 904 may include or take the form of one or more computer-readable storage media that can be read or accessed by processor(s) 902. The one or more computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with processor(s) 902." in par. 0064) to:

receive, from the at least one controller of the wearable robot via a receiver of the transceiver, an indicator of a current robot foot position (see at least "As shown, a current location 410 of the exoskeleton device and a future location 420 of the exoskeleton device may be modeled based on one or more planned movements of the exoskeleton device and/or the crutches that the user of the exoskeleton device may use to improve stability. A terrain map generated based on images captured from the perception system may be used to identify footstep target locations and/or crutch target locations, as output of the footstep planning process." in par. 0043 and "The sensors may be configured to measure properties of the robotic device, such as angles of the joints, pressures within the actuators, joint torques, and/or positions, velocities, and/or accelerations of members of the robotic limb(s) at a given point in time…The processing system of the robotic device may determine the angles of the joints of the robotic limb, either directly from angle sensor information or indirectly from other sensor information from which the joint angles can be calculated." in par. 0053);

generate, based on the detected characteristic of terrain, point cloud-based geometric information associated with the terrain (see at least "As shown in FIG. 3, the output of IMU module 322 and camera module 324 may be used to generate a terrain map 326 of the terrain sensed by perception components 310. For instance, the terrain map may include height information associated with detected objects in the captured images." in par. 0040);

determine, based on the current robot foot position and the point cloud-based geometric information, a subsequent robot foot position (see at least "Terrain map 326 may be provided as input to a footstep planning and/or balance estimation process 328. As described above, to enable a user to walk with a lower body powered exoskeleton device, gait parameters (e.g., step length, step height, step timing, etc.) for the exoskeleton device are typically entered manually by a helper. In some embodiments, one or more (e.g., all) gait parameters for the exoskeleton device 330 may be determined automatically (e.g., without user input) based, at least in part, on terrain map 326. Some embodiments implement a footstep planning process that determines footstep target locations for a next step of the exoskeleton device based on the terrain map 326, and corresponding gait parameters may be determined based, at least in part, on the planned footstep target locations." in par. 0041 and "As shown, a current location 410 of the exoskeleton device and a future location 420 of the exoskeleton device may be modeled based on one or more planned movements of the exoskeleton device and/or the crutches that the user of the exoskeleton device may use to improve stability. A terrain map generated based on images captured from the perception system may be used to identify footstep target locations and/or crutch target locations, as output of the footstep planning process." in par. 0043); and

transmit, to the at least one controller of the wearable robot via a transmitter of the transceiver, the determined subsequent robot foot position (see at least "After performing footstep planning in process 328, one or more gait parameters output from the footstep planning process may be provided to a controller of exoskeleton device 330. For instance, information about the step size, step height, and/or step timing may be provided to exoskeleton device 330 via communications interface 332. In some embodiments, communications interface 332 is implemented as a wireless communications interface (e.g., a WiFi interface) between processing components 320 and a controller of exoskeleton device 330." in par. 0044).

Talebi does not appear to explicitly teach all of the following, but Huang does teach:

detect, via a depth camera of the wearable robot, a characteristic of terrain around the wearable robot (see at least "The embodiment provides an exoskeleton robot feasible domain parameter analysis system based on depth camera, comprising: The invention claims a data collecting device and a computing platform. the data collecting device collects the RGB image and depth map in the advancing direction of the exoskeleton robot in real time and sends to the calculating platform" on page 6).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus taught by Talebi to incorporate the teachings of Huang wherein the camera is a depth camera. The motivation to incorporate the teachings of Huang would be to improve safety while walking on complex terrains (see page 9) and improve adaptability to different terrains (see page 10).

Regarding Claim 2, Talebi as modified by Huang teaches: The vision control apparatus of claim 1.

Talebi does not appear to explicitly teach all of the following, but Huang does teach:

wherein the at least one processor comprises a red-green-blue-depth (RGB-Depth) pre-processor configured to convert depth information of the depth camera into a point cloud form to generate the point cloud-based geometric information associated with the terrain (see at least "step 5.1, according to the terrain type of the next landing point obtained in the step 4, performing terrain type screening on the walking area obtained in the step 3.2, obtaining the depth map area corresponding to the terrain type of the next landing point; then using the point cloud inverse projection algorithm to restore the three dimensional geometric structure of the area;" on page 4 and "based on the three-dimensional geometric structure obtained in the step 5.1, using the point cloud clustering method to convert the original stair point cloud model into a plane set, in the conversion process, the stair upper surface and the stair side surface should be included" on page 4), wherein the depth camera comprises an RGB-Depth camera (see at least "The embodiment provides an exoskeleton robot feasible domain parameter analysis system based on depth camera, comprising: The invention claims a data collecting device and a computing platform. the data collecting device collects the RGB image and depth map in the advancing direction of the exoskeleton robot in real time and sends to the calculating platform" on page 6).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus taught by Talebi to incorporate the teachings of Huang wherein the RGB and depth data is processed into a point cloud from which surface models like those of a stair can be extracted. The motivation to incorporate the teachings of Huang would be to improve safety while walking on complex terrains (see page 9) and improve adaptability to different terrains (see page 10).
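Huang's "point cloud inverse projection algorithm" refers to back-projecting a depth map into 3-D points, which is the kind of RGB-Depth pre-processing recited in claims 1-2. The following is a minimal sketch of that general technique only, assuming a pinhole camera with intrinsics fx, fy, cx, cy; the names and values are illustrative and are not taken from the application or the cited art.

```python
# Illustrative sketch (not the applicant's or the cited art's code): back-project an
# H x W depth image into a point cloud using assumed pinhole intrinsics fx, fy, cx, cy.
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Convert a depth image (meters) into an Nx3 point cloud in the camera frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx                            # back-project along camera x
    y = (v - cy) * z / fy                            # back-project along camera y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth return

# Example with a synthetic 4x4 depth image and made-up intrinsics
cloud = depth_to_point_cloud(np.full((4, 4), 1.5), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```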
Regarding Claim 3, Talebi as modified by Huang teaches: The vision control apparatus of claim 2.

Talebi does not appear to explicitly teach all of the following, but Huang does teach:

wherein the RGB-Depth pre-processor is configured to perform point cloud filtering for removing noise from the converted the point cloud form and for adjusting a data size (see at least "Further, the step 3.1 further comprises filtering the error detection, and the method for filtering the error detection is as follows: calculating the surrounding area of each connecting area according to the outline of the connecting area in the obtained detection result, and deleting the connecting area whose area is less than the preset threshold." on page 5).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus taught by Talebi to incorporate the teachings of Huang wherein the point cloud is filtered to reduce errors and areas smaller than a threshold are deleted (which reduces data size). The motivation to incorporate the teachings of Huang would be to improve safety while walking on complex terrains (see page 9) and improve adaptability to different terrains (see page 10).

Regarding Claim 4, Talebi as modified by Huang teaches: The vision control apparatus of claim 3, further comprising:

Talebi further teaches:

an inertia measurement unit (IMU)-based point cloud corrector configured to align, based on one or more measurements of at least one IMU sensor, a filtered point cloud in a gravitational direction (see at least "In some embodiments, perception components 310 may be mounted on the pelvis of the exoskeleton device 330 to enable capture of images of the terrain in front of the exoskeleton device. In some embodiments, the IMU 314 may be used to determine the orientation of camera 312 relative to the ground." in par. 0039 and "Processing components 320 also includes camera module 324 configured to process information (e.g., images) received from camera(s) 312. As shown in FIG. 3, the output of IMU module 322 and camera module 324 may be used to generate a terrain map 326 of the terrain sensed by perception components 310. For instance, the terrain map may include height information associated with detected objects in the captured images." in par. 0040).

Regarding Claim 11, Talebi as modified by Huang also teaches: A method for implementing the apparatus of Claim 1 (see Claim 1 analysis).

Regarding Claim 12, Talebi as modified by Huang also teaches: A method for implementing the apparatus of Claim 2 (see Claim 1 analysis).

Regarding Claim 13, Talebi as modified by Huang also teaches: A method for implementing the apparatus of Claim 3 (see Claim 3 analysis).

Regarding Claim 14, Talebi as modified by Huang also teaches: A method for implementing the apparatus of Claim 4 (see Claim 4 analysis).
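The filtering and gravity-alignment steps discussed for claims 3-4 are common point-cloud operations. Below is a hedged sketch, assuming voxel down-sampling (which the Wang reference also mentions) as the noise/data-size reduction and a Rodrigues rotation that maps the IMU-measured gravity vector onto -Z; the function names, voxel size, and tolerance are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch only: shrink the cloud with voxel down-sampling, then rotate it so
# the gravity direction reported by the IMU aligns with -Z (the gravitational direction).
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Keep one representative point per voxel to reduce noise and data size."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def align_to_gravity(points: np.ndarray, gravity_cam: np.ndarray) -> np.ndarray:
    """Rotate points so the IMU-measured gravity direction maps to (0, 0, -1)."""
    g = gravity_cam / np.linalg.norm(gravity_cam)
    target = np.array([0.0, 0.0, -1.0])
    axis = np.cross(g, target)
    s, c = np.linalg.norm(axis), np.dot(g, target)   # sin and cos of the rotation angle
    if s < 1e-9:
        if c > 0:
            return points.copy()                     # already aligned
        return points @ np.diag([1.0, -1.0, -1.0])   # 180-degree flip about X
    k = axis / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + s * K + (1.0 - c) * (K @ K)      # Rodrigues' rotation formula
    return points @ R.T
```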
Claims 5-7, 9-10, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Talebi et al. (US 20240315910, hereinafter Talebi) in view of Huang et al. (CN 116872179, hereinafter Huang) and Wang et al. (CN 116901036, hereinafter Wang).

Regarding Claim 5, Talebi as modified by Huang teaches: The vision control apparatus of claim 4, wherein the instructions, when executed by the at least one processor, are configured to cause the vision control apparatus to:

Talebi further teaches:

set a region-of-interest (see at least "The footstep planning may involve generating one or more models of the environment of the exoskeleton device 200" in par. 0036); and

generate an elevation map by combining the determined elevation values (see at least "the output of IMU module 322 and camera module 324 may be used to generate a terrain map 326 of the terrain sensed by perception components 310. For instance, the terrain map may include height information associated with detected objects in the captured images." in par. 0040).

Talebi and Huang do not appear to explicitly teach all of the following, but Wang does teach:

align the filtered point cloud within the region-of-interest and split the point cloud aligned within the region-of-interest into a plurality of grids (see at least "The calculation method of the terrain size parameter detection module is as follows: Step B1: point cloud processing; Step B11: coordinate conversion; obtaining colour information and distance information on each pixel, converting all point cloud data into point cloud data under the same three-dimensional coordinate system; Step B12: removing noise by point cloud; The method comprises the following steps: firstly, performing voxel down-sampling to the point cloud, and then removing the outlier point by the statistical outlier point method;" on pages 3-4); and

determine an elevation value for each portion of terrain corresponding to a respective grid of the plurality of grids (see at least "then using the random sampling method to randomly extract the points with the same number for each depth image as the new point cloud data; Step B2: the processed point cloud data through two groups of feature extracting part, extracting the global and local features of the point cloud and combining them, then through a PointNet layer, further extracting the features obtained above, at last through the full connecting layer, at the same time, outputting the size parameter of the current terrain;" on page 4).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus taught by Talebi as modified by Huang to incorporate the teachings of Wang wherein the sensor data is processed to populate terrain size parameters for the voxels in a three-dimensional grid coordinate system. The motivation to incorporate the teachings of Wang would be to improve the accuracy and efficiency of the terrain detection of the exoskeleton robot (see page 2).
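Claim 5's elevation map can be thought of as a 2-D height grid built over a region of interest. The sketch below is a minimal illustration under several assumptions made only for exposition: a gravity-aligned cloud, square cells, and the mean height per cell; none of this is drawn from the application or the cited references.

```python
# Minimal sketch (assumptions: gravity-aligned cloud, square cells, mean-height cells)
# of turning a point cloud into a 2-D elevation map over a region of interest.
import numpy as np

def elevation_map(points: np.ndarray, roi_min, roi_max, cell: float) -> np.ndarray:
    """Grid the XY footprint of `points` and store the mean Z (height) of each cell."""
    roi_min, roi_max = np.asarray(roi_min, float), np.asarray(roi_max, float)
    mask = np.all((points[:, :2] >= roi_min) & (points[:, :2] < roi_max), axis=1)
    pts = points[mask]
    shape = tuple(np.ceil((roi_max - roi_min) / cell).astype(int))
    ij = ((pts[:, :2] - roi_min) / cell).astype(int)   # cell index of each point
    heights = np.full(shape, np.nan)                   # NaN marks cells with no data
    sums, counts = np.zeros(shape), np.zeros(shape)
    np.add.at(sums, (ij[:, 0], ij[:, 1]), pts[:, 2])
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)
    observed = counts > 0
    heights[observed] = sums[observed] / counts[observed]
    return heights
```

Using the median or maximum per cell instead of the mean is an equally plausible design choice; the claim language as quoted recites an average, which is what the sketch uses.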
Regarding Claim 6, Talebi as modified by Huang and Wang teaches: The vision control apparatus of claim 5.

Talebi does not appear to explicitly teach all of the following, but Huang does teach:

wherein each elevation value of the determined elevation values is determined based on an average of length values in the gravitational direction of point clouds, associated with the point cloud, input for the respective grid of the plurality of grids (see at least "calculating each point in the upper surface of two adjacent stairs, taking the distance median in the normal vector direction of each corresponding plane as the height parameter of the stair terrain; the ringing side surface between the two stairs is projected to the normal vector direction, respectively calculating the average coordinate of each plane projection point set, then according to the average coordinate of each plane projection point set, calculating the distance between the average coordinate of the two plane projection points, namely obtaining the distance between the stair side surface, it is used as the ladder length parameter of the stair; the upper surface of the stair is the upper surface of the stair parallel to the ground surface" on page 4).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus taught by Talebi as modified by Huang and Wang to incorporate the teachings of Huang wherein the point cloud data is clustered to model the upper surface of stairs using the average distance in the height direction between points on parallel planes. The motivation to incorporate the teachings of Huang would be to improve safety while walking on complex terrains (see page 9) and improve adaptability to different terrains (see page 10).

Regarding Claim 7, Talebi as modified by Huang and Wang teaches: The vision control apparatus of claim 5.

Talebi does not appear to explicitly teach all of the following, but Huang does teach:

wherein the instructions, when executed by the at least one processor, are configured to cause the vision control apparatus to generate a plurality of clusters by clustering the point clouds based on a distance between the point clouds in the elevation map and a normal vector estimated for each point cloud of the point clouds (see at least "when the next step terrain type is stair, according to the following steps: based on the three-dimensional geometric structure obtained in the step 5.1, using the point cloud clustering method to convert the original stair point cloud model into a plane set, in the conversion process, the stair upper surface and the stair side surface should be included;" on page 4 and "calculating the normal vector of the upper surface of the stair and the normal vector of the side surface of the stair by the least square method; calculating each point in the upper surface of two adjacent stairs, taking the distance median in the normal vector direction of each corresponding plane as the height parameter of the stair terrain; the ringing side surface between the two stairs is projected to the normal vector direction, respectively calculating the average coordinate of each plane projection point set, then according to the average coordinate of each plane projection point set, calculating the distance between the average coordinate of the two plane projection points, namely obtaining the distance between the stair side surface, it is used as the ladder length parameter of the stair" on page 4).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus taught by Talebi as modified by Huang and Wang to incorporate the teachings of Huang wherein the point cloud data is clustered to model the upper surface and side surface of stairs with their normal vectors. The motivation to incorporate the teachings of Huang would be to improve safety while walking on complex terrains (see page 9) and improve adaptability to different terrains (see page 10).
Regarding Claim 9, Talebi as modified by Huang and Wang teaches: The vision control apparatus of claim 7, wherein the instructions, when executed by the at least one processor, are configured to:

Talebi does not appear to explicitly teach all of the following, but Huang does teach:

cause the vision control apparatus to extract geometric information of the terrain from the plurality of clusters, respectively (see at least "based on the three-dimensional geometric structure obtained in the step 5.1, using the point cloud clustering method to convert the original stair point cloud model into a plane set, in the conversion process, the stair upper surface and the stair side surface should be included;" on page 4).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus taught by Talebi as modified by Huang and Wang to incorporate the teachings of Huang wherein the point cloud data is clustered to model the upper surface and side surface of stairs with their normal vectors. The motivation to incorporate the teachings of Huang would be to improve safety while walking on complex terrains (see page 9) and improve adaptability to different terrains (see page 10).

Regarding Claim 10, Talebi as modified by Huang and Wang teaches: The vision control apparatus of claim 9.

Talebi does not appear to explicitly teach all of the following, but Huang does teach:

wherein the plurality of clusters comprise geometric information on at least one of a flat ground, stairs, an uphill slope, or a downhill slope (see at least "the output is the terrain type semantic segmentation result of each pixel; the terrain type comprises flat ground, slope, stair, wall body and barrier, wherein the flat ground, slope and stair are defined as walking area, the wall body and barrier are defined as non-walking area" on page 3).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus taught by Talebi as modified by Huang and Wang to incorporate the teachings of Huang wherein the point cloud data is clustered to model the upper surface and side surface of stairs with their normal vectors. The motivation to incorporate the teachings of Huang would be to improve safety while walking on complex terrains (see page 9) and improve adaptability to different terrains (see page 10).
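Claim 10's terrain categories (flat ground, stairs, uphill slope, downhill slope) could be assigned per cluster in many ways. One rough heuristic, offered purely as an assumption for exposition and not as the claimed or cited method: fit a single plane to each gravity-aligned cluster and label it from the fitted slope and the residual error.

```python
# Rough illustrative heuristic (an assumption for exposition, not the claimed method):
# fit z ~ a*x + b*y + c to a gravity-aligned cluster and label it from slope and residual.
import numpy as np

def classify_cluster(points: np.ndarray,
                     flat_slope: float = 0.05,      # roughly 3 degrees
                     step_residual_m: float = 0.04) -> str:
    """Label a cluster as flat ground, uphill/downhill slope, or stairs."""
    xy = np.c_[points[:, :2], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(xy, points[:, 2], rcond=None)
    residual = points[:, 2] - xy @ coef
    if np.std(residual) > step_residual_m:
        return "stairs"                 # a single plane fits poorly: stepped surface
    slope_x = coef[0]                   # height trend along the assumed walking direction (+X)
    if abs(slope_x) < flat_slope:
        return "flat ground"
    return "uphill slope" if slope_x > 0 else "downhill slope"
```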
Regarding Claim 15, Talebi as modified by Huang and Wang also teaches: A method for implementing the apparatus of Claim 5 (see Claim 5 analysis).

Regarding Claim 16, Talebi as modified by Huang and Wang also teaches: A method for implementing the apparatus of Claim 6 (see Claim 6 analysis).

Regarding Claim 17, Talebi as modified by Huang and Wang also teaches: A method for implementing the apparatus of Claim 7 (see Claim 7 analysis).

Regarding Claim 19, Talebi as modified by Huang and Wang also teaches: A method for implementing the apparatus of Claim 9 (see Claim 9 analysis).

Regarding Claim 20, Talebi as modified by Huang and Wang also teaches: A method for implementing the apparatus of Claim 10 (see Claim 10 analysis).

Allowable Subject Matter

Claims 8 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The closest prior art comes from Talebi, Huang, and Wang, but the prior art does not appear to teach "based on an angular difference of the normal vectors between point clouds adjacent to each other by a distance within a specific criterion being determined to be within a threshold level, classify the corresponding point clouds whose angular difference is determined to be within the threshold level into a same cluster" in combination with all of the other limitations in the claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN M KATZ whose telephone number is (571) 272-2776. The examiner can normally be reached Mon-Thurs. 8:00-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Lin, can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DYLAN M KATZ/
Primary Examiner, Art Unit 3657

Prosecution Timeline

Oct 28, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596378: Autonomous Control and Navigation of Unmanned Vehicles (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594663: ROBOT SYSTEM AND CART (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589499: Mobile Construction Robot (granted Mar 31, 2026; 2y 5m to grant)
Patent 12589491: METHODS, SYSTEMS, AND DEVICES FOR MOTION CONTROL OF AT LEAST ONE WORKING HEAD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12582491: CONTROL OF A SURGICAL INSTRUMENT HAVING BACKLASH, FRICTION, AND COMPLIANCE UNDER EXTERNAL LOAD IN A SURGICAL ROBOTIC SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+20.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 279 resolved cases by this examiner. Grant probability is derived from the career allow rate.
