Prosecution Insights
Last updated: April 19, 2026
Application No. 17/693,019

Systems and Methods for Robotic Manipulation Using Extended Reality

Non-Final OA — §103, §DP
Filed
Mar 11, 2022
Examiner
KARWAN, SIHAR A
Art Unit
3658
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Boston Dynamics Inc.
OA Round
5 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 215 granted / 385 resolved; +3.8% vs TC avg)
Interview Lift: +25.8% (strong lift, comparing resolved cases with vs. without an interview)
Typical Timeline: 3y 3m avg prosecution; 41 currently pending
Career History: 426 total applications across all art units
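
The two headline numbers above are simple ratios, and can be reproduced from the raw counts. A minimal sketch, assuming the dashboard computes them this way: the 215/385 counts come from the page, while the with/without-interview rates (82.0 / 56.2) are read off the displayed lift and are illustrative, not sourced per-case data.

```python
# Hypothetical derivation of the examiner stats shown above.
# Counts (215 granted / 385 resolved) are from the page; the
# with/without-interview split is an illustrative assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allow rate with vs. without interview."""
    return rate_with - rate_without

career = allow_rate(215, 385)       # ~55.8%, displayed as 56%
lift = interview_lift(82.0, 56.2)   # +25.8 points, matching the page

print(f"Career allow rate: {career:.1f}%")
print(f"Interview lift: {lift:+.1f} points")
```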

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 27.8% (-12.2% vs TC avg)
§102: 33.4% (-6.6% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 385 resolved cases
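
Each delta above is the examiner's per-statute rate minus the Tech Center average. A minimal sketch of that comparison, with one loud caveat: the 40.0% Tech Center average is an inference (every displayed rate plus its delta equals 40.0), not a figure sourced from the page.

```python
# Statute-level comparison vs. the Tech Center average.
# Per-statute rates are from the page; TC_AVG is an assumption
# back-solved from the displayed deltas.

TC_AVG = 40.0  # assumed TC 3600 average estimate, in percent

statute_rates = {"101": 11.2, "103": 27.8, "102": 33.4, "112": 16.4}

for statute, rate in statute_rates.items():
    delta = rate - TC_AVG
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```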

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Amendments to the claims have been recorded.

Response to Arguments

Applicant's arguments filed on 6/11/2024 have been fully considered but they are not persuasive. The Double Patenting rejection has been withdrawn. Applicant's arguments are based on the newly provided amendments and are fully addressed by the rejections applied to the newly amended claims.

Applicant argues that Mead does not teach "at least one image sensor located on the robot, wherein a field of view of the at least one image sensor … spans at least 150 degrees with respect to the ground surface." The Examiner strongly disagrees. Fig. 6 #66 shows a camera on the robot and the FOV of the camera; see also Fig. 30B. Additionally, para. 80 teaches, with reference to FIG. 6, that the robot 10 may have one or more "onboard" devices 60 that provide enhanced functionality and capability for operation of the robot in the work environment, including capturing or recording audio-visual signals through a camera array. Figs. 30B and 30 are both images recorded by the robot camera array 66, and Fig. 30 also spans at least 180 degrees with respect to the ground surface. This image is transmitted to the pilot 14 and displayed. The cameras on the robot must have at least the same FOV to display to the pilot; this is made clear in Fig. 12, where the pilot can see the 360 degrees of images from the robot of Fig. 11, which captures 360 degrees of images. See also paras. 85-87 for clarity: per FIGS. 11 and 12, a pilot station may include a cockpit 82 which is configured for an "immersive" experience for the pilot 14 when combined with a robot 10 configured to provide an "immersive" experience using directional video and directional audio. The cockpit 82 conveys a life-like experience of presence in the work environment to the pilot 14 through the robot 10.
The cockpit 82 need not include a physical structure and may instead include systems which provide surround sound and surround video.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-30 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Mead US 2014/0222206 as applied to the claims above, and further in view of Ma WO 2021015868.

1. Mead and Ma teach a robot, comprising: a manipulator (Fig. 5 #42 arms); at least one image sensor located on the robot (Fig. 6 #66), wherein a field of view of the at least one image sensor with respect to the robot spans at least 150 degrees with respect to the ground surface (para. 85: a visual indication of the pilot 14 should be visually perceptible within a 270 to 360 degree field of view; Fig. 30B is a 360-degree view that intersects the ground surface; also, if there is only one sensor, then "each" will refer to that one);
data processing hardware in communication with the at least one image sensor (para. 80: capturing or recording audio-visual signals [recording visuals is data processing based on the image sensor] through a camera, a camera array); and memory in communication with the data processing hardware, the memory storing instructions, wherein execution of the instructions by the data processing hardware causes the data processing hardware to: (para. 77: the robot includes a computing system) provide, to an extended reality (XR) display located separate from the robot, an output based on the image data (Fig. 10 #12 pilot station); based on providing, to the XR display, the output, receive operator movement data indicating an operation to interact with an object located in an environment of the robot and indicating a movement by an operator of the robot (Fig. 13 and para. 90: the remote controller device includes a treadmill-operated [operator operation] controller 84 where the faster the pilot 14 walks, the faster the robot 10 moves [located in an environment, the ground, i.e., the object]); based on the operator movement data, determine a combination of movement of the manipulator (Fig. 5 #46) and movement of the two or more legs (#38 legs; also para. 75, chassis supported by legs) to interact with the object according to a position of the robot relative to the object as indicated by a map of the environment (para. 97: interaction with the robot in the work environment includes (1) information about the work environment, such as maps), wherein the operator movement data and the combination of movement of the manipulator and movement of the two or more legs comprise different movement data (Fig. 13 and para. 90: the remote controller device includes a treadmill-operated controller 84 where the faster the pilot 14 walks, the faster the robot 10 moves [in the environment]). Operator data and robot movement data are different: walking vs. track rolling.
Also (para. 97: information on the data channel may include (1) information about the work environment, such as maps, office locations, directories and staffing). Also, para. 72: The robot 10 is connected to the network and configured to operate and interact in the work environment either autonomously or at the direction of the pilot 14. A person 16 or robot 10 must present a badge [combination of movement of the manipulator and movement of the legs] or device at the security access panel 24 to gain entry into the meeting room 18. Each robot 10 is configured to be capable of moving and interacting with people 16 or other robots 10 within the work environment under the direction of a control system and/or at the direction of the pilot 14 located at the pilot station 12. Para. 75: per FIG. 5, the torso portion 36 of the robot 10 may include shoulders 40 and arms 42 with "hands" 44 at each end of the arms. The shoulders 40, arms 42 or hands 44 may include devices 46 which can be manipulated.

*The claims recite two different subjects that can perform the claimed invention: the robot and the operator. For example, "determine a combination of movement of the manipulator and movement of the two or more legs…" can be performed "based on the operator movement data," i.e., by remote control, or can be done by the robot. Broadest reasonable interpretation allows either, both, or a combination to be correct.

instruct, according to the combination of movement of the manipulator and movement of the two or more legs, the robot to move to a location associated with the object using the two or more legs (Fig. 13 and para. 90: the remote controller device includes a treadmill-operated controller 84 where the faster the pilot 14 walks, the faster the robot 10 moves). If the robot moves, it is an instruction for the robot to move according to the robot movement data. Also, paras. 72, 75, and Fig. 5.

Mead teaches all of the limitations of claim 1 but does not explicitly teach: two or more legs; at least one image sensor, wherein a field of view of each image sensor of the at least one image sensor intersects a ground surface with respect to the robot; receive, from the at least one image sensor, image data that reflects the at least a portion of the ground surface with respect to the robot and corresponds to the field of view; or to grasp the object using the manipulator.

However, Ma teaches at least one image sensor, wherein a field of view of each image sensor of the at least one image sensor intersects a ground surface, and receiving, from the at least one image sensor, image data that reflects the at least a portion of the ground surface with respect to the robot and corresponds to the field of view (Ma Fig. 2B and para. 35: the example of FIGURE 2B is based on the robot's 200 point of view; the image from FIGURE 2B may be output to a remote operator to be displayed at a display device at a remote location; as shown in FIGURE 2B, the intended path 226 navigates around the table 220). Path 226 is the ground surface. Also, if there is only one sensor, then "each" will refer to that one. Ma further teaches two or more legs (Ma Fig. 4, robot legs), wherein the robot movement data indicates a combination of movement of the manipulator and movement of the two or more legs (Ma para. 99: the locomotion module 426 may be used to facilitate locomotion of the robotic device 428 and [combination]/or components (e.g., limbs, hands, etc.) of the robotic device 428), and to grasp the object using the manipulator.
Ma para. 19: A parameterized behavior, such as opening a door having a rotating handle, may be implemented to execute opening any door handle (e.g., one that requires thirty (30) degrees of rotation or one that requires sixty (60) degrees or more of rotation). Also, para. 23: pick up a bottle. Therefore, it was well known at the time the invention was filed and would have been obvious to one of ordinary skill in the art to combine the teachings with a reasonable expectation of success in order to manipulate objects within the environment, such that the claimed invention as a whole would have been obvious.

2. Mead and Ma teach all of the limitations of claim 1 and further teach, wherein to provide, to the XR display, the output, the execution of the instructions by the data processing hardware further causes the data processing hardware to: (Fig. 10 XR display) provide the output to the XR display in a first time interval (Fig. 10 and para. 79: the robot 10 may provide a video; video is based on time intervals: first, second, …, nth frames of video), and wherein to instruct, instruct movement by the robot in a second time interval, wherein the first time interval and the second time interval are separated by a planning period (Fig. 10 and para. 93: the robot 10 may have a video monitor 50 that displays in real time the face 54 of a pilot 14 when occupied; video is a first, second, …, nth time).

3.
Mead and Ma teach all of the limitations of claim 1 and further teach, wherein the execution of the instructions by the data processing hardware further causes the data processing hardware to: generate one or more waypoints based on the operator movement data, wherein the one or more waypoints indicate at least one of a body positioning of a body of the robot or a manipulator positioning of the manipulator, and wherein the combination of movement of the manipulator and movement of the two or more legs results in at least one of the body assuming the body positioning or the manipulator assuming the manipulator positioning; and add the one or more waypoints to the map (Fig. 5 #46; para. 75: hands 44 may include devices 46 which can be manipulated; the form of the robot 10 is adapted to the function or service to be performed).

4. Mead and Ma teach all of the limitations of claim 1 and further teach, wherein the manipulator comprises an arm portion and a joint portion (Fig. 5: #42 arm and hands #44 are both joint portions).

5. Mead and Ma teach all of the limitations of claim 4, but do not teach, wherein the execution of the instructions by the data processing hardware further causes the data processing hardware to: identify, based on the operator movement data, a joint center of motion of the operator, and wherein to instruct the robot to move, the execution of the instructions by the data processing hardware further causes the data processing hardware to: instruct movement by the manipulator relative to a point on the manipulator that corresponds to the joint center of motion of the operator.
However, regarding identifying, based on the operator movement data, a joint center of motion of the operator, and instructing movement by the manipulator relative to a point on the manipulator that corresponds to the joint center of motion of the operator, Ma teaches (Ma para. 2): the robot has the capacity to change a position of end effectors as a function of joint configuration, and the robot is equipped with automatic whole-body control and planning. This enables a person/human operator to seamlessly demonstrate task space end-effector motions in virtual reality (VR) with little or no concern about kinematic constraints or the robot's posture. Therefore, it was well known at the time the invention was filed and would have been obvious to one of ordinary skill in the art to combine the teachings with a reasonable expectation of success in order to manipulate objects within the environment, such that the claimed invention as a whole would have been obvious.

6. Mead and Ma teach all of the limitations of claim 4, but do not teach, wherein the execution of the instructions by the data processing hardware further causes the data processing hardware to: map a workspace of the operator to a workspace of the manipulator. However, regarding controlling the robot to move by mapping a workspace of the operator to a workspace of the manipulator, Ma teaches (Ma para. 2): this enables a person/human operator to seamlessly demonstrate task space end-effector motions in virtual reality (VR) with little or no concern about kinematic constraints or the robot's posture.
Therefore, it was well known at the time the invention was filed and would have been obvious to one of ordinary skill in the art to combine the teachings with a reasonable expectation of success in order to manipulate objects within the environment, such that the claimed invention as a whole would have been obvious.

7. Mead and Ma teach all of the limitations of claim 6 and further teach, wherein the execution of the instructions by the data processing hardware further causes the data processing hardware to: generate a movement plan in the workspace of the manipulator based on a task-level result to be achieved, the movement plan reflecting a first aspect of motion that is different from a second aspect of motion that is reflected in the operator movement data (Ma para. 19: the robot may carry out the same task by updating the parameters of the behaviors that were taught during the virtual reality controlled sequence; the parameters may be updated based on a current pose and/or location of the robot, relative to the pose and/or location used during training; also para. 3: providing a user interface for viewing and editing a path predicated for the robot).

8. Mead and Ma teach all of the limitations of claim 1 and further teach, wherein the robot is a first robot (Fig. 3 10A); the computing device is in electronic communication with the first robot and a second robot (Fig. 3 10B); and the computing device is configured to control the first robot and the second robot to move in coordination (Fig. 3 12A and B; also para. 73, one or more robots 10).

9. Mead and Ma teach all of the limitations of claim 1 and further teach, wherein the execution of the instructions by the data processing hardware further causes the data processing hardware to: generate a manipulation plan based on the operator movement data and generate a locomotion plan based on the manipulation plan (Fig. 3: 10A and B are moved based on the locomotion/manipulation plan of 12A and B).

10.
Mead and Ma teach all of the limitations of claim 1, but do not teach, wherein to instruct the robot to move, the execution of the instructions by the data processing hardware further causes the data processing hardware to: instruct utilization of a force control mode based on a determination that the object is detected to be in contact with the manipulator; or instruct utilization of a low-force mode or no-force mode based on a determination that an object is not detected to be in contact with the manipulator.

However, regarding instructing utilization of a force control mode based on a determination that an object is detected to be in contact with the manipulator (para. 77: checking for objects in hand [contact] using force/torque sensors; also see paras. 62 and 68-69), or instructing utilization of a low-force mode or no-force mode based on a determination that another object is not detected to be in contact with the manipulator, Ma teaches (Ma para. 64): these behaviors combine collision-free motion planning and hybrid (position and force) Cartesian end-effector control, minimizing the taught parameters and providing robustness during execution; also see para. 77: checking for objects in hand [not detected] using force/torque sensors. Therefore, it was well known at the time the invention was filed and would have been obvious to one of ordinary skill in the art to combine the teachings with a reasonable expectation of success in order to manipulate objects within the environment, such that the claimed invention as a whole would have been obvious.

Claims 11 and 21 are rejected using the same rejections as made to claim 1.
Claims 12 and 22 are rejected using the same rejections as made to claim 2.
Claims 13 and 23 are rejected using the same rejections as made to claim 3.
Claims 14 and 24 are rejected using the same rejections as made to claim 4.
Claims 15 and 25 are rejected using the same rejections as made to claim 5.
Claims 16 and 26 are rejected using the same rejections as made to claim 6.
Claims 17 and 27 are rejected using the same rejections as made to claim 7.
Claims 18 and 28 are rejected using the same rejections as made to claim 8.
Claims 19 and 29 are rejected using the same rejections as made to claim 9.
Claims 20 and 30 are rejected using the same rejections as made to claim 10.

32. The robot of claim 1, wherein the operator movement data corresponds to a plurality of robots, the plurality of robots comprising the robot (Mead para. 95: a robot or fleet of robots at a work environment), wherein the plurality of robots are instructed to move according to the combination of movement of the manipulator and movement of the two or more legs (Mead Fig. 5, para. 75: per FIG. 5, the torso portion 36 of the robot 10 may include shoulders 40 and arms 42 with "hands" 44 at each end of the arms; legs, wheels, rollers, track, or tread; the shoulders 40, arms 42 or hands 44, legs, wheels, rollers, track, or tread may include devices 46 which can be manipulated).

Claims 31 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Mead and Ma as applied to the claims above, and further in view of Abe US 9440353.

31.
Mead and Ma teach all of the limitations of claim 1 but do not teach the limitations of claim 31. However, Abe teaches, wherein the robot comprises four legs (Fig. 4), wherein a bottom portion of a body of the robot faces the ground surface during traversal of the environment by the robot (Fig. 4, direction of the legs to the ground), wherein a top portion of the body faces away from the ground surface during traversal of the environment by the robot (Fig. 4, top portion in the opposing direction of the legs to the ground), wherein the manipulator is connected to the top portion of the body [image omitted], wherein the combination of movement of the manipulator and movement of the two or more legs [Fig. 4 manipulator and legs] further indicates a combination of the movement of the manipulator, a movement of the four legs (Col. 1 line 25: motions of the robot based on sets of control parameters; also, Mead Fig. 5, para. 75: the torso portion 36 of the robot 10 may include shoulders 40 and arms 42 with "hands" 44 at each end of the arms; legs, wheels, rollers, track, or tread; the shoulders 40, arms 42 or hands 44, legs, wheels, rollers, track, or tread may include devices 46 which can be manipulated), and a non-movement operation, and wherein the non-movement operation comprises an operation to activate (Col. 1 line 25: motions of the robot based on sets of control parameters) or deactivate a system of the robot. Therefore, it was well known at the time the invention was filed and would have been obvious to one of ordinary skill in the art to combine the teachings with a reasonable expectation of success in order to determine coordinated motion for a quadruped robot, such that the claimed invention as a whole would have been obvious.

33.
Mead and Ma teach all of the limitations of claim 1 but do not teach the limitations of claim 33. However, Abe teaches, wherein the operator movement data further indicates a movement onto the object, wherein the execution of the instructions by the data processing hardware further causes the data processing hardware to: determine that movement by the robot according to the operator movement data causes at least a portion of the robot to contact the object (Col. 7 line 37: the robotic device 100 may be configured to operate autonomously, semi-autonomously, and/or using directions provided by user(s)); and determine to grasp the object using the manipulator based on determining that the movement by the robot according to the operator movement data (Col. 7 line 37: the robotic device 100 may be configured to operate autonomously, semi-autonomously, and/or using directions provided by user(s)) causes the at least a portion of the robot to contact the object (Col. 12 line 53: e.g., walk [onto the object], trot, run, gallop, and bound), wherein determining the combination of movement of the manipulator and movement of the two or more legs is based on determining to grasp the object (Col. 15 line 35: the operation of the end effector 404 may be coordinated [movement to grasp] with operation of the manipulator arm 402 and other robotic mechanisms of the robotic device 400 in order to throw objects within its grasp; also, Mead Fig. 5, para. 75: the torso portion 36 of the robot 10 may include shoulders 40 and arms 42 with "hands" 44 at each end of the arms; legs, wheels, rollers, track, or tread; the shoulders 40, arms 42 or hands 44, legs, wheels, rollers, track, or tread may include devices 46 which can be manipulated). It is noted that the robot can coordinate movement and other robotic mechanisms to grasp an object to throw.
Therefore, it was well known at the time the invention was filed and would have been obvious to one of ordinary skill in the art to combine the teachings with a reasonable expectation of success in order to determine coordinated motion for a quadruped robot, such that the claimed invention as a whole would have been obvious.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIHAR A KARWAN, whose telephone number is (571) 272-2747. The examiner can normally be reached M-F, 11am-7pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SIHAR A KARWAN/
Examiner, Art Unit 3658

Prosecution Timeline

Mar 11, 2022
Application Filed
Mar 06, 2024
Non-Final Rejection — §103, §DP
Jun 04, 2024
Applicant Interview (Telephonic)
Jun 04, 2024
Examiner Interview Summary
Jun 11, 2024
Response Filed
Oct 02, 2024
Final Rejection — §103, §DP
Dec 19, 2024
Applicant Interview (Telephonic)
Dec 19, 2024
Examiner Interview Summary
Jan 06, 2025
Request for Continued Examination
Jan 10, 2025
Response after Non-Final Action
Feb 12, 2025
Non-Final Rejection — §103, §DP
May 06, 2025
Interview Requested
May 15, 2025
Applicant Interview (Telephonic)
May 16, 2025
Examiner Interview Summary
May 16, 2025
Response Filed
Aug 22, 2025
Final Rejection — §103, §DP
Sep 24, 2025
Interview Requested
Oct 02, 2025
Interview Requested
Jan 07, 2026
Interview Requested
Jan 23, 2026
Request for Continued Examination
Feb 19, 2026
Response after Non-Final Action
Feb 26, 2026
Applicant Interview (Telephonic)
Mar 02, 2026
Non-Final Rejection — §103, §DP (current)
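
One useful figure the timeline implies but does not state: elapsed prosecution time. A minimal stdlib sketch, using only the filing and latest-OA dates from the page; the year/month rounding convention (365-day years, 30-day months) is an assumption. The result, roughly 3y 11m, already exceeds the examiner's 3y 3m median, consistent with the High PTA risk flagged in the projections below.

```python
# Elapsed prosecution time from the timeline dates above.
# Rounding convention (365-day years, 30-day months) is assumed.
from datetime import date

filed = date(2022, 3, 11)      # Application Filed
latest_oa = date(2026, 3, 2)   # current Non-Final Rejection

days = (latest_oa - filed).days
years, rem = divmod(days, 365)
months = rem // 30
print(f"Elapsed prosecution: ~{years}y {months}m")  # ~3y 11m
```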

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589502
CARGO-HANDLING APPARATUS, CONTROL DEVICE, CONTROL METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12589750
VEHICULAR CONTROL SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12589504
SYSTEM AND METHOD FOR COGNITIVE SURVEILLANCE ROBOT FOR SECURING INDOOR SPACES
2y 5m to grant Granted Mar 31, 2026
Patent 12583100
ROBOT TO WHICH DIRECT TEACHING IS APPLIED
2y 5m to grant Granted Mar 24, 2026
Patent 12576516
HUMAN SKILL BASED PATH GENERATION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 56% (82% with interview, +25.8%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 385 resolved cases by this examiner. Grant probability derived from career allow rate.
