Prosecution Insights
Last updated: April 19, 2026
Application No. 18/333,547

DOCKING SYSTEM, AUTONOMOUS MOBILE ROBOT FOR USE WITH SAME, AND ASSOCIATED METHOD

Status: Final Rejection (§103)
Filed: Jun 13, 2023
Examiner: LEVY, MERRITT E
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Ford Global Technologies LLC
OA Round: 4 (Final)

Grant Probability: 33% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 7m
Grant Probability with Interview: 70%
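The headline probabilities above are consistent with simple arithmetic on the examiner's record. A minimal sketch, assuming the "+36.6% interview lift" reported further down is read as additive percentage points (that reading, and the variable names, are assumptions, not source data):

```python
# Illustrative check of the headline figures (inputs taken from the dashboard).
granted, resolved = 26, 78            # examiner's career record

base = granted / resolved             # career allowance rate
with_interview = base + 0.366         # reading "+36.6% lift" as additive points

print(f"grant probability: {base:.0%}")            # ~33%
print(f"with interview:    {with_interview:.0%}")  # ~70%
```

The two printed values match the dashboard's 33% / 70% pair, which suggests the "with interview" figure is simply the base rate plus the lift rather than an independently modeled number.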

Examiner Intelligence

Career Allow Rate: 33% (26 granted / 78 resolved; -18.7% vs TC avg)
Interview Lift: +36.6% for resolved cases with an interview
Avg Prosecution: 3y 7m
Currently Pending: 56
Total Applications: 134 (across all art units)
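The "-18.7% vs TC avg" delta implies a Tech Center baseline that the report never states directly. A small reconstruction, with the caveat that the baseline is inferred from the delta rather than taken from the source:

```python
# Back out the implied Tech Center average allow rate from the delta above.
career_rate = 26 / 78        # examiner's career allow rate (~33.3%)
delta_vs_tc = -0.187         # dashboard: -18.7% vs TC average

implied_tc_avg = career_rate - delta_vs_tc
print(f"implied TC average: {implied_tc_avg:.1%}")  # ~52.0%
```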

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)

TC-average comparisons are estimates. Based on career data from 78 resolved cases.
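One sanity check worth noting: all four statute-specific deltas point back at the same implied Tech Center baseline. A sketch of that reconstruction (the ~40% figure is inferred from the deltas, not stated in the source):

```python
# Each statute's rate minus its delta should recover the TC average estimate.
stats = {
    "§101": (9.3, -30.7),   # (allow rate %, delta vs TC avg %)
    "§103": (54.0, +14.0),
    "§102": (16.3, -23.7),
    "§112": (20.0, -20.0),
}

implied = {s: rate - delta for s, (rate, delta) in stats.items()}
print(implied)  # every statute implies a TC average of about 40.0%
```

That the four values agree suggests the statute deltas were all computed against a single Tech Center baseline of roughly 40%, rather than per-statute baselines.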

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to Applicant’s reply filed on September 11, 2025. Claims 1-3, 8-13, and 15-25 are currently pending, with Claims 1, 10, 15, and 24 being amended.

Response to Amendments

In response to Applicant’s amendments, filed September 11, 2025, the Examiner maintains the previous 35 U.S.C. 103 rejections.

Response to Arguments

Applicant’s arguments filed September 11, 2025, have been fully considered but they are not persuasive.

Regarding Applicant’s arguments concerning Webster’s teaching of a stationary camera (see page 9 of instant arguments): Webster teaches that the robot has multiple cameras and sensors, some of which move or rotate, and others which are located on the robot body and are stationary (see at least Figure 8 of Webster). As such, Webster teaches the features of a stationary camera as currently claimed in the instant application. The Examiner is unpersuaded and maintains the corresponding 35 U.S.C. 103 rejections.

Regarding Applicant’s arguments concerning Webster’s use of a stationary camera to determine relative position information (see page 9 of instant arguments): Webster teaches that the robot has multiple cameras and sensors, some of which move or rotate, which are used for object recognition, navigation, and collision avoidance to determine a relative location, estimate a path to a destination, or determine a distance to an object (see at least Col. 12 lines 14-20; Col. 13 lines 59-61; Col. 19 lines 3-7; Col. 21 lines 16-18 of Webster).
As such, Webster teaches the features of a stationary camera which can provide relative information to the robot processing center, as currently claimed in the instant application. The Examiner is unpersuaded and maintains the corresponding 35 U.S.C. 103 rejections.

The remaining arguments are essentially the same as those addressed above and/or below and are unpersuasive for essentially the same reasons. Therefore, the corresponding rejections are maintained.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 10, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11,372,408 B1, to Webster, et al. (hereinafter referred to as Webster; newly of record), in view of “Large View Visual Servoing of a Mobile Robot with a Pan-Tilt Camera”, to Nierobisch, et al. (hereinafter referred to as Nierobisch; previously of record).

As per Claim 1, Webster discloses the features of a docking system (e.g. Col. 9 lines 5-12; Figure 1; where the robot (104) may be configured to dock or connect to a docking station (146)), comprising: a docking station (e.g. Col. 9 lines 5-12; Figure 1; where the robot (104) may be configured to dock or connect to a docking station (146)); and an autonomous mobile robot (AMR) (e.g. Col. 9 lines 35-38; Figure 1; where the autonomous mobile device may comprise an autonomous ground vehicle) comprising: a body (e.g. Figures 1, 4; where the robot (104) has a body); a first mount movably coupled to the body (e.g. Figure 4; where the robot (104) has a moveable component (138) mounted on the robot (104) body); a first sensor coupled to the first mount (e.g. Col. 7 lines 28-30; Col. 16 lines 39-54; Col.
25 lines 30-34; Figure 3; where the moveable component (138) may comprise one or more sensors (114); and the robot (104) may use one or more sensors for localization) and having a first field-of-view (e.g. Col. 18 lines 63-67; Col. 27 lines 53-55; where the one or more sensors may have a field of view (FOV)); a stationary camera coupled to the body and configured not to move with respect to the body (e.g. Figure 8; where the robot has stationary cameras (344) which are mounted on the vehicle body); a processor; and a memory comprising instructions (e.g. Col. 3 lines 42-49; where the robot (104) may include a hardware processor(s) (108), a memory(s) (112), sensors (114), where one or more task modules (118) are stored in the memory (112), and comprise instructions to perform a task when executed by the processor) that, when executed by the processor, cause the processor to perform operations comprising: ‘…’ cause the first mount to move independently with respect to the body (e.g. Col. 8 lines 27-33; Figure 1; where the moveable component (138) may move independent of the direction that the robot (104) is traveling) in order to center the first field-of-view of the first sensor on the docking station (e.g. Col. 8 lines 27-33; Figure 1; where the moveable component (138) may move independent of the direction that the robot (104) is traveling); cause the AMR to change a course of direction relative to the docking station while maintaining the centered field-of-view of the first sensor (e.g. Col. 24 lines 51-55; Figures 4, 5, 7; where the moveable component (138) can be dynamically oriented toward a target point along the path, where the target point can be toward other devices, the user, or the docking station; and the robot changes the planned path to avoid an obstacle or move to the target point) ‘…’; employ the stationary camera to determine relative position data for the AMR (e.g. Col. 12 lines 14-20; Col. 13 lines 59-61; Col. 19 lines 3-7; Col.
21 lines 16-18; where the image data acquired by the camera (344) may be used for object recognition, navigation, collision avoidance, etc.; and where the navigational data obtained by the camera can be used to access a map of the environment during operation to determine a relative location, estimate a path to a destination, etc.; and where the robot (104) may use data from one or more sensors (114) to determine a location of a user (102) relative to the robot (104), and where the sensor (114) may be an image sensor or camera (344) to determine distance to an object); ‘…’. Webster fails to disclose every feature of employ the first sensor to scan the docking station; center the first field-of-view of the first sensor on the docking station; cause the AMR to change a course of direction relative to the docking station while maintaining the centered field-of-view of the first sensor of the docking station; and cause the AMR to dock at the docking station using the centered field-of-view on the docking station and the relative position data. However, Nierobisch, in a similar field of endeavor, teaches the features of employ the first sensor to scan the docking station. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot tracks a landmark and the camera conducts a pan motion in order to scan for the landmark and distinguish between the docking station and other landmarks in order to navigate toward the docking station (e.g. Page 3308, Paragraph beginning with “The objective of the predictive control …”; Page 3309, Paragraph beginning with “The camera pan controller …”). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of scanning for a landmark such as a docking station in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch).

Nierobisch further teaches the features of cause the AMR to change a course of direction relative to the docking station while maintaining the centered field-of-view of the first sensor of the docking station; and cause the AMR to dock at the docking station using the centered field-of-view on the docking station and the relative position data. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot uses gaze control to center its field of view on a landmark, and maintains its view of the landmark as the robot moves forward (i.e. rotates independently of the robot body), changes the view to be centered on the docking station when the docking station is in range, and then rotates the wheels to center on the docking station (e.g. Figures 6, 7). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of changing direction while centered on a landmark in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch).

As per Claim 10, Webster discloses the features of an autonomous mobile robot (AMR) (e.g. Col.
9 lines 35-38; Figure 1; where the autonomous mobile device may comprise an autonomous ground vehicle) configured to dock at a docking station (e.g. Col. 9 lines 5-12; Figure 1; where the robot (104) may be configured to dock or connect to a docking station (146)), the AMR comprising: a body (e.g. Figures 1, 4; where the robot (104) has a body); a first mount movably coupled to the body (e.g. Figure 4; where the robot (104) has a moveable component (138) mounted on the robot (104) body); a first sensor coupled to the first mount (e.g. Col. 7 lines 28-30; Col. 16 lines 39-54; Col. 25 lines 30-34; Figure 3; where the moveable component (138) may comprise one or more sensors (114); and the robot (104) may use one or more sensors for localization) and having a first field-of-view (e.g. Col. 18 lines 63-67; Col. 27 lines 53-55; where the one or more sensors may have a field of view (FOV)); a stationary camera coupled to the body and configured not to move with respect to the body (e.g. Figure 8; where the robot has stationary cameras (344) which are mounted on the vehicle body); a processor; and a memory comprising instructions (e.g. Col. 3 lines 42-49; where the robot (104) may include a hardware processor(s) (108), a memory(s) (112), sensors (114), where one or more task modules (118) are stored in the memory (112), and comprise instructions to perform a task when executed by the processor) that, when executed by the processor, cause the processor to perform operations comprising: ‘…’ cause the first mount to move independently with respect to the body (e.g. Col. 8 lines 27-33; Figure 1; where the moveable component (138) may move independent of the direction that the robot (104) is traveling) in order to center the first field-of-view of the first sensor on the docking station (e.g. Col.
8 lines 27-33; Figure 1; where the moveable component (138) may move independent of the direction that the robot (104) is traveling); cause the AMR to change a course of direction relative to the docking station while maintaining the centered field-of-view of the first sensor ‘…’ (e.g. Col. 24 lines 51-55; Figures 4, 5, 7; where the moveable component (138) can be dynamically oriented toward a target point along the path, where the target point can be toward other devices, the user, or the docking station; and the robot changes the planned path to avoid an obstacle or move to the target point); employ the stationary camera to determine relative position data for the AMR (e.g. Col. 12 lines 14-20; Col. 13 lines 59-61; Col. 19 lines 3-7; Col. 21 lines 16-18; where the image data acquired by the camera (344) may be used for object recognition, navigation, collision avoidance, etc.; and where the navigational data obtained by the camera can be used to access a map of the environment during operation to determine a relative location, estimate a path to a destination, etc.; and where the robot (104) may use data from one or more sensors (114) to determine a location of a user (102) relative to the robot (104), and where the sensor (114) may be an image sensor or camera (344) to determine distance to an object); ‘…’. Webster fails to disclose every feature of employ the first sensor to scan the docking station; center the first field-of-view of the first sensor on the docking station; cause the AMR to change a course of direction relative to the docking station while maintaining the centered field-of-view of the first sensor of the docking station; and cause the AMR to dock at the docking station using the centered field-of-view on the docking station and relative position data. However, Nierobisch, in a similar field of endeavor, teaches the features of employ the first sensor to scan the docking station.
Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot tracks a landmark and the camera conducts a pan motion in order to scan for the landmark and distinguish between the docking station and other landmarks in order to navigate toward the docking station (e.g. Page 3308, Paragraph beginning with “The objective of the predictive control …”; Page 3309, Paragraph beginning with “The camera pan controller …”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of scanning for a landmark such as a docking station in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch). Nierobisch further teaches the features of cause the AMR to change a course of direction relative to the docking station while maintaining the centered field-of-view of the first sensor of the docking station; and cause the AMR to dock at the docking station using the centered field-of-view on the docking station and relative position data. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot uses gaze control to center its field of view on a landmark, and maintains its view of the landmark as the robot moves forward (i.e. rotates independently of the robot body), changes the view to be centered on the docking station when the docking station is in range, and then rotates the wheels to center on the docking station (e.g. Figures 6, 7).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of changing direction while centered on a landmark in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch).

As per Claim 15, Webster discloses the features of a method of docking an autonomous mobile robot (AMR) (e.g. Col. 9 lines 35-38; Figure 1; where the autonomous mobile device may comprise an autonomous ground vehicle) at a docking station (e.g. Col. 9 lines 5-12; Figure 1; where the robot (104) may be configured to dock or connect to a docking station (146)), the method comprising: providing the AMR with a body (e.g. Figures 1, 4; where the robot (104) has a body), a first mount movably coupled to the body (e.g. Figure 4; where the robot (104) has a moveable component (138) mounted on the robot (104) body); a first sensor coupled to the first mount (e.g. Col. 7 lines 28-30; Col. 16 lines 39-54; Col. 25 lines 30-34; Figure 3; where the moveable component (138) may comprise one or more sensors (114); and the robot (104) may use one or more sensors for localization) and having a first field-of-view (e.g. Col. 18 lines 63-67; Col. 27 lines 53-55; where the one or more sensors may have a field of view (FOV)); ‘…’ a stationary camera coupled to the body and configured not to move with respect to the body (e.g. Figure 8; where the robot has stationary cameras (344) which are mounted on the vehicle body); moving the first mount independently with respect to the body (e.g. Col.
8 lines 27-33; Figure 1; where the moveable component (138) may move independent of the direction that the robot (104) is traveling) in order to center the first field-of-view on the docking station (e.g. Col. 8 lines 27-33; Figure 1; where the moveable component (138) may move independent of the direction that the robot (104) is traveling); changing a course of direction of the AMR relative to the docking station while maintaining the centered field-of-view of the first sensor ‘…’ (e.g. Col. 24 lines 51-55; Figures 4, 5, 7; where the moveable component (138) can be dynamically oriented toward a target point along the path, where the target point can be toward other devices, the user, or the docking station; and the robot changes the planned path to avoid an obstacle or move to the target point); determining relative position data for the AMR using the stationary camera (e.g. Col. 12 lines 14-20; Col. 13 lines 59-61; Col. 19 lines 3-7; Col. 21 lines 16-18; where the image data acquired by the camera (344) may be used for object recognition, navigation, collision avoidance, etc.; and where the navigational data obtained by the camera can be used to access a map of the environment during operation to determine a relative location, estimate a path to a destination, etc.; and where the robot (104) may use data from one or more sensors (114) to determine a location of a user (102) relative to the robot (104), and where the sensor (114) may be an image sensor or camera (344) to determine distance to an object); ‘…’. Webster fails to disclose every feature of scanning the docking station with the first sensor; and changing a course of direction of the AMR relative to the docking station while maintaining the centered field-of-view of the first sensor on the docking station; and docking the AMR at the docking station using the centered field-of-view and the relative position data.
However, Nierobisch, in a similar field of endeavor, teaches the features of scanning the docking station with the first sensor. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot tracks a landmark and the camera conducts a pan motion in order to scan for the landmark and distinguish between the docking station and other landmarks in order to navigate toward the docking station (e.g. Page 3308, Paragraph beginning with “The objective of the predictive control …”; Page 3309, Paragraph beginning with “The camera pan controller …”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of scanning for a landmark such as a docking station in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch). Nierobisch further teaches the features of changing a course of direction of the AMR relative to the docking station while maintaining the centered field-of-view of the first sensor on the docking station; and docking the AMR at the docking station using the centered field-of-view and the relative position data. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot uses gaze control to center its field of view on a landmark, and maintains its view of the landmark as the robot moves forward (i.e. rotates independently of the robot body), changes the view to be centered on the docking station when the docking station is in range, and then rotates the wheels to center on the docking station (e.g. Figures 6, 7).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of changing direction while centered on a landmark in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch).

Claims 2, 8-9, 16-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Webster, in view of Nierobisch, as applied to Claims 1 and 15 above, and further in view of U.S. Patent Publication No. 2018/0004219 A1, to Aldred, et al. (hereinafter referred to as Aldred; previously of record).

As per Claim 2, Webster, in view of Nierobisch, teaches the features of Claim 1, and Webster further discloses the features of wherein the first sensor is a first camera (e.g. Col. 7 lines 28-34; Figure 8; where the moveable component (138) may comprise a frame that supports one or more cameras), wherein the memory further comprises instructions that, when executed by the processor, cause the processor to perform operations comprising employing the first camera to determine an offset position and a final position of the AMR with respect to the docking station based on the centered field-of-view (e.g. Col. 4 lines 16-22; Col. 5 lines 18-39; Figures 4, 5; where the robot (104) may determine the target point (132) of the planned path, and adjust the moveable component (138) (i.e. determines the offset from center) such that the moveable component (138) is centered on the target point (132)). Webster fails to teach every feature of causing the AMR to dock at the docking station is performed using the offset position and the final position.
However, Aldred teaches an apparatus for guiding an autonomous vehicle toward a docking station, where the robot includes a navigational sensor which is a camera-based system; and where the robot is determined to be positioned to the left of the docking station (6) and lies in zone “A”, and the robot enters a maneuvering step to search for the docking station, and when the ‘target search’ of the robot identifies the targets (80, 82), the robot maneuvers from position (P1) to positions (P2, P3, P4) (i.e. determines the robot is offset) to align itself and commence docking (i.e. final position) (e.g. Paragraphs [0037], [0063]-[0066]; Figure 9). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, in view of Nierobisch, with the feature of determining an offset of Aldred, in order to infer guidance and alignment information for the robot (see at least Paragraph [0011] of Aldred).

As per Claim 8, Webster, in view of Nierobisch and Aldred, teaches the features of Claim 2, and Aldred further teaches the features of wherein the docking station comprises a structure having a barcode, and wherein causing the AMR to dock at the docking station comprises: determining a relative position of the AMR with respect to the docking station based on centering the first camera upon the barcode and pose information provided by the first camera. Aldred teaches an apparatus for guiding an autonomous vehicle toward a docking station, where the docking station has targets which are fiducial markers, where when the targets (80, 82) are in front of the robot (4), it is determined whether the center points (C) of each target are substantially aligned in the image (i.e. centered) to prepare the robot for docking, and the robot system evaluates the relative positions and spacing between the fiducial markers to determine guidance information; and docking is complete when the targets are aligned and a charging signal on the docking station has been detected (e.g. Paragraphs [0052], [0067]; Figures 5, 6, 8). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, in view of Nierobisch, with the feature of centering the field of view on the docking station in the system of Aldred, in order to infer guidance and alignment information for the robot (see at least Paragraphs [0007] and [0011] of Aldred).

As per Claim 9, Webster, in view of Nierobisch and Aldred, teaches the features of Claim 2, and Aldred further teaches the features of wherein the docking station comprises an element having a unique geometry, wherein the unique geometry is stored in the memory of the AMR, and wherein causing the AMR to dock at the docking station is based on recognizing the unique geometry. Aldred teaches an apparatus for guiding an autonomous vehicle toward a docking station, where the docking station has targets with fiducial markers (i.e.
unique geometry) that identify the docking station, based on identifying a pair of targets provided on the docking station; where when the targets (80, 82) are in front of the robot (4), it is determined whether the center points (C) of each target are substantially aligned in the image to prepare the robot for docking; and where the robot comprises a memory module (71) for storage of data generated and used by the navigation control module and the docking control module, and serves to store mapping and route data (e.g. Paragraphs [0045]-[0048], [0067], [0073]; Figures 5, 6, 9). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, in view of Nierobisch, with the feature of centering the field of view on the docking station in the system of Aldred, in order to infer guidance and alignment information for the robot (see at least Paragraphs [0011] and [0055] of Aldred).

As per Claim 16, Webster, in view of Nierobisch, teaches the features of Claim 15, and Webster further discloses the features of further comprising employing the first camera to determine an offset position and a final position of the AMR with respect to the docking station based on the centered field-of-view (e.g. Col. 4 lines 16-22; Col. 5 lines 18-39; Figures 4, 5; where the robot (104) may determine the target point (132) of the planned path, and adjust the moveable component (138) (i.e. determines the offset from center) such that the moveable component (138) is centered on the target point (132)). The combination of Webster, in view of Nierobisch, fails to teach every feature of causing the AMR to dock at the docking station is performed using the offset position and the final position.
However, Aldred teaches an apparatus for guiding an autonomous vehicle toward a docking station, where the robot includes a navigational sensor which is a camera-based system; and where the robot is determined to be positioned to the left of the docking station (6) and lies in zone “A”, and the robot enters a maneuvering step to search for the docking station, and when the ‘target search’ of the robot identifies the targets (80, 82), the robot maneuvers from position (P1) to positions (P2, P3, P4) (i.e. determines the robot is offset) to align itself and commence docking (i.e. final position) (e.g. Paragraphs [0037], [0063]-[0066]; Figure 9). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, in view of Nierobisch, with the feature of determining an offset of Aldred, in order to infer guidance and alignment information for the robot (see at least Paragraph [0011] of Aldred).

As per Claim 17, Webster, in view of Nierobisch and Aldred, teaches the features of Claim 16, and Webster further teaches the features of wherein the AMR further comprises a LiDar sensor coupled to a second mount and having a second field-of-view (e.g. Col. 12 lines 26-29; Col. 19 lines 27-38; Col. 21 lines 8-26; Figures 3, 8; where image data is obtained from cameras on the moveable component (138); and where lidar sensors are provided on the robot for determining the distance to an object), wherein the method further comprises scanning the ‘…’ with the second LiDar sensor (e.g. Col. 12 lines 26-29; Col. 19 lines 27-38; Col. 21 lines 8-26; Figures 3, 8; where image data is obtained from cameras on the moveable component (138); and where lidar sensors are provided on the robot for determining the distance to an object; and where the lidar sensor may be used to scan objects).
Webster fails to disclose the features of scanning the docking station; moving the second mount independently with respect to the body is further performed in order to center the second field-of-view of the second LiDar sensor on the docking station. However, Nierobisch, in a similar field of endeavor, teaches the features of scanning the docking station. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot tracks a landmark and the camera conducts a pan motion in order to scan for the landmark and distinguish between the docking station and other landmarks in order to navigate toward the docking station (e.g. Page 3308, Paragraph beginning with “The objective of the predictive control …”; Page 3309, Paragraph beginning with “The camera pan controller …”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of scanning for a landmark such as a docking station in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch). Nierobisch further teaches the features of moving the second mount independently with respect to the body is further performed in order to center the second field-of-view of the second ‘…’ sensor on the docking station. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot uses gaze control to center its field of view on a landmark, and maintains its view of the landmark as the robot moves forward (i.e. rotates independently of the robot body), changes the view to be centered on the docking station when the docking station is in range, and then rotates the wheels to center on the docking station (e.g. Figures 6, 7).
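The gaze-control behavior attributed to Nierobisch, panning the camera mount independently of the body until the tracked landmark sits at the image center, can be illustrated with a minimal proportional pan controller. The image width, gain, and function names are assumed for illustration and do not come from the Nierobisch paper:

```python
# Minimal sketch of pan-centering gaze control: a proportional controller
# pans the camera mount, independently of the robot body, to drive the
# tracked landmark toward the image center. Gain and width are assumed.

IMAGE_WIDTH = 640    # pixels (assumed camera resolution)
PAN_GAIN = 0.002     # radians of pan per pixel of error (assumed)


def pan_correction(landmark_px: float) -> float:
    """Pan-angle increment (rad) that moves the landmark toward center."""
    error = landmark_px - IMAGE_WIDTH / 2   # positive: landmark right of center
    return -PAN_GAIN * error                # pan opposite the error


def step_pan(pan_angle: float, landmark_px: float) -> float:
    """One control step: update the mount's pan angle from the observation."""
    return pan_angle + pan_correction(landmark_px)
```

A landmark observed right of center yields a negative (leftward) pan increment, so repeated steps converge the field of view onto the landmark while the body heading is free to change.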
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of changing direction while centered on a landmark in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch). As per Claim 18, Webster, in view of Nierobisch and Aldred, teaches the features of Claim 17, and Webster further discloses the features of wherein the first mount is spaced from the second mount (e.g. Col. 2 lines 29-31; Col. 7 lines 37-47; Col. 25 lines 30-35; Figure 8; where the sensors, cameras, etc. are spaced apart on various parts of the robot body; and where the moveable component (138) may include more than one moveable component (138), and each moveable component (138) may be mounted so as to be moved with respect to the chassis of the robot, and may move along one or more degrees of freedom, such as pan left and right, tilt up and down, or rotate along any axis, by way of one or more moveable component actuators; where the sensors are located on the lower edge and upper location on the robot (i.e. first and second mounts)). As per Claim 20, Webster, in view of Nierobisch and Aldred, teaches the features of Claim 15, and Webster further teaches the features of wherein the first sensor is a LiDar sensor camera (e.g. Col. 12 lines 26-29; Col. 19 lines 27-38; Col. 
21 lines 8-26; Figures 3, 8; where image data is obtained from cameras on the moveable component (138); and where lidar sensors are provided on the robot for determining the distance to an object). Webster fails to teach every feature of wherein the docking station comprises an element having a unique geometry, wherein the unique geometry is stored in the memory of the AMR, and wherein causing the AMR to dock at the docking station is based on recognizing the unique geometry. Aldred teaches an apparatus for guiding an autonomous vehicle toward a docking station, where the docking station has targets with fiducial markers (i.e. unique geometry) that identify the docking station, based on identifying a pair of targets provided on the docking station; where when the targets (80, 82) are in front of the robot (4), it is determined whether the center points (C) of each target are substantially aligned in the image to prepare the robot for docking; and where the robot comprises a memory module (71) for storage of data generated and used by the navigation control module and the docking control module, and serves to store mapping and route data (e.g. Paragraphs [0045]-[0048], [0067], [0073]; Figures 5, 6, 9). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, in view of Nierobisch, with the feature of centering the field of view on the docking station in the system of Aldred, in order to infer guidance and alignment information for the robot (see at least Paragraphs [0011] and [0055] of Aldred). Claims 3 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Webster, in view of Nierobisch and Aldred, as applied to Claim 2 above, and further in view of U.S. Patent Publication No.
2019/0155295 A1, to Moore, et al (hereinafter referred to as Moore; newly of record). As per Claim 3, Webster, in view of Nierobisch and Aldred, teaches the features of Claim 2, and Webster further teaches the features of wherein the AMR further comprises a second mount movably coupled to the body (e.g. Col. 2 lines 29-31; Col. 7 lines 37-47; Col. 25 lines 30-35; Figure 8; where the moveable component (138) may include more than one moveable component (138); where each moveable component (138) may be mounted so as to be moved with respect to the chassis of the robot, and may move along one or more degrees of freedom, such as pan left and right, tilt up and down, or rotate along any axis, by way of one or more moveable component actuators; where the sensors are located on the lower edge and upper location on the robot (i.e. first and second mounts)); and a second camera coupled to the second mount and having a second field-of-view (e.g. Col. 18 lines 63-67; Col. 25 lines 40-51; where the one or more sensors may have a field of view (FOV); and where cameras are separated by a distance and mounted to the front of the robot). The combination of Webster, in view of Nierobisch and Aldred, fails to teach every feature of cause the processor to perform operations comprising causing the AMR to dock at the docking station based on relative position data determined from the centered field-of-view of the first camera and the second field-of-view of the second camera. However, Moore, in a similar field of endeavor, teaches a method for controlling robot docking, where the field of view of the camera (24a) is centered on a fiducial marker of the docking station, and the field of view of the second camera (24b) slightly overlaps the field of view of the first camera (24a) so as to orient the field of view relative to the docking station (e.g. Paragraphs [0120]-[0122]).
The Examiner will note that the use of two image elements for centering the field of view rather than the use of a single image element for centering the field of view is an obvious variant, as both will result in centering the field of view of the sensor on the docking station and properly aligning the robot for docking. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, in view of Nierobisch and Aldred, with the feature of using the combined view of two sensors to locate the docking station in the system of Moore, in order to determine the best scan match between the sensed environment and the robot’s actual pose (see at least Paragraph [0151] of Moore). As per Claim 11, Webster, in view of Nierobisch and Aldred, teaches the features of Claim 10, and Webster further teaches the features of wherein the AMR further comprises a second mount movably coupled to the body (e.g. Col. 2 lines 29-31; Col. 7 lines 37-47; Col. 25 lines 30-35; Figure 8; where the moveable component (138) may include more than one moveable component (138); where each moveable component (138) may be mounted so as to be moved with respect to the chassis of the robot, and may move along one or more degrees of freedom, such as pan left and right, tilt up and down, or rotate along any axis, by way of one or more moveable component actuators; where the sensors are located on the lower edge and upper location on the robot (i.e. first and second mounts)); and a second camera coupled to the second mount and having a second field-of-view (e.g. Col. 18 lines 63-67; Col. 25 lines 40-51; where the one or more sensors may have a field of view (FOV); and where cameras are separated by a distance and mounted to the front of the robot).
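The two-camera arrangement the rejection draws from Moore, one field of view centered on the docking-station fiducial and a second, slightly overlapping field of view used to orient relative to it, can be sketched as a centering check plus a disparity-style offset estimate. All parameters (image width, tolerance, baseline, focal length) and function names are assumed for illustration, not taken from Moore:

```python
# Illustrative sketch of dual-camera docking alignment: verify the fiducial
# is centered in camera 1's field of view, then estimate relative distance
# from where the same fiducial appears in the overlapping camera 2 view.
# All numeric parameters are assumed values.

def is_centered(target_px: float, image_width: int = 640, tol_px: int = 10) -> bool:
    """True when the fiducial sits within tolerance of the image center."""
    return abs(target_px - image_width / 2) <= tol_px


def relative_distance(target_px_cam1: float, target_px_cam2: float,
                      baseline_m: float = 0.2, focal_px: float = 500.0) -> float:
    """Crude stereo-style distance from the disparity between the two views."""
    disparity = target_px_cam1 - target_px_cam2
    if disparity == 0:
        return float("inf")   # no disparity: target effectively at infinity
    return baseline_m * focal_px / disparity
```

With the fiducial centered in camera 1 and a 20-pixel disparity against camera 2, the sketch yields a relative distance of 5 m under the assumed baseline and focal length.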
Webster fails to teach every feature of employ the first sensor to scan the docking station; and cause the processor to perform operations comprising causing the AMR to dock at the docking station based on relative position data determined from the centered field-of-view of the first camera and the second field-of-view of the second camera. However, Nierobisch teaches the features of employ the first sensor to scan the docking station. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot tracks a landmark and the camera conducts a pan motion in order to scan for the landmark and distinguish between the docking station and other landmarks in order to navigate toward the docking station (e.g. Page 3308, Paragraph beginning with “The objective of the predictive control …”; Page 3309, Paragraph beginning with “The camera pan controller …”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of scanning for a landmark such as a docking station in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch). Moore teaches the features of cause the processor to perform operations comprising causing the AMR to dock at the docking station based on relative position data determined from the centered field-of-view of the first camera and the second field-of-view of the second camera.
Moore, in a similar field of endeavor, teaches a method for controlling robot docking, where the field of view of the camera (24a) is centered on a fiducial marker of the docking station, and the field of view of the second camera (24b) slightly overlaps the field of view of the first camera (24a) so as to orient the field of view relative to the docking station (e.g. Paragraphs [0120]-[0122]). The Examiner will note that the use of two image elements for centering the field of view rather than the use of a single image element for centering the field of view is an obvious variant, as both will result in centering the field of view of the sensor on the docking station and properly aligning the robot for docking. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to further modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, in view of Nierobisch and Aldred, with the feature of using the combined view of two sensors to locate the docking station in the system of Moore, in order to determine the best scan match between the sensed environment and the robot’s actual pose (see at least Paragraph [0151] of Moore). As per Claim 12, Webster, in view of Nierobisch, Aldred, and Moore, teaches the features of Claim 11, and Webster further teaches the features of wherein the first sensor is a camera, and wherein the second sensor is a LiDar sensor (e.g. Col. 12 lines 26-29; Col. 19 lines 27-38; Col. 21 lines 8-26; Figures 3, 8; where image data is obtained from cameras on the moveable component (138); and where lidar sensors are provided on the robot for determining the distance to an object).
As per Claim 13, Webster, in view of Nierobisch, Aldred, and Moore, teaches the features of Claim 11, and Webster further discloses the features of wherein the memory further comprises instructions that, when executed by the processor, cause the processor to perform operations comprising employ the LiDar sensor to scan ‘…’ (e.g. Col. 12 lines 26-29; Col. 19 lines 27-38; Col. 21 lines 8-26; Figures 3, 8; where image data is obtained from cameras on the moveable component (138); and where lidar sensors are provided on the robot for determining the distance to an object; and where the lidar sensor may be used to scan objects), and cause the second mount to move independently with respect to the body ‘…’ (e.g. Col. 2 lines 29-31; Col. 7 lines 37-47; Col. 8 lines 27-33; Col. 25 lines 30-35; Figures 1, 8; where the moveable component (138) may include more than one moveable component (138); where each moveable component (138) may be mounted so as to be moved with respect to the chassis of the robot, and may move along one or more degrees of freedom, such as pan left and right, tilt up and down, or rotate along any axis, by way of one or more moveable component actuators; where the sensors are located on the lower edge and upper location on the robot (i.e. first and second mounts); and where the moveable component (138) may move independent of the direction that the robot (104) is traveling). Webster fails to disclose every feature of employ the sensor to scan the docking station; cause the second mount to move independently with respect to the body in order to center the second field-of-view of the LiDar sensor on the docking station. However, Nierobisch teaches the features of employ the ‘…’ sensor to scan the docking station.
Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot tracks a landmark and the camera conducts a pan motion in order to scan for the landmark and distinguish between the docking station and other landmarks in order to navigate toward the docking station (e.g. Page 3308, Paragraph beginning with “The objective of the predictive control …”; Page 3309, Paragraph beginning with “The camera pan controller …”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of scanning for a landmark such as a docking station in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch). Nierobisch further teaches the features of cause the second mount to move independently with respect to the body in order to center the second field-of-view of the ‘…’ sensor on the docking station. Nierobisch teaches the use of a pan-tilt camera to track landmarks, where the robot uses gaze control to center its field of view on a landmark, and maintains its view of the landmark as the robot moves forward (i.e. rotates independently of the robot body), changes the view to be centered on the docking station when the docking station is in range, and then rotates the wheels to center on the docking station (e.g. Figures 6, 7).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the dynamic trajectory orientation of an autonomous robot in the system of Webster, with the feature of changing direction while centered on a landmark in the system of Nierobisch, in order to optimize the trajectory of the robot (see at least Page 6, paragraph beginning with “For service robotic tasks …” of Nierobisch). Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Webster, in view of Nierobisch, as applied to Claim 15 above, and further in view of U.S. Patent Publication No. 2018/0004219 A1, to Aldred, et al (hereinafter referred to as Aldred; previously of record), in view of U.S. Patent Publication No. 2012/0197464 A1, to Wang, et al (hereinafter referred to as Wang; previously of record). As per Claim 19, Webster, in view of Nierobisch and Aldred, teaches the features of Claim 15, and Webster further teaches the features of wherein the sensor is a camera (e.g. Col. 12 lines 26-29; Col. 19 lines 27-38; Col. 21 lines 8-26; Figures 3, 8; where image data is obtained from cameras on the moveable component (138)). Webster fails to teach every feature of wherein the docking station comprises a structure having a barcode, and wherein causing the AMR to dock at the docking station comprises: determining a relative position of the AMR with respect to the docking station based on evaluating a matrix barcode image and pose information provided by the first camera. Aldred teaches the features of wherein the docking station comprises a structure having a barcode, and wherein causing the AMR to dock at the docking station comprises: determining a relative position of the AMR with respect to the docking station based on evaluating a ‘…’ barcode image and pose information provided by the first camera.
Aldred teaches an apparatus for guiding an autonomous vehicle toward a docking station, where the docking station has targets with fiducial markers, where when the targets (80, 82) are in front of the robot (4), it is determined whether the center points (C) of each target are substantially aligned in the image (i.e. centered) to prepare the robot for docking, and the robot system evaluates the relative positions and spacing between …

Prosecution Timeline

Jun 13, 2023: Application Filed
Jan 03, 2025: Non-Final Rejection — §103
Apr 04, 2025: Response Filed
Apr 21, 2025: Final Rejection — §103
Jul 02, 2025: Request for Continued Examination
Jul 07, 2025: Response after Non-Final Action
Aug 11, 2025: Non-Final Rejection — §103
Sep 11, 2025: Response Filed
Oct 01, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601596: Estimation of Target Location and Sensor Misalignment Angles (2y 5m to grant; granted Apr 14, 2026)
Patent 12603005: DRIVER ASSISTANCE MODULE FOR A MOTOR VEHICLE (2y 5m to grant; granted Apr 14, 2026)
Patent 12594944: METHOD AND SYSTEM FOR VEHICLE DRIVE MODE SELECTION (2y 5m to grant; granted Apr 07, 2026)
Patent 12594960: NAVIGATIONAL CONSTRAINT CONTROL SYSTEM (2y 5m to grant; granted Apr 07, 2026)
Patent 12583382: SYNCHRONIZED LIGHTING FOR ELECTRIC VEHICLES (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 33%
With Interview (+36.6%): 70%
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
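The headline figures above are simple arithmetic on the examiner's career data. A quick check, assuming (as the displayed numbers imply) that the interview lift is applied as additive percentage points on top of the career allow rate:

```python
# Reproduce the dashboard arithmetic: 26 grants / 78 resolved cases gives
# the career allow rate; the +36.6-point interview lift is assumed additive.

granted, resolved = 26, 78
career_allow_rate = granted / resolved               # ~0.333, shown as "33%"
interview_lift = 0.366                               # +36.6 percentage points
with_interview = career_allow_rate + interview_lift  # ~0.700, shown as "70%"

print(round(career_allow_rate * 100))   # 33
print(round(with_interview * 100))      # 70
```

This matches the 33% baseline and 70% with-interview figures shown in the projections.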
