DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to Application Number 18/085,469, filed on 12/20/2022.
Claims 2, 3, 9, 10, 16, 17 have been cancelled.
Claims 1, 4-8, 11-15, 18-20 are currently pending and have been examined.
This action is made NON-FINAL in response to the “Amendment,” “Remarks,” and “Request for Continued Examination” filed on 12/8/2025.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 8, 12, 15, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Qian (U.S. Patent Publication 2019/0096069 A1) in view of Kang (U.S. Patent Publication 2013/0203448 A1).
In regard to Claim 1, Qian teaches a following control method for a robot, comprising (see Abstract, Paragraph 41 lines 7-9, Paragraph 221 lines 9-11 teaching a method for controlling a moveable object, such as a robot, to follow a target):
Controlling the robot to follow a target object (see Paragraph 41 lines 7-9, Paragraph 58 lines 1-11, Paragraph 221 lines 9-11 teaching that the following robot includes electrical components);
In a process of following the target object, acquiring a relative pose relationship between the target object and the robot (see Paragraph 50 lines 6-11 teaching that the robot’s tracking capabilities can enable autonomous tracking of a moving target object while the robot is at different spatial dispositions, such as heights, distances, and/or orientations, relative to the target object); and
Adjusting a visual field range of the robot, wherein the target object is located in the adjusted visual field range, according to the relative pose relationship (see Figure 1, Paragraph 102 lines 4-11 teaching that in order to maintain the target object in a field-of-view of the robot’s imaging device, the imaging device may be rotated by an angle) and visual field parameter information of the robot (see Paragraph 242 lines 7-16 teaching that the robot’s spatial orientation, velocity, or orientation can be adjusted based on sensor data pertaining to the robot’s environment),
Wherein adjusting a visual field range of the robot according to the relative pose relationship and visual field parameter information of the robot comprises: determining relative angle information of the target object and a current visual field center line of the robot according to the relative pose relationship and the visual field parameter information; determining target rotation information of the robot based on the relative angle information, and adjusting the visual field range of the robot based on the target rotation information (see Figure 2, Paragraphs 101-103, Paragraph 108 lines 1-8 teaching that the robot may adjust the angle of an imaging device in order to keep the target object in a field of view along optical axis 212-0);
Wherein the relative pose relationship comprises a first pose of the target object and a second pose of the robot, and determining relative angle information of the target object and a current visual field center line of the robot according to the relative pose relationship and the visual field parameter information comprises:
Acquiring the current visual field center line of the robot according to the second pose and the visual field parameter information (see Figure 2, Paragraphs 101, 102, Paragraph 121 lines 1-3 teaching that the robot includes an imaging device with an optical axis 212 that extends from the center of the imaging device to an object or target, and that a clear visual path may be provided to the object or target);
Acquiring a connecting line between the target object and the robot according to the first pose and the second pose (see Figure 2, Paragraph 101 teaching that the robot may extend an optical axis 212 from the imaging device to the target); and
Determining the relative angle information according to the current visual field center line and the connecting line (see Figure 2, Paragraphs 101-103, Paragraph 108 lines 1-8 teaching that the robot may adjust the angle of the imaging device in order to keep the target object in a field of view along optical axis 212-0).
Here, the Examiner is interpreting the correction of an imaging device in order to keep a target object within the imaging device’s field of view as substantially similar to determining a relative angle between the imaging device’s field of view and the imaging device’s bearing to the target, since such a correction would not be possible without determining a correction angle.
Qian fails to teach wherein a magnitude and a direction of an angle formed by the current visual field center line and the connecting line are acquired, and the magnitude and the direction of the angle are taken as the relative angle information; and
A pose describes a position and a posture of the target object or the robot in a specified coordinate system, the position refers to a location of the target object or the robot in a space, and the posture refers to an orientation of the target object or the robot in the space.
However, Kang teaches wherein a magnitude and a direction of an angle formed by the current visual field center line and the connecting line are acquired, and the magnitude and the direction of the angle are taken as the relative angle information (see Figure 4, Paragraph 9, Paragraph 64 teaching a method for recognizing a target within a visible range wherein the method includes calculating the angle between the direction of a terminal’s 1 field of view λ centerline 110 and the direction to a target object A, B, and the angle’s relationship to a reference direction 100 such as east or north); and
A pose describes a position and a posture of the target object or the robot in a specified coordinate system, the position refers to a location of the target object or the robot in a space (see Figures 3-5, Paragraphs 57, 71, 72 teaching that the system maps the coordinates of detected devices A and B using equations that utilize an angle θ and a distance r from the detection device), and the posture refers to an orientation of the target object or the robot in the space (see Figure 3, Paragraph 66 teaching that the calculation of the coordinates of the devices A and B utilizes a direction 110 to which the detection device 1 is directed).
Qian and Kang are both considered analogous to the claimed invention because they are in the same field of devices that determine angles to target objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qian’s invention to incorporate, as taught by Kang, a feature wherein the system determines the angle between a field of view’s centerline and the line to a target, the position of a detected device within a space, and the orientation direction of the detecting device. Doing so could improve a device capable of identifying or tracking a target by enabling the device to determine how much its field of view needs to be adjusted in order to maintain the target within the center of the field of view, and by locating the target object within an environment according to a coordinate system and using the detection device’s angle to the target to determine the detection device’s own location in that coordinate system, improving the device’s ability to track the target as it, or the device, moves through the environment.
In regard to Claim 5, Qian further teaches wherein controlling the robot to follow a target object comprises:
Acquiring a real-time image of the target object (see Figure 19, Paragraph 187 lines 1-3 teaching that the imaging device can capture images of the object 1508);
Acquiring a motion track of the target object based on the real-time image (see Paragraph 6 lines 7-12 teaching that once a target has been identified, its movement can be tracked in real time); and
Controlling the robot to follow the target object according to the motion track (see Figure 16, Paragraph 32, Paragraph 200 lines 1-6 teaching that the movement of the tracking device 1502 may be adjusted by the velocity vector ve while tracking the target 1508, which moves in the same direction from time t1 to time t2).
In regard to Claim 8, Qian further teaches an electronic device for controlling a robot, comprising:
A processor (see Paragraph 248 lines 1-7 teaching that the system includes a processor); and
A memory communicatively connected with the processor (see Paragraph 248 lines 1-7 teaching that the processor can execute code stored in a memory),
Wherein the memory is configured to store instructions executable by the processor (see Paragraph 248 lines 1-7 teaching that the processor can execute code stored in a memory).
The rest of Claim 8 is substantially similar to Claim 1 (the bulk of both claims). Please refer to the rejection of Claim 1 above for analysis.
Claim 12 is substantially similar to Claim 5 (the bulk of both claims). Please refer to the rejection of Claim 5 above for analysis.
In regard to Claim 15, Qian further teaches a non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions are configured to cause a computer to perform a following control method for a robot (see Paragraph 248 lines 1-7 teaching that the robot includes a processor that can execute code stored in a memory).
The rest of Claim 15 is substantially similar to Claim 1 (the bulk of both claims). Please refer to the rejection of Claim 1 above for analysis.
Claim 19 is substantially similar to Claim 5 (the bulk of both claims). Please refer to the rejection of Claim 5 above for analysis.
Claims 4, 11, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Qian (U.S. Patent Publication 2019/0096069 A1) in view of Kang (U.S. Patent Publication 2013/0203448 A1), in further view of Wang (U.S. Patent Publication 2017/0002976 A1), in further view of Suzuki (U.S. Patent Publication 2013/0238128 A1).
In regard to Claim 4, Qian further teaches determining the relative angle information as the target rotation information of the robot (see Figure 2, Paragraphs 101-103, Paragraph 108 lines 1-8 teaching that the robot may adjust the angle of the imaging device in order to keep the target object in a field of view along optical axis 212-0).
Qian fails to teach determining target rotation information of the robot based on whether the relative angle is higher or lower than a preset angle threshold.
However, Wang teaches determining target rotation information of the robot based on whether the relative angle is higher or lower than a preset angle threshold (see Paragraph 33 teaching a gimbal control system which controls the gimbal via two distinct methods, based on whether a rotation angle exceeds, or does not exceed a threshold angle).
Qian and Wang are both considered to be analogous to the claimed invention because they are in the same field of devices with rotatable imagery sensors. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qian’s invention to incorporate a threshold angle wherein a camera is rotated by a distinct method based on whether the rotation angle is greater than or less than the threshold as taught by Wang. Doing so could set distinct control priorities that best suit the type of rotation required, such as short, controlled rotations for small deviations, and faster, less controlled rotations for larger rotation angles. This could improve a camera’s ability to keep a target within a field of view, regardless of the target’s speed, distance, or direction of travel.
Qian further fails to teach determining the target rotation information of the robot to be preset rotation information.
However, Suzuki teaches determining the target rotation information of the robot to be preset rotation information (see Abstract lines 1-3, Figure 1, Paragraph 23 lines 1-2, Paragraph 103 lines 1-5 teaching a sensor information processing system in which a camera 102 is rotated by a predetermined angle in order to maintain a straight line from the camera to a target object 103).
Qian and Suzuki are both considered to be analogous to the claimed invention because they are in the same field of devices with rotatable imagery sensors. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qian’s invention to incorporate a control method in which a camera is rotated about an axis by a predetermined angle as taught by Suzuki. Doing so could improve a camera’s target tracking system by ensuring that only small rotational inputs are applied to the device. This could ensure that the system would not lose visual contact with the target by overcorrecting with rotational inputs while attempting to refocus on the target, as the target drifts from the camera’s field of view. This could be especially useful in scenarios where the target is very far away from the camera.
Claims 11, 18 are substantially similar to Claim 4 (the bulk of both claims). Please refer to the rejection of Claim 4 above for analysis.
Claims 6, 13, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Qian (U.S. Patent Publication 2019/0096069 A1) in view of Kang (U.S. Patent Publication 2013/0203448 A1), in further view of Han (U.S. Patent 10,399,229 B2), in further view of Shen (U.S. Patent Publication 2020/0228720 A1).
In regard to Claim 6, Qian fails to teach wherein controlling the robot to follow the target object according to the motion track comprises:
Acquiring a preset relative distance between the target object and the robot in a rear following scenario;
Acquiring a second real-time speed of the robot according to the first real-time speed and the preset relative distance; and
Controlling the robot to follow the target object along a tangential direction of the motion track at the second real-time speed.
However, Han teaches wherein controlling the robot to follow the target object according to the motion track comprises:
Acquiring a preset relative distance between the target object and the robot in a rear following scenario (see Column 6 lines 26-28, Column 10 lines 59-67 teaching a target tracking system wherein, while the target object is in front of the robot and moving continuously, the robot loads a default distance to the target object);
Acquiring a second real-time speed of the robot according to a first real-time speed of the target and the preset relative distance (see Column 6 lines 26-28, Column 10 lines 59-67 teaching that in a scenario in which the target object is in front of the robot and moving continuously, the robot loads a default distance to the target object, and maneuvers in a way to maintain the default distance between the robot and the target, including speeding up or decelerating); and
Controlling the robot to follow the target object along a tangential direction of the motion track at the second real-time speed (see Figure 3, Column 7 lines 14-18, 51-55 teaching that the system calculates a predicted position of the target in step S106, and controls the following robot to move toward the predicted position in step S108).
Here, the Examiner is interpreting the phrase “tangential direction” to mean the direction of travel of the target from a specific geographic point.
Qian and Han are both considered to be analogous to the claimed invention because they are in the same field of robots that track and follow moving targets. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qian’s invention to incorporate a system that obtains an image of the target, sets a control distance between the target and the robot, and maneuvers the robot from a rear position to follow the target and maintain the control distance as taught by Han. Doing so could improve a following robot by implementing an optimal control distance and maintaining that distance while following. This could increase the accuracy of the tracking robot, since following too closely could lead to collisions, but following from too far away could lead to the following robot losing track of the target.
Qian further fails to teach acquiring a first real-time speed of the target object.
However, Shen teaches acquiring a first real-time speed of the target object (see Abstract lines 1-7 teaching a target capturing system that acquires speed information of a target object).
Qian and Shen are both considered to be analogous to the claimed invention because they are in the same field of systems that track target objects. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qian’s invention to incorporate a feature that could determine a target’s speed as taught by Shen. Doing so could improve a following robot system by using the target’s speed as a baseline speed to initially set to follow the target and maintain a specific distance.
Claims 13, 20 are substantially similar to Claim 6 (the bulk of both claims). Please refer to the rejection of Claim 6 above for analysis.
Claims 7, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Qian (U.S. Patent Publication 2019/0096069 A1) in view of Kang (U.S. Patent Publication 2013/0203448 A1), in further view of Lee (U.S. Patent Publication 2022/0019237 A1).
In regard to Claim 7, Qian fails to teach wherein controlling the robot to follow a target object comprises:
Acquiring a preset relative pose relationship between the target object and the robot in a side following scenario; and
Controlling the robot to follow the target object according to the preset relative pose relationship, a first real-time motion direction of the robot and a second real-time motion direction of the target object being kept the same.
However, Lee teaches wherein controlling the robot to follow a target object comprises:
Acquiring a preset relative pose relationship between the target object and the robot in a side following scenario (see Abstract, Figure 1 teaching a mobile robot platooning system that can move in a synchronized form); and
Controlling the robot to follow the target object according to the preset relative pose relationship, a first real-time motion direction of the robot and a second real-time motion direction of the target object being kept the same (see Figure 4A, Paragraph 90 teaching that the platooning robots, such as 1 and 2, can move along a path with sharp corners, while still maintaining lateral offset positions from each other).
Qian and Lee are both considered to be analogous to the claimed invention because they are in the same field of mobile robots that move based on their orientation to other objects. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qian’s invention to incorporate a side-profile following protocol as taught by Lee. Doing so could improve a robot following system by ensuring that the robot could maintain a side-profile view of the target, even while in motion. This could be useful in multiple ways, such as if the robot and the target object are jointly towing or supporting another object.
Claim 14 is substantially similar to Claim 7 (the bulk of both claims). Please refer to the rejection of Claim 7 above for analysis.
Response to Arguments
The Applicant’s arguments and remarks with regard to the 35 USC § 103 rejections of Claims 1, 8, 15 have been fully considered, but are not persuasive.
The Applicant argues that the Kang reference does not disclose that “a magnitude and a direction of an angle formed by the current visual field center line and the connecting line are acquired, and the magnitude and the direction of the angle are taken as the relative angle information.” The Examiner disagrees. While the reference does not explicitly teach solely utilizing a center line and a connecting line to calculate angle information, it does teach that “the relative angles θa′ and θb′ of the respective devices A and B 2-A and 2-B are calculated by subtracting a difference between an orientation angle γ between a reference direction 100 of the terminal 1 and a direction 110 to which the terminal 1 is directed and an orientation angle λ/2 between the direction 110 to which the terminal 1 is directed and a boundary 120-2 of the field of view of the user from the respective orientation angles θa and θb between the reference direction 100 and directions to which the respective devices A and B 2-A and 2-B are directed.” Essentially, the process utilizes parameters defined by the centerline of terminal 1 in order to complete well-known, simple mathematical calculations to determine relative angles from terminal 1 to devices A and B.
The Applicant further argues that neither the Qian nor the Kang reference teaches that “a pose describes a position and a posture of the target object or the robot in a specified coordinate system, the position refers to a location of the target object or the robot in a space, and the posture refers to an orientation of the target object or the robot in the space.” As stated in the rejection of Claim 1 above, Kang teaches wherein a pose describes a position and a posture of the target object or the robot in a specified coordinate system, the position refers to a location of the target object or the robot in a space (see Figures 3-5, Paragraphs 57, 71, 72 teaching that the system maps the coordinates of detected devices A and B using equations that utilize an angle θ and a distance r from the detection device), and the posture refers to an orientation of the target object or the robot in the space (see Figure 3, Paragraph 66 teaching that the calculation of the coordinates of the devices A and B utilizes a direction 110 to which the detection device 1 is directed).
Qian and Kang are both considered analogous to the claimed invention because they are in the same field of devices that determine angles to target objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qian’s invention to incorporate a feature wherein the system can determine the position of a detected device within a space, and the orientation direction of the detecting device. Doing so could improve a device capable of identifying or tracking a target by locating the target object within an environment according to a coordinate system, and using the detection device’s angle to the target to determine the detection device’s own location in that coordinate system, improving the device’s ability to track the target as it, or the device, moves through the environment.
Claims 4-7, 11-14, and 18-20 remain rejected under the rationales provided in the previous office action.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Nguyen (U.S. Patent Publication 2022/0011780 A1) teaches a mobile body controlling system that enables multiple robots to travel along the same track (see Abstract).
Song (U.S. Patent Publication 2022/0180090 A1) teaches a mobile robot that can move according to a tracked object’s motion (see Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL W ARELLANO whose telephone number is (571)270-0102. The examiner can normally be reached M-F 7:30-4:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivek Koppikar can be reached on (571) 272-5109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/PAUL W ARELLANO/Examiner, Art Unit 3667B
/VIVEK D KOPPIKAR/Supervisory Patent Examiner, Art Unit 3667 January 30, 2026