DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Application
Claims 1-12 are currently pending in this application.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/05/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 06/09/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4 and 6-12 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu et al. (hereafter “Zhu”) [US 11,048,277 B1] in view of Kishi [US 2015/0294176 A1].
In regards to claim 1, Zhu discloses a control device [Fig. 2] comprising: a memory storing a program; and a processor configured to, when executing the program ([Col. 4] For example, in some embodiments, the navigation system 120 and associated subsystems, may be implemented as instructions stored in memory and executable by one or more processors.), cause the control device to: calculate, if a tracking target can be detected from a captured image ([Col. 9] capturing images of a human subject 102), a direction toward the tracking target as a tracking subject direction based on a result of the detection ([Col. 9] As shown in FIG. 7, at a current time, human subject 102 is located on an opposite side of object 730 from UAV 100; however, as indicated by dotted line 710, a view of human subject 102 from an image capture device onboard UAV 100 is not occluded by object 730.), and control, if no tracking target can be detected from the captured image ([Col. 9] If the human subject 102 moves to a different position behind the object 730, the view of the human subject 102 from the image capture device onboard the UAV 100 may be occluded, as indicated by dotted line 712.), a control speed in a camera direction based on a previously calculated tracking subject direction ([Col. 9] Accordingly, to satisfy a subject-relative objective to maintain line of sight, a navigation system 120 may cause the UAV 100 to maneuver (e.g., along trajectory 706 or 704) to a different position such that the view of the human subject 102 is no longer occluded. Based on a predicted trajectory of human subject 102 (as indicated by arrow 716), and measured or estimated positions of the UAV 100 and object 730, a navigation system 120 may determine that the view of the human subject 102 may become occluded by the object 730 (assuming UAV 100 remains stationary) as indicated by the obstructed line of sight line 712. Based on this predicted future state and a standing objective to maintain line of sight with subject 102, the navigation system 120 may generate outputs (e.g., a predicted trajectory and/or control commands) configured to cause the UAV 100 to maneuver to the UAV 100 to satisfy the subject-relative objective. [Col. 8] In some embodiments, an objective may be expressed in terms relative to the vehicle itself (e.g., UAV 100). For example, a vehicle-relative objective may include a target to move forward, backward, left, right, up, down, and/or rotate about one or more axes (e.g., yaw, pitch, roll, etc.) at some defined speed or acceleration (angular speed or acceleration in the case of rotation objectives).).
Kishi discloses a control device ([Abstract] an automatic tracking image pickup system) comprising: a memory ([0059] As for the template memory 44, a memory included in the CPU may be used.); and a processor ([0059] the tracking processing apparatus CPU 45) configured to calculate, if a tracking target can be detected from a captured image, a direction toward the tracking target as a tracking subject direction based on a result of the detection ([0030] The tracking processing apparatus CPU 45 is a controller for controlling a speed of the driving unit (for panning, tilting, and zooming) based on a difference between the position of the object to be tracked in the pickup image and a target position in the pickup image (position at which the object is desired to be displayed ultimately).), and control, if no tracking target can be detected from the captured image ([0029] In this manner, the image processor 43 determines whether or not an object to be tracked, which is a target object to be tracked, is present.), a control speed in a camera direction.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhu with the teachings of Kishi in order to improve the automatic tracking image pickup system by keeping the tracked object at the desired position and size in the picked-up image [See Kishi].
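For clarity of the record, the two-branch logic recited in claim 1 (calculate a subject direction when the target is detected; otherwise control the camera-direction speed from the previously calculated direction) can be summarized in a short sketch. The sketch is illustrative only; the names, the proportional control law, and the gain are assumptions of this illustration and are not drawn from Zhu or Kishi.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    direction_deg: float  # direction toward the detected target, derived from the image

def control_step(detection: Optional[Detection],
                 current_direction_deg: float,
                 last_subject_direction_deg: float,
                 gain: float = 0.5) -> Tuple[float, float]:
    """One iteration of the claimed two-branch tracking control.

    Returns (pan_speed_deg_per_s, updated_subject_direction_deg).
    """
    if detection is not None:
        # First branch: the target is detected, so calculate the
        # tracking subject direction from the detection result.
        subject_direction = detection.direction_deg
    else:
        # Second branch: no target is detected, so fall back on the
        # previously calculated tracking subject direction.
        subject_direction = last_subject_direction_deg
    # Control the speed in the camera direction toward the subject
    # direction (a proportional law is assumed for illustration).
    pan_speed = gain * (subject_direction - current_direction_deg)
    return pan_speed, subject_direction
```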
In regards to claim 2, the limitations of claim 1 have been addressed. Zhu discloses wherein the processor is further configured to cause the control device to, if a tracking target can be detected from the captured image, calculate the tracking subject direction based on a difference between a position of the tracking target detected from the captured image and a target position, a current camera direction, and a zoom ([Col. 5] In some embodiments, the motion planning system 130, operating separately or in conjunction with the tracking system 140, is configured to generate a planned trajectory through a three-dimensional (3D) space of a physical environment based, for example, on images received from image capture devices 114 and/or 115, data from other sensors 112 (e.g., IMU, GPS, proximity sensors, etc.), one or more control inputs 170 from external sources (e.g., from a remote user, navigation application, etc.), and/or one or more specified navigation objectives. In some embodiments, the navigation system 120 may generate control commands configured to cause the UAV 100 to maneuver along the planned trajectory generated by the motion planning system 130. For example, the control commands may be configured to control one or more control actuators 110 (e.g., rotors and/or control surfaces) to cause the UAV 100 to maneuver along the planned 3D trajectory.).
Kishi discloses wherein the processor is further configured to cause the control device to, if a tracking target can be detected from the captured image, calculate the tracking subject direction based on a difference between a position of the tracking target detected from the captured image and a target position, a current camera direction, and a zoom ([0030] For example, FIG. 2 is a monitor screen immediately after the object to be tracked enters an angle of view photographed by the camera 20 from the right side and is recognized. The tracking processing apparatus CPU 45 is a controller for controlling a speed of the driving unit (for panning, tilting, and zooming) based on a difference between the position of the object to be tracked in the pickup image and a target position in the pickup image (position at which the object is desired to be displayed ultimately). [0038] After recognizing the object to be tracked for the first time, the tracking processing apparatus CPU 45 controls the speed of panning, tilting, and zooming (driving unit) in the initial mode until the object to be tracked reaches the predetermined position in the pickup image.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhu with the teachings of Kishi in order to improve the automatic tracking image pickup system by keeping the tracked object at the desired position and size in the picked-up image [See Kishi].
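The claim 2 calculation combines three inputs: the positional difference in the image, the current camera direction, and the zoom. A minimal pinhole-camera sketch of such a calculation follows; the model and all parameter names are assumptions for illustration, not a formula disclosed by Zhu or Kishi.

```python
import math

def subject_direction_deg(detected_x_px: float,
                          target_x_px: float,
                          image_width_px: int,
                          current_pan_deg: float,
                          horizontal_fov_deg: float) -> float:
    """Tracking subject direction from the image offset, the current
    camera direction, and the zoom (which sets the field of view)."""
    # Focal length in pixels implied by the current zoom setting.
    focal_px = (image_width_px / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)
    # Angle subtended by the offset between the detected position and
    # the target position in the captured image.
    offset_deg = math.degrees(math.atan((detected_x_px - target_x_px) / focal_px))
    return current_pan_deg + offset_deg
```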
In regards to claim 3, the limitations of claim 1 have been addressed. Zhu fails to explicitly disclose wherein the processor is further configured to cause the control device to, if a tracking target can be detected from the captured image, calculate the control speed in the camera direction based on a difference between a position of the tracking target detected from the captured image and a target position.
Kishi discloses wherein the processor is further configured to cause the control device to, if a tracking target can be detected from the captured image, calculate the control speed in the camera direction based on a difference between a position of the tracking target detected from the captured image and a target position ([0030] For example, FIG. 2 is a monitor screen immediately after the object to be tracked enters an angle of view photographed by the camera 20 from the right side and is recognized. The tracking processing apparatus CPU 45 is a controller for controlling a speed of the driving unit (for panning, tilting, and zooming) based on a difference between the position of the object to be tracked in the pickup image and a target position in the pickup image (position at which the object is desired to be displayed ultimately).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhu with the teachings of Kishi in order to improve the automatic tracking image pickup system by keeping the tracked object at the desired position and size in the picked-up image [See Kishi].
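The claim 3 limitation reduces to a speed command driven by the positional error in the image, consistent with Kishi [0030]. A one-function sketch is given below; the gain and the clamp are assumptions of the illustration.

```python
def pan_speed_deg_per_s(detected_x_px: float,
                        target_x_px: float,
                        gain: float = 0.05,
                        max_speed_deg_per_s: float = 60.0) -> float:
    """Control speed in the camera direction from the difference between
    the detected target position and the target position."""
    error_px = detected_x_px - target_x_px
    # Clamp so that a large error does not command an unrealistic slew rate.
    return max(-max_speed_deg_per_s, min(max_speed_deg_per_s, gain * error_px))
```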
In regards to claim 4, the limitations of claim 1 have been addressed. Zhu fails to explicitly disclose wherein the processor is further configured to cause the control device to, if a tracking target can be detected from the captured image, calculate a control speed of a zoom based on a difference between a size of the tracking target detected from the captured image and a defined size.
Kishi discloses wherein the processor is further configured to cause the control device to, if a tracking target can be detected from the captured image, calculate a control speed of a zoom based on a difference between a size of the tracking target detected from the captured image and a defined size ([0037] Specifically, the driving speed is determined based on the difference between the position of the object in the pickup image and the target position in the pickup image (desired position, range, and size of object, which is to be displayed ultimately) so that the image pickup is continued at the position or within the range on a predetermined image pickup screen and in a predetermined image pickup size depending on a necessity. Then, the camera platform is driven to perform panning, tilting, and zooming at the thus determined driving speed. [0038] After recognizing the object to be tracked for the first time, the tracking processing apparatus CPU 45 controls the speed of panning, tilting, and zooming (driving unit) in the initial mode until the object to be tracked reaches the predetermined position in the pickup image. [0052] During the processing in the normal mode according to the fourth embodiment, the zoom control is performed so that an object size (diagonal length of the object region shown in FIG. 10) in the pickup image is kept to a predetermined constant value (target position).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhu with the teachings of Kishi in order to improve the automatic tracking image pickup system by keeping the tracked object at the desired position and size in the picked-up image [See Kishi].
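Analogously, the claim 4 zoom control can be sketched as a speed driven by the difference between the detected size (e.g., the diagonal length of the object region, per Kishi [0052]) and a defined size. The sign convention and gain below are assumptions of the illustration.

```python
def zoom_speed(detected_diag_px: float,
               defined_diag_px: float,
               gain: float = 0.02) -> float:
    """Control speed of the zoom from the difference between the detected
    target size and a defined size (positive output zooms in)."""
    # A target that appears smaller than the defined size commands a
    # zoom-in; one that appears larger commands a zoom-out.
    return gain * (defined_diag_px - detected_diag_px)
```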
In regards to claim 6, the limitations of claim 1 have been addressed. Zhu discloses wherein the processor is further configured to cause the control device to, if no tracking target can be detected from the captured image, calculate the control speed in the camera direction based on a difference between a current camera direction and the previously calculated tracking subject direction ([Col. 9] Accordingly, to satisfy a subject-relative objective to maintain line of sight, a navigation system 120 may cause the UAV 100 to maneuver (e.g., along trajectory 706 or 704) to a different position such that the view of the human subject 102 is no longer occluded. Based on a predicted trajectory of human subject 102 (as indicated by arrow 716), and measured or estimated positions of the UAV 100 and object 730, a navigation system 120 may determine that the view of the human subject 102 may become occluded by the object 730 (assuming UAV 100 remains stationary) as indicated by the obstructed line of sight line 712. Based on this predicted future state and a standing objective to maintain line of sight with subject 102, the navigation system 120 may generate outputs (e.g., a predicted trajectory and/or control commands) configured to cause the UAV 100 to maneuver to the UAV 100 to satisfy the subject-relative objective. [Col. 8] In some embodiments, an objective may be expressed in terms relative to the vehicle itself (e.g., UAV 100). For example, a vehicle-relative objective may include a target to move forward, backward, left, right, up, down, and/or rotate about one or more axes (e.g., yaw, pitch, roll, etc.) at some defined speed or acceleration (angular speed or acceleration in the case of rotation objectives).).
In regards to claim 7, the limitations of claim 1 have been addressed. Zhu discloses wherein the processor is further configured to cause the control device to, if no tracking target can be detected from the captured image, calculate the control speed in the camera direction based on a difference between a current camera direction and a tracking subject direction predicted based on the previously calculated tracking subject direction ([Col. 9] Accordingly, to satisfy a subject-relative objective to maintain line of sight, a navigation system 120 may cause the UAV 100 to maneuver (e.g., along trajectory 706 or 704) to a different position such that the view of the human subject 102 is no longer occluded. Based on a predicted trajectory of human subject 102 (as indicated by arrow 716), and measured or estimated positions of the UAV 100 and object 730, a navigation system 120 may determine that the view of the human subject 102 may become occluded by the object 730 (assuming UAV 100 remains stationary) as indicated by the obstructed line of sight line 712. Based on this predicted future state and a standing objective to maintain line of sight with subject 102, the navigation system 120 may generate outputs (e.g., a predicted trajectory and/or control commands) configured to cause the UAV 100 to maneuver to the UAV 100 to satisfy the subject-relative objective. [Col. 8] In some embodiments, an objective may be expressed in terms relative to the vehicle itself (e.g., UAV 100). For example, a vehicle-relative objective may include a target to move forward, backward, left, right, up, down, and/or rotate about one or more axes (e.g., yaw, pitch, roll, etc.) at some defined speed or acceleration (angular speed or acceleration in the case of rotation objectives).).
In regards to claim 8, the limitations of claim 1 have been addressed. Zhu discloses wherein the processor is further configured to cause the control device to, if no tracking target can be detected from the captured image, calculate the control speed in the camera direction based on a difference between a current camera direction and a tracking subject direction obtained by correcting the previously calculated tracking subject direction ([Col. 9] Accordingly, to satisfy a subject-relative objective to maintain line of sight, a navigation system 120 may cause the UAV 100 to maneuver (e.g., along trajectory 706 or 704) to a different position such that the view of the human subject 102 is no longer occluded. Based on a predicted trajectory of human subject 102 (as indicated by arrow 716), and measured or estimated positions of the UAV 100 and object 730, a navigation system 120 may determine that the view of the human subject 102 may become occluded by the object 730 (assuming UAV 100 remains stationary) as indicated by the obstructed line of sight line 712. Based on this predicted future state and a standing objective to maintain line of sight with subject 102, the navigation system 120 may generate outputs (e.g., a predicted trajectory and/or control commands) configured to cause the UAV 100 to maneuver to the UAV 100 to satisfy the subject-relative objective. [Col. 8] In some embodiments, an objective may be expressed in terms relative to the vehicle itself (e.g., UAV 100). For example, a vehicle-relative objective may include a target to move forward, backward, left, right, up, down, and/or rotate about one or more axes (e.g., yaw, pitch, roll, etc.) at some defined speed or acceleration (angular speed or acceleration in the case of rotation objectives).).
Kishi discloses calculate the control speed in the camera direction based on a difference between a current camera direction and a tracking subject direction obtained by correcting the previously calculated tracking subject direction based on a size of the tracking target ([0037] Specifically, the driving speed is determined based on the difference between the position of the object in the pickup image and the target position in the pickup image (desired position, range, and size of object, which is to be displayed ultimately) so that the image pickup is continued at the position or within the range on a predetermined image pickup screen and in a predetermined image pickup size depending on a necessity. Then, the camera platform is driven to perform panning, tilting, and zooming at the thus determined driving speed.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhu with the use of the size of the tracked object to determine the driving speed of the camera platform as taught by Kishi in order to improve the tracking imaging system [See Kishi].
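Claims 6-8 differ only in the reference direction used while the target is undetected: the previously calculated subject direction (claim 6), a direction predicted from it (claim 7), or a corrected direction (claim 8). The following sketch illustrates claims 6 and 7 together; the constant-angular-velocity predictor and the gain are assumptions of the illustration, and a claim 8 correction would adjust the same reference before the difference is taken.

```python
from typing import List

def lost_target_pan_speed(current_direction_deg: float,
                          subject_direction_history_deg: List[float],
                          dt_s: float,
                          predict: bool = False,
                          gain: float = 0.5) -> float:
    """Control speed in the camera direction while no target is detected.

    predict=False uses the previously calculated subject direction
    (claim 6); predict=True uses a direction predicted from the recent
    history (claim 7).
    """
    reference_deg = subject_direction_history_deg[-1]
    if predict and len(subject_direction_history_deg) >= 2:
        # Assumed predictor: extrapolate at the last observed angular rate.
        rate_deg_per_s = (subject_direction_history_deg[-1]
                          - subject_direction_history_deg[-2]) / dt_s
        reference_deg += rate_deg_per_s * dt_s
    return gain * (reference_deg - current_direction_deg)
```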
In regards to claim 9, the limitations of claim 1 have been addressed. Zhu discloses wherein the processor is further configured to cause the control device to: acquire a captured image shot by an image capture apparatus ([Col. 3] The image capture devices 114 and/or 115 are depicted capturing an object 102 in the physical environment. The image capture devices 114 may be configured to capture images for use by a visual navigation system in guiding autonomous flight by the UAV 100 and/or a tracking system for tracking other objects in the physical environment (e.g., as described with respect to FIG. 2).), and generate a command to change the camera direction at the controlled control speed and to transmit the generated command to the image capture apparatus ([Col. 9] the navigation system 120 may generate outputs (e.g., a predicted trajectory and/or control commands) configured to cause the UAV 100 to maneuver to the UAV 100 to satisfy the subject-relative objective. [Col. 8] In some embodiments, an objective may be expressed in terms relative to the vehicle itself (e.g., UAV 100). For example, a vehicle-relative objective may include a target to move forward, backward, left, right, up, down, and/or rotate about one or more axes (e.g., yaw, pitch, roll, etc.) at some defined speed or acceleration (angular speed or acceleration in the case of rotation objectives).).
Kishi discloses wherein the processor is further configured to cause the control device to: acquire a captured image shot by an image capture apparatus ([0025] The camera 20 receives object light from the lens 30 to pick up an object image.), and generate a command to change the camera direction at the controlled control speed and to transmit the generated command to the image capture apparatus ([0027] The camera platform 10 first receives an operation signal from the tracking processing apparatus 40 through the tracking processing apparatus communication unit 12. The received operation signal is read and interpreted by the platform CPU 11 so that an operation in accordance with the operation signal is performed. When the operation signal is a driving instruction for panning and tilting, the platform CPU 11 controls the panning controller 13a and the tilting controller 13b to drive the panning motor 14a and the tilting motor 14b. When the operation signal is a driving instruction for zooming and focusing, the platform CPU 11 controls the zoom controller 15a and the focus controller 15b to drive the lens 30. When the operation signal is any one of various control instructions to the camera 20, the platform CPU 11 controls the camera controller 16 to control the camera 20.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhu with the teachings of Kishi in order to improve the automatic tracking image pickup system by keeping the tracked object at the desired position and size in the picked-up image [See Kishi].
In regards to claim 10, the limitations of claim 1 have been addressed. Zhu discloses the control device further comprising: an image capture unit configured to capture an image ([Col. 3] The image capture devices 114 and/or 115 are depicted capturing an object 102 in the physical environment. The image capture devices 114 may be configured to capture images for use by a visual navigation system in guiding autonomous flight by the UAV 100 and/or a tracking system for tracking other objects in the physical environment (e.g., as described with respect to FIG. 2).); and a drive unit configured to change the camera direction at the controlled control speed ([Col. 9] the navigation system 120 may generate outputs (e.g., a predicted trajectory and/or control commands) configured to cause the UAV 100 to maneuver to the UAV 100 to satisfy the subject-relative objective. [Col. 8] In some embodiments, an objective may be expressed in terms relative to the vehicle itself (e.g., UAV 100). For example, a vehicle-relative objective may include a target to move forward, backward, left, right, up, down, and/or rotate about one or more axes (e.g., yaw, pitch, roll, etc.) at some defined speed or acceleration (angular speed or acceleration in the case of rotation objectives).).
Kishi discloses an image capture unit ([0025] a camera 20) configured to capture an image ([0025] The camera 20 receives object light from the lens 30 to pick up an object image.); and a drive unit ([0025] a camera platform (driving unit) 10) configured to change the camera direction at the controlled control speed ([0027] The camera platform 10 first receives an operation signal from the tracking processing apparatus 40 through the tracking processing apparatus communication unit 12. The received operation signal is read and interpreted by the platform CPU 11 so that an operation in accordance with the operation signal is performed. When the operation signal is a driving instruction for panning and tilting, the platform CPU 11 controls the panning controller 13a and the tilting controller 13b to drive the panning motor 14a and the tilting motor 14b. When the operation signal is a driving instruction for zooming and focusing, the platform CPU 11 controls the zoom controller 15a and the focus controller 15b to drive the lens 30. When the operation signal is any one of various control instructions to the camera 20, the platform CPU 11 controls the camera controller 16 to control the camera 20.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhu with the teachings of Kishi in order to improve the automatic tracking image pickup system by keeping the tracked object at the desired position and size in the picked-up image [See Kishi].
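Claims 9 and 10 add the plumbing around the control law: acquiring the captured image and delivering a speed command to the apparatus that moves the camera (the camera platform, in Kishi's terms, per [0027]). A minimal sketch of such a command path follows; the command format and transport are assumptions of the illustration.

```python
from dataclasses import dataclass

@dataclass
class PanTiltCommand:
    pan_speed_deg_per_s: float
    tilt_speed_deg_per_s: float

class LoopbackPlatform:
    """Stand-in for the drive unit / image capture apparatus that
    receives and executes the operation signal."""
    def send(self, command: PanTiltCommand) -> None:
        print(f"operation signal received: {command}")

def issue_command(platform: LoopbackPlatform,
                  pan_speed: float,
                  tilt_speed: float) -> None:
    # Generate a command to change the camera direction at the
    # controlled control speed, then transmit it to the apparatus.
    platform.send(PanTiltCommand(pan_speed, tilt_speed))
```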
Claim 11 recites all the same elements as claim 1, but in method form rather than device form. Therefore, the supporting rationale of the rejection of claim 1 applies equally to claim 11.
Claim 12 recites all the same elements as claim 1, but in non-transitory computer-readable storage medium form rather than device form. Therefore, the supporting rationale of the rejection of claim 1 applies equally to claim 12.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Zhu in view of Kishi, and further in view of Stanard et al. (hereafter “Stanard”) [US 10,917,557 B2].
In regards to claim 5, the limitations of claim 1 have been addressed. Zhu fails to explicitly disclose wherein the processor is further configured to cause the control device to, if no tracking target can be detected from the captured image, change the camera direction in response to a period of time in which no tracking target can be detected after the control speed in the camera direction has been zero for a predetermined period of time.
Stanard discloses wherein the processor is further configured to cause the control device to, if no tracking target can be detected from the captured image, change the camera direction in response to a period of time in which no tracking target can be detected after the control speed in the camera direction has been zero for a predetermined period of time ([Col. 18] An occlusion that happens in a shorter duration can still be tracked, but, but as the time grows longer, the object is more considered a lost track. The characterization of an occlusion situation is shown in the TABLE 6 below. FIG. 7 is a video depiction of an image 700 of a full occlusion of the object by trees. The feature and motion confidence dips below the threshold and cannot be detected. [Col. 19] Lost Track: The lost track mode is diagnosed when the features and motion models of the object are unable to find the object after some time. Both confidence values will be low. What differs from the occlusion mode is the duration of the lost track. This threshold between occlusion and lost track can be determined by the user as a temporal threshold. In some cases, even though the tracker may lose the features and motion of the object, the tracker has not permanently lost the track of the object and may still reacquire it. The characterization of the mode is shown in the TABLE 7 below.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhu with the teachings of Stanard in order to improve the accuracy of the moving object tracking system by distinguishing brief occlusions from lost tracks using a temporal threshold [See Stanard].
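The claim 5 behavior, as read on Stanard's occlusion/lost-track distinction, amounts to a temporal threshold: hold the camera still for a predetermined period, then begin moving it in response to the continued absence of the target. A sketch follows; the threshold value and the sweep rate are assumptions of the illustration.

```python
def undetected_pan_speed(seconds_without_detection: float,
                         hold_s: float = 2.0,
                         sweep_deg_per_s: float = 10.0) -> float:
    """Camera-direction speed while the tracking target is undetected.

    Treats a short gap as an occlusion (speed held at zero) and a gap
    longer than the temporal threshold as a lost track, after which the
    camera direction is changed (e.g., a search sweep).
    """
    if seconds_without_detection <= hold_s:
        return 0.0  # speed zero during the predetermined period
    return sweep_deg_per_s  # begin changing the camera direction
```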
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kaitlin A Retallick whose telephone number is (571)270-3841. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KAITLIN A RETALLICK/Primary Examiner, Art Unit 2482