DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
4. Claims 1, 3-5, 9, and 12-15 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Tsuji (US Publication 2014/0185875) in view of Idaka (US Publication 2019/0158745), and further in view of Appia (US Publication 2015/0003684).
Regarding claim 1, Tsuji discloses an object tracking apparatus comprising:
at least one processor or circuit and a memory storing instructions to cause the at least one processor or circuit to perform operations of the following units (Tsuji, fig. 7, computer 700):
an object detection unit configured to be capable of detecting an object in a captured image by using a plurality of detection methods (Tsuji, para. 0008, an object area tracking apparatus for detecting object using a plurality of different detecting methods);
Tsuji does not explicitly disclose:
a tracking unit configured to change an imaging range based on a target position within a captured image, and track the object; and
a control unit configured to control a degree of tracking at which the tracking unit tracks the object, in accordance with a detection method whose detection result is used by the tracking unit and by which the object detection unit detects the object.
Idaka discloses a tracking unit configured to change an imaging range based on a target position within a captured image, and track the object (Idaka, para. 0018, an imaging apparatus having a driving mechanism for panning and tilting for automatically tracking a detected object. The imaging apparatus has a preset registration function that registers a place where a user wants to shoot. During preset registration, a process is performed that stores, for example, position information of a panning driver or a tilting driver and zoom position information; para. 0029, network camera 100 can shoot a wide range by changing an imaging direction by rotating the camera head 204 in the horizontal direction and the vertical direction; see also Yachida, US 2021/0256711, paras. 0089-0090, as adjusting zoom factor and changing range in object monitoring is well known in the art).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Idaka's features into Tsuji's invention to enhance the object tracking system by providing control for adjusting the imaging range.
Tsuji-Idaka does not explicitly disclose but Appia discloses a control unit configured to control a degree of tracking at which the tracking unit tracks the object, in accordance with a detection method whose detection result is used by the tracking unit and by which the object detection unit detects the object (Appia, para. 0038, a filter is applied 400 to the rectified image to generate local threshold values for each pixel in the rectified image, i.e., to generate a local threshold image in which each location in the local threshold image contains a local threshold value for a pixel in the corresponding location in the rectified image. The particular filter used may be empirically determined and may depend on the distribution of ones and zeros in the projected binary pattern image. For example, if the binary pattern has an equal distribution of ones and zeros, a local mean is a good candidate for a local threshold value. Thus, the filter used may be, for example, a local circular averaging filter or local box-average filter, e.g., a 5×5 box averaging filter. The filter kernel sizes may vary depending on the information (density) in the binary pattern; a bilateral filter or a suitable 2D low-pass filter such as a 2D Gaussian filter may be used).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Appia's features into the Tsuji-Idaka combination to enhance the object tracking system by providing control over image tracking quality.
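For illustration only (not part of any cited reference's actual implementation), the local-mean thresholding that Appia's paragraph 0038 describes can be sketched as follows; the function name and the boundary clamping are assumptions:

```python
def local_mean_threshold(img, k=5):
    """Binarize img by comparing each pixel to its k x k local box average,
    a sketch of the local box-averaging filter described in Appia para. 0038."""
    H, W = len(img), len(img[0])
    r = k // 2
    out = []
    for y in range(H):
        row = []
        for x in range(W):
            # Neighborhood clamped at the image borders (an assumed policy).
            ys = range(max(0, y - r), min(H, y + r + 1))
            xs = range(max(0, x - r), min(W, x + r + 1))
            vals = [img[j][i] for j in ys for i in xs]
            row.append(1 if img[y][x] > sum(vals) / len(vals) else 0)
        out.append(row)
    return out
```

A bilateral or Gaussian filter could replace the box average, as the quoted passage notes; this sketch uses the simplest candidate.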
Regarding claim 3, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 1, wherein the control unit includes a low-pass filter for extracting a low-frequency component of information related to a position of the object in order to adjust the degree of tracking, and changes a cut-off frequency of the low-pass filter in accordance with an accuracy of the detection method by which the object detection unit detects the object (Appia, para. 0038, a filter is applied 400 to the rectified image to generate local threshold values for each pixel in the rectified image, i.e., to generate a local threshold image in which each location in the local threshold image contains a local threshold value for a pixel in the corresponding location in the rectified image. The particular filter used may be empirically determined and may depend on the distribution of ones and zeros in the projected binary pattern image. For example, if the binary pattern has an equal distribution of ones and zeros, a local mean is a good candidate for a local threshold value. Thus, the filter used may be, for example, a local circular averaging filter or local box-average filter, e.g., a 5×5 box averaging filter. The filter kernel sizes may vary depending on the information (density) in the binary pattern; a bilateral filter or a suitable 2D low-pass filter such as a 2D Gaussian filter may be used).
The motivation to combine the references and obviousness arguments are the same as claim 1.
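For illustration only, the claimed arrangement (a low-pass filter on the object's position whose cut-off frequency follows the accuracy of the active detection method) can be sketched as a first-order filter; the accuracy-to-cut-off mapping below is a hypothetical example, not taken from any cited reference:

```python
import math

def lowpass_alpha(cutoff_hz, dt):
    """Smoothing factor for a first-order (RC-form) low-pass filter."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    return dt / (rc + dt)

# Hypothetical mapping: a more accurate detector gets a higher cut-off,
# so the tracker follows that detector's results more responsively.
CUTOFF_BY_METHOD = {"template_matching": 1.0, "pattern_matching": 2.0, "deep_learning": 5.0}

def filter_position(positions, method, dt=1.0 / 30.0):
    """Extract the low-frequency component of a 1-D position signal."""
    alpha = lowpass_alpha(CUTOFF_BY_METHOD[method], dt)
    out, y = [], positions[0]
    for x in positions:
        y = y + alpha * (x - y)  # exponential smoothing step
        out.append(y)
    return out
```

With a low cut-off the tracker damps jitter from a noisy detector; with a high cut-off it tracks an accurate detector closely.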
Regarding claim 4, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 3, wherein the higher the accuracy of the detection method for detecting the object, the higher the cut-off frequency of the low-pass filter is set to be by the control unit (Appia, para. 0038, a filter is applied 400 to the rectified image to generate local threshold values for each pixel in the rectified image, i.e., to generate a local threshold image in which each location in the local threshold image contains a local threshold value for a pixel in the corresponding location in the rectified image. The particular filter used may be empirically determined and may depend on the distribution of ones and zeros in the projected binary pattern image. For example, if the binary pattern has an equal distribution of ones and zeros, a local mean is a good candidate for a local threshold value. Thus, the filter used may be, for example, a local circular averaging filter or local box-average filter, e.g., a 5×5 box averaging filter. The filter kernel sizes may vary depending on the information (density) in the binary pattern; a bilateral filter or a suitable 2D low-pass filter such as a 2D Gaussian filter may be used; the cut-off frequency of a low-pass filter can be set to a higher cut-off value for higher accuracy of detection, as known in the art).
The motivation to combine the references and obviousness arguments are the same as claim 1.
Regarding claim 5, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 3, wherein the control unit sets a target cut-off frequency of the low-pass filter in accordance with the detection method for detecting the object, and gradually brings the cut-off frequency of the low-pass filter closer to the target cut-off frequency by using a predetermined amount of change (Appia, para. 0038, a filter is applied 400 to the rectified image to generate local threshold values for each pixel in the rectified image, i.e., to generate a local threshold image in which each location in the local threshold image contains a local threshold value for a pixel in the corresponding location in the rectified image. The particular filter used may be empirically determined and may depend on the distribution of ones and zeros in the projected binary pattern image. For example, if the binary pattern has an equal distribution of ones and zeros, a local mean is a good candidate for a local threshold value. Thus, the filter used may be, for example, a local circular averaging filter or local box-average filter, e.g., a 5×5 box averaging filter. The filter kernel sizes may vary depending on the information (density) in the binary pattern; a bilateral filter or a suitable 2D low-pass filter such as a 2D Gaussian filter may be used; the cut-off frequency of a low-pass filter can be set by gradually adjusting it using a predetermined amount of change, as known in the art).
The motivation to combine the references and obviousness arguments are the same as claim 1.
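For illustration only, the claimed gradual approach of the cut-off frequency toward its target by a predetermined amount of change can be sketched as a rate limiter; the step size of 0.2 Hz is an assumed example value:

```python
import math

def step_cutoff(current, target, max_step=0.2):
    """Move the cut-off frequency toward its target by at most max_step (Hz)
    per update, snapping to the target once within one step of it."""
    delta = target - current
    if abs(delta) <= max_step:
        return target
    return current + math.copysign(max_step, delta)
```

Applied once per frame, this ramps the filter toward the target cut-off when the detection method switches instead of changing it abruptly.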
Regarding claim 9, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 1, wherein the plurality of detection methods include any of template matching, deep learning, and pattern matching (Tsuji, paras. 0027-0028, object detection with machine learning).
Regarding claim 12, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 1, wherein the tracking unit changes the imaging range by moving an imaging element (Tsuji, para. 0022; changing the imaging range by moving an image sensor is well known in the art).
Regarding claim 13, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 1, wherein the tracking unit changes the imaging range by panning/tilting an imaging apparatus (Idaka, para. 0018, position information of a panning driver or a tilting driver and zoom position information).
The motivation to combine the references and obviousness arguments are the same as claim 1.
Claims 14 and 15 are rejected for the same reasons set forth in claim 1. Tsuji-Idaka-Appia further discloses a computer-readable medium (see Tsuji, claim 12).
The motivation to combine the references and obviousness arguments are the same as claim 1.
5. Claims 2 and 6-7 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Tsuji-Idaka-Appia, as applied to claim 1 above, in view of Wakamatsu (US Publication 2016/0316123).
Regarding claims 2 and 6, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 1.
Tsuji-Idaka-Appia does not explicitly disclose but Wakamatsu discloses:
a setting unit configured to set the target position, wherein the tracking unit adjusts a position of the object in accordance with the detection method for detecting the object and calculates an amount of adjustment of the imaging range based on the adjusted position of the object (Wakamatsu, para. 0081, moving the subject to a target position; an amount of adjustment of the imaging range based on the adjusted position of the object can be calculated as known in the art).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wakamatsu's features into the Tsuji-Idaka-Appia combination to enhance the object tracking system by providing control for adjusting the imaging range.
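For illustration only, the claimed calculation of an amount of imaging-range adjustment from the adjusted object position can be sketched as a proportional correction toward the set target position; the gain factor is an assumed example, not drawn from Wakamatsu:

```python
def pan_tilt_adjustment(object_pos, target_pos, gain=0.5):
    """Sketch of an imaging-range adjustment: the (dx, dy) correction that
    moves the detected object toward the user-set target position, with a
    gain factor damping the per-frame correction."""
    dx = target_pos[0] - object_pos[0]
    dy = target_pos[1] - object_pos[1]
    return (gain * dx, gain * dy)
```

A gain below 1.0 spreads the correction over several frames, which keeps the pan/tilt motion smooth while the object converges on the target position.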
Regarding claim 7, Tsuji-Idaka-Appia-Wakamatsu discloses the object tracking apparatus according to claim 6, wherein the tracking unit adjusts the position of the object in accordance with an amount of change in the position of the object at a timing at which the detection method for detecting the object has switched (Tsuji, para. 0008, an object area tracking apparatus for detecting object can be switched to a different detecting method. It is obvious and/or well known in the art that imaging range can be adjusted to accommodate the new detecting method; Wakamatsu, para. 0081, moving the subject to a target position).
The motivation to combine the references and obviousness arguments are the same as claims 1 and 6.
6. Claim 8 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Tsuji-Idaka-Appia, as applied to claim 1 above, in view of Marlow et al. (US Publication 2024/0169734).
Regarding claim 8, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 1.
Tsuji-Idaka-Appia does not explicitly disclose but Marlow discloses wherein the plurality of detection methods include any of a method for detecting a particular object in the captured image and a method for identifying an object by comparison of images among a plurality of frames (Marlow, para. 0008, to determine a phase from the obtained video, the surgical tracking server compares positions of identified objects and people in frames and the states determined for the identified objects and people of the obtained video to stored images corresponding to different phases. In various embodiments, the surgical tracking server applies one or more models that determine measures of similarity of frames of the obtained video data to stored images corresponding to phases by comparing positions of identified people and objects in frames of video data to positions of corresponding objects and people in images corresponding to phases and determines a phase of the operating room based on the measures of similarity. An image corresponding to a phase identifies locations within the image of one or more objects in the image and a state corresponding to each of at least a set of identified objects).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Marlow's features into the Tsuji-Idaka-Appia combination to enhance object detection based on comparison of images among a plurality of frames.
7. Claim 10 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Tsuji-Idaka-Appia, as applied to claim 1 above, in view of Yonezawa (US Publication 2023/0274522).
Regarding claim 10, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 1.
Tsuji-Idaka-Appia does not explicitly disclose but Yonezawa discloses wherein the tracking unit changes the imaging range by cutting out an image from the captured image (Yonezawa, para. 0002, generating a cut-out image by cutting out a cutting-out region, which is a partial region to be cut out from an image, from the image. By gradually changing the position (or size) of the cutting-out region in the image, the imaging range of an image capturing apparatus can be changed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Yonezawa's features into the Tsuji-Idaka-Appia combination to enhance object tracking by cutting out an image from the captured image to change the imaging range.
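For illustration only, the cut-out technique Yonezawa describes (changing the imaging range by cropping a region from the captured image) can be sketched as a clamped crop; the function name and border-clamping policy are assumptions:

```python
def cut_out(image, cx, cy, w, h):
    """Return a w x h crop centered near (cx, cy), clamped to the image
    bounds, sketching the digital pan/zoom that Yonezawa para. 0002
    describes; image is a list of pixel rows."""
    H, W = len(image), len(image[0])
    x0 = max(0, min(W - w, cx - w // 2))
    y0 = max(0, min(H - h, cy - h // 2))
    return [row[x0:x0 + w] for row in image[y0:y0 + h]]
```

Gradually shifting (cx, cy) or resizing (w, h) between frames emulates pan/tilt and zoom without any mechanical motion.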
8. Claim 11 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Tsuji-Idaka-Appia, as applied to claim 1 above, in view of Wakamatsu (US Publication 2016/0316123).
Note: Secondary reference Tanaka (US Publication 2025/0260888) is the same as application 19/038,901.
Regarding claim 11, Tsuji-Idaka-Appia discloses the object tracking apparatus according to claim 1.
Tsuji-Idaka-Appia does not explicitly disclose but Wakamatsu discloses wherein the tracking unit changes the imaging range by moving a lens (Wakamatsu, para. 0075, the configuration of a subject tracking device that moves an image sensor within a plane perpendicular to an optical axis).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wakamatsu's features into the Tsuji-Idaka-Appia combination to enhance object tracking by moving a lens of the imaging unit to change the imaging range.
Conclusion
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOI H TRAN whose telephone number is (571) 270-5645. The examiner can normally be reached 8:00 AM-5:00 PM PST, with the first Friday of the biweek off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LOI H TRAN/ Primary Examiner, Art Unit 2484