Prosecution Insights
Last updated: April 19, 2026
Application No. 18/492,579

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Non-Final OA: §102, §103, §112
Filed: Oct 23, 2023
Examiner: CHANG, DANIEL CHEOLJIN
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (117 granted / 132 resolved) — above average, +26.6% vs TC avg
Interview Lift: +11.7% (moderate) across resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 25 applications currently pending
Career History: 157 total applications across all art units
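
A quick way to sanity-check these headline numbers is to recompute them from the raw counts. A minimal sketch; the only inputs are the 117/132 counts and the +11.7% lift shown above, and reading the lift as a relative (multiplicative) bump is my assumption, since that is the reading that reconciles 89% with 99%:

```python
# Recompute the card's headline stats from its raw counts.
granted, resolved = 117, 132
allow_rate = granted / resolved              # 0.886 -> shown as 89%

# Assumption: the +11.7% interview lift is relative, not additive.
interview_lift = 0.117
with_interview = allow_rate * (1 + interview_lift)  # 0.990 -> shown as 99%

print(f"Career allow rate: {allow_rate:.1%}")       # 88.6%
print(f"With interview:    {with_interview:.1%}")   # 99.0%
```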

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 132 resolved cases.
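
The four deltas are mutually consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline for every statute. This is plain arithmetic on the figures above, not the tool's methodology:

```python
# Back-solve the implied Tech Center average: TC avg = examiner rate - delta.
examiner_rate = {"§101": 8.1, "§102": 14.1, "§103": 53.4, "§112": 20.7}
delta_vs_tc   = {"§101": -31.9, "§102": -25.9, "§103": 13.4, "§112": -19.3}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")

# Every statute's implied average comes out to 40.0%, i.e. a single baseline,
# consistent with the one Tech Center average estimate the footnote describes.
```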

Office Action

Grounds of rejection: §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

This communication is in response to the amendment filed on 12/30/2025. Claims 1-6, 8-11 and 15-21 are pending. Claims 7 and 14 have been withdrawn, claims 12 and 13 have been cancelled, and claims 16-21 have been newly added.

Election/Restrictions

Applicant's election without traverse of Species I in the reply filed on 12/30/2025 is acknowledged. Claims 7 and 14 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 12/30/2025.

Claim Objections

Claims 3, 4 and 6 are objected to because of the following informalities: "execute" should be changed to "executes". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 4 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The limitations "… acquire a predicted position of the subject from an external apparatus … and wherein, in a case where the predicted position is acquired from the external apparatus, the control to track the subject is performed based on the predicted position" in claim 4 are not supported by the specification. The closest paragraphs, [0067] and [0094], state that "the target acquisition position is a target position which is set to acquire the subject concerned by subject automatic tracking, and, in the case of the first exemplary embodiment, the target acquisition position is set as the position of the center of an image plane of the image capturing apparatus 301." This passage merely describes setting a target acquisition position within the image plane of the image capturing apparatus for automatic subject tracking. It does not describe acquiring a predicted position of the subject from an external apparatus, nor does it describe any external device that provides predicted position information to the claimed system.
The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6, 8-11 and 15-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 (and claims 11 and 15) recites "a first speed" and "a second speed different from the first speed." However, the claim does not clearly define what these speeds represent. It is unclear whether the recited speeds correspond to a tracking speed of the image capturing apparatus, a movement speed of the subject, or something else. With respect to claims 2, 5, 6, 11 and 15, arguments analogous to those presented for claim 1 are applicable.

Claim 4 recites the limitations "acquire a predicted position of the subject from an external apparatus … and wherein, in a case where the predicted position is acquired from the external apparatus, the control to track the subject is performed based on the predicted position" (lines 6-7 and 10-12). However, the terms "predicted position" and "external apparatus" render the scope of the claim unclear because the specification and drawings do not describe what constitutes the external apparatus or how the predicted position is generated and obtained from the external apparatus.

Claim 19 recites the limitations "the image in which the movement amount of the subject is larger" and "the image in which the movement amount of the subject is smaller" (lines 4-6). However, it is unclear whether these references correspond to the previously recited "the first image" or "the second image" (lines 2-3). Because the claim does not clearly identify which image is associated with the larger velocity and which image is associated with the smaller velocity, the scope of the limitation cannot be determined. Additionally, the claim recites "an imaging apparatus that captures the image" (line 4) and "an imaging apparatus that captures the image" (lines 5-6). It is unclear whether each recitation of "an imaging apparatus" refers back to "a first image capturing apparatus" (lines 5-6 in claim 1), "a second image capturing apparatus" (line 7 in claim 1), or something else.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 11, 15 and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by HA et al. (U.S. Publication No. 2022/0294986) (hereafter, "HA").

Regarding claim 1, HA teaches an information processing apparatus comprising: at least one memory storing instructions; and at least one processor executing the stored instructions to ([0065] The processor 210 may be implemented as one or more processors. For example, the processor 210 may perform a preset operation by executing an instruction or command stored in a memory; [0066] The processor 210 may generate an output image by receiving image data from the first camera module 110 or the second camera module 120): acquire a first image captured by a first image capturing apparatus ([0050] a mode of generating an output image from second image data of the second camera module 120 is referred to as a second mode; [0080] In the second mode of generating an output image from second image data of the second camera module 120, the first camera module 110 may be deactivated) which tracks a subject at a first speed ([0048] when the first object 130 is expected to leave the first FOV 112 because the first camera module 110 has failed to follow a movement of the first object 130, the electronic device 100 may perform driving preparation for switching an image input from the first camera module 110 to the second camera module 120, and, when the driving preparation of the second camera module 120 is completed, the electronic device 100 may switch an input to the second camera module 120 ... when a moving velocity Vobj of the first object 130 is higher than a moving velocity Vt of the first angle A1 of view of the first camera module 110, the electronic device 100 may determine that the first object 130 will leave the first FOV 112 of the first camera module 110); acquire a second image captured by a second image capturing apparatus ([0050] a mode of generating an output image from first image data of the first camera module 110 is referred to as a first mode; [0077] In the first mode of generating an output image from first image data of the first camera module 110, the second camera module 120 may be deactivated) which tracks the subject at a second speed different from the first speed ([0120] a case in which it is determined that the first camera module 110 is capable of tracking the first object 130 may be a case in which a velocity of the first object 130 becomes lower than a highest moving velocity of the FOV of the first camera module 110); and output an image selected from the first image and the second image according to a movement speed of the subject ([0067] The processor 210 may generate an output image from at least one of the first image data or the second image data; [0069] When the processor 210 uses image data of one camera module of the first camera module 110 and the second camera module 120, the processor 210 may switch an input of image data to the selected camera module and receive image data from the selected camera module; [0022] an electronic device using two cameras and configured to perform quick and smooth switching between the cameras by automatically performing switching of the cameras based on an object's movement; [0098] the processor 210 may calculate the moving velocity Vobj of the first object 130 based on first image data or second image data).

Regarding claim 2, HA teaches all the limitations of claim 1 above. HA teaches wherein the first speed is higher than the second speed, and wherein, ([0120] a case in which a velocity of the first object 130 becomes lower than a highest moving velocity of the FOV of the first camera module 110 ... when a velocity of the first object 130 is higher than the highest moving velocity of the first FOV of the first camera module 110) in a case where the movement speed of the subject is a speed which allows tracking of the subject at the second speed ([0120] a case in which it is determined that the first camera module 110 is capable of tracking the first object 130 may be a case in which a velocity of the first object 130 becomes lower than a highest moving velocity of the FOV of the first camera module 110 … when a velocity of the first object 130 becomes lower than the highest moving velocity of the first FOV of the first camera module 110, the electronic device 100 may determine that the first camera module 110 is capable of tracking the first object 130), the second image is output ([0050] a mode of generating an output image from first image data of the first camera module 110; [0077] In the first mode of generating an output image from first image data of the first camera module 110, the second camera module 120 may be deactivated), and, in a case where the movement speed of the subject is a speed which does not allow tracking of the subject at the second speed ([0048] when the first object 130 is expected to leave the first FOV 112 because the first camera module 110 has failed to follow a movement of the first object 130, the electronic device 100 may perform driving preparation for switching an image input from the first camera module 110 to the second camera module 120, and, when the driving preparation of the second camera module 120 is completed, the electronic device 100 may switch an input to the second camera module 120), the first image is output ([0050] generating an output image from second image data of the second camera module 120; [0048] switching an image input from the first camera module 110 to the second camera module 120).

Regarding claim 16, HA teaches all the limitations of claim 1 above. HA teaches wherein the at least one processor further executes the stored instructions ([0065] the processor 210 may perform a preset operation by executing an instruction or command stored in a memory) to acquire information regarding a position and motion of the subject from at least one of the first image capturing apparatus and the second image capturing apparatus, and ([0098] the processor 210 may calculate the moving velocity Vobj of the first object 130 based on first image data or second image data. When the processor 210 operates in a first mode, the processor 210 may detect the first object 130 from first image data, and calculate a location of the first object 130 over time. The processor 210 may determine the location of the first object 130 over time, based on the moving speed Vt of the first angle A1 of view and a location of the first object 130 in the first image data. When the processor 210 operates in a second mode, the processor 210 may detect the first object 130 from the second image data, and calculate a location of the first object 130 over time. The processor 210 may calculate the moving velocity Vobj of the first object 130, based on the location of the first object 130 over time, calculated based on the first image data or the second image data) wherein one of the first image and the second image is output based on the acquired information regarding the position and motion of the subject ([0069] When the processor 210 uses image data of one camera module of the first camera module 110 and the second camera module 120, the processor 210 may switch an input of image data to the selected camera module and receive image data from the selected camera module; [0050] a mode of generating an output image from first image data of the first camera module 110 is referred to as a first mode, and a mode of generating an output image from second image data of the second camera module 120 is referred to as a second mode; [0022] an electronic device using two cameras and configured to perform quick and smooth switching between the cameras by automatically performing switching of the cameras based on an object's movement; [0098] the processor 210 may calculate the moving velocity Vobj of the first object 130 based on first image data or second image data).

With respect to claim 11, arguments analogous to those presented for claims 1 and 2 are applicable. With respect to claim 15, arguments analogous to those presented for claims 1 and 2 are applicable.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3, 5, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over HA et al. (U.S. Publication No. 2022/0294986) (hereafter, "HA") in view of TSUNASHIMA et al. (U.S. Publication No. 2024/0056686) (hereafter, "TSUNASHIMA").

Regarding claim 3, HA teaches all the limitations of claim 2 above.
HA teaches wherein the at least one processor further execute the stored instructions ([0065] the processor 210 may perform a preset operation by executing an instruction or command stored in a memory) to calculate a predicted position of a subject from an image captured by the first image capturing apparatus ([0072] when the processor 210 predicts that the first object will again enter the first FOV of the first camera module 110 while tracking and photographing the first object by the second camera module 120, the processor 210 may switch an input of image data from the second camera module 120 to the first camera module 110), wherein, in a case where the first image is output ([0050] generating an output image from second image data of the second camera module 120; [0048] switching an image input from the first camera module 110 to the second camera module 120).

HA does not expressly teach information about the predicted position of the subject of which the first image capturing apparatus performs image capturing is output to the second image capturing apparatus. However, TSUNASHIMA teaches information about the predicted position of the subject of which the first image capturing apparatus performs image capturing is output to the second image capturing apparatus ([0157] the wide-angle camera 2 may include the movement direction estimation unit 14. The movement direction estimation unit 14 calculates the movement direction and the movement speed for the tracking target object T1 as described above. The tracking control unit 17 of the telephoto camera 3 receives the information of the movement direction and the movement speed for the tracking target object T1, and performs processing for predicting the movement destination of the tracking target object T1 based on that information).

It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of HA to incorporate the step/system of transmitting the determined trajectory and velocity of a tracking target by a first camera to a second camera taught by TSUNASHIMA. The suggestion/motivation for doing so would have been to prevent tracking failures by consistently transmitting location and motion data from one imaging device to another ([0307] continuing to transmit the information on the tracking target object T1 and the information on the movement direction from the image processing device having this configuration to the other image processing device prevents a situation where the tracking target object T1 cannot be captured within the angle of view of the second image capturing unit 6 from continuing). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine HA and TSUNASHIMA to obtain the invention as specified in claim 3.

Regarding claim 5, HA teaches all the limitations of claim 1 above.
TSUNASHIMA teaches wherein the first speed is a first tracking sensitivity determined ([0229] This is because when the tracking target object T1 is moving slowly or regularly, the tracking of the object by the second image capturing unit 6 is likely to be performed normally) from at least one of: (a) an interval for calculating a movement amount and movement speed of the subject; (b) a threshold value concerning a movement amount of the subject; and (c) a speed coefficient concerning a control speed of control concerning an image capturing direction ([0225] the movement speed of the tracking target object T1 is estimated using information on past frames of the first image GC1 and the like. When the movement speed is less than a predetermined speed, the movement speed of the tracking target object T1 is determined to be low).

Regarding claim 10, the combination of HA and TSUNASHIMA teaches all the limitations of claim 5 above. HA teaches wherein the first tracking sensitivity is a maximum sensitivity capable of tracking the subject ([0120] a case in which it is determined that the first camera module 110 is capable of tracking the first object 130 may be a case in which a velocity of the first object 130 becomes lower than a highest moving velocity of the FOV of the first camera module 110 ... when a velocity of the first object 130 becomes lower than the highest moving velocity of the first FOV of the first camera module 110, the electronic device 100 may determine that the first camera module 110 is capable of tracking the first object 130).

Regarding claim 17, HA teaches all the limitations of claim 1 above. TSUNASHIMA teaches wherein information regarding a position and motion of the subject is appended to at least one of the first image and the second image ([0039] when the position of the tracking target object in the first image and the information pertaining to the optical axis direction of the second image capturing unit can be associated with each other in the other image processing device, the other image processing device can identify the optical axis direction of the second image capturing unit by having information pertaining to the optical axis direction, such as that described above, transmitted to the other image processing device; [0042] the reception processing unit may receive information on a movement direction of the tracking target object from the other image processing device, and the tracking control unit may perform control for the tracking using a predicted position of the tracking target object calculated based on a capture time of the first image and the movement direction received).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over HA et al. (U.S. Publication No. 2022/0294986) (hereafter, "HA") in view of Omari et al. (U.S. Publication No. 2018/0025498) (hereafter, "Omari").

Regarding claim 4, HA teaches all the limitations of claim 1 above.
HA teaches wherein the at least one processor further execute the stored instructions to ([0065] the processor 210 may perform a preset operation by executing an instruction or command stored in a memory): perform control to track a subject in an image captured by an image capturing apparatus; and ([0020] generating, in a first mode, an output image from first image data generated by a first camera module of which an angle of view is movable, controlling a direction of the angle of view of the first camera module while tracking a first object to change a field of view (FOV); [0128] the processor 210 may control the first camera module 110 to track a first object based on the first image data) … wherein the control to track the subject is performed based on a position and motion of the subject in the image captured by the image capturing apparatus ([0106] the processor 210 may move the first FOV to a location 602 and a location 604 sequentially while tracking the first object 130. However, when the first object 130 is expected to reach a boundary of the FOV movable range 114 of the first camera module 110 within a reference time period by considering a moving path and velocity of the first object 130, the processor 210 may predict that the first object 130 will leave the first FOV of the first camera module 110).

HA does not expressly teach acquire a predicted position of the subject from an external apparatus … and wherein, in a case where the predicted position is acquired from the external apparatus, the control to track the subject is performed based on the predicted position. However, Omari teaches acquire a predicted position of the subject from an external apparatus ([0081] the motion sensors 314a may be sensors of the external device 50 being held or attached to the subject S. The subject device motion estimate is determined from the information received from the sensors 314a) … and wherein, in a case where the predicted position is acquired from the external apparatus ([0081] the motion sensors 314a may be sensors of the external device 50 being held or attached to the subject S. The subject device motion estimate is determined from the information received from the sensors 314a), the control to track the subject is performed based on the predicted position ([0162] the tracking efficiency and robustness may be improved by using the motion estimates of the MIA 20 and the estimated position and velocity of the target T; [0148]).

It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of HA to incorporate the step/system of obtaining a motion estimate of the subject from an external device and tracking the subject by using the motion estimate of the subject taught by Omari. The suggestion/motivation for doing so would have been to improve the accuracy of tracking a subject ([0116] Once a subject or a target has been determined as present in a video stream as captured by an aerial subject tracking system or MIA 20, it is desirable to automatically or semi-automatically accurately frame the subject within the video image frames; [0005] A tracking system works best when locations of the movable imaging platform and subject can be accurately known. Global Positioning System receivers can be utilized to provide a reasonable degree of accuracy).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine HA and Omari to obtain the invention as specified in claim 4.

Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over HA et al. (U.S. Publication No. 2022/0294986) (hereafter, "HA") in view of SUK et al. (U.S. Publication No. 2020/0134842) (hereafter, "SUK").

Regarding claim 18, HA teaches all the limitations of claim 1 above. HA teaches wherein the at least one processor further executes the stored instructions to ([0065] the processor 210 may perform a preset operation by executing an instruction or command stored in a memory). HA does not expressly teach determine a tracking speed of the subject based on at least one of an interval for calculating a movement amount and movement speed of the subject, a threshold value concerning a movement amount of the subject, and a speed coefficient concerning a control speed of control concerning an image capturing direction of an imaging capturing apparatus. However, SUK teaches determine a tracking speed of the subject based on at least one of an interval for calculating a movement amount and movement speed of the subject, a threshold value concerning a movement amount of the subject, and a speed coefficient concerning a control speed of control concerning an image capturing direction of an imaging capturing apparatus ([0047] the trajectory calculation unit 220 may calculate a direction and/or a speed of the movement of the subject with reference to the calculated motion trajectory of the subject and the time intervals of photographing of the imaging module 300. For example, the trajectory calculation unit 220 may easily calculate a speed of the movement of the subject by dividing a distance between coordinates specified in the motion trajectory of the subject by the time interval of photographing of the imaging module 300 corresponding to the specified coordinates).

It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of HA to incorporate the step/system of calculating a speed of the movement by dividing the distance traveled between two points on its trajectory by the time interval between the corresponding images taught by SUK. The suggestion/motivation for doing so would have been to improve the accuracy of the motion trajectory of a subject ([0008] Another object of the invention is to accurately calculate a motion trajectory of a subject using only one camera; [0009] Yet another object of the invention is to accurately calculate a motion trajectory of a subject at low cost). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine HA and SUK to obtain the invention as specified in claim 18.

Regarding claim 19, the combination of HA and SUK teaches all the limitations of claim 18 above.
HA teaches wherein in a case there is a difference between a movement amount of the subject in the first image and a movement amount of the subject in the second image ([0120] a case in which a velocity of the first object 130 becomes lower than a highest moving velocity of the FOV of the first camera module 110 … when a velocity of the first object 130 is higher than the highest moving velocity of the first FOV of the first camera module 110), a tracking speed of the subject in an imaging apparatus that captures the image in which the movement amount of the subject is larger is set to be greater than a tracking speed of the subject in an imaging apparatus that captures the image in which the movement amount of the subject is smaller ([0120] when a velocity of the first object 130 is higher than the highest moving velocity of the first FOV of the first camera module 110, the electronic device 100 may determine that the first camera module 110 is incapable of tracking the first object 130, and, when a velocity of the first object 130 becomes lower than the highest moving velocity of the first FOV of the first camera module 110, the electronic device 100 may determine that the first camera module 110 is capable of tracking the first object 130; [0072] when the processor 210 predicts that the first object will leave the first FOV of the first camera module 110 while the processor 210 tracks and photographs the first object by the first camera module 110, the processor 210 may switch an input of image data from the first camera module 110 to the second camera module 120; [0068] the processor 210 may set a frame rate value of the second camera module 120 to a greater value in the second mode using the second camera module 120 for a main input; [0080] in the second mode, the first camera module 110 may have a low frame rate; [0050] a mode of generating an output image from first image data of the first camera module 110 is referred to as a first mode, and a mode of generating an output image from second image data of the second camera module 120 is referred to as a second mode).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over HA et al. (U.S. Publication No. 2022/0294986) (hereafter, "HA") in view of SUK et al. (U.S. Publication No. 2020/0134842) (hereafter, "SUK") and further in view of SATO (U.S. Publication No. 2021/0295540).

Regarding claim 20, the combination of HA and SUK teaches all the limitations of claim 19 above. HA teaches wherein the at least one processor further executes the stored instructions to ([0065] the processor 210 may perform a preset operation by executing an instruction or command stored in a memory). HA does not expressly teach acquire the difference in movement amount of the subject based on information representing a position and motion of the subject acquired from each of the first imaging apparatus and the second imaging apparatus.
However, SATO teaches acquire the difference in movement amount of the subject ([0039] The calculation unit 120 calculates a displacement of the measurement object) based on information representing a position and motion of the subject acquired from each of the first imaging apparatus and the second imaging apparatus ([0038] the acquiring unit 110 may use, as the first image and second image, still images captured by a plurality of cameras which are controlled to perform imaging at the same timing; [0039] The calculation unit 120 calculates a displacement of the second position, the displacement being based on the first position, by using the images acquired by the acquiring unit 110, and correlation information calculated based on configuration information of the imaging unit; [0112] The displacement measurement device 420 may calculate a displacement of the measurement object, based on the difference between the position of the movable point which is estimated from the position of the steady point, and the position of the movable point which was actually measured. Thereby, the displacement measurement device 420 can calculate the displacement of the movable point in real time).

It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of the combination of HA and SUK to incorporate the step/system of calculating displacement of the measurement object based on displacement of different positions acquired from a plurality of cameras taught by SATO. The suggestion/motivation for doing so would have been to improve the accuracy of measuring the displacement of a measurement object ([0008] An illustrative object of the present disclosure is to provide technology for measuring a displacement of a measurement object with high precision). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine HA and SUK with SATO to obtain the invention as specified in claim 20.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over HA et al. (U.S. Publication No. 2022/0294986) (hereafter, "HA") in view of SUK et al. (U.S. Publication No. 2020/0134842) (hereafter, "SUK") and further in view of SATO (U.S. Publication No. 2021/0295540) and Ishikawa (U.S. Publication No. 2012/0020524).

Regarding claim 21, the combination of HA, SUK and SATO teaches all the limitations of claim 20 above. HA teaches wherein the at least one processor further executes the stored instructions to ([0065] the processor 210 may perform a preset operation by executing an instruction or command stored in a memory). HA does not expressly teach acquire the difference in the movement amount of the subject, when it is detected that the subject has started to move after stopping a movement thereof, based on the information representing the position and motion of the subject acquired from each of the first imaging apparatus and the second imaging apparatus.
However, SATO teaches acquire the difference in the movement amount of the subject ([0039] The calculation unit 120 calculates a displacement of the measurement object) … based on the information representing the position and motion of the subject acquired from each of the first imaging apparatus and the second imaging apparatus ([0038] the acquiring unit 110 may use, as the first image and second image, still images captured by a plurality of cameras which are controlled to perform imaging at the same timing; [0039] The calculation unit 120 calculates a displacement of the second position, the displacement being based on the first position, by using the images acquired by the acquiring unit 110, and correlation information calculated based on configuration information of the imaging unit; [0112] The displacement measurement device 420 may calculate a displacement of the measurement object, based on the difference between the position of the movable point which is estimated from the position of the steady point, and the position of the movable point which was actually measured. Thereby, the displacement measurement device 420 can calculate the displacement of the movable point in real time). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of the combination of HA and SUK to incorporate the step/system of calculating displacement of the measurement object based on displacement of different positions acquired from a plurality of cameras taught by SATO. Motivation for this combination has been stated in claim 20.

The combination of HA and SATO does not expressly teach when it is detected that the subject has started to move after stopping a movement thereof. However, Ishikawa teaches when it is detected that the subject has started to move after stopping a movement thereof ([0192] the tracked object determination device of the present invention is applied to video obtained by tracking and shooting a state where the person A walking rightward suddenly stops and starts running rightward; [0193] The person A existing at the center of the screen starts walking at time t0. After stopping and standing still at time t3, the person A starts running toward the right direction at t6).

It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of the combination of HA and SATO to incorporate the step/system of tracking a state where a moving subject suddenly stops and starts moving taught by Ishikawa. The suggestion/motivation for doing so would have been to improve the tracking determination ([0244] The present invention is applicable to a tracked object determination device which determines whether a moving object appearing in video is a tracked object or not, for the generation of a list of tracked objects appearing in video, in order to generate summarized video by which the tracked object can be seized by specifying a section in which the tracked object appears or in order to extract a representative image group in which the tracked object can be seized by selecting a frame casting the tracked object clearly from among sections in which the tracked object appears, and a program for realizing the tracking target determination device on a computer).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine HA and SATO with Ishikawa to obtain the invention as specified in claim 21.

Allowable Subject Matter

Claim 6 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action. Claims 8 and 9 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL C. CHANG, whose telephone number is (571) 270-1277. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan S. Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL C CHANG/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

Oct 23, 2023: Application Filed
Mar 07, 2026: Non-Final Rejection under §102, §103, §112 (current)
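
For context, the pendency from filing to this first action can be computed directly from the two docketed dates above; a minimal standard-library sketch:

```python
# Days (and approximate months) from filing to the first Office Action.
from datetime import date

filed = date(2023, 10, 23)      # Application Filed
first_oa = date(2026, 3, 7)     # Non-Final Rejection

days = (first_oa - filed).days  # 866 days
print(f"{days} days, ~{days / 30.44:.1f} months to first action")  # ~28.5 months
```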

Precedent Cases

Applications in similar technology granted by this examiner

Patent 12592097: REAL-TIME, FINE-RESOLUTION HUMAN INTRA-GAIT PATTERN RECOGNITION BASED ON DEEP LEARNING MODELS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579672: STEREO VISION-BASED HEIGHT CLEARANCE DETECTION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573047: Control Method, Device, Equipment and Storage Medium for Interactive Reproduction of Target Object (granted Mar 10, 2026; 2y 5m to grant)
Patent 12548296: Spatially Preserving Flattening in Deep Learning Neural Networks (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541868: Image Registration Method and Apparatus, Electronic Apparatus, and Storage Medium (granted Feb 03, 2026; 2y 5m to grant)
Study what changed in each case to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 99% (+11.7%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 132 resolved cases by this examiner. Grant probability derived from career allow rate.
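
One caveat behind "derived from career allow rate": 132 resolved cases is a modest sample, so the 89% point estimate carries sampling noise. As an illustrative aside (the Wilson score interval below is my own addition, not something the tool claims to compute), the plausible range looks like this:

```python
# 95% Wilson score interval for the 117/132 career allow rate.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(117, 132)
print(f"Point estimate 88.6%; 95% interval ~{lo:.0%} to ~{hi:.0%}")  # ~82% to ~93%
```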
