Prosecution Insights
Last updated: April 19, 2026
Application No. 18/456,812

CONTROL APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Non-Final OA (§103)

Filed: Aug 28, 2023
Examiner: KRAYNAK, JACK PETER
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 78%, above average (75 granted / 96 resolved; +16.1% vs TC avg)
Interview Lift: +18.8%, strong (resolved cases with interview vs. without)
Typical Timeline: 3y 1m avg prosecution (18 currently pending)
Career History: 114 total applications across all art units

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§102: 27.3% (-12.7% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 96 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 7-9, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Saito et al. (US 20210258490 A1) in view of Shamshiri et al. (US 20210012119 A1).

Regarding claim 1, Saito et al. teaches a control apparatus comprising: an acceleration detection unit configured to detect an acceleration (see Fig. 3, 302 and Para. 31 regarding acceleration detected by a gyro sensor: "the motion information acquisition unit 105 acquires detection data of the gyro sensor or the acceleration sensor which is mounted in the external device as the motion information of the external device. In another example related to the second motion information, it is possible to acquire and use detection information of a traveling operation state such as the steering angle of a vehicle or depression amounts of an accelerator and a brake." I.e., the acceleration detection unit configured to detect an acceleration is the gyro sensor); at least one processor; and at least one memory in communication with the at least one processor (Para. 88 teaches a processor and memory), the at least one memory storing instructions that, when executed by the processor, cause the processor to: determine a motion vector from video data within a vehicle obtained by an image capturing unit (Fig. 11 and Paras. 77-79: an image 1101 is captured inside the vehicle 402 by the imaging device 401; a rectangular region 1102 within the image 1101 represents a vehicle window, and the region other than the rectangular region 1102 represents a region inside the vehicle such as an interior part, i.e., determine a motion vector from video data within a vehicle); and determine, based on a result of the determination for each position, as an out-of-vehicle region, a region in which an object outside of the vehicle within the video data appears (Paras. 77-79: the image analysis unit 801 determines whether each of the motion vectors detected from a plurality of regions in an image is detected from the region inside or outside the vehicle, and determines that a region in which more vectors are detected is the region (region to be imaged) which is imaged by the photographer 403 with the imaging device 401, i.e., determine the region (outside of the vehicle) in which motion vectors occur by comparing the change reported by the vehicle gyro (accelerometer) and the image motion vectors).

Regarding the limitation "determine whether there is a correlation between an amount of change of the determined motion vector and the detected acceleration during a predetermined period" (partially struck through in the original action): Saito et al. teaches determining a correlation between a motion vector magnitude (Fig. 11 and Paras. 77-79: the image analysis unit 801 calculates a difference between the amount of motion acquired from the gyro sensor 406 of the vehicle 402 and the magnitude of the motion vector detected from the captured image, i.e., determining a correlation between the acquired motion (acceleration information, see Fig. 3, 302 and Para. 31) and the magnitude of the motion vector detected from the captured image) and a detected acceleration (see Fig. 3, 302 and Para. 31, quoted above) during a predetermined period (Fig. 11 and Paras. 77-79: there is a difference in the motion vectors detected in the regions inside and outside the vehicle 402; if pieces of motion information are compared with each other using this, it can be determined whether the imaging device 401 captures an image of the inside or the outside of the vehicle, i.e., because motion is captured using images, the preset period is the time between frames of the captured video discussed in Para. 29, "sequentially generates frame images"). However, Saito et al. does not specifically teach an amount of change of a determined motion vector, and does not teach a "fixed" camera, as marked by the strikethrough in the original action.

In a similar field of endeavor, Shamshiri et al. teaches using an amount of change of a determined motion vector (Para. 157: the image processing module 8a is configured to track the movements of each of the image components IMC(n) in the image IMG1a. In particular, the image processing module 8a determines a movement vector V(n) for each discrete image component IMC(n). The movement vectors V(n) each comprise a magnitude and a direction. The image processing module 8a may optionally also determine a rate of change of the magnitude and/or the direction of the movement vectors V(n) (representative of linear acceleration and/or rotational acceleration). In accordance with an embodiment, the image processing module 8a applies a correction factor to the movement vector V(n) of the target image component in dependence on the movement vector(s) V(n) of one or more non-target image components. [...] The image processing module 8a considers these non-target image components as having a fixed geospatial location (i.e., they are a static or stationary feature) and that their movement in the image IMG1a is due to local movement of the optical sensor 11a, for example as a result of movement of the host vehicle 2a. I.e., an amount of change of a motion vector which is correlated with (associated with) movement of the vehicle detected by an accelerometer, see Fig. 4c) detected in an image and correlating it with a detected acceleration (Fig. 4c: the measured movement of the target vehicle from the camera and the movement of the host vehicle from the IMU are compared) for each position of features in the video data. In regard to the fixed camera, Shamshiri et al. teaches a fixed camera (Para. 152 and Fig. 2a: the sensing means 7a is mounted in a forward-facing orientation to establish a detection region in front of the host vehicle 2a; the sensing means 7a comprises at least one optical sensor 11a mounted to the host vehicle 2a, i.e., a mounted (fixed) camera inside the vehicle facing out).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Saito et al. (US 20210258490 A1) in view of Shamshiri et al. (US 20210012119 A1) so that the system correlates a change in the amount of the motion vectors detected in an image during a time period and uses a fixed camera. Doing so would improve the object detection system, for example over a rough surface: the modified movement vector may provide more accurate positioning information of the target object relative to the vehicle (Para. 17, Shamshiri et al.).

Regarding claim 2, Saito et al. teaches the control apparatus according to claim 1, wherein the processor is further caused to determine, as a motion vector of a moving object within the vehicle, a motion vector lacking correlation to a motion vector of the out-of-vehicle region from among the determined motion vectors, in relation to the video data obtained by the image capturing unit (Para. 78 and Fig. 11: in this manner, there is a difference in the motion vectors detected in the regions inside and outside the vehicle 402, i.e., determine a difference (lack of correlation) between a motion vector inside the vehicle and a motion vector outside the vehicle from among the determined motion vectors).

Regarding claim 3, Saito et al. teaches the control apparatus according to claim 1, wherein the processor is further caused to determine, as a motion vector of a moving object within the vehicle, a motion vector that deviates from the out-of-vehicle region from among the determined motion vectors, in relation to the video data obtained by the image capturing unit (Para. 78 and Fig. 11: if the calculated difference is equal to or less than the threshold, the image analysis unit 801 determines that the detected motion vector is a motion vector detected from the region inside the vehicle in the image; if the calculated difference is larger than the threshold, the image analysis unit 801 determines that the detected motion vector is a motion vector detected from the region outside the vehicle in the image, i.e., an inside-vehicle or outside-vehicle motion vector deviates (differs by more than the threshold) from motion vectors of the opposite region).
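Technical aside: the Saito logic cited above reduces to comparing each local motion-vector magnitude against a sensor-derived motion value and thresholding the difference. Below is a minimal Python sketch of that comparison, assuming per-block optical-flow magnitudes and a scalar gyro value; the function name, grid values, and threshold are illustrative assumptions, not code from any cited reference.

```python
# Sketch of the thresholding Saito is cited for (Paras. 77-79): label a block
# as out-of-vehicle when its motion-vector magnitude departs from the amount
# of motion implied by the gyro/acceleration sensor by more than a threshold.
# Function name, grid values, and threshold are illustrative assumptions.
import numpy as np

def classify_out_of_vehicle(flow_magnitudes: np.ndarray,
                            sensor_motion: float,
                            threshold: float = 2.0) -> np.ndarray:
    """Return a boolean mask (True = out-of-vehicle region candidate).

    flow_magnitudes: per-block motion-vector magnitudes for one frame pair.
    sensor_motion: scalar motion amount derived from the gyro/accelerometer.
    """
    # Interior parts move with the camera, so their apparent motion tracks the
    # sensor-derived value; scenery seen through the window does not.
    return np.abs(flow_magnitudes - sensor_motion) > threshold

# Illustrative 3x4 grid: the right-hand blocks behave like a window region.
flow = np.array([[0.4, 0.5, 6.1, 5.8],
                 [0.3, 0.6, 5.9, 6.3],
                 [0.2, 0.4, 0.5, 0.3]])
print(classify_out_of_vehicle(flow, sensor_motion=0.5))
```

Blocks whose apparent motion departs from the sensor-implied motion are the candidates for the out-of-vehicle (window) region.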
Regarding claims 7-9, claims 7-9 are rejected for the same reasons as claims 1-3 in the combination above, respectively. Regarding claim 13, claim 13 is rejected for the same reason as claim 1 in the combination above.

Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Saito et al. (US 20210258490 A1) in view of Shamshiri et al. (US 20210012119 A1) and Nishida et al. (US 5210566 A).

Regarding claim 4, Saito et al. and Shamshiri et al. in combination do not teach the control apparatus according to claim 1, wherein the processor is further caused to control the image capturing unit to perform an appropriate exposure in relation to a region indicated by a motion vector for which it is determined that there is a correlation in the preset period. In a similar field of endeavor, Nishida et al. teaches this limitation (Col. 3, line 61 - Col. 4, line 18: a photographic optical controlling apparatus for controlling exposure based on an image signal obtained at a photometric area designated on a photographic screen, said apparatus comprising: a motion vector detecting means for detecting the motion of an image from the correlation of two time-continuous image data; and a photometric area controlling means which detects the motion of the main object from the output of said motion vector detecting means, and causes the photometric area to follow the motion of the main object. I.e., the motion vector detecting means detects a region indicated by a motion vector for which it is determined that there is a correlation (between the motion in the two images; the time period is the interval between the two images, which depends on the frame rate) and changes the photometric area, i.e., the exposure (performs an appropriate exposure), based on this correlation). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to further modify Saito et al. (US 20210258490 A1) in view of Shamshiri et al. (US 20210012119 A1) with Nishida et al. (US 5210566 A) so that the processor is further caused to control the image capturing unit to perform an appropriate exposure in relation to a region indicated by a motion vector for which it is determined that there is a correlation in the preset period. Doing so would allow the focusing or exposure controlling operation to be carried out accurately, following the motion of a main object irrespective of the positional change of the object on the screen (Abstract, Nishida et al.).

Regarding claim 10, claim 10 is rejected for the same reasons as claim 4 in the combination above.
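Technical aside: Nishida's photometric-area tracking amounts to shifting the metering window along the main object's motion vector before metering. A hedged Python sketch under simple assumptions (rectangular window, mean-luminance metering; all names are hypothetical):

```python
# Sketch of photometric-area tracking in the spirit of Nishida: shift the
# metering window by the main object's motion vector, then meter inside it.
# Window geometry and mean-luminance metering are simplifying assumptions.
import numpy as np

def follow_and_meter(frame: np.ndarray,
                     window: tuple,
                     motion_vec: tuple):
    """window = (top, left, height, width); motion_vec = (dy, dx) in pixels.
    Returns the shifted window and its mean luminance for exposure control."""
    top, left, h, w = window
    dy, dx = motion_vec
    # Keep the shifted window on the photographic screen.
    top = int(np.clip(top + dy, 0, frame.shape[0] - h))
    left = int(np.clip(left + dx, 0, frame.shape[1] - w))
    patch = frame[top:top + h, left:left + w]
    return (top, left, h, w), float(patch.mean())

frame = np.random.default_rng(0).integers(0, 256, size=(480, 640))
window, luma = follow_and_meter(frame, (200, 300, 64, 64), motion_vec=(5, -12))
print(window, luma)  # exposure would then be driven toward a target luminance
```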
Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Saito et al. (US 20210258490 A1) in view of Shamshiri et al. (US 20210012119 A1) and Hamano et al. (US 20220242316 A1).

Regarding claim 5, Saito et al. and Shamshiri et al. in combination do not teach the control apparatus according to claim 1, wherein the processor is further caused to store the determined out-of-vehicle region until a predetermined condition is satisfied, and to execute processing for determining a new out-of-vehicle region in a case where the predetermined condition is satisfied. In a similar field of endeavor, Hamano et al. teaches this limitation (Para. 7: based on one or more of an output from a front-view camera that captures an image of an outside area ahead of the vehicle, an output from a vehicle speed sensor, and an output from an in-vehicle camera that captures an image of a driver, when the environment determination unit predicts an occurrence of a backward movement of the vehicle, the rear-view camera is switched from a stopped state to an activated state, and the image processing unit starts the predetermined image processing on the data of the image received from the rear-view camera in the activated state. I.e., store (capture and process; the image is sent to the display) the determined out-of-vehicle region (the area ahead of the vehicle) until a predetermined condition is satisfied (an occurrence of backward movement of the vehicle), and execute processing for a new out-of-vehicle region (from the rear-view camera instead of the front-view camera) when the predetermined condition is satisfied (the rear-view camera is switched from a stopped state to an activated state when the backward movement occurs)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to further modify Saito et al. (US 20210258490 A1) in view of Shamshiri et al. (US 20210012119 A1) with Hamano et al. (US 20220242316 A1) so that the processor is further caused to store the determined out-of-vehicle region until a predetermined condition is satisfied, and to execute processing for determining a new out-of-vehicle region in a case where the predetermined condition is satisfied. Doing so would allow prompt switching of the display to the output image from the image processing unit when the user performs a backward movement operation (Hamano et al., Para. 13).

Regarding claim 11, claim 11 is rejected for the same reasons as claim 5 in the combination above.
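Technical aside: the "store the region until a predetermined condition, then redetermine" behavior mapped to Hamano is essentially a cache with condition-driven invalidation. A minimal sketch, with the condition flag and the recompute callback as hypothetical stand-ins for whatever the apparatus actually uses:

```python
# Sketch of "store the out-of-vehicle region until a predetermined condition,
# then redetermine": a cache invalidated by the condition. The condition flag
# and the determine_region callback are hypothetical stand-ins (e.g., the
# condition could be predicted backward movement, as in Hamano).
from typing import Callable, Optional
import numpy as np

class OutOfVehicleRegionCache:
    def __init__(self, determine_region: Callable[[], np.ndarray]):
        self._determine = determine_region
        self._stored: Optional[np.ndarray] = None

    def get(self, condition_satisfied: bool) -> np.ndarray:
        # Redetermine on first use or whenever the predetermined condition
        # holds; otherwise return the stored out-of-vehicle region.
        if self._stored is None or condition_satisfied:
            self._stored = self._determine()
        return self._stored

cache = OutOfVehicleRegionCache(lambda: np.zeros((3, 4), dtype=bool))
cache.get(condition_satisfied=False)  # computed once, then reused
cache.get(condition_satisfied=True)   # condition met: region redetermined
```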
Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Saito et al. (US 20210258490 A1) in view of Shamshiri et al. (US 20210012119 A1) and Jain et al. (US 20200348136 A1).

Regarding claim 6, Saito et al. and Shamshiri et al. in combination do not teach the control apparatus according to claim 1, wherein the processor is further caused to, in a case of detecting acceleration of at least two or more different axes, output the acceleration of the axis for which an amount of change is largest in relation to a movement of the vehicle. In a similar field of endeavor, Jain et al. teaches this limitation (Para. 79: the orthogonal transformation minimizes the correlation between the longitudinal and lateral axis acceleration; the orthogonal transformation method is used to identify the axis in which the most significant motion occurs (the vehicle's motion in the longitudinal direction in a straight line) while minimizing the correlation between the two horizontal acceleration vectors. I.e., determine the acceleration for the axis for which the amount of change is largest (the axis in which the most significant motion occurs) relative to the other axis or axes). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to further modify Saito et al. (US 20210258490 A1) in view of Shamshiri et al. (US 20210012119 A1) with Jain et al. (US 20200348136 A1) so that the processor is further caused to, in a case of detecting acceleration of at least two or more different axes, output the acceleration of the axis for which an amount of change is largest in relation to a movement of the vehicle. Doing so would allow for an improved device and method of accurately analyzing the location of the user, which is especially beneficial when GPS/GNSS signals are not strong enough or not available (Para. 4, Jain et al.).

Regarding claim 12, claim 12 is rejected for the same reasons as claim 6 in the combination above.
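Technical aside: the orthogonal transformation the action cites from Jain for finding the axis of most significant motion can be sketched as a PCA-style projection of multi-axis accelerometer samples. Here np.linalg.eigh on the sample covariance stands in for the reference's transformation, and all data are synthetic:

```python
# Sketch of picking the acceleration axis with the largest change via an
# orthogonal (PCA-style) transformation. This is an assumed stand-in for the
# method the action cites from Jain, not the reference's actual code.
import numpy as np

def dominant_axis_acceleration(samples: np.ndarray) -> np.ndarray:
    """samples: N x 3 accelerations (x, y, z). Returns the series projected
    onto the orthogonal axis showing the most significant motion."""
    centered = samples - samples.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    return centered @ eigvecs[:, -1]  # eigenvalues ascend; take the largest

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
acc = np.stack([2.0 * np.sin(t),                  # strong longitudinal motion
                0.1 * rng.standard_normal(200),   # lateral noise
                0.05 * rng.standard_normal(200)], # vertical noise
               axis=1)
print(dominant_axis_acceleration(acc)[:5])  # dominated by the sinusoidal axis
```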
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US-20220292686-A1; US-20180307914-A1; US-20180012368-A1; US-20170278252-A1; US-20150317834-A1.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK PETER KRAYNAK, whose telephone number is (703) 756-1713. The examiner can normally be reached Monday - Friday, 7:30 AM - 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JACK PETER KRAYNAK/
Examiner, Art Unit 2668

/UTPAL D SHAH/
Primary Examiner, Art Unit 2668

Prosecution Timeline

Aug 28, 2023
Application Filed
Nov 17, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602819
IMAGE PROCESSING APPARATUS, FEATURE MAP GENERATING APPARATUS, LEARNING MODEL GENERATION APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592065
SYSTEMS AND METHODS FOR OBJECT DETECTION IN EXTREME LOW-LIGHT CONDITIONS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586210
BIDIRECTIONAL OPTICAL FLOW ESTIMATION METHOD AND APPARATUS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579720
METHOD OF GENERATING TRAINED MODEL, MACHINE LEARNING SYSTEM, PROGRAM, AND MEDICAL IMAGE PROCESSING APPARATUS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12568314
IMAGE SIGNAL PROCESSOR, METHOD OF OPERATING THE IMAGE SIGNAL PROCESSOR, AND APPLICATION PROCESSOR INCLUDING THE IMAGE SIGNAL PROCESSOR
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 97% (+18.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 96 resolved cases by this examiner. Grant probability is derived from the career allow rate.
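For readers checking the arithmetic, here is how these figures plausibly fit together, assuming grant probability is simply the career allow rate and the with-interview figure adds the stated lift; the tool's actual model may differ:

```python
# How the headline numbers plausibly fit together, assuming the footnote's
# simple derivation (career allow rate as grant probability, plus the stated
# interview lift). The tool's real model may be more involved.
granted, resolved = 75, 96
allow_rate = granted / resolved               # 0.78125 -> shown as 78%
with_interview = allow_rate + 0.188           # +18.8% lift -> about 97%
print(f"{allow_rate:.1%} base, {with_interview:.1%} with interview")
```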
