Prosecution Insights
Last updated: April 19, 2026
Application No. 18/544,842

INFORMATION PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Final Rejection — §102, §112
Filed
Dec 19, 2023
Examiner
BEATTY, TY MITCHELL
Art Unit
2663
Tech Center
2600 — Communications
Assignee
Canon Kabushiki Kaisha
OA Round
2 (Final)
70%
Grant Probability
Favorable
3-4
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 70% — above average
70%
Career Allow Rate
19 granted / 27 resolved
+8.4% vs TC avg
Strong +42% interview lift
Without
With
+42.3%
Interview Lift
resolved cases with interview
Typical timeline
3y 1m
Avg Prosecution
15 currently pending
Career history
42
Total Applications
across all art units
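The headline figures above can be reproduced from the raw counts shown (19 granted of 27 resolved, 42 total applications). A minimal sketch, assuming the dashboard computes simple ratios; the variable names are illustrative:

```python
# Sketch: reproducing the examiner-intelligence figures from raw counts.
# Assumption: the dashboard uses plain ratios of the counts shown above.

granted, resolved, total = 19, 27, 42

career_allow_rate = granted / resolved   # 19/27 ≈ 0.704
pending = total - resolved               # 42 - 27 = 15

print(f"Career allow rate: {career_allow_rate:.0%}")  # 70%
print(f"Currently pending: {pending}")                # 15
```

This matches the "70% Career Allow Rate" and "15 currently pending" figures shown.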

Statute-Specific Performance

§101
7.1%
-32.9% vs TC avg
§103
42.8%
+2.8% vs TC avg
§102
27.1%
-12.9% vs TC avg
§112
23.1%
-16.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 27 resolved cases
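The "vs TC avg" deltas above are internally consistent: back-solving each row implies the same Tech Center average of about 40% for every statute. A hedged reconstruction, assuming each delta is a simple percentage-point difference (names illustrative):

```python
# Assumption: each "vs TC avg" figure is examiner_rate - tc_average,
# in percentage points. Back-solving the four rows gives the same
# implied Tech Center average for every statute.

examiner = {"101": 7.1, "103": 42.8, "102": 27.1, "112": 23.1}
delta    = {"101": -32.9, "103": 2.8, "102": -12.9, "112": -16.9}

implied_tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(implied_tc_avg)  # every statute back-solves to 40.0
```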

Office Action

§102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The Amendment filed 8 December 2025 (hereinafter “the Amendment”) has been entered and considered. Claims 1, 7, 9, 10, 12, 13, and 14-16 have been amended. Claims 1-16, all the claims pending in the application, are rejected. All modifications to the rejection set forth in the present action were necessitated by Applicants’ claim amendments; accordingly, this action is made final.

Response to Amendment

2. In view of the amendments to independent claims 1 and 14-16, the rejections below have been clarified to address the new claim language.

112(b) Rejections: The rejection of claim 10 under 35 U.S.C. §112(b) is withdrawn in view of the amendment.

112(f) Claim Interpretation: The 35 U.S.C. §112(f) interpretations for claims 1, 7, 9, 12, and 14 are withdrawn in view of the amendments. The 35 U.S.C. §112(f) interpretations for claims 10, 11, and 13 are maintained (see “determination unit”).

Prior Art Rejections: On page 11 of the Amendment, the Applicant contends that Yoo does not teach or suggest “determine whether a subject has made a predetermined movement based on images from at least one of a plurality of image capturing units by determining whether at least one of the plurality of image capturing units is capturing subject part necessary for detecting a predetermined movement, and by determining the predetermined movement from an image captured by the image capturing unit which is capturing the subject part.”
The Examiner respectfully disagrees and contends that when the camera is capturing the whole subject it is also capturing the “subject part necessary for detecting a predetermined movement,” as disclosed by Yoo in P[0088]: “movement of an object may be recognized in the analyzed image data, and the main camera may be selected based on the recognized movement of the object.”, and furthermore in P[0151]: “a camera located in a gesture direction based on a front camera which the user faces may be switched to the main camera. For example, when the user waves his/her hand (or arm) to the left”, where the hands/arms are specific parts of the subjects captured necessary for detecting a predetermined movement/motion. In view of the foregoing, Yoo does indeed teach the newly added features of independent claim 1. Accordingly, the prior art rejections based on Yoo are maintained. Applicants assert the same arguments for independent claims 14-16 and all the dependent claims. These arguments are addressed above.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

3. Claims 10, 11, and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C.
112, the applicant), regards as the invention. Claims 10, 11, and 13 recite the limitation “the determination unit”. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

4. Claims 1-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20210076080 A1: Eunkyung Yoo et al. (hereinafter “Yoo”).

Regarding claim 1, An information processing apparatus comprising at least one processor, and a memory coupled to the at least one processor, the memory storing instructions that, when executed by the at least one processor, cause the at least one processor to (Yoo, Fig.
2, and §Abstract: “a processor electrically connected to the communication circuit”): determine whether a subject has made a predetermined movement based on images from at least one of a plurality of image capturing units by determining whether at least one of the plurality of image capturing units is capturing subject part necessary for detecting a predetermined movement, and by determining the predetermined movement from an image captured by the image capturing unit which is capturing the subject part, (Yoo, P[0088]: “movement of an object may be recognized in the analyzed image data, and the main camera may be selected based on the recognized movement of the object.”, and P[0151]: “a camera located in a gesture direction based on a front camera which the user faces may be switched to the main camera. For example, when the user waves his/her hand (or arm) to the left”, where the hands/arms are specific parts of the subjects captured necessary for detecting a predetermined movement/motion.) 
and select an image from a predetermined image capturing unit that has been associated with the predetermined movement from among images from the plurality of the image capturing units and output the image to a display unit or a recording unit, in a case in which it has been determined that the predetermined movement has been performed (Yoo, P[0086]: “image data captured by each camera may be analyzed and a camera having analyzed image data that satisfies a preset condition may be selected as the main camera.”, and P[0084]: “A server … the processor may be configured to receive image data of a first camera selected as a main camera among image data captured by a plurality of cameras through the communication circuit, transmit the received image data of the first camera to a real-time broadcasting server through the communication circuit, and when a second camera of the plurality of cameras satisfies a selection condition of a predetermined main camera, transmit image data of the second camera to the real-time broadcasting server through the communication circuit.”).

Regarding claim 2, wherein the predetermined movement includes a movement related to the orientation of a predetermined subject (Yoo, P[0087]: “a direction of a face may be recognized in the analyzed image data”).

Regarding claim 3, wherein the predetermined movement includes a movement related to the orientation of the face or the orientation of the gaze of a predetermined subject (Yoo, P[0087]: “a direction of a face may be recognized in the analyzed image data”).

Regarding claim 4, wherein the predetermined movement includes a movement related to a hand gesture of a predetermined subject (Yoo, Fig. 13A, P[0151]: “For example, when the user waves his/her hand (or arm) to the left”, and P[0092]: “a user gesture may be determined by each camera, and the main camera may be selected based on the determined user gesture.”).
Regarding claim 5, wherein the predetermined movement includes a movement related to a facial expression of a predetermined subject (Yoo, P[0075]: “The face recognizer 313a may perform a function of recognizing a face in image data captured by the camera”, and P[0116]: “to track a user face direction (user's eyes)”, where the direction of the user's eyes is related to facial expression). See attached Wikipedia article, showing a definition of “facial expression” where the eyes are related to a facial expression and the direction of the gaze may indicate various emotions/feelings. This is not proposed as a modification of Yoo, but rather as showing that Yoo describes this feature as properly construed.

Regarding claim 6, wherein the predetermined movement (Yoo, P[0088]: “movement of an object may be recognized in the analyzed image data”) includes a motion or pose of a predetermined subject (Yoo, P[0134]: “the cloud server may analyze motion of the user or an object on the basis of image data captured by each camera”).

Regarding claim 7, wherein the predetermined movement includes sign language, and the memory stores instructions that, when executed by the at least one processor, cause the at least one processor to output an image from the predetermined image capturing unit that has been designated by the content of the sign language to a display unit or a recording unit (Yoo, Fig. 13B-13C and P[0152]: “when the user points at the front camera which the user faces with his/her finger, the corresponding camera may be switched to the main camera. Referring to FIG. 13C, according to various embodiments, when the user takes a pinch-in gesture or a pinch-out gesture with his/her finger toward a front camera which the user faces, the corresponding camera may perform a zoom-in function or a zoom-out function.”).
Regarding claim 8, wherein the predetermined movement includes a motion of a predetermined subject (Yoo, P[0134]: “the cloud server may analyze motion of the user or an object on the basis of image data captured by each camera”).

Regarding claim 9, wherein the memory stores instructions that, when executed by the at least one processor, cause the at least one processor to select an image from the image capturing unit that is present in the direction (Yoo, P[0151]: “a camera located in a gesture direction based on a front camera which the user faces may be switched to the main camera. For example, when the user waves his/her hand (or arm) to the left, a camera on the left may be switched to the main camera.”) of the predetermined movement and to output the image to a display unit or a recording unit (Yoo, Fig. 1).

Regarding claim 10, as best understood, wherein the determination unit is configured to determine the predetermined movement made by at least one subject (Yoo, P[0088]: “movement of an object may be recognized in the analyzed image data”).

Regarding claim 11, as best understood, wherein the determination unit is configured to perform a determination based on an image from one image capturing unit or images from a plurality of image capturing units (Yoo, P[0153]: “Real-time event data which is transmitted by each camera and received through a data receiver of the cloud server may be transferred to a condition determiner of the cloud server. The condition determiner may compare a type of the received event data with an interaction item defined in a template selected by the user.”).
Regarding claim 12, wherein the at least one processor executes instructions that cause the at least one processor to register the predetermined image capturing unit that has been associated with the predetermined movement (Yoo, P[0079]: “The device register 434 may register at least one camera to be connected for real-time broadcasting with respect to each user account for the real-time broadcasting, and the device storage unit 435 may store information on the registered device in the memory.”, therefore the camera which captures the movement is registered.).

Regarding claim 13, as best understood, wherein the memory stores instructions that, when executed by the at least one processor, cause the at least one processor to display for a predetermined time an image from the predetermined image capturing unit that has been associated with the predetermined movement among images from a plurality of image capturing units, in a case in which it has been determined by the determination unit that a predetermined movement has been performed (Yoo, Fig. 15 shows multiple interactions from detected movements being broadcasted live for predetermined amounts of time.).

Regarding claim 14, An image capturing apparatus comprising: an image capturing unit (Yoo, Fig. 1 and Fig. 2), and at least one processor (Yoo, Fig. 1 and Fig.
2); and a memory coupled to the at least one processor, the memory storing instructions that, when executed by the at least one processor, cause the at least one processor to (Yoo, P[0067]: “processor 201, a Graphic Processing Unit (GPU) 202, a memory 203”): determine whether a subject has made a predetermined movement based on images from at least one of a plurality of image capturing units by determining whether at least one of the plurality of image capturing units is capturing subject part necessary for detecting a predetermined movement, and by determining the predetermined movement from an image captured by the image capturing unit which is capturing the subject part (Yoo, P[0088]: “movement of an object may be recognized in the analyzed image data, and the main camera may be selected based on the recognized movement of the object.”, and P[0151]: “a camera located in a gesture direction based on a front camera which the user faces may be switched to the main camera. For example, when the user waves his/her hand (or arm) to the left”, where the hands/arms are specific parts of the subjects captured necessary for detecting a predetermined movement/motion.), and output a determination result to an external apparatus in a case in which it has been determined that the predetermined movement has been performed (Yoo, Fig. 1, Smart Phone (output)), and perform a predetermined operation based on a signal received from the external apparatus in response to the output of the determination result to the external apparatus (Yoo, Fig. 1, Smart Phone and P[0059]: “According to various embodiments, the electronic device 123 may perform at least some of the functions of the cloud server 110. For example, the electronic device 123 may select the main camera from the plurality of cameras 121 and 122 according to a preset condition.”). Claims 15 and 16 recite features nearly identical to those recited in claim 1. 
Claims 15 and 16 are rejected for reasons analogous to those discussed above in conjunction with claim 1.

Conclusion

5. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TY M BEATTY whose telephone number is (703) 756-5370. The examiner can normally be reached Mon-Fri, 8AM-4PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TY MITCHELL BEATTY/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Dec 19, 2023
Application Filed
Sep 30, 2025
Non-Final Rejection — §102, §112
Dec 08, 2025
Response Filed
Mar 04, 2026
Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597275
VEHICLE INTERIOR MONITORING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12579653
AUTOMATED METHOD FOR TOOTH SEGMENTATION OF THREE DIMENSIONAL SCAN DATA USING TOOTH BOUNDARY CURVE AND COMPUTER READABLE MEDIUM HAVING PROGRAM FOR PERFORMING THE METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12555212
OBJECT DETECTION DEVICE AND METHOD FOR DETECTING MALFUNCTION OF OBJECT DETECTION DEVICE
2y 5m to grant Granted Feb 17, 2026
Patent 12511787
METHOD, DEVICE AND SYSTEM OF POINT CLOUD COMPRESSION FOR INTELLIGENT COOPERATIVE PERCEPTION SYSTEM
2y 5m to grant Granted Dec 30, 2025
Patent 12511750
IMAGE PROCESSING METHOD AND APPARATUS BASED ON IMAGE PROCESSING MODEL, ELECTRONIC DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+42.3%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
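One plausible way the 99% "with interview" figure could follow from the 70% baseline and the +42.3% lift. This is a sketch only: the multiplicative lift model and the 99% cap are assumptions, not the tool's documented formula.

```python
# Assumed model (not confirmed by the source): the interview lift is
# relative (multiplicative), and the result is capped below 100%,
# since no examiner grants every interviewed case.

baseline = 0.70        # career allow rate, used as grant probability
interview_lift = 0.423 # +42.3% relative lift for interviewed cases

with_interview = min(baseline * (1 + interview_lift), 0.99)

print(f"With interview: {with_interview:.0%}")  # 99%
```

Under these assumptions, 0.70 × 1.423 ≈ 0.996, which the cap trims to the displayed 99%.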
