Prosecution Insights
Last updated: April 19, 2026
Application No. 18/500,185

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Non-Final OA (§103, §112)
Filed: Nov 02, 2023
Examiner: HYTREK, ASHLEY LYNN
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89%, above average (74 granted / 83 resolved; +27.2% vs TC avg)
Interview Lift: +11.8% across resolved cases with interview (moderate lift)
Avg Prosecution: 3y 0m
Total Applications: 95 across all art units (12 currently pending)

Statute-Specific Performance

§101: 13.8% (-26.2% vs TC avg)
§103: 51.0% (+11.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center averages are estimates based on career data from 83 resolved cases.

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 11/02/2023 and 11/20/2023 have been made of record and considered by the examiner.

Claim Objections

Claim 12 is objected to because of the following minor informality: ‘information indicting a position’ is recited; ‘indicting’ is assumed to be a typo for ‘indicating.’ Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 13, and 14 recite the limitation "an imaging apparatus … the part," which leads to ambiguity regarding whether the captured images/camera parameters are from one or multiple imaging apparatuses.
The Examiner respectfully recommends a fix along the lines of: ‘and generating the three-dimensional shape data corresponding to the foreground object in the second generation space based on image capturing parameters of one or more of the plurality of imaging apparatuses, a captured image obtained by image capturing by the one or more of the imaging apparatuses, and information indicating the second generation space,’ OR ‘and generating the three-dimensional shape data corresponding to the foreground object in the second generation space based on image capturing parameters of an imaging apparatus, which is at least one of the plurality of imaging apparatuses, a captured image obtained by image capturing by the one or more of the imaging apparatuses, and information indicating the second generation space.’ Appropriate correction is required.

Regarding claim 8, the claim is generally narrative and indefinite, failing to conform with current U.S. practice. See ‘capable of’ and ‘assumed.’ Appropriate correction is required.

Regarding claim 11, the claim is generally narrative and indefinite, failing to conform with current U.S. practice. Claim 11 recites ‘an image area in which the foreground object is captured in part of a captured image’; ‘in part’ leads to ambiguity regarding the scope of the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5-6, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Mora (US 2015/0178988 A1), in further view of Ito (JP2020187528A).

Consider claims 1, 13, and 14: Mora discloses an image processing apparatus/method/(non-transitory computer readable storage medium storing a program for causing a computer to perform a control method of an apparatus) generating three-dimensional shape data corresponding to a foreground object for which synchronous image capturing is performed by a plurality of imaging apparatuses, the image processing apparatus/method (FIGs. 2, 24; ¶98-99; “local and peripheral cameras are synchronized using a common trigger source”; ¶100, foreground segmentation; ¶108, Volumetric Shape from Silhouette; ¶125, Local High Accuracy Mesh Generation) comprising: one or more hardware processors (¶169-170); and one or more memories storing one or more programs configured to be executed by the one or more hardware processors, the one or more programs including instructions for (¶169-170): identifying an unnecessary space, which is a space unnecessary in a case where three-dimensional shape data is generated, from a first generation space in a virtual space corresponding to an image capturing-target space (¶100-107, shadow removal; ¶109-116, Volumetric shape from silhouette; “Check occupancy for each voxel… If the projection is outside at least one silhouette the voxel is empty”; ¶149; “For each image, its input foreground mask is eroded in order to remove any remaining background pixels from the silhouette contour.”); generating information indicating a second generation space after the unnecessary space is deleted (¶172; “6. The obtained subset of global masks is used to extract the visual hull of the RHM. A 3D scalar field expressed in voxels is obtained”); and generating the three-dimensional shape data corresponding to the foreground object in the second generation space based on image capturing parameters of an imaging apparatus (¶98-101), which is at least part of the plurality of imaging apparatuses, a captured image obtained by image capturing by the part of the imaging apparatuses, and information indicating the second generation space (¶120, 171-172; “The obtained subset of global masks is used to extract the visual hull of the RHM. A 3D scalar field expressed in voxels is obtained. Then a global 3D polygonal mesh is obtained by applying the marching cubes algorithm to this volume.”).
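Mora's quoted occupancy test ("If the projection is outside at least one silhouette the voxel is empty") is the classic shape-from-silhouette carve. As an illustration only, a minimal sketch of that test; the toy orthographic cameras and 3x3 silhouettes are invented for this example and stand in for Mora's actual calibrated pipeline:

```python
import numpy as np

def carve_voxels(voxel_centers, cameras, silhouettes):
    """Shape-from-silhouette occupancy test: a voxel stays occupied only if
    every camera that sees it projects it inside that camera's silhouette.
    Cameras whose projection falls outside the image are ignored, matching
    the quoted rule that only cameras where the voxel projects inside the
    image take part in the projection test."""
    occupied = np.ones(len(voxel_centers), dtype=bool)
    for cam, sil in zip(cameras, silhouettes):
        h, w = sil.shape
        for i, v in enumerate(voxel_centers):
            if not occupied[i]:
                continue  # already carved away
            u, r = cam(v)  # hypothetical projection to (column, row)
            if 0 <= u < w and 0 <= r < h and not sil[r, u]:
                occupied[i] = False  # outside at least one silhouette: empty
    return occupied

# Toy example: a 3x3x3 voxel grid seen by two orthographic views,
# each with a 2x2 foreground silhouette inside a 3x3 image.
grid = np.array([[x, y, z] for x in range(3) for y in range(3) for z in range(3)])
top = lambda p: (int(p[0]), int(p[1]))    # projects along z
front = lambda p: (int(p[0]), int(p[2]))  # projects along y
sil = np.zeros((3, 3), dtype=bool)
sil[0:2, 0:2] = True
mask = carve_voxels(grid, [top, front], [sil, sil])  # 8 voxels survive
```

The surviving voxels form the visual hull; Mora then meshes the resulting scalar field with marching cubes, a step omitted here.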
While stating, “If the projection is outside at least one silhouette the voxel is empty,” and “only the cameras where the specific voxel is projected inside the image are taken into account in the projection test,” which correspond to identifying/not utilizing an unnecessary space, Mora fails to explicitly disclose deleting the identified unnecessary space from the first generation space.

In related art, Ito discloses an image processing apparatus/method/(non-transitory computer readable storage medium storing a program for causing a computer to perform a control method of an apparatus) generating three-dimensional shape data corresponding to a foreground object for which synchronous image capturing is performed by a plurality of imaging apparatuses, the image processing apparatus/method (Ito ¶11-21) comprising: one or more hardware processors (Ito ¶11); and one or more memories storing one or more programs configured to be executed by the one or more hardware processors, the one or more programs including instructions for (Ito ¶11): generating information indicating a second generation space after the unnecessary space is deleted by deleting the identified unnecessary space from the first generation space (Ito ¶6, the purpose is to generate a virtual viewpoint image using appropriate shape data; ¶21; “The 3D shape generation unit 206 projects the voxels contained inside the silhouettes in the silhouette images P1 and P2 onto the 3D space for the voxels in the 3D space. At this time, if there is at least one silhouette image in which the voxel is not included in the silhouette, the voxel is deleted. By performing the above voxel projection, a 3D model VH1 is generated.”); and generating the three-dimensional shape data corresponding to the foreground object in the second generation space based on image capturing parameters of an imaging apparatus, which is at least part of the plurality of imaging apparatuses, a captured image obtained by image capturing by the part of the imaging apparatuses, and information indicating the second generation space (Ito ¶16-21; “By performing the above voxel projection, a three-dimensional model VH1 is generated. As for the method of generating the three-dimensional model, a method other than the visual volume crossing method may be used”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the explicit deletion of Ito into the image processing method of Mora to improve shape generation (Mora ¶110-116, 172-174) by deleting unnecessary space. As stated by Ito, “if there is even one silhouette image in which a voxel is not included inside the silhouette, that voxel is deleted. By performing the above voxel projection, a three-dimensional model VH1 is generated” (Ito ¶21).

Consider claim 5: Mora, as modified by Ito, discloses the claimed invention wherein the unnecessary space is identified by using image capturing parameters of an imaging apparatus belonging to a first imaging apparatus group, which is at least part of the plurality of imaging apparatuses, and a silhouette image indicating a foreground area, which is an image area in which the foreground object is captured, in a captured image obtained by image capturing by an imaging apparatus belonging to the first imaging apparatus group (Mora ¶98-99, 110-116, 171-172; Ito ¶19), and the three-dimensional shape data is generated based on image capturing parameters of an imaging apparatus belonging to a second imaging apparatus group different from the first imaging apparatus group, which is at least part of the plurality of imaging apparatuses, and a captured image obtained by image capturing by an imaging apparatus belonging to the second imaging apparatus group (Mora ¶98-99, 173-174).

Consider claim 6: Mora, as modified by Ito, discloses the claimed invention wherein an area in the first generation space is identified as the unnecessary space, which is shielded by a background area, which is an area indicating a background in the silhouette image arranged in the virtual space, in a case where the silhouette image is arranged in the virtual space by using image capturing parameters of an imaging apparatus belonging to the first imaging apparatus group and the virtual space is captured based on image capturing parameters of an imaging apparatus belonging to the first imaging apparatus group from a position in the virtual space, which corresponds to an imaging apparatus belonging to the first imaging apparatus group (Mora ¶110-116; Ito ¶19-21).
Consider claim 12: Mora, as modified by Ito, discloses the claimed invention wherein the one or more programs further include an instruction for: generating a virtual viewpoint image based on the generated three-dimensional shape data, data of a captured image obtained by image capturing by an imaging apparatus, which is at least part of the plurality of imaging apparatuses (Ito ¶42), and virtual viewpoint information including information indicting a position of a virtual viewpoint and a direction of a line-of-sight from the virtual viewpoint (Mora ¶93; Ito ¶41).

Claims 2-4 and 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Mora, in view of Ito, as applied to claims 1, 5-6, and 12-14 above, and further in view of Konuma (JP2020126393A).

Consider claim 2: Mora, as modified by Ito, discloses the claimed invention wherein the unnecessary space is identified by using image capturing parameters of an imaging apparatus belonging to a first imaging apparatus group, which is at least part of the plurality of imaging apparatuses, and a mask image for masking an image area in which an actually existing space corresponding to the unnecessary space is captured from a captured image obtained by image capturing by an imaging apparatus belonging to the first imaging apparatus group (Mora ¶98-99, 110-116, 149, 171-172; Ito ¶21), and the three-dimensional shape data is generated based on image capturing parameters of an imaging apparatus belonging to a second imaging apparatus group different from the first imaging apparatus group, which is at least part of the plurality of imaging apparatuses, and a captured image obtained by image capturing by an imaging apparatus belonging to the second imaging apparatus group (Mora ¶172-174, FIG. 24). Mora, as modified by Ito, fails to explicitly disclose a mask image for masking an image area in which an actually existing space corresponding to the unnecessary space is captured from a captured image.

In related art, Konuma discloses wherein the unnecessary space is identified by … a mask image for masking an image area in which an actually existing space corresponding to the unnecessary space is captured from a captured image (Konuma ¶25-29, 34). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the masking technique of Konuma into the shape generation of Mora, as modified by Ito, to accurately delete unnecessary space and therefore reduce computational cost (Mora ¶110-116). As stated by Konuma, “When performing the visual volume intersection method, the background subtraction method is generally used as a method of extracting the silhouette and the image of the subject from the captured image. The background difference method is a method of determining a difference between an image registered as a background and a subject. However, in the background subtraction method, the audience seats and other image noises are also extracted as subjects when shooting a stadium competition or the like. The images of these audience seats and other image noises are data that are not substantially used unless they are converted into three-dimensional shape data. It is wasteful to process, transmit, and store such data that has no use destination” (Konuma ¶6).

Consider claim 3: Mora, as modified by Ito and Konuma, discloses the claimed invention wherein an area in the first generation space is identified as the unnecessary space, which is shielded by a mask area in the mask image arranged in the virtual space in a case where the mask image is arranged in the virtual space by using image capturing parameters of an imaging apparatus belonging to the first imaging apparatus group and the virtual space is captured based on image capturing parameters of an imaging apparatus belonging to the first imaging apparatus group from a position in the virtual space, which corresponds to an imaging apparatus belonging to the first imaging apparatus group (Mora ¶98-99, 110-116, 171-172; Konuma ¶25-29).

Consider claim 4: Mora, as modified by Ito and Konuma, discloses the claimed invention wherein the three-dimensional shape data is generated based on image capturing parameters of an imaging apparatus belonging to the first imaging apparatus group and a captured image obtained by image capturing by an imaging apparatus belonging to the first imaging apparatus group, in addition to image capturing parameters of an imaging apparatus belonging to the second imaging apparatus group and a captured image obtained by image capturing by an imaging apparatus belonging to the second imaging apparatus group (Mora ¶98-99, 110-116, 171-172).
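Konuma's rationale pairs ordinary background subtraction with masking of image regions (audience seats and similar noise sources) known to map to unnecessary space. A hedged sketch of that combination; the threshold, array shapes, and mask placement are invented for illustration and are not taken from Konuma:

```python
import numpy as np

def foreground_silhouette(frame, background, unnecessary_mask, thresh=30):
    """Background subtraction followed by suppression of image areas that
    correspond to known-unnecessary real space (stands, seats, etc.)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    silhouette = diff > thresh            # plain background-difference test
    silhouette[unnecessary_mask] = False  # masked areas never become subject
    return silhouette

# Toy 4x4 grayscale frame: one true subject pixel plus one bright pixel
# in the "stands" row, which the mask removes from the silhouette.
bg = np.zeros((4, 4), dtype=np.uint8)
frame = bg.copy()
frame[1, 1] = 200   # subject
frame[3, 3] = 200   # audience-seat noise
stands = np.zeros((4, 4), dtype=bool)
stands[3, :] = True  # bottom row is known-unnecessary space
sil = foreground_silhouette(frame, bg, stands)  # only (1, 1) remains
```

Masking before shape generation is exactly what removes the "data that has no use destination" Konuma complains about: noise pixels never enter the silhouette, so no voxels are kept on their account.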
Consider claim 7: Mora, as modified by Ito, discloses obtaining distance information indicating a distance between each of a plurality of points on a boundary surface (Mora ¶14, 98-99, 131, 133, 136), and the three-dimensional shape data is generated based on image capturing parameters of an imaging apparatus belonging to a second imaging apparatus group different from the first imaging apparatus group, which is at least part of the plurality of imaging apparatuses, and a captured image obtained by image capturing by an imaging apparatus belonging to the second imaging apparatus group (Mora ¶98-99, 136, 171-176). However, Mora, as modified by Ito, fails to specifically disclose obtaining distance information indicating a distance between each of a plurality of points on a boundary surface between an actually existing space corresponding to the second generation space and an actually existing space corresponding to the unnecessary space, and an imaging apparatus belonging to a first imaging apparatus group, which is at least part of the plurality of imaging apparatuses, and wherein the unnecessary space is identified by using the distance information.

In related art, Konuma discloses obtaining distance information indicating a distance between each of a plurality of points on a boundary surface between an actually existing space corresponding to the second generation space and an actually existing space corresponding to the unnecessary space, and an imaging apparatus belonging to a first imaging apparatus group, which is at least part of the plurality of imaging apparatuses (Konuma ¶26), and wherein the unnecessary space is identified by using the distance information (Konuma ¶26). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the distance technique of Konuma into the shape generation method of Mora, as modified by Ito, to yield deleting unnecessary space due to distance information. As stated by Konuma, “the deletion unit 10 leaves or deletes each pixel based on the distance information of the distance image depending on whether it is outside a predetermined distance range” (Konuma ¶26).

Consider claim 8: Mora, as modified by Ito, discloses the claimed invention wherein the unnecessary space is identified by using a mask image capable of masking an actually existing space corresponding to the unnecessary space in a captured image obtained in a case where it is assumed that an imaginary imaging apparatus is arranged at a predetermined position and the imaginary imaging apparatus captures the image capturing-target space based on predetermined image capturing parameters, and the predetermined image capturing parameters (Mora ¶93, 98-99, 110-116, 149, 171-172). Mora, as modified by Ito, fails to explicitly disclose a mask image capable of masking an actually existing space corresponding to the unnecessary space in a captured image. In related art, Konuma discloses wherein the unnecessary space is identified by using a mask image capable of masking an actually existing space corresponding to the unnecessary space in a captured image (Konuma ¶25-29, 34). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the masking technique of Konuma into the shape generation of Mora, as modified by Ito, to accurately delete unnecessary space and therefore reduce computational cost (Mora ¶110-116). As stated by Konuma, “When performing the visual volume intersection method, the background subtraction method is generally used as a method of extracting the silhouette and the image of the subject from the captured image. The background difference method is a method of determining a difference between an image registered as a background and a subject. However, in the background subtraction method, the audience seats and other image noises are also extracted as subjects when shooting a stadium competition or the like. The images of these audience seats and other image noises are data that are not substantially used unless they are converted into three-dimensional shape data. It is wasteful to process, transmit, and store such data that has no use destination” (Konuma ¶6).

Consider claim 9: Mora, as modified by Ito and Konuma, discloses the claimed invention wherein an area in the first generation space is identified as the unnecessary space, which is shielded by a mask area in the mask image arranged in the virtual space in a case where the mask image is arranged in the virtual space by using the predetermined image capturing parameters and the virtual space is captured based on the predetermined image capturing parameters from a position in the virtual space, which corresponds to the predetermined position (Mora ¶98-99, 110-116, 171-172; Konuma ¶25-29).

Consider claim 10: Mora, as modified by Ito and Konuma, discloses the claimed invention wherein the one or more programs further include an instruction for: obtaining distance information indicating a distance between each of a plurality of points on a boundary surface between an actually existing space corresponding to the second generation space and an actually existing space corresponding to the unnecessary space, and the predetermined position, and wherein the unnecessary space is identified by using the distance information, in addition to the mask image and the predetermined image capturing parameters (Mora ¶110-116, 136; Konuma ¶25-29).
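The distance-based deletion Konuma describes (keep or delete each pixel depending on whether its measured distance falls outside a predetermined range) reduces to a simple range test against a depth image. A minimal sketch under invented values; the array shapes and the near/far bounds are illustrative only:

```python
import numpy as np

def filter_by_distance(silhouette, depth, near, far):
    """Keep silhouette pixels only where the measured distance lies inside
    the predetermined range [near, far]; everything else is deleted as
    belonging to unnecessary space."""
    in_range = (depth >= near) & (depth <= far)
    return silhouette & in_range

# Toy 2x3 silhouette with a depth image; near/far bracket the target space.
sil = np.ones((2, 3), dtype=bool)
depth = np.array([[1.0, 5.0, 9.0],
                  [4.0, 6.0, 12.0]])
kept = filter_by_distance(sil, depth, near=3.0, far=10.0)  # 4 pixels survive
```

Pixels closer than `near` or farther than `far` (here the 1.0 and 12.0 entries) drop out of the silhouette before any voxel carving happens.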
Consider claim 11: Mora, as modified by Ito and Konuma, discloses the claimed invention wherein the three-dimensional shape data is generated by a visual hull method by using a silhouette image indicating a foreground area, which is an image area in which the foreground object is captured in part of a captured image, the silhouette image being generated based on a captured image obtained by image capturing by an imaging apparatus, which is at least part of the plurality of imaging apparatuses (Mora ¶167-174, 110-116).

Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. JP5295044B2 discloses a method and program for extracting a mask image with high accuracy, including an unnecessary portion removal process capable of sequentially removing unnecessary portions in the vicinity of the floor surface and on the silhouette outline, and the mask. JP6914734B2 discloses a method for silhouette extraction. Guan, ‘Visual Hull Construction in the Presence of Partial Occlusion.’ Furukawa, ‘Carved visual hulls for image-based modeling.’ Kleinkort, ‘Visual Hull Method for Realistic 3D Particle Shape Reconstruction Based on High-Resolution Photographs of Snowflakes in Free Fall from Multiple Views.’

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHLEY HYTREK, whose telephone number is (703) 756-4562. The examiner can normally be reached M-F 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steve Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASHLEY HYTREK/
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665

Prosecution Timeline

Nov 02, 2023
Application Filed
Jan 23, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597122: DEFECT DETECTION DEVICE AND METHOD THEREOF (granted Apr 07, 2026; 2y 5m to grant)
Patent 12555239: Microscopy System and Method for Image Segmentation (granted Feb 17, 2026; 2y 5m to grant)
Patent 12555357: SYSTEMS AND METHODS FOR CATEGORIZING IMAGE PIXELS (granted Feb 17, 2026; 2y 5m to grant)
Patent 12548291: VIDEO SIGNAL PROCESSING APPARATUS, VIDEO SIGNAL PROCESSING METHOD, AND IMAGING APPARATUS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12548157: SYSTEMS AND METHODS FOR INLINE QUALITY CONTROL OF SLIDE DIGITIZATION (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89% (99% with interview, +11.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 83 resolved cases by this examiner. Grant probability derived from career allow rate.
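The headline figures are simple ratios of the career counts cited above. A quick check of the arithmetic; note the implied Tech Center average assumes the +27.2% delta is in percentage points, which the page does not state explicitly:

```python
granted, resolved = 74, 83
allow_rate = granted / resolved                # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")  # 89.2%, displayed as 89%

# Implied TC average, if the +27.2% delta is percentage points (assumption).
tc_avg = allow_rate - 0.272
print(f"Implied TC average: {tc_avg:.1%}")     # 62.0%
```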
