Prosecution Insights
Last updated: April 19, 2026
Application No. 18/415,831

IMAGE PROCESSING APPARATUS CAPABLE OF BOTH IMPROVING IMAGE QUALITY AND REDUCING AFTERIMAGES WHEN COMBINING PLURALITY OF IMAGES, CONTROL METHOD FOR IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM

Non-Final OA — §103, §112
Filed: Jan 18, 2024
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80% — above average (416 granted / 520 resolved; +18.0% vs TC avg)
Interview Lift: +18.0% across resolved cases with interview (strong)
Avg Prosecution: 2y 9m typical; 38 applications currently pending
Total Applications: 558 across all art units (career history)

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 520 resolved cases

Office Action

§103 §112
DETAILED ACTION

Claims 1-19 are pending in the present application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of Japan patent application number JP2023-009400, filed on 01/25/2023, has been received and made of record.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/18/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a first obtaining unit that obtains”, “a second obtaining unit that obtains”, “a first generating unit that generates”, “a second generating unit that generates”, and “a learning unit that performs” in claim 1.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1 and 18-19 recite the limitation "the basis frame image of the third image group" in line 13. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites the limitation “wherein according to a result of the learning performed by the learning unit, the first generating unit makes a size of the second image with respect to each of the frame images different for each of the frame images”. It is unclear to the examiner how a result of the learning could act on the first generating unit to make a size of the second image with respect to each of the frame images different for each of the frame images. The examiner cannot derive clarity from the specification, leaving these limitations indefinite, as it is not apparent how they relate to the invention. The scope of the claim is therefore rendered indefinite.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 6-7, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Japan PGPubs 2011077820 to Ichikawa et al. in view of U.S. PGPubs 2022/0398698 to Li et al.

Regarding claim 1, Ichikawa et al. teach an image processing apparatus that performs learning of an image processing model for combining a plurality of images (abstract), comprising: at least one processor; and a memory coupled to the processor storing instructions that, when executed by the processor, cause the processor to function as (Fig 1, par 0015, CPU and memory):

a first obtaining unit that obtains a first image group, which includes a plurality of frame images including a basis frame image that becomes a basis for combining the plurality of images (Fig 7, par 0019, “capturing the background images D1 to Dn (see FIG. 7)”, par 0029, “the number of the background images D1 to Dn acquired by the background acquisition unit 8c and the subject cut-out images C1 to Cn acquired by the subject acquisition unit 8b are the same”);

a second obtaining unit that obtains a second image (Fig 5, par 0028-0029, “the subject acquisition unit 8b, when acquiring the image data of the subject clipped images C1 to Cn, captures the imaging frame rate and the number of images when continuously capturing the subject existing images A1 to An related to the subject clipped images C1 to Cn. Data related to (imaging conditions) is read and acquired from the Exif information attached to the image data of the subject cutout images C1 to Cn. the number of the background images D1 to Dn acquired by the background acquisition unit 8c and the subject cut-out images C1 to Cn acquired by the subject acquisition unit 8b are the same”);

a first generating unit that generates a third image group, which includes superimposed images obtained by superimposing the second image on each of the frame images (Fig 10, par 0030, “The image combining unit 8d combines the plurality of subject cutout images C1 to Cn and the plurality of background images D1 to Dn to generate combined images M1 to Mn. That is, the image compositing unit 8d has a plurality of subject cut-out images C1 to Cn and a plurality of background images D1 to Dn, the first one (see FIG. 9A) and the second one (see FIG. 9B). ) And the third image (see FIG. 9C),... The Nth image (see FIG. 9D), and the images are combined in the order of image capturing”);

wherein the first generating unit makes superimposing positions of the second image on each of the frame images different for each of the frame images (Fig 10, par 0030, same passage as quoted above).

However, Ichikawa et al. are silent regarding a third obtaining unit that obtains the basis frame image of the third image group as a training image; a second generating unit that generates an input image group by performing an image processing with respect to the frame images and the second image, or the superimposed images; and a learning unit that performs the learning of the image processing model based on an error between an output image outputted from the image processing model by inputting the input image group into the image processing model, and the training image.

In a related endeavor, Li et al. teach a third obtaining unit that obtains the basis frame image of the third image group as a training image (par 0176-0177, “The training image set may include a plurality of training image groups with different image contents. Each training image group may include a first image and a second image. The first image corresponds to the second image. The first image and the second image may represent the same image scene. The second image may be a normally displayed image (i.e., an original image)” … second image as a training image); a second generating unit that generates an input image group by performing an image processing with respect to input images (Fig 16, par 0182-0185, “before generating the generated image corresponding to the first image by the predetermined network model according to the first image in the training image set, the method further includes, for each training image group in the training image set, performing aligning processing on the first image in the training image group and the second image corresponding to the first image to obtain the aligned image aligned with the second image, and use the aligned image as a first image. In some embodiments, processing for each training image group in the training image set may refer to performing the alignment processing on each training image group in the training image set. The alignment processing may include performing the alignment processing on each training image group after the training image set is obtained to obtain an aligned training image group. After all the training image groups are aligned, the first image of each training image group may be input to the predetermined network model. Before the first image in each training image group is input into the predetermined network model, the alignment processing may be performed on the training image group to obtain an aligned training image group corresponding to the training image group. Then, the first image in the aligned training image group may be input into the predetermined network model” … first image and second image as an input image group for training the model); and a learning unit that performs the learning of the image processing model based on an error between an output image outputted from the image processing model by inputting the input image group into the image processing model, and the training image (Fig 16, par 0176-0177, “As shown in FIGS. 15 and 16, the method includes generating the generated image corresponding to the first image by the predetermined network model according to the first image in the training image set (N10). In some embodiments, the predetermined network model may be a deep learning network model. The training image set may include a plurality of training image groups with different image contents. Each training image group may include a first image and a second image. The first image corresponds to the second image”, par 0207, “correcting the predetermined network model may include correcting the model parameter of the predetermined network model until the model parameter satisfies the predetermined condition. The predetermined condition may include that a loss function value satisfies the predetermined requirement, or a quantity of times of training reaches a predetermined quantity. The predetermined requirement may be determined according to the precision of the image processing model, which is not described in detail here. The predetermined quantity of times may be the maximum number of times of training of the predetermined network model, for example, 4000 times, etc. Thus, the predetermined network model may output the generated image. The loss function value of the predetermined network model may be calculated according to the generated image and the second image. After the loss function value is calculated, whether the loss function value satisfies the preset requirement may be determined”, par 0209-0213, “the predetermined network model is trained based on the total loss function value, and the generated image corresponding to the first image is continuously generated according to the first image in the next training image group of the training image set, until the training condition of the predetermined network model satisfies the predetermined condition, to obtain the trained image processing model”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ichikawa et al. to include a third obtaining unit that obtains the basis frame image of the third image group as a training image; a second generating unit that generates an input image group by performing an image processing with respect to the frame images and the second image, or the superimposed images; and a learning unit that performs the learning of the image processing model based on an error between an output image outputted from the image processing model by inputting the input image group into the image processing model, and the training image, as taught by Li et al., in order to introduce a neural network model that further processes the superimposed images as training input images to obtain a processed superimposed image and thereby improve the superimposed image quality.

Regarding claim 3, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, and Ichikawa et al. further teach wherein the second obtaining unit obtains a plurality of images different in at least one of a shape and a size as a plurality of the second images (Fig 5, par 0028-0029, “the subject acquisition unit 8b, when acquiring the image data of the subject clipped images C1 to Cn, captures the imaging frame rate and the number of images when continuously capturing the subject existing images A1 to An related to the subject clipped images C1 to Cn. Data related to (imaging conditions) is read and acquired from the Exif information attached to the image data of the subject cutout images C1 to Cn. the number of the background images D1 to Dn acquired by the background acquisition unit 8c and the subject cut-out images C1 to Cn acquired by the subject acquisition unit 8b are the same”), and the first generating unit superimposes the plurality of the second images on each of the frame images (Fig 10, par 0030, same passage as quoted for claim 1).

Regarding claim 6, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, and Ichikawa et al. further teach wherein the first generating unit generates the superimposed images by changing the superimposing positions of the second image so as to flow in one direction in a horizontal direction between the plurality of frame images and by changing the superimposing positions of the second image so as to swing upward and downward in a vertical direction (Fig 10, par 0030, same passage as quoted for claim 1).
Regarding claim 7, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, and Ichikawa et al. further teach wherein according to a result of the learning performed by the learning unit, the first generating unit makes a size of the second image with respect to each of the frame images different for each of the frame images (Fig 10, par 0030, directly disclosing different foreground images as the second image combined with different background images as the frame images, without considering a result of learning).

Regarding claim 16, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, and Ichikawa et al. further teach wherein the second image is a circular image (Fig 4A, par 0046, “image data of the subject cutout images C1 to Cn in which the subject S (for example, a person) (see FIG. 4A) is extracted from the background is generated” … the cut-out subject (person) includes a circular part (considered a design choice)).

Regarding claim 17, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, and Ichikawa et al. further teach wherein the second image is an image generated by computer graphics (Fig 4A, par 0046, “image data of the subject cutout images C1 to Cn in which the subject S (for example, a person) (see FIG. 4A) is extracted from the background is generated”).

Regarding claim 18, method claim 18 is similar in scope to claim 1 and is rejected under the same rationale.

Regarding claim 19, Ichikawa et al. teach a non-transitory computer-readable storage medium storing a program for causing a computer to execute a control method for controlling an image processing apparatus that performs learning of an image processing model for combining a plurality of images (par 0051). The remaining limitations of the claim are similar in scope to claim 1 and rejected under the same rationale.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Japan PGPubs 2011077820 to Ichikawa et al. in view of U.S. PGPubs 2022/0398698 to Li et al., further in view of U.S. PGPubs 2022/0327365 to Kubota.

Regarding claim 2, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, but do not explicitly teach wherein the image processing model is a neural network, and the learning unit performs the learning by an error back propagation method so as to minimize the error. In a related endeavor, Kubota teaches wherein the image processing model is a neural network, and the learning unit performs the learning by an error back propagation method so as to minimize the error (par 0045, par 0116-0118, “Returning now to FIG. 2, the adjusting unit 13 adjusts each weight of the first function when a parameter of the neural network is updated using error back propagation based on a supervisor label of the prescribed learning data. For example, when learning the learning model 12a, the learning unit 12 updates a hyper parameter or a bias of the learning model 12a by error back propagation based on a supervisor label of the learning data (training data). In doing so, the adjusting unit 13 performs adjustment by a prescribed method with respect to each weight of the first function. Alternatively, instead of having the learning unit 12 update hyper parameters or the like, the adjusting unit 13 may adjust each weight and each hyper parameter or the like may store each weight that minimizes a loss function”). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ichikawa et al. as modified by Li et al. to include wherein the image processing model is a neural network, and the learning unit performs the learning by an error back propagation method so as to minimize the error, as taught by Kubota, to improve the learning accuracy of the learning model by more appropriately setting the prescribed functions applied to the hidden layer to perform at least one of classifying, producing, and optimizing at least one of image data, series data, and text data.

Claims 4-5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Japan PGPubs 2011077820 to Ichikawa et al. in view of U.S. PGPubs 2022/0398698 to Li et al., further in view of U.S. PGPubs 2020/0090629 to Yokoyama.

Regarding claim 4, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, but do not explicitly teach wherein in a case that two directions perpendicular to each other on an image are set as a first direction and a second direction, the first generating unit makes a maximum change amount of the superimposing position between the frame images different between in the first direction and in the second direction. In a related endeavor, Yokoyama teaches this limitation (par 0036, “the display image generation unit 104 superimposes CG on the captured image to generate a composite image (MR image). Then, the composite image is transmitted to the HMD 101, displayed on the image display unit 105, and provided to the wearer of the HMD 101”, par 0047-0048, “the image shift processing unit accepts an image shift instruction instructing a change of the display position of the frame image in the vertical direction (a direction perpendicular to a line constituting the frame image) of the image display unit 105. For example, a movement amount (VSHIFT) in the vertical direction and a movement amount (HSHIFT) in the horizontal direction are accepted as the image shift instruction. Details will be described later with reference to FIG. 3”, par 0059, “If ΔV≤VBLK, the synchronizing signal correction unit 202 sets the movement amount (VSHIFT) in the vertical direction to ΔV. On the other hand, if ΔV>VBLK, the synchronizing signal correction unit 202 sets the movement amount (VSHIFT) in the vertical direction to VBLK. That is, the movement amount (VSHIFT) is controlled such that the upper limit of the movement amount (VSHIFT) in the vertical direction becomes VBLK (a value equal to or smaller than a predetermined amount)”). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ichikawa et al. as modified by Li et al. to include wherein in a case that two directions perpendicular to each other on an image are set as a first direction and a second direction, the first generating unit makes a maximum change amount of the superimposing position between the frame images different between in the first direction and in the second direction, as taught by Yokoyama, to perform correction corresponding to an image shift amount not more than the predetermined amount and generate an image obtained by shifting an input image in a vertical direction and/or a horizontal direction (providing a technique that enables generation of a display image giving less of an uncomfortable feeling).

Regarding claim 5, Ichikawa et al. as modified by Li et al. and Yokoyama teach all the limitations of claim 4, and Yokoyama further teaches wherein the first direction is a horizontal direction and the second direction is a vertical direction, and the maximum change amount of the superimposing position between the frame images is greater in the horizontal direction than in the vertical direction (par 0047-0048 and par 0059, same passages as quoted for claim 4). This would be obvious for the same reason given in the rejection of claim 4.

Regarding claim 8, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, but do not explicitly teach wherein the frame images are patch images, and the first obtaining unit performs a shifting processing that obtains the first image group by shifting a cutout position of each of the patch images. In a related endeavor, Yokoyama teaches this limitation (par 0047-0048, same passage as quoted for claim 4, Fig 4, par 0056-0057, “An image 401 and an image 402 exemplarily show the first frame and the second frame, respectively, in a case of VSHIFT≤VBLK. The image 401 exemplarily shows the output image in the first frame shown in FIG. 3, in which the image shift amount is set to “0”. On the other hand, the image 402 exemplarily shows the output image in the second frame shown in FIG. 3, in which an image shifted by VBLK in the vertical direction is output. An image 403 and an image 404 exemplarily show the first frame and the second frame, respectively, in a case of VSHIFT>VBLK. As is understood from the timing chart shown in FIG. 3, if the movement amount (VSHIFT) in the vertical direction is set to a value (predetermined amount) exceeding the width (VBLK) of the vertical blanking period, image disturbance occurs” … obtaining a group of images by shifting the image in the vertical direction). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ichikawa et al. as modified by Li et al. to include wherein the frame images are patch images, and the first obtaining unit performs a shifting processing that obtains the first image group by shifting a cutout position of each of the patch images, as taught by Yokoyama, to perform correction corresponding to an image shift amount not more than the predetermined amount and generate a group of images obtained by shifting an input image in a vertical direction and/or a horizontal direction (providing a technique that enables generation of a display image giving less of an uncomfortable feeling).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Japan PGPubs 2011077820 to Ichikawa et al. in view of U.S. PGPubs 2022/0398698 to Li et al., further in view of U.S. PGPubs 2019/0295225 to Saito.

Regarding claim 14, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, but do not explicitly teach wherein as the image processing, the second generating unit performs at least one process of a process of adding noises, a process of narrowing a dynamic range, and a process of lowering a resolution. In a related endeavor, Saito teaches this limitation (par 0036, “a noise generation unit 36 generating a noise, and a reverse multiresolution transform unit 38 generating an output image in the same resolution as the original image by performing a reverse multiresolution transform process including image size expansion and a noise addition process on the plurality of band images subjected to the noise reducing process. The reverse multiresolution transform unit 38 of the present example includes a noise addition unit 39 that adds the noise generated by the noise generation unit 36 to an image that is any of the band image subjected to the noise reducing process and an image (hereinafter, referred to as an “in-processing image”) in the middle of the reverse multiresolution transform process and is in a lower resolution than the original image”). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ichikawa et al. as modified by Li et al. to include wherein as the image processing, the second generating unit performs at least one process of adding noises, narrowing a dynamic range, and lowering a resolution, as taught by Saito, to adjust the noise and resolution of a processing image to improve the quality of the processing image.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Japan PGPubs 2011077820 to Ichikawa et al. in view of U.S. PGPubs 2022/0398698 to Li et al., further in view of U.S. PGPubs 2017/0280066 to Hayashi.

Regarding claim 15, Ichikawa et al. as modified by Li et al. teach all the limitations of claim 1, but do not explicitly teach wherein the second image is an image obtained from the first image group. In a related endeavor, Hayashi teaches wherein the second image is an image obtained from the first image group (Figs 3A-3D, par 0031-0037, “the second generating unit (a second generating means) 5b cuts out a region of a portion of the original image I0 captured by the image capturing unit 3 (refer to FIG. 3A) as the second area A2 so as to generates the second image I2. In this case, the second generating unit 5b generates the second image I2 from the original image I0 the same as an image (the original image I0) used for the generation of the first image I1 by the first generating unit 5a”, par 0045-0046, “the second acquiring unit 5d individually acquires the YUV data of the second image I2 generated by the second generating unit 5b with each of the plurality of frame images included in the moving image as the original image I0. The first compositing unit 5e generates a third image I3 (refer to FIG. 3D).”). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ichikawa et al. as modified by Li et al. to include wherein the second image is an image obtained from the first image group, as taught by Hayashi, to composite the second area specified corresponding to the specific subject S (a notice portion), as the second image having higher resolving power, with the first image to generate the third image, which can be effectively utilized in the storage and the playback of the third image.

Allowable Subject Matter

Claims 9-13 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the cited prior art fails to teach the combination of elements recited in claim 9, including "wherein the first obtaining unit performs, as a preprocessing when using the image processing model, an aligning processing in which the remaining frame images excluding the basis frame image are shifted in accordance with the basis frame image so that a position of a main subject is aligned between the frame images".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571) 272-5556. The examiner can normally be reached 8:00 to 5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619
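To make the pipeline at issue concrete for non-specialist readers: claim 1, as characterized in the rejection above, trains an image processing model on the error between the model's output for a degraded stack of superimposed frames and the basis frame image. The sketch below is purely illustrative and is not code from the application or the cited references; the PyTorch usage, tensor shapes, drift positions, stand-in model, and the noise-only degradation (just one of the claim 14 options) are all assumptions.

```python
# Illustrative sketch of the training pipeline recited in claim 1
# (assumptions throughout; not from the application or cited art).
import torch
import torch.nn as nn

def superimpose(frames, obj, positions):
    """Paste a small object image onto each frame at a per-frame position
    (claim 1: superimposing positions differ for each frame image)."""
    out = frames.clone()
    h, w = obj.shape[-2:]
    for i, (y, x) in enumerate(positions):
        out[i, :, y:y + h, x:x + w] = obj
    return out

def degrade(images, noise_sigma=0.05):
    """Example image processing for the input group (cf. claim 14: adding
    noise; dynamic-range narrowing and downscaling omitted for brevity)."""
    return (images + noise_sigma * torch.randn_like(images)).clamp(0, 1)

# Toy data: 4 RGB frames of 64x64, an 8x8 object, basis frame = frame 0.
frames = torch.rand(4, 3, 64, 64)
obj = torch.rand(3, 8, 8)
positions = [(10, 10), (12, 20), (9, 30), (13, 40)]  # object drifts right

superimposed = superimpose(frames, obj, positions)  # "third image group"
training_image = superimposed[0:1]                  # basis frame as target
inputs = degrade(superimposed)                      # "input image group"

# Stand-in "image processing model": fuses the frame stack into one image.
model = nn.Sequential(
    nn.Conv2d(4 * 3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Learning step: error between model output and the training image,
# minimized by backpropagation (cf. the claim 2 limitation).
pred = model(inputs.reshape(1, -1, 64, 64))
loss = nn.functional.mse_loss(pred, training_image)
opt.zero_grad()
loss.backward()
opt.step()
```

Read this way, the target is the basis frame with the object at its basis position, so training pushes the model to suppress the object's drifted appearances in the other frames rather than blending them in, which appears to be the afterimage-reduction idea named in the application's title.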

Prosecution Timeline

Jan 18, 2024: Application Filed
Jan 11, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592024
QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION
2y 5m to grant • Granted Mar 31, 2026
Patent 12586296
METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579704
VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 17, 2026
Patent 12573164
DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM
2y 5m to grant • Granted Mar 10, 2026
Patent 12573151
PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview (+18.0%): 98%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
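Assuming the headline figures combine as simple ratios and additive lifts (an inference from the notes above, not an independent calculation): 416 granted / 520 resolved ≈ 80.0% career allow rate, and 80.0% + 18.0% interview lift = 98.0%, the "With Interview" figure shown here.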
