Prosecution Insights
Last updated: April 19, 2026
Application No. 18/848,495

IMAGE PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE

Status: Non-Final OA (§103)
Filed: Sep 18, 2024
Examiner: GOCO, JOHN PATRICK
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, comparing resolved cases with vs. without interview)
Typical Timeline: 2y 9m average prosecution
Career History: 8 total applications across all art units; 8 currently pending

Statute-Specific Performance

§103: 68.8% (+28.8% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

2. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in international Application No. CN2023/085565, filed on April 08, 2022.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

3. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

4. This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: an obtaining module and a processing module in claim 11.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. Claims 1-2, 11-12, 18, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over CN 111369686 A (Li) in view of US 10127631 B1 (Duan et al., hereinafter Duan).
Regarding claim 1, Li teaches an image processing method (Page 1, Summary of the Invention, Par. 3: "AR imaging virtual shoe testing method"), comprising:

obtaining an original image (Page 1, Summary of the Invention, Par. 4, Step 1: "obtain and call the camera to capture the image of the foot area");

in response to determining that a foot feature area is in the original image, determining a foot overlay model according to the foot feature area in the original image, wherein the foot overlay model is configured to mark an area to be processed in the foot feature area (Page 1, Summary of the Invention, Par. 4, Step 1: "obtain and call the camera to capture the image of the foot area, segment and identify the ankle target, the foot or vamp target and the occlusion target in the foot image through the MaskR-CNN neural network, as well as the ankle target, the foot or the vamp target.");

determining a shoe model according to the foot feature area in the completed image (Page 2, Par. 1, Step 3: "Generate a tried-on shoe image corresponding to the predicted 6D pose based on the 3D model of the tried-on shoe, overlay the tried-on shoe image on the foot surface or vamp target in the foot area image."); and

rendering the shoe model in the foot feature area in the completed image to obtain a target image (Page 2, Par. 1, Step 3, quoted above).

Li fails to explicitly teach performing background completion on the area to be processed in the original image to obtain a completed image. In a related field of endeavor, Duan teaches performing background completion on the area to be processed in the original image to obtain a completed image (Col. 8, Lines 64-65: "The inpainting component 404 is configured to inpaint the user-selected region using local patch match statistics").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Li to include performing background completion on the area to be processed in the original image to obtain a completed image, as taught by Duan. Doing so would produce an image that appears natural (Col. 9, Lines 9-12: "The inpainting of the user-selected region in the original image results in a modified image that appears natural despite omitting what was previously shown in the user-selected region.").
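For concreteness, the claim 1 method reduces to a short pipeline. The following is a minimal sketch assuming NumPy images; every helper name (detect_foot_mask, inpaint_background, fit_and_render_shoe) is a hypothetical placeholder for illustration, not a function from Li, Duan, or the application.

```python
# Minimal sketch of the claim 1 flow; all helpers are hypothetical stubs.
import numpy as np

def detect_foot_mask(img):
    # Placeholder segmentation of the foot feature area; Li segments
    # foot/ankle/vamp targets with Mask R-CNN.
    return None

def inpaint_background(img, mask):
    # Placeholder background completion over the masked "area to be processed".
    return img

def fit_and_render_shoe(img, mask):
    # Placeholder: pose the 3D shoe model and composite it over the foot area.
    return img

def image_processing_method(original):
    mask = detect_foot_mask(original)
    if mask is None:                              # no foot feature area found
        return original
    completed = inpaint_background(original, mask)  # "completed image"
    return fit_and_render_shoe(completed, mask)     # "target image"

target = image_processing_method(np.zeros((8, 8, 3), dtype=np.uint8))
```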
Regarding claim 2, Li as modified by Duan teaches the image processing method according to claim 1. Duan further teaches wherein the performing the background completion on the area to be processed in the original image comprises:

according to initial screen coordinates of a target pixel point in the area to be processed in the original image, a width of the foot feature area on the screen and a length of the foot feature area on the screen (Col. 10, Lines 42-45: "The determining of the local region includes dynamically computing a size (e.g., height and width) of the local region based on a size of the user-selected region."), calculating the offset screen coordinates corresponding to the target pixel point, wherein the target pixel point is any pixel point in at least some pixel points in the area to be processed (Col. 8, Line 65 - Col. 9, Line 3: "As part of this process, the inpainting component 404 identifies patch matches (e.g., identical groupings of pixels) in the local region and obtains the offsets of the patch matches (e.g., a distance between patch matches defined by two-dimensional coordinates)"); and

replacing the color value of the target pixel point with the color value of a pixel point corresponding to the offset screen coordinates to perform the background completion on the area to be processed (Col. 2, Lines 13-23: "allow a user to select an object, region, or other element in an original image to be removed and replaced using other portions (e.g. background) of the image … Upon receiving an indication of the selection of person to be removed, the system removes the selected person from the image and inpaints (e.g., fills) the missing region (e.g., the region with the person removed) using portions of the picture near the missing region."; Col. 9, Lines 3-5: "The inpainting component 404 inpaints the user-selected region using at least a portion of the patch matches from the local region").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Li to include the offset computation and color-replacement steps recited in claim 2, as taught by Duan. Doing so would produce an image that appears natural (Col. 9, Lines 9-12, quoted above).
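To make claim 2's offset-and-replace arithmetic concrete, here is a minimal sketch. The particular offset rule (shifting each masked pixel by the foot box's width and height) is an assumption for illustration only; the claim merely ties the offsets to the box dimensions, and Duan's patch-match statistics are more elaborate.

```python
# Hedged illustration of claim 2's offset-and-replace step; the offset rule
# below is an assumed example, not the claimed or Duan's actual formula.
import numpy as np

def complete_background(img, mask, box_w, box_h):
    out = img.copy()
    h, w, _ = img.shape
    ys, xs = np.nonzero(mask)          # target pixel points in the area to be processed
    for y, x in zip(ys, xs):
        # Offset screen coordinates derived from the foot area's width/length.
        ox = min(x + box_w, w - 1)
        oy = min(y + box_h, h - 1)
        out[y, x] = img[oy, ox]        # replace color with that at the offset coordinates
    return out

img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True              # masked "area to be processed"
completed = complete_background(img, mask, box_w=10, box_h=10)
```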
Regarding claim 18, Li as modified by Duan teaches the image processing method according to claim 1. Li fails to explicitly teach an electronic device, comprising a processor, a memory and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the image processing method.

In a related field of endeavor, Duan further teaches such an electronic device (Col. 14, Lines 39-54: "The software architecture 1006 may execute on hardware such as a machine 1100 of FIG. 11 that includes, among other things, processors 1104, memory/storage 1106, and I/O components 1118. A representative hardware layer 1052 is illustrated and can represent, for example, the machine 1100 of FIG. 11. The representative hardware layer 1052 includes a processing unit 1054 having associated executable instructions 1004. The executable instructions 1004 represent the executable instructions of the software architecture 1006, including implementation of the methods, components, and so forth described herein. The hardware layer 1052 also includes memory and/or storage modules memory/storage 1056, which also have the executable instructions 1004. The hardware layer 1052 may also comprise other hardware 1058."). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Li to include such an electronic device as taught by Duan. Doing so would provide a device to perform the image processing.

Regarding claim 11, the apparatus claim is similar in scope to claim 1 and is rejected under similar rationale. Regarding claim 12, the apparatus claim is similar in scope to claim 2 and is rejected under similar rationale. Regarding claim 19, the non-transitory computer-readable medium claim is similar in scope to claim 1 and is rejected under similar rationale. Regarding claim 21, the non-transitory computer-readable medium claim is similar in scope to claim 2 and is rejected under similar rationale.

9. Claims 3-5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Li as modified by Duan as applied to claims 1 and 11 above, and further in view of Ren et al., "StructureFlow: Image Inpainting via Structure-Aware Appearance Flow" (hereinafter Ren).

Regarding claim 3, Li as modified by Duan teaches the image processing method according to claim 1, and replacing the color value of the target pixel point with the color value of a pixel point corresponding to the offset screen coordinates to perform the background completion on the area to be processed. Li and Duan fail to explicitly teach wherein the performing the background completion on the area to be processed in the original image comprises: according to an obtained initial color value of a target pixel point in the area to be processed in the original image and the initial color value of an adjacent pixel point of the target pixel point, determining a final color value corresponding to the target pixel point, wherein the target pixel point is any pixel point in at least some pixel points in the area to be processed.
In a related field of endeavor, Ren teaches determining the final color value of a target pixel point from its obtained initial color value and the initial color values of adjacent pixel points (Page 184, Formulas 8 and 9; Sect. 3.2, Texture Generator, Par. 4: "The process of Gaussian sampling operation with kernel size n can be written as (Formula 8) where Fi,j is the features around the sample center and Fo is the output feature. The weights ai,j is calculated as (Formula 9) where ∆h and ∆v is the horizontal and vertical distance between the sampling center and feature Fi,j respectively."). [Formulas 8 and 9 are reproduced only as images in the Office Action.]

It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Li to include this determination of the final color value, as recited in claim 3 and as taught by Ren. Doing so would expand the receptive field to improve the results of existing methods (Section 2.2, Par. 1: "However, many existing unsupervised optical flow estimation methods struggle to capture large motions. Some papers [18,23] manage to use multi-scale approaches to improve the results. We believe it is due to the limited receptive field of Bilinear sampling. In this paper, we use Gaussian sampling as an improvement.").
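Because Formulas 8 and 9 survive only as images, a plausible LaTeX reconstruction from the quoted description follows; the normalization constant Z and variance sigma^2 are assumptions and may differ from Ren's actual Formula 9.

```latex
% Reconstruction from the quoted description; Z and \sigma^2 are assumed,
% not verified against Ren's paper.
F_o = \sum_{i,j \in n \times n} a_{i,j}\, F_{i,j}
\qquad \text{(Formula 8)}

a_{i,j} = \frac{1}{Z}\exp\!\left(-\frac{\Delta h_{i,j}^{2} + \Delta v_{i,j}^{2}}{2\sigma^{2}}\right)
\qquad \text{(Formula 9)}
```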
Regarding claim 4, Li as modified by Duan and further modified by Ren as described above teaches the image processing method according to claim 3. Li as modified by Duan fails to explicitly teach wherein the determining the final color value corresponding to the target pixel point comprises: performing weighted summation of the initial color value of the target pixel point in the area to be processed and the initial color value of the adjacent pixel point of the target pixel point; and determining the final color value corresponding to the target pixel point according to a result after the weighted summation.

In a related field of endeavor, Ren teaches this weighted summation (Page 184, Formulas 8 and 9; Sect. 3.2, Texture Generator, Par. 4, quoted above). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Li to include it as taught by Ren. Doing so would expand the receptive field to improve the results of existing methods (Section 2.2, Par. 1, quoted above).

Regarding claim 5, Li as modified by Duan and Ren above teaches the image processing method according to claim 4. Li fails to explicitly teach wherein the weight value corresponding to the target pixel point in the area to be processed is greater than the weight value corresponding to the adjacent pixel point of the target pixel point. Ren teaches this limitation (Formula 9; Sect. 3.2, Texture Generator, Par. 4: "where ∆h and ∆v is the horizontal and vertical distance between the sampling center and feature Fi,j respectively"; pixels further from the target point have lower weights, so the target point has the greatest weight). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have further modified Li as modified by Duan and Ren to include this limitation as taught by Ren, for the same receptive-field rationale (Section 2.2, Par. 1, quoted above).

Regarding claim 13, the apparatus claim is similar in scope to the method claim 3, and is rejected under similar rationale.
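The claim 5 property, that the target pixel's weight exceeds its neighbors', follows from any Gaussian weighting, since the weight is largest at zero distance from the sampling center. A minimal sketch under that assumption (the kernel radius k and sigma are illustrative parameters, not values from Ren):

```python
# Gaussian-weighted summation over a pixel's neighborhood; a sketch of the
# claims 4-5 mapping, not Ren's implementation.
import numpy as np

def gaussian_blend(img, y, x, k=1, sigma=1.0):
    h, w = img.shape[:2]
    acc, total = np.zeros(3), 0.0
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            yy = min(max(y + dy, 0), h - 1)
            xx = min(max(x + dx, 0), w - 1)
            a = np.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))  # max at dy = dx = 0
            acc += a * img[yy, xx]
            total += a
    return acc / total  # final color value: normalized weighted summation

img = np.random.rand(16, 16, 3)
final_color = gaussian_blend(img, 8, 8)
```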
10. Claims 6-8 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Li as modified by Duan as applied to claims 1 and 11 above, and further in view of US 12100156 B2 (Dudovitch et al., hereinafter Dudovitch).

Regarding claim 6, Li as modified by Duan teaches the image processing method according to claim 1, further comprising steps performed after rendering the shoe model in the foot feature area in the completed image to obtain the target image. Li fails to explicitly teach obtaining an initial color value of a first pixel point, wherein the first pixel point is any pixel point corresponding to the shoe model in the completed image; sampling from a target effect image according to first coordinates corresponding to the first pixel point to obtain a first color value; calculating a final color value of the first pixel point according to the initial color value of the first pixel point and the first color value; and replacing the initial color value of the first pixel point with the final color value.

In a related field of endeavor, Dudovitch teaches obtaining an initial color value of a first pixel point, wherein the first pixel point is any pixel point corresponding to the shoe model in the completed image (Col. 3, Lines 24-27: "the disclosed techniques can apply one or more visual effects to the garment worn by a user that has been segmented in the current image. For example, a color or texture of a shirt worn by a user depicted in the image"; Col. 3, Lines 41-46: "In this way, specific portions of the virtual garment (e.g., a portion of pixels of the virtual garment) can be overlaid by the real-world garment (e.g., pixel colors of the certain portions of the real-world garment worn by the user) without overlaying the entirety of the virtual garment or vice-versa");

sampling from a target effect image according to first coordinates corresponding to the first pixel point to obtain a first color value (Col. 3, Lines 24-30 and Col. 3, Lines 41-46, quoted above; Col. 30, Lines 48-52: "the image modification module 518 determines which subset of pixels 924 of the virtual pants garment 920 are overlapping a subset of pixels of the real-world garment 910 in the second image 901 relative to the first image 900."); and

calculating a final color value of the first pixel point according to the initial color value of the first pixel point and the first color value, and replacing the initial color value of the first pixel point with the final color value (Col. 3, Lines 24-30: "the disclosed techniques can apply one or more visual effects to the garment worn by a user that has been segmented in the current image. For example, a color or texture of a shirt worn by a user depicted in the image can be replaced with a different color, texture or animation to provide an illusion that the user is wearing a different shirt than what the user is actually wearing in the image"; Col. 30, Lines 48-52, quoted above).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified the combination of Li and Duan to include the above limitations of claim 6 as taught by Dudovitch. Doing so would improve the overall experience of the user (Col. 3, Lines 51-52: "This improves the overall experience of the user in using the electronic device.").

Regarding claim 7, Li as modified by Duan and Dudovitch teaches the image processing method according to claim 6. Li fails to explicitly teach wherein the calculating the final color value of the first pixel point according to the initial color value of the first pixel point and the first color value comprises: superposing the initial color value of the first pixel point and the first color value according to a superposition parameter to obtain the final color of the first pixel point, wherein the superposition parameter is used to indicate a ratio when the initial color value of the first pixel point and the first color value are superimposed. Dudovitch teaches this limitation (Col. 29, Lines 1-8: "The image modification module 518 can select an occlusion pattern for the virtual pants garment in which the virtual pants garment 820 overlaps the real-world shirt in the image. Namely, the image modification module 518 sets the occlusion pattern such that the virtual pants garment 820 overlaps a portion of the real-world garment 810 (e.g., a short sleeve shirt) corresponding to the garment segmentation received from the smoothed segmentation module 516"; the occlusion pattern determines the ratio of the initial color value (the real-world shirt) and the first color value (the virtual pants)).

Regarding claim 8, Li as modified by Duan and Dudovitch teaches the image processing method according to claim 6. Duan further teaches wherein the first coordinates comprise any of two-dimensional space coordinates, world space coordinates and screen control coordinates (Col. 9, Line 2: "a distance between patch matches defined by two-dimensional coordinates.").

Regarding claim 14, the apparatus claim is similar in scope to the method claim 6, and is rejected under similar rationale. Regarding claim 15, the apparatus claim is similar in scope to the method claim 8, and is rejected under similar rationale.
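Claim 7's superposition parameter reads as a blend ratio. A minimal sketch, assuming linear blending; the claim does not fix the blend function, and Dudovitch's occlusion pattern is a per-pixel selection rather than a scalar ratio.

```python
# Hedged sketch: superimpose the first pixel's initial color with the sampled
# effect color at ratio alpha (the "superposition parameter"). Linear blending
# is an assumption; the claim only requires a ratio-controlled superposition.
def superpose(initial_rgb, effect_rgb, alpha=0.5):
    return tuple(alpha * c0 + (1 - alpha) * c1
                 for c0, c1 in zip(initial_rgb, effect_rgb))

final = superpose((200, 120, 80), (30, 30, 220), alpha=0.7)
```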
Allowable Subject Matter

11. Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 9, the closest prior art of US 20220414907 A1 (Peuhkurinen et al., hereinafter Peuhkurinen) teaches applying a blur effect to pixels in a region, where the blur is applied based on the size of the region (Par. 84-86: "if none of the width of the culled part or the remaining part of the at least one virtual object is less than the predefined percentage of the total width, … apply a blur and fade filter to pixel values of a region in the intermediate extended-reality image that spans across a culled boundary of the at least one culled virtual object, to generate the extended-reality image"; Par. 92: "For example, the blur and fade filter may be applied to the pixel values of said region in a direction from the culled boundary towards the boundary of the real object."). However, Peuhkurinen fails to explicitly teach the combined limitation below as a whole:

"establishing a shoe model space based on the shoe model; determining a target noise value according to second coordinates in the shoe model space, wherein the second coordinates are any model space coordinates in the shoe model space; and rendering a second pixel point corresponding to the second coordinates in the foot feature area in response to the target noise value being greater than a preset value, wherein the preset value decreases within a first duration."

Furthermore, no prior art of record, alone or in combination, teaches the above limitation as a whole. Therefore claim 9 is considered to be allowable.
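The allowable limitation amounts to a noise-gated dissolve: a pixel renders once its per-coordinate noise exceeds a threshold that decays over the first duration, so the shoe materializes progressively. A minimal sketch, assuming hash-based noise and a linear decay schedule (both are assumptions; the claim fixes neither):

```python
# Hedged sketch of the claim 9 dissolve-in: per-coordinate noise vs. a preset
# value that decreases within a first duration. The noise function and linear
# decay are illustrative assumptions only.
import hashlib

def noise(coords):
    # Deterministic pseudo-noise in [0, 1) derived from model-space coordinates.
    digest = hashlib.sha256(repr(coords).encode()).digest()
    return digest[0] / 256.0

def should_render(coords, t, duration=1.0):
    preset = max(0.0, 1.0 - t / duration)  # preset value decreases within the first duration
    return noise(coords) > preset          # render when the target noise value exceeds it

# At t=0 almost nothing renders; by t=duration every pixel does.
visible = [should_render((x, 0.0, 0.0), t=0.5) for x in range(5)]
```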
12. Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 10, the closest art of Peuhkurinen fails to explicitly teach the combined limitation of claim 9 as described above. Claim 10 is dependent on claim 9, and is therefore considered allowable.

13. Claim 16 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 16, the closest prior art of Peuhkurinen teaches applying a blur effect to pixels in a region, where the blur is applied based on the size of the region (Par. 84-86 and Par. 92, quoted above). However, Peuhkurinen fails to explicitly teach the combined limitation below as a whole:

"establish a shoe model space based on the shoe model; determine a target noise value according to second coordinates in the shoe model space, wherein the second coordinates are any model space coordinates in the shoe model space; and render a second pixel point corresponding to the second coordinates in the foot feature area in response to the target noise value being greater than a preset value, wherein the preset value decreases within a first duration."

Furthermore, no prior art of record, alone or in combination, teaches the above limitation as a whole. Therefore claim 16 is considered to be allowable.

14. Claim 17 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Regarding claim 17, the closest art of Peuhkurinen fails to explicitly teach the combined limitation of claim 16 as described above. Claim 17 is dependent on claim 16, and is therefore considered allowable.

Conclusion

15. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 11250572 B2 (Sollami et al., hereinafter Sollami) teaches a method similar to that described in claim 1. Namely, Sollami teaches masking an area of an image corresponding to a first garment to be processed, and generating an image including a second garment at the area corresponding to the first garment (Col. 4, Lines 1-9: "At operation 140, the server may mask the first image to occlude pixels of the first fashion item that is to be replaced with the second fashion item. Operation 140 may be used to determine which portions of the body of the person are not to be covered with the second fashion item (e.g., the hands, portions of the arms, portions of the neck, or the like). For example, image 220 of FIG. 2A shows masking of the image 200 so that the shirt 222a, pants 222b, and shoes 222c may be replaced with a fashion item.").

16. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN PATRICK GOCO, whose telephone number is (571) 272-5872. The examiner can normally be reached M-Th, 7:00 am - 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

/JOHN P GOCO/
Examiner, Art Unit 2611

Prosecution Timeline

Sep 18, 2024: Application Filed
Apr 06, 2026: Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
