Prosecution Insights
Last updated: April 19, 2026
Application No. 18/568,745

METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR IMAGE PROCESSING

Non-Final OA §103
Filed
Dec 08, 2023
Examiner
WU, MING HAN
Art Unit
2618
Tech Center
2600 — Communications
Assignee
BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round
3 (Non-Final)
76%
Grant Probability
Favorable
3-4
OA Rounds
2y 8m
To Grant
99%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
282 granted / 370 resolved
+14.2% vs TC avg
Strong +23% interview lift
+23.3%
Interview Lift
among resolved cases with vs. without an interview
Typical timeline
2y 8m
Avg Prosecution
Career history
405
Total Applications
across all art units (370 resolved, 35 currently pending)

Statute-Specific Performance

§101
7.8%
-32.2% vs TC avg
§103
68.3%
+28.3% vs TC avg
§102
2.1%
-37.9% vs TC avg
§112
12.6%
-27.4% vs TC avg
Tech Center averages are estimates • Based on career data from 370 resolved cases

Office Action

§103
DETAILED ACTION

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/09/2026 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9 and 21-31 are rejected under 35 U.S.C. 103 as being unpatentable over Comploi et al. (Publication: US 2020/0312002 A1) in view of Cheng et al. (Publication: US 2020/0327309 A1).

Regarding claim 1, see rejection on claim 30. Regarding claim 2, see rejection on claim 22. Regarding claim 3, see rejection on claim 23. Regarding claim 4, see rejection on claim 24. Regarding claim 5, see rejection on claim 25. Regarding claim 6, see rejection on claim 26. Regarding claim 7, see rejection on claim 27. Regarding claim 8, see rejection on claim 28. Regarding claim 9, see rejection on claim 29. Regarding claim 21, see rejection on claim 30.

Regarding claim 22, Comploi in view of Cheng disclose all the limitations of claim 21. Cheng discloses wherein the target area comprises a forehead area ([0052], [0054] - 466 of Fig. 8B ("forehead"): removing hair and eyebrow features shows the skin image, based on the captured user image of Fig. 8A. It expands because the features are removed and the skin image is revealed. [reproduced figure omitted]).

Regarding claim 23, Comploi in view of Cheng disclose all the limitations of claim 22.
Cheng discloses wherein the forehead area is determined based on a key point of an eyebrow and a key point of a forehead contour in the first facial image ([0039] - the processor can analyze an image and determine, based on color and/or location within the frame, whether a particular pixel may be more likely to be a portion of the forehead or a portion of the eyebrow. In another example, an AI classifier can be used to conduct the image analysis and identify the facial features of users. To this end, the system may use algorithmic techniques, trained AI models, or the like, to determine the facial features. [0052] - Obstructions may be determined by using the eyes and mouth as landmarks, and anything other than skin color detected above the eyes or around the mouth may be disregarded. In one example, the masks 320, 322 generated in operation 210 may be used to identify the location and shape of certain obstructions to allow their removal.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Comploi in view of Cheng with wherein the forehead area is determined based on a key point of an eyebrow and a key point of a forehead contour in the first facial image, as taught by Cheng. The motivation for doing so is to improve recognition, as taught by Cheng.

Regarding claim 24, Comploi in view of Cheng disclose all the limitations of claim 21. Cheng discloses wherein the target facial image is a mask of the target facial organ determined based on the target facial organ ([0256] - A mask m on the face may be obtained by Equation (13) as follows: m = abs(img₁ − img₂). [0258] - The foreground part (pixels with value 1 or 255) of mask m may represent the major differences between the input image and the processed image, which may be the covering state of the second object. The corresponding part in the input image may be determined as the covering region, and the corresponding part in the processed image may be determined as the uncovering region.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Comploi in view of Cheng with wherein the target facial image is a mask of the target facial organ determined based on the target facial organ, as taught by Cheng. The motivation for doing so is to improve recognition, as taught by Cheng.

Regarding claim 25, Comploi in view of Cheng disclose all the limitations of claim 24. Cheng discloses wherein the mask of the target facial organ is determined based on a key point of the target facial organ in the first facial image ([0258] - The foreground part (pixels with value 1 or 255) of mask m may represent the major differences between the input image and the processed image, which may be the covering state of the second object. The corresponding part in the input image may be determined as the covering region, and the corresponding part in the processed image may be determined as the uncovering region.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Comploi in view of Cheng with wherein the mask of the target facial organ is determined based on a key point of the target facial organ in the first facial image, as taught by Cheng. The motivation for doing so is to improve recognition, as taught by Cheng.
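For concreteness, the operation Cheng's Equation (13) describes is just a per-pixel absolute difference followed by binarization. Below is a minimal Python/NumPy sketch; the function name, threshold parameter, and toy arrays are illustrative assumptions, not anything taken from either reference.

```python
import numpy as np

def difference_mask(img_1: np.ndarray, img_2: np.ndarray, thresh: int = 0) -> np.ndarray:
    """m = abs(img_1 - img_2); foreground (255) marks where the input and the
    processed image differ, i.e. the candidate covering region."""
    # Work in a signed type so uint8 subtraction cannot wrap around.
    diff = np.abs(img_1.astype(np.int16) - img_2.astype(np.int16))
    if diff.ndim == 3:                       # collapse RGB channels if present
        diff = diff.max(axis=2)
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

# Toy example: two images identical except for a 2x2 patch.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy(); b[:2, :2] = 200
print(difference_mask(a, b))                 # 255 only in the top-left 2x2 block
```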
Regarding claim 26, Comploi in view of Cheng disclose all the limitations of claim 21, including the skin image. Cheng discloses performing a mirror reflection process on the image in the target area ([0285] - A mirror mask G₄ may be generated from mask G₃ by turning G₃ around its symmetric axis.); and performing a stitching process on a reflected image obtained by the mirror reflection process and the image in the target area ([0285] - A mirror mask G₄ may be generated from mask G₃ by turning G₃ around its symmetric axis. A matching may then be performed optionally between G₃ and G₄. For a pair of matched points p and p′ (p is from G₃ and p′ is from G₄), if I(p) − I(p′) ≠ 0, a difference of the pixel value d may be determined from point p and the points around p. If d is within a predetermined range and I(p) − I(p′) < 0, point p may be added into G₃ ("stitching").). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Comploi in view of Cheng with performing a mirror reflection process on the image in the target area, and performing a stitching process on a reflected image obtained by the mirror reflection process and the image in the target area, as taught by Cheng. The motivation for doing so is to improve recognition, as taught by Cheng.

Regarding claim 27, Comploi in view of Cheng disclose all the limitations of claim 21, including the skin image. Cheng discloses performing a replication process on the image in the target area ([0285] - A mirror mask G₄ may be generated from mask G₃ by turning G₃ around its symmetric axis.); and performing a stitching processing on a plurality of replicated images obtained from the replication process ([0285] - A mirror mask G₄ may be generated from mask G₃ by turning G₃ around its symmetric axis. A matching may then be performed optionally between G₃ and G₄. For a pair of matched points p and p′ (p is from G₃ and p′ is from G₄), if I(p) − I(p′) ≠ 0, a difference of the pixel value d may be determined from point p and the points around p. If d is within a predetermined range and I(p) − I(p′) < 0, point p may be added into G₃ ("stitching").). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Comploi in view of Cheng with performing a replication process on the image in the target area, and performing a stitching processing on a plurality of replicated images obtained from the replication process, as taught by Cheng. The motivation for doing so is to improve recognition, as taught by Cheng.
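The mirror-and-stitch operation quoted from Cheng [0285] can likewise be pictured in a few lines. The sketch below is a loose, simplified reading: it assumes the symmetry axis is vertical, and the local-neighborhood test for d stands in for whatever Cheng actually computes; none of the names or thresholds come from the reference.

```python
import numpy as np

def mirror_and_stitch(g3: np.ndarray, intensity: np.ndarray,
                      d_max: float = 50.0) -> np.ndarray:
    """Loose sketch of Cheng [0285]: flip mask G3 about a vertical symmetry
    axis to get mirror mask G4, then add mirrored points back into G3 when
    the I(p) - I(p') < 0 test and a local-difference bound both hold."""
    g4 = np.fliplr(g3)                        # mirror mask G4
    stitched = g3.copy()
    h, w = intensity.shape
    ys, xs = np.nonzero(g4 & ~g3)             # points in G4 but not yet in G3
    for y, x in zip(ys, xs):
        x_m = w - 1 - x                       # matched point p' across the axis
        if intensity[y, x] - intensity[y, x_m] < 0:      # I(p) - I(p') < 0
            nb = intensity[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            d = abs(float(intensity[y, x]) - float(nb.mean()))
            if d <= d_max:                    # d within the predetermined range
                stitched[y, x] = True         # "stitch" p into G3
    return stitched

# Toy usage: a boolean half-mask over a horizontal intensity gradient.
img = np.tile(np.linspace(0, 255, 8), (8, 1))
mask = np.zeros((8, 8), dtype=bool); mask[:, :3] = True
print(mirror_and_stitch(mask, img).astype(int))
```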
Regarding claim 28, see rejection on claim 31.

Regarding claim 29, Comploi in view of Cheng disclose all the limitations of claim 21, including the smeared facial image. Comploi discloses transferring a predetermined animation to the image to obtain a dynamic image ([0024] - Using the wrapped mesh, the 3D landmarks are measured and compared or classified relative to selected or stored avatar or design features, where the stored avatar features are designed to match an avatar design paradigm. In one example, a user nose width spanning across X number of mesh vertices or X inches may be compared against paradigm nose sizes, such as small, medium, and large, where each nose size has a vertices or inches interval (e.g., 0.3 to 0.6 inches is a small nose, 0.65-0.9 is a medium nose, etc.). Once the nose or other 3D feature is classified, the corresponding avatar 3D feature size and shape is selected based on the classification and applied to the 3D mesh. The 3D features are applied to the wrapped mesh, and the user-specific built 3D mesh is combined with the classified shape and color features, and a complete user-specific avatar can be generated.).

Regarding claim 30, Comploi discloses a non-transitory computer-readable storage medium, with a computer program stored thereon, the computer program being executable by a processor to implement a method comprising: obtaining a facial image to be processed ([0005] - a system for generating graphical representations of a person is disclosed. The system includes a camera for capturing images of the person and a computer in electronic communication with the camera. The computer includes a non-transitory memory component for storing a plurality of graphical features and a processor in electronic communication with the memory component. The processor is configured to perform operations.); and performing a smearing process to a target facial organ in the facial image to be processed based on a pre-trained smearing model to obtain a smeared facial image corresponding to the facial image to be processed ([0052] - Fig. 4, step 232 (remove obstructions): the masks 320, 322 generated in operation 210 may be used to identify the location and shape of certain obstructions to allow their removal; a Boolean operation can be used to determine if there is 3D data that falls inside the mask or outside the mask. [0041] - The feature masks 320, 322 are generated based on a perimeter shape as identified during the detection of the user features via trained AI models [0039].), wherein the smearing model is trained based on a first facial image obtained without smearing the target facial organ and a second facial image obtained by smearing the target facial organ in the first facial image ([0029] - the AI model is trained on images to estimate depth information. [0054] - After operation 232, operation 216 proceeds to step 234 and a user 3D mesh is generated; the processor 120 uses the landmark information and depth information detected in the user 3D information to generate a user mesh, e.g., a 3D geometric representation or point cloud corresponding to the user's features. FIGS. 8B and 8C illustrate examples of the initial user 3D mesh 464, 466. [0053] - FIGS. 8B and 8C illustrate user images with obstructions, facial hair and bangs, respectively, as compared to a first 3D shape generated "without" the obstructions removed and a second 3D shape generated with the obstructions removed.) and changes pixels in the area based on a texture image ([0053] - FIGS. 8B and 8C illustrate user images with obstructions, facial hair and bangs, respectively, as compared to a first 3D shape generated without the obstructions removed and a second 3D shape generated with the obstructions removed ("changes pixels based on the comparison to a first 3D shape").), wherein the second facial image is generated based on a predetermined image generating model ([0052] - Fig. 4, step 232 (remove obstructions): the masks 320, 322 generated in operation 210 may be used to identify the location and shape of certain obstructions to allow their removal; a Boolean operation can be used to determine if there is 3D data that falls inside the mask or outside the mask ("predetermined").), the image generating model being trained based on a target texture image and a target facial image ([0057] - The factors for choosing a neural network (or a group of neural networks) may include feature(s) of object 130 (e.g., race, gender, age, facial expression, posture, type of object 136, or a combination thereof), properties of input image 135 (e.g., the quality, color of input image 135), and/or other factors including, for example, clothing, light conditions, or the like, or a combination thereof. For example, a neural network may be specifically trained to process a full-face color image including an expressionless male and to remove a pair of glasses. [0240] - To generate a training image, image database generator 1700 may recognize and locate a certain part of the first object in image 1710. An image of the second object may be obtained or generated. The image of the second object may be merged into a copy of image 1710 at a location determined by one or more recognized parts of the first object. A training image (e.g., image 1721) may then be generated. In some embodiments, more than one image of the second object may be added into image 1710 to generate one training image. These images may include second objects of the same kind (e.g., scars) or of different kinds (e.g., a pair of glasses and eye shadow).), wherein the target texture image is obtained by performing an expanding process on a skin image in a target area in the first facial image ([0052], [0054] - 466 of Fig. 8B ("forehead"): removing hair and eyebrow features shows the skin image, based on the captured user image of Fig. 8A. It "expands" because the features are removed and the skin image is revealed. [0053] - FIGS. 8B and 8C illustrate user images with obstructions, facial hair and bangs, respectively, as compared to a first 3D shape generated without the obstructions removed and a second 3D shape generated with the obstructions removed ("the target texture image is obtained by performing an expanding process"). [reproduced figure omitted]).

Comploi does not disclose, but Cheng discloses, wherein the model recognizes an area of pixels of the target facial organ in the facial image to be processed and changes characteristics of pixels in the area ([0011] - In some embodiments, the locating the covering region may further include: determining, on the first image, a plurality of pixels, wherein the plurality of pixels are distributed on the covering region; locating a rough covering region basing on a sparse location; and refining the rough covering region, wherein the plurality of pixels are determined by an active shape model algorithm. [0003] - then to remove the covering objects from the face.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Comploi with wherein the model recognizes an area of pixels of the target facial organ in the facial image to be processed and changes characteristics of pixels in the area, as taught by Cheng. The motivation for doing so is to improve recognition, as taught by Cheng.
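Cheng's [0240] training-image generation, quoted above, amounts to compositing an image of a second object (e.g., a scar or glasses) onto a copy of a clean image, which yields (clean, covered) training pairs. A minimal sketch under that reading; the alpha-blending details, argument names, and toy data are assumptions.

```python
import numpy as np

def make_training_pair(face, obj, obj_alpha, top, left):
    """Return (clean, covered): `covered` is a copy of `face` with `obj`
    alpha-blended in at (top, left), mimicking Cheng's merged training images."""
    covered = face.astype(np.float32).copy()
    h, w = obj.shape[:2]
    a = obj_alpha[..., None].astype(np.float32)       # (h, w, 1) blend weights in [0, 1]
    patch = covered[top:top + h, left:left + w]
    covered[top:top + h, left:left + w] = a * obj + (1.0 - a) * patch
    return face, covered.astype(face.dtype)

# Toy usage: paste a bright square (a stand-in "scar") onto a mid-gray image.
face = np.full((16, 16, 3), 128, dtype=np.uint8)
obj = np.full((4, 4, 3), 255, dtype=np.uint8)
alpha = np.ones((4, 4), dtype=np.float32)
clean, covered = make_training_pair(face, obj, alpha, top=6, left=6)
print(covered[6, 6], clean[6, 6])                     # [255 255 255] vs [128 128 128]
```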
Regarding claim 31, Comploi in view of Cheng disclose all the limitations of claim 21. Comploi discloses extracting a first organ image corresponding to the target facial organ in the facial image to be processed ([0022] - a hairline shape for the user is extracted from the user images and the shape is classified and matched to a selected paradigm shape, which is then used to build the user's avatar. The user's skin and eye color are extracted and classified within a paradigm scale to be matched to a paradigm selection of skin and eye color.); adjusting a shape and/or a size of the target facial organ in the first organ image to obtain a second organ image ([0022] - a hairline shape for the user is extracted from the user images and the shape is classified and matched to a selected paradigm shape, which is then used to build the user's avatar. The user's skin and eye color are extracted and classified within a paradigm scale to be matched to a paradigm selection of skin and eye color.); and adding the second organ image to the smeared facial image ([0065] - at step 218, the processor 120 combines the selected facial features (e.g., facial hair, hairline, accessories, or the like) with the avatar colors (e.g., hair color, eye color, skin color) onto the avatar 3D facial shape to generate the actual avatar icon 492 (see FIG. 12).).
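The extract/adjust/add sequence recited in claim 31 maps naturally onto three array operations. The following sketch illustrates the claim language only, not either reference's implementation; nearest-neighbor resizing stands in for the shape/size adjustment, and all names are hypothetical.

```python
import numpy as np

def extract_adjust_add(image: np.ndarray, box: tuple, scale: float) -> np.ndarray:
    """Extract the organ region in `box` (top, left, h, w), rescale it by
    `scale` (nearest neighbor), and paste it back centered on the old region."""
    top, left, h, w = box
    organ = image[top:top + h, left:left + w]           # first organ image
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    ys = np.arange(nh) * h // nh                        # nearest-neighbor row map
    xs = np.arange(nw) * w // nw                        # nearest-neighbor col map
    resized = organ[ys][:, xs]                          # second organ image
    out = image.copy()
    cy, cx = top + h // 2, left + w // 2                # keep the same center
    t2, l2 = max(0, cy - nh // 2), max(0, cx - nw // 2)
    out[t2:t2 + nh, l2:l2 + nw] = resized[:out.shape[0] - t2, :out.shape[1] - l2]
    return out

# Toy usage: enlarge a 4x4 bright patch by 1.5x in place.
img = np.zeros((20, 20), dtype=np.uint8); img[8:12, 8:12] = 255
print(extract_adjust_add(img, box=(8, 8, 4, 4), scale=1.5).sum())
```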
Response to Arguments

Claim Rejection Under 35 U.S.C. 103

Applicant asserts: "For example, amended Claim 1 now recites: 'wherein the smearing model recognizes an area of pixels of the target facial organ in the facial image to be processed and changes pixels in the area based on a texture image' (referred to as feature (1) hereinafter) and 'wherein the second facial image is generated based on a predetermined image generating model, the image generating model being trained based on a target texture image and a target facial image' (referred to as feature (2) hereinafter) and 'wherein the target texture image is obtained by performing an expanding process on a skin image in a target area in the first facial image' (referred to as feature (3) hereinafter). Applicant submits that amended Claim 1 is allowable at least for the reasons stated below.

Regarding feature (1): In rejecting Claim 1, the Office Action acknowledges that Comploi fails to disclose the feature 'wherein the smearing model recognizes an area of pixels of the target facial organ in the facial image to be processed and changes characteristics of pixels in the area' (similar to the above amended feature (1)), but refers to Cheng at paragraphs [0011] and [0003] as disclosing this feature. Applicant respectfully disagrees. Cheng is related to an image processing method and system. Cheng at the cited portion discloses 'the locating the covering region may further include: determining, on the first image, a plurality of pixels, wherein the plurality of pixels are distributed on the covering region; locating a rough covering region basing on a sparse location; and refining the rough covering region, wherein the plurality of pixels are determined by an active shape model algorithm. These covering objects may include a pair of glasses, makeups, scars, tattoos, accessories, etc. Thus, before the recognition process is carried out, it may be preferable to remove the covering objects from the face to be identified and generate the covered face part as realistic as possible basing on some features of the image.' Cheng at best discloses 'an active shape model algorithm determines a plurality of pixels distributed on the covering region' and 'remove the covering objects from the face to be identified.' However, Cheng never mentions 'the smearing model' which recognizes an area of pixels of the target facial organ and changes pixels in the area based on a texture image. In Cheng, the covering objects are removed from the face, instead of changing pixels of the covering objects based on a texture image. In fact, Cheng never mentions a 'texture image' at all. Therefore, Cheng also fails to disclose or teach the above feature (1) as recited in amended Claim 1, and thus cannot cure the deficiencies of Comploi.
Regarding feature (2): The Office Action refers to Cheng at the below-cited portion as disclosing the above feature (2): 'Each neural network may be trained for a specific situation. For example, a neural network may be specifically trained to process a full-face color image including an expressionless male and to remove a pair of glasses. To generate a training image, image database generator 1700 may recognize and locate certain part of the first object in image 1710. An image of the second object may be obtained or generated. The image of the second object may be merged into a copy of image 1710 at a location determined by one or more recognized parts of the first object. A training image (e.g., image 1721) may then be generated. In some embodiments, more than one image of the second object may be added into image 1710 to generate one training image. These images may include second objects of the same kind (e.g., scars) or of different kinds (e.g., a pair of glasses and eye shadow).' Applicant respectfully disagrees. Comploi at most discloses training an AI model based on facial images to estimate depth information used to provide a unique avatar. However, in Comploi, the trained AI model is not used to perform a smearing process on the target facial organ. In addition, Comploi says nothing about 'a predetermined image generating model,' 'a target texture image and a target facial image,' or the like. As such, Comploi at least fails to disclose, teach or suggest the above feature (2) of amended Claim 1. Cheng is related to an image processing method and system. In Cheng, training images are generated by merging the image of the second object into a copy of an image at a location determined by one or more recognized parts of the first object. However, Cheng never mentions 'a target texture image and a target facial image,' let alone 'the second facial image is generated based on a predetermined image generating model, the image generating model being trained based on a target texture image and a target facial image.' In Cheng, an image of the second object is obtained or generated and merged into a copy of an image at a determined location, instead of being generated by an image generating model based on a target texture image and a target facial image. Therefore, Cheng also fails to disclose or suggest the above feature (2) as recited in amended Claim 1, and thus cannot cure the deficiencies of Comploi.

Regarding feature (3): In rejecting claim 22, which recites the feature similar to the above feature (3), the Office Action indicates that Comploi discloses 'removing hair and eyebrow features to show skin image based on Fig. 8A the captured user image,' which discloses feature (3). Specifically, the Office Action indicates that removing and revealing the skin image can be considered as expanding the skin image. Applicant respectfully disagrees. In Comploi, the original image is changed by removing the hair and eyebrow. The original image is not expanded, but is changed by removing hair and eyebrow. However, as required by feature (3), an expanding process is performed on a skin image in a target area in the first facial image. As recited in Claim 1, the first facial image is obtained without smearing the target facial organ.
The expanding process is performed on a skin image in the original first facial image, in which the target facial organ is not smeared. In addition, the claimed target texture image is used to train the image generating model as required by feature (2). Comploi fails to disclose training the image generating model, and thus will necessarily fail to disclose or teach the target texture image which is generated by the expanding process on a skin image. Therefore, Comploi fails to disclose, teach or suggest the above feature (3). As discussed with respect to feature (2), Cheng fails to disclose the target texture image as well. Based on the foregoing, the applied references, taken individually or in combination, fail to disclose or render obvious the combination now recited in Claim 1. Moreover, on the basis of features (1) to (3) together with other features in Claim 1, a smearing model is trained based on the first facial image and the second facial image, where the second facial image is generated by an image generating model which is trained based on a target texture image and a target facial image. Using such a trained smearing model, an area of the target facial organ in the facial image may be recognized, and pixels in the area may be changed based on a texture image. In this manner, the recognized region is smeared with a target texture."

Examiner disagrees. During patent examination, the pending claims must be given their broadest reasonable interpretation consistent with the specification. See MPEP § 2111. Further, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). See also MPEP § 2145(VI). Furthermore, it is the combination of Comploi in view of Cheng that discloses the limitations.

Comploi discloses: [0052] - Fig. 4, step 232 (remove obstructions): the masks 320, 322 generated in operation 210 may be used to identify the location and shape of certain obstructions to allow their removal; a Boolean operation can be used to determine if there is 3D data that falls inside the mask or outside the mask. [0041] - The feature masks 320, 322 are generated based on a perimeter shape as identified during the detection of the user features via trained AI models [0039]. [0029] - the AI model is trained on images to estimate depth information. [0054] - After operation 232, operation 216 proceeds to step 234 and a user 3D mesh is generated; the processor 120 uses the landmark information and depth information detected in the user 3D information to generate a user mesh, e.g., a 3D geometric representation or point cloud corresponding to the user's features. FIGS. 8B and 8C illustrate examples of the initial user 3D mesh 464, 466. [0053] - FIGS. 8B and 8C illustrate user images with obstructions, facial hair and bangs, respectively, as compared to a first 3D shape generated without the obstructions removed and a second 3D shape generated with the obstructions removed ("changes pixels based on the comparison to a first 3D shape"). [0052] - Fig. 4, step 232 (remove obstructions): the masks 320, 322 generated in operation 210 may be used to identify the location and shape of certain obstructions to allow their removal; a Boolean operation can be used to determine if there is 3D data that falls inside the mask or outside the mask ("predetermined").
[0057] - The factors for choosing a neural network (or a group of neural networks) may include feature(s) of object 130 (e.g., race, gender, age, facial expression, posture, type of object 136, or a combination thereof), properties of input image 135 (e.g., the quality, color of input image 135), and/or other factors including, for example, clothing, light conditions, or the like, or a combination thereof. For example, a neural network may be specifically trained to process a full-face color image including an expressionless male and to remove a pair of glasses. [0240] - To generate a training image, image database generator 1700 may recognize and locate a certain part of the first object in image 1710. An image of the second object may be obtained or generated. The image of the second object may be merged into a copy of image 1710 at a location determined by one or more recognized parts of the first object. A training image (e.g., image 1721) may then be generated. In some embodiments, more than one image of the second object may be added into image 1710 to generate one training image. These images may include second objects of the same kind (e.g., scars) or of different kinds (e.g., a pair of glasses and eye shadow). [0052], [0054] - 466 of Fig. 8B ("forehead"): removing hair and eyebrow features shows the skin image, based on the captured user image of Fig. 8A. It "expands" because the features are removed and the skin image is revealed. [0053] - FIGS. 8B and 8C illustrate user images with obstructions, facial hair and bangs, respectively, as compared to a first 3D shape generated without the obstructions removed and a second 3D shape generated with the obstructions removed ("the target texture image is obtained by performing an expanding process"). [reproduced figure omitted]

Cheng discloses: [0011] - In some embodiments, the locating the covering region may further include: determining, on the first image, a plurality of pixels, wherein the plurality of pixels are distributed on the covering region; locating a rough covering region basing on a sparse location; and refining the rough covering region, wherein the plurality of pixels are determined by an active shape model algorithm. [0003] - then to remove the covering objects from the face.

Regarding claims 2-9, 22-29, and 31, Applicant asserts that they are not obvious based on their dependency from independent claims 1, 21, and 30, respectively. The examiner respectfully cannot concur with Applicant, for the same reasons noted in the examiner's response to the arguments asserted for claims 1, 21, and 30, respectively.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu, whose telephone number is (571) 270-0724. The examiner can normally be reached Monday - Friday, 9:30am - 6:00pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MING WU/
Primary Examiner, Art Unit 2618

Prosecution Timeline

Dec 08, 2023
Application Filed
Jul 24, 2025
Non-Final Rejection — §103
Oct 28, 2025
Response Filed
Nov 05, 2025
Final Rejection — §103
Jan 07, 2026
Response after Non-Final Action
Feb 09, 2026
Request for Continued Examination
Feb 19, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597109
SYSTEMS AND METHODS FOR GENERATING THREE-DIMENSIONAL MODELS USING CAPTURED VIDEO
2y 5m to grant • Granted Apr 07, 2026
Patent 12579702
METHOD AND SYSTEM FOR ADAPTING A DIFFUSION MODEL
2y 5m to grant • Granted Mar 17, 2026
Patent 12579623
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant • Granted Mar 17, 2026
Patent 12567185
Method and system of creating and displaying a visually distinct rendering of an ultrasound image
2y 5m to grant Granted Mar 03, 2026
Patent 12548202
TEXTURE COORDINATE COMPRESSION USING CHART PARTITION
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+23.3%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate.
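The projection arithmetic above appears to be a simple composition of the career allow rate and the interview lift; a quick check, assuming the lift is added directly to the base rate (the exact combination rule is an assumption, not documented here):

```python
granted, resolved = 282, 370
base = granted / resolved                  # 0.762, displayed as 76%
interview_lift = 0.233                     # +23.3 percentage points
with_interview = min(base + interview_lift, 1.0)
print(f"base {base:.1%}, with interview {with_interview:.1%}")
# -> base 76.2%, with interview 99.5% (the cards round to 76% and 99%)
```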
