Prosecution Insights
Last updated: April 19, 2026
Application No. 18/833,391

METHOD, AND APPARATUS, DEVICE, AND STORAGE MEDIUM FOR GENERATING EFFECT IMAGE

Non-Final OA: §101, §102, §103
Filed: Jul 26, 2024
Examiner: SUO, JOSHUA JUNGWOOK
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 0m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% (2 granted / 2 resolved), +38.0% vs TC avg — grants above average
Interview Lift: -100.0% across resolved cases with interview (minimal data)
Avg Prosecution: 2y 0m (fast prosecutor); 10 applications currently pending
Total Applications: 12 across all art units (career history)

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 2 resolved cases

Office Action

DETAILED ACTION

Allowable Subject Matter

Claims 3, 6-9, 22, and 25-28 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 20 and 29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because the claimed computer-readable medium may include signal embodiments, as stated in paragraph [0100] of the specification. Signals are energy per se and are not patentable; they are not one of the four statutory categories of invention.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mochizuki (US 20110115786 A1).

As per claim 1, Mochizuki teaches the claimed:

1. A method for generating an effect image, comprising:

acquiring a character appearance image to be morphed, and a first morphing point in the character appearance image to be morphed;

(Mochizuki [0010]: “An image processing method or program in accordance with another embodiment of the present invention includes the steps of: acquiring feature points, which are characteristic points on a face in an image presenting a face”. Mochizuki teaches acquiring feature points of a face in an image, which indicates that at some point this method would have had to acquire an image presenting a face (character appearance image). Mochizuki also teaches that from this image presenting a face it acquires feature points, which correspond to the first morphing points.)

acquiring a second morphing point in a character appearance template;

(Mochizuki [0087]: “At this point, generating a texture image from an original image presenting the user's face involves conducting a process for transforming the original image such that individual points in the original image match (i.e., are mapped to) corresponding individual points in the texture image 72. In other words, the texture image 72 is a projection of the surface of the face shape 71 onto a flat plane, and is used as a template texture image for transforming an original image presenting the user's face into a texture image to be applied to that face shape 71.” Mochizuki teaches that the individual points from the original image correspond to the individual points in the texture image; the latter correspond to the second morphing point, as each matches exactly one to one to an individual point in the original image (first morphing point). Mochizuki also teaches the individual points in the texture image (second morphing point) in a template texture image (character appearance template), as the template texture image is the original image rendered as a texture image, where the individual points in the texture image (second morphing point) are.)

determining a target morphing point according to the second morphing point and the first morphing point; and

(Mochizuki [0087]: “Consequently, in the image transform process, the feature points and supplementary feature points in the texture image 72 become the target points when translating the feature points and supplementary feature points set in the original image.” Mochizuki teaches the target points (target morphing point) determined from the feature points and supplementary feature points in the texture image (second morphing point) and the feature points and supplementary feature points set in the original image (first morphing point).)

generating a character appearance morphing effect image based on the target morphing point.

(Mochizuki [0050]: “the image transform processor 33 uses the feature points and supplementary feature points to generate a transform image by conducting the image transform process for transforming a face appearing in an image.” Mochizuki [0089]: “In the image transform process, the image transform processor 33 segments the texture image 72 into a plurality of triangular regions, using the target points as vertices. In addition, the image transform processor 33 segments the face region of the original image into a plurality of triangular regions, using the transform points as vertices. The image transform processor 33 then respectively transforms each corresponding pair of triangular regions.” Mochizuki teaches the image transform processor generating a transform image (character appearance morphing effect image) with the target points, as Mochizuki states in paragraph 88: “the image transform process for transforming an original image while using the texture image 72 as a template”.)

As per claim 19, the reasons and rationale for the rejection of claim 1 are incorporated herein. In particular, only additional features unique to claim 19 that were not present in claim 1 will be explicitly addressed here. As per claim 19, Mochizuki teaches the claimed: (strikethrough taught in claim 1)

19. An electronic device, comprising: one or more processors; and a memory configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method comprising:

(Mochizuki [0054]: “The controller 25 is provided with components such as a central processing unit (CPU), read-only memory (ROM), random access memory (RAM), and flash memory (such as Electronically Erasable and Programmable Read-Only Memory (EEPROM), for example). As a result of the CPU loading into RAM and executing a program stored in the ROM or flash memory, the controller 25 controls the various components of the image processing apparatus 11.”)

As per claim 20, the reasons and rationale for the rejection of claim 1 are incorporated herein. In particular, only additional features unique to claim 20 that were not present in claim 1 will be explicitly addressed here. As per claim 20, Mochizuki teaches the claimed: (strikethrough taught in claim 1)

20. A computer-readable medium, storing a computer program which, when executed by a processor, causes the processor to implement the method comprising:

(Mochizuki [0184]: “The series of processes described above may be executed in hardware or software. In the case of executing the series of process in software, a program constituting such software may be installed from a program recording medium onto a computer built into special-purpose hardware. Alternatively, the program may be installed onto an apparatus capable of executing a variety of functions by installing various programs thereon, such as a general-purpose personal computer, for example.”)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Mochizuki (US 20110115786 A1) in view of Ouimet (US 10678849 B1) and in further view of Kamiyama (US 20200319015 A1).

As per claim 2, Mochizuki alone does not explicitly teach the claimed limitations. However, Mochizuki in combination with Ouimet and Kamiyama teaches the claimed:

2.
The method of claim 1, wherein acquiring a character appearance image to be morphed comprises:

scanning an image in a picture according to a set scanning mode in a process of acquiring a character appearance image; and

(Ouimet [0058]: “Operation 704 involves initiating, via a scanning input of the image capture interface, a scanning mode, wherein the scanning mode comprises capture of scan data from a plurality of input/output modules of the first client device.” Ouimet (63): “In some embodiments, first client device captures scan data using some or all of a plurality of sensor devices, including a camera device, a position sensor, a temperature sensor, a motion sensor, a microphone, and one or more wireless communication modules. Associated scan data may include wireless access point name data, image data, video data, audio data, location data, temperature data, and motion data.”)

performing frame hold on a scanned area until the entire picture is scanned to obtain the character appearance image to be morphed.

(Kamiyama [0069]: “In some embodiments, a subject video, for example, comprising a front view, a 90-, 180-, or even 360-degree view of the subject, may be received. From the subject video, one or more still frames or photos, such as a front view, a side view, and/or a 45-degree view of the subject are extracted from the video and used in the process that follows.” Kamiyama teaches the still frames extracted from the subject video (performing frame hold) to get a view of the subject (character appearance image).)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the scanning mode as taught by Ouimet with the system of Mochizuki in order to determine how an image sensor acquires data, which affects the quality, speed, and suitability of the images. In addition, it would have been obvious to use the still-frame extraction as taught by Kamiyama with the system of Mochizuki in order to enhance data quality and reference stability of the scanned image when analyzing the image.

As per claim 2, this claim is similar in scope to limitations recited in claims 21 and 29, and thus is rejected under the same rationale.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Mochizuki (US 20110115786 A1) in view of Ouimet (US 10678849 B1), in view of Kamiyama (US 20200319015 A1), and in further view of Wang (US 11190689 B1).

As per claim 4, Mochizuki, Ouimet, and Kamiyama alone do not explicitly teach the claimed limitations. However, Mochizuki, Ouimet, and Kamiyama in combination with Wang teach the claimed:

4. The method of claim 2, wherein the set scanning mode comprises scanning with one or more of scanning lines; and

(Wang (18): “In some implementations, the first camera provides image data that captures image scanlines of an image frame progressively and the first canonical reference space for the first camera is one in which image data has been corrected to remove distortion due to progressive capture for the image scanlines.” Wang teaches image data that captures image scanlines, which indicates that the scanning had to have been done with scanning lines.)

scanning an image in a picture in a set scanning mode comprises: controlling, in response to determining the scanning with one scanning line, the one scanning line to scan the image in the picture in a set direction; and

(Wang (33): “For example, the scan pattern data 234 may indicate a direction of scanning (e.g., scanlines read from top to bottom), whether scanlines are read individually or in groups, and so on.”)

determining, in response to determining the scanning with a plurality of scanning lines, a sub-area scanned by each scanning line to control the plurality of scanning lines to scan the image in the picture in a set direction and within corresponding sub-areas.

(Wang (33): “For example, the scan pattern data 234 may indicate a direction of scanning (e.g., scanlines read from top to bottom), whether scanlines are read individually or in groups, and so on.” Wang (43): “The first transformation 262 can separately describe the relationships of different subsets or regions of a single image frame. For example, different portions or components of the first transformation 262 may describe how different scanlines of a frame are mapped to the real-world scene.” Wang teaches a transformation that describes the different subsets and regions of an image frame, and since the transformation can also describe how the scanlines of a frame are mapped to the real-world scene, this correlates the scanlines of the frame with the different subsets of the frame, thus teaching a plurality of scanlines scanning sub-areas of the image.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the scanning lines as taught by Wang with the system of Mochizuki as modified by Ouimet and Kamiyama in order to achieve higher resolution and reduce optical distortions in the scanned image.

As per claim 4, this claim is similar in scope to limitations recited in claim 23, and thus is rejected under the same rationale.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Mochizuki (US 20110115786 A1) in view of Evertt (US 9013489 B2).

As per claim 5, Mochizuki alone does not explicitly teach the claimed limitations. However, Mochizuki in combination with Evertt teaches the claimed:

5. The method of claim 1, wherein determining a target morphing point according to the second morphing point and the first morphing point comprises:

generating a virtual standard character appearance according to character appearance information of the character appearance image to be morphed and the character appearance template;

(Evertt (25): “An avatar generation component generates a 3D avatar resembling the player by combining the captured facial appearance, hair appearance, and clothing appearance with predetermined avatar features.” Evertt teaches avatar generation (virtual standard character appearance) based on the captured facial, hair, and clothing appearance (character appearance image) and the predetermined avatar features (character appearance template).)

determining a third morphing point in the virtual standard character appearance; and

(Evertt (52): “As discussed above, a visible spectrum image, depth image, and skeletal data can be used to capture a player's facial appearance. The captured facial appearance can then be incorporated into a 3D avatar.” Evertt (53): “From an identified face, a variety of facial features can be identified. In some embodiments, face alignment points corresponding to the identified facial features are determined. The identified face alignment points are matched to destination points on a template texture map. The portion of the received visible spectrum image that includes the player's face is warped into a face texture map such that the face texture map includes the face alignment points corresponding to the identified facial features of the player's face mapped to the destination points of the template texture map.” Evertt (55): “Face texture map 1300 can be morphed into a 3D head model based representing player 802's head that is included as part of a 3D avatar reflecting player 802's current appearance. FIG. 14 illustrates avatar 1400 having a face and head 1402 resembling player 802.” Evertt teaches the face alignment points matched to the destination points on a template texture map, where the texture map can then be morphed onto the 3D model that is included as part of the avatar. Therefore, Evertt teaches the face alignment point (third morphing point) in the avatar (virtual standard character appearance) when morphed onto the 3D model.)

determining the target morphing point according to the first morphing point, the second morphing point and the third morphing point.

(Evertt (52): “As discussed above, a visible spectrum image, depth image, and skeletal data can be used to capture a player's facial appearance. The captured facial appearance can then be incorporated into a 3D avatar.” Evertt (53): “From an identified face, a variety of facial features can be identified. In some embodiments, face alignment points corresponding to the identified facial features are determined. The identified face alignment points are matched to destination points on a template texture map. The portion of the received visible spectrum image that includes the player's face is warped into a face texture map such that the face texture map includes the face alignment points corresponding to the identified facial features of the player's face mapped to the destination points of the template texture map.” Evertt (55): “Face texture map 1300 can be morphed into a 3D head model based representing player 802's head that is included as part of a 3D avatar reflecting player 802's current appearance. FIG. 14 illustrates avatar 1400 having a face and head 1402 resembling player 802.” Mochizuki [0087]: “Consequently, in the image transform process, the feature points and supplementary feature points in the texture image 72 become the target points when translating the feature points and supplementary feature points set in the original image.”

Similar to the claim limitation above, Evertt teaches face alignment points that correspond to the destination points on a template texture map, where the texture map can then be morphed onto the 3D model that is included as part of the avatar (virtual standard character). In addition, as Mochizuki states, the feature points in the texture image become the target points. Therefore, given that the face alignment points on the texture map morphed onto the 3D model included in the avatar (third morphing point) are correlated with the points on the texture map/image, it can be said that the target point is determined from the feature points of the original image (first morphing point), the feature points of the texture image (second morphing point), and the face alignment points on the texture map morphed onto the 3D model included in the avatar (third morphing point).)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the avatar generation as taught by Evertt with the system of Mochizuki in order to create a replica of a subject in an image, allowing manipulation of features and customization that are impossible to perform on static images.

As per claim 5, this claim is similar in scope to limitations recited in claim 24, and thus is rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA SUO, whose telephone number is (571) 272-8387. The examiner can normally be reached Mon-Fri 8am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA SUO/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616
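The image transform quoted from Mochizuki [0089] segments both images into triangular regions over corresponding point sets and warps each pair of triangles. The core step of such a piecewise warp is solving for a per-triangle affine transform. The sketch below is illustrative only, not code from any cited reference; the function names and the interpolation rule for the target points are assumptions, with t=1.0 corresponding to the quoted behavior in which the texture-image feature points "become the target points".

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Solve for the 2x3 affine matrix mapping one triangle's vertices onto another's."""
    src = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])  # 3x3: rows [x, y, 1]
    dst = np.asarray(dst_tri, float)                                # 3x2: rows [x', y']
    # Solve src @ M = dst for M (3x2); the 2x3 affine matrix is its transpose.
    return np.linalg.solve(src, dst).T

def warp_point(affine, point):
    """Apply a 2x3 affine transform to a single 2D point."""
    x, y = point
    return affine @ np.array([x, y, 1.0])

def target_points(first_pts, second_pts, t=1.0):
    """Hypothetical target-point rule: move the image's morphing points (first)
    toward the template's morphing points (second) by a factor t."""
    first = np.asarray(first_pts, float)
    second = np.asarray(second_pts, float)
    return (1.0 - t) * first + t * second
```

In a full morph, each triangle of the original image would get its own `affine_from_triangles` result, and every pixel inside that triangle would be moved with `warp_point`.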

Prosecution Timeline

Jul 26, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597191
FACE IMAGE GENERATION METHOD AND DEVICE FOR GENERATING FULLY-CONTROLLABLE TALKING FACE
Granted Apr 07, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 0% (-100.0%)
Median Time to Grant: 2y 0m
PTA Risk: Low
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
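The headline figures above can be reproduced from the raw counts. The sketch below shows the assumed arithmetic, which is an inference from the numbers in this report rather than the tool's documented method: allow rate is grants divided by resolved cases, and each lift is the examiner's rate minus the Tech Center average in percentage points (a TC average of 62% is back-computed here from 100% and +38.0%).

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases ending in a grant."""
    return granted / resolved

def lift_vs_avg(rate: float, tc_avg: float) -> float:
    """Percentage-point lift of the examiner's rate over the Tech Center average."""
    return rate - tc_avg

# Figures from this report: 2 grants out of 2 resolved cases.
rate = allow_rate(2, 2)         # 1.0, reported as 100%
lift = lift_vs_avg(rate, 0.62)  # ~0.38, reported as +38.0% vs TC avg
```

With only 2 resolved cases, these point estimates carry very wide uncertainty, which is why the interview-lift figure of -100.0% should be read as a minimal-sample artifact rather than a stable signal.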
