Prosecution Insights
Last updated: April 19, 2026
Application No. 18/803,097

AI Face Replacement Device

Final Rejection (§103)
Filed: Aug 13, 2024
Examiner: MCCULLEY, RYAN D
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Morphusai Co. Ltd.
OA Round: 4 (Final)

Grant Probability: 70% (Favorable)
OA Rounds: 5-6
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (344 granted / 493 resolved; +7.8% vs TC avg, above average)
Interview Lift: +29.7% (strong; allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 6m average prosecution; 31 applications currently pending
Career History: 524 total applications, across all art units
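
As a quick sanity check, the headline figures above reconcile, assuming (my reading, not stated by the tool) that the "vs TC avg" delta is a percentage-point difference:

```python
# Sanity check of the examiner stats above.
# Assumption: "vs TC avg" is a percentage-point difference.
granted, resolved = 344, 493

career_allow_rate = granted / resolved            # 0.6978... -> shown as 70%
implied_tc_average = career_allow_rate - 0.078    # implied Tech Center average

print(f"Career allow rate: {career_allow_rate:.1%}")    # 69.8%
print(f"Implied TC average: {implied_tc_average:.1%}")  # ~62.0%
```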

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 51.6% (+11.6% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 493 resolved cases
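
The per-statute deltas are internally consistent: back-computing the Tech Center average from each displayed rate/delta pair (again assuming percentage-point deltas) yields the same 40% line for every statute, consistent with a single black-line estimate on the chart:

```python
# Back-compute the "black line" TC average estimate from each statute's
# displayed rate and delta (assumed to be percentage points).
stats = {"§101": (7.2, -32.8), "§103": (51.6, 11.6),
         "§102": (15.9, -24.1), "§112": (15.9, -24.1)}

for statute, (rate, delta) in stats.items():
    print(f"{statute}: {rate}% -> implied TC average {rate - delta:.1f}%")
# Each statute implies 40.0%, i.e., one shared TC average estimate.
```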

Office Action

Final Rejection under 35 U.S.C. § 103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to Applicant’s amendment/response filed on 09 March 2026, which has been entered and made of record.

Response to Arguments

Applicant’s arguments have been fully considered but they are moot in view of the new grounds of rejection presented in this Office Action. Note that the new references presented in this Office Action are considered the best references based on the currently-amended claims, but the previous references may be used in future Office Actions depending on future claim amendments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kuta et al. (US 11,436,781; hereinafter “Kuta”) in view of Berlin et al. (US 2023/0049729; hereinafter “Berlin”).

Regarding claim 1, Kuta discloses an artificial intelligence (AI) (“a first machine-learning (ML) model,” col. 1, lines 49-50) face replacement device (“face swapping,” col. 1, line 35), comprising: a processor; a storage device coupled to said processor; a data collection module, stored in said storage device and accessible through said processor (see Fig. 1), configured to collect facial photos and videos for training a model of a virtual idol (“use a large dataset (DS) of video clips, depicting an object (e.g., a human head or face) to train the neural networks,” col. 14, lines 1-5); a virtual idol face swapping module, stored in said storage device and accessible through said processor (see Fig. 2), wherein a virtual idol face swapping model is trained by said virtual idol face swapping module using said collected facial photos and videos as training material (“use a large dataset (DS) of video clips, depicting an object (e.g., a human head or face) to train the neural networks,” col. 14, lines 1-5), said virtual idol face swapping module learning and simulating different facial features and expressions (“ML-based feature extraction model may be trained to extract at least one feature pertaining to the face, such as a pose feature, an expression feature, a lighting feature and/or a point of view feature,” col. 7, lines 20-25) through multiple iterations (“The training process may run the following procedure iteratively,” col. 14, lines 5-10); wherein said virtual idol face swapping module is configured to perform face replacement between a source face image and a target face image (“a source image depicting a puppet object and/or one or more driving images depicting a driver object,” col. 7, lines 5-10), wherein said source face image includes a virtual idol face image, wherein said target face image is replaced by said source face image (“Reconstruction model may be adapted to generate a new image, depicting the puppet face of source image, but having an identity-invariant feature (e.g., a pose) of the driver face, depicted in driver image(s),” col. 9, lines 60-65), wherein said virtual idol face swapping module includes an optical mask segmentation unit to extract a face image for identifying and separating facial areas (“segment source image, to produce therefrom a semantic segmentation map,” col. 16, lines 55-60; “segmentation value may identify a pixel of image as pertaining to a forehead of a depicted person,” col. 17, lines 1-5); in a virtual idol face-swapping processing stage, a virtual idol photo and a target video are imported into said virtual idol face swapping module that has been trained (“a first image may be received, the first image may depict a first, ‘puppet’ object … an input video, depicting a second, ‘driver’ object may be sampled, to obtain at least one second image,” col. 20, lines 25-30), and an input virtual idol image is processed for face recognition, feature extraction (“one or more 3D features or characteristics of an object such as a face depicted in an image, and may be used (e.g., by a face recognition (FR) ML-based model, as known in the art),” col. 24, lines 45-50) and image synthesis to achieve virtual idol face replacement (“generate a new image, depicting the puppet face of source image, but having an identity-invariant feature (e.g., a pose) of the driver face, depicted in driver image(s),” col. 9, lines 60-65); a light and shadow capture and application module, stored in said storage device and accessible through said processor, configured to capture light and shadow of said target face image (“extract an identity-invariant feature such as a lighting feature, representing a lighting of the object,” col. 8, lines 30-35) and to paste back said light and shadow on a replaced face image that has completed said face replacement (“an output video, depicting movement of the first face, that is substantially identical to movement of the second face and having the same expression and/or lighting as the second face,” col. 9, lines 35-40); an adjustment module, stored in said storage device and accessible through said processor, configured to provide parameter adjustments and corrections of said replaced face image (“adjust the output image,” col. 32, lines 5-6); and an output module, stored in said storage device and accessible through said processor, configured to output processed image of said replaced face image in a required format (“The one or more output images may be further appended to produce an output video depicting animation of the puppet object based on the driver image,” col. 26, lines 55-60); an image input module, stored in said storage device, configured to receive video data of a target subject; wherein said processor executes following steps: inputting a target image data including said source face image and a target face video by said image input module (“receive a first image depicting a face, and a video data element depicting movement of a second face,” col. 25, lines 30-35); performing processing on said target image data by said face swapping module that has been trained for automated face recognition, feature extraction (“one or more 3D features or characteristics of an object such as a face depicted in an image, and may be used (e.g., by a face recognition (FR) ML-based model, as known in the art),” col. 24, lines 45-50), and image synthesis to achieve face swapping between said source face image and said target face video (“generate a new image, depicting the puppet face of source image, but having an identity-invariant feature (e.g., a pose) of the driver face, depicted in driver image(s),” col. 9, lines 60-65) with faces at said different angles and outputting a replaced face video with said face replacement at different angles (“output video may depict a first face … moving according to the poses of a second face,” col. 12, lines 40-42; “a pose of a face may include … an angle (e.g., a yaw, a pitch and/or a roll) of the face in the image,” col. 8, lines 55-60).

Kuta does not expressly recite, in a disclosed embodiment, collecting facial photos and video from an Internet. In the same art of training a face-swapping module, Berlin teaches collecting facial photos and video from an Internet (“some or all of images in the dataset of images may be obtained from one or more videos (e.g., from video frames from thousands, hundreds of thousands, or millions of videos, which may, for example, be posted on open source video sharing sites accessible over a network, such as the Internet),” para. 195). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Berlin to Kuta. The motivation would have been to provide “a much more realistic and accurate reconstructed image” (Berlin, para. 128).

Regarding claim 9, the combination of Kuta and Berlin renders obvious restore eyes and teeth of said replaced face image (“an expression may include appearance of teeth in an image, a location of pupils in an image,” Kuta, col. 9, lines 15-20).

Regarding claim 10, the combination of Kuta and Berlin renders obvious wherein said processor includes a multi-core central processing unit (CPU), a graphics processor unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), or their combinations (“A processor, e.g. CPUs or graphics processing units (GPUs), or a dedicated hardware device may perform the relevant calculations,” Kuta, col. 6, lines 50-55).

Claims 3, 5, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kuta and Berlin, and further in view of Wang et al. (“FT-TDR: Frequency-guided Transformer and Top-Down Refinement Network for Blind Face Inpainting”; hereinafter “Wang”).

Regarding claim 3, the combination of Kuta and Berlin renders obvious wherein said face swapping module at least includes a convolutional neural network (CNN) (“a convolutional NN,” Kuta, col. 10, lines 5-10), a generative adversarial network (GAN), said GAN is responsible to generate facial image from said inputted image data (“generative adversarial network,” Berlin, para. 61; see claim 1 for motivation to combine), and a feature extraction unit, wherein said CNN is responsible to extract important facial features from said inputted image data (“feature model … may be a convolutional NN,” Kuta, col. 10, lines 5-10). The combination of Kuta and Berlin does not disclose a blind face restoration unit. In the same art of facial image processing, Wang teaches a blind face restoration unit (“Blind face inpainting refers to the task of reconstructing visual contents without explicitly indicating the corrupted regions in a face image,” abstract). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Wang to the combination of Kuta and Berlin. The motivation would have been to improve image quality.

Regarding claim 5, the combination of Kuta, Berlin, and Wang renders obvious wherein said blind face restoration unit is used to inpaint said generated facial image and further optimize said face replacement (“Blind face inpainting refers to the task of reconstructing visual contents without explicitly indicating the corrupted regions in a face image,” Wang, abstract; see claim 3 for motivation to combine).

Regarding claim 15, the combination of Kuta, Berlin, and Wang renders obvious wherein training on said face swapping module by utilizing publicly available photos or videos as a training material (“some or all of images in the dataset of images may be obtained from … open source video sharing sites accessible over a network, such as the Internet,” Berlin, para. 195; see claim 1 for motivation to combine), through multiple iterations of training process (“The training process may run the following procedure iteratively,” Kuta, col. 14, lines 5-10), to enable said face swapping module to learn and simulate various facial features and expressions (“depiction of a face … such as a pose feature, an expression feature,” Kuta, col. 7, lines 15-20).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kuta and Berlin, and further in view of Ume et al. (US 2024/0212249; hereinafter “Ume”).

Regarding claim 11, the combination of Kuta and Berlin renders obvious performing a training on said face swapping module by utilizing publicly available photos or videos as a training material (“some or all of images in the dataset of images may be obtained from … open source video sharing sites accessible over a network, such as the Internet,” Berlin, para. 195; see claim 1 for motivation to combine), through multiple iterations of training process (“The training process may run the following procedure iteratively,” Kuta, col. 14, lines 5-10), to enable said face swapping module to learn and simulate various facial features and expressions (“depiction of a face … such as a pose feature, an expression feature,” Kuta, col. 7, lines 15-20); making detailed adjustments to said replaced face image outputted from said face swapping module by said adjustment module (“adjust the output image,” col. 32, lines 5-6). The combination of Kuta and Berlin does not disclose adjusting with post-production; and outputting a final restored or optimized image or video clips by said output module after performing said post-production. In the same art of training a face-swapping module, Ume teaches adjusting an image with post-production; and outputting a final restored or optimized image or video clips by said output module after performing said post-production (“postproduction video editing may be performed to enhance the altered video content,” Ume, para. 96). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Ume to the combination of Kuta and Berlin. The motivation would have been to “generate hyperreal synthetic faces in altered video content” (Ume, para. 89).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kuta, Berlin, and Ume, and further in view of Kitagawara et al. (US 2005/0207644; hereinafter “Kitagawara”).

Regarding claim 12, the combination of Kuta, Berlin, and Ume renders obvious adjusting light (“a lighting of the object,” Kuta, col. 8, lines 30-35), and optimizing color balance on said outputted results of said face swapping module (“postproduction video editing may be performed to enhance the altered video content in terms of color grading, adding highlights,” Ume, para. 60; see claim 11 for motivation to combine). The combination of Kuta, Berlin, and Ume does not disclose correcting edges, adjusting shadow. In the same art of image enhancement, Kitagawara teaches correcting edges, adjusting shadow (“image quality enhancement processing such as … a highlight/shadow correction … a sharpness enhancement function that determines edge intensity from edge level of the entire image and corrects the image to a sharper one,” para. 47). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Kitagawara to the combination of Kuta, Berlin, and Ume. The motivation would have been to improve image quality.

Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kuta, Berlin, and Ume, and further in view of Wang.

Regarding claim 13, the combination of Kuta, Berlin, and Ume renders obvious capturing light and shadow of said target face image and pasting said light and shadow back on said replaced face image by said light and shadow capture and application module (“an output video … having the same expression and/or lighting as the second face,” Kuta, col. 9, lines 35-40; “the shading, lighting, and/or other aspects of the original footage of the subject may be replicated for the animated 3D model to make the animated 3D model look similar to the face of the subject in the original footage,” Ume, para. 33; see claim 11 for motivation to combine). The combination of Kuta, Berlin, and Ume does not disclose restoring and inpainting face image generated by said face swapping module that has been trained by using face enhance and blind face restoration technologies. In the same art of facial image processing, Wang teaches restoring and inpainting face image … that has been trained by using face enhance and blind face restoration technologies (“Blind face inpainting refers to the task of reconstructing visual contents without explicitly indicating the corrupted regions in a face image,” abstract). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Wang to the face swapping of the combination of Kuta, Berlin, and Ume. The motivation would have been to improve image quality.

Regarding claim 14, the combination of Kuta, Berlin, Ume, and Wang renders obvious restoring structures comprising teeth, eyes, skin, skeleton, or the combinations thereof (“an expression may include appearance of teeth in an image, a location of pupils in an image,” Kuta, col. 9, lines 15-20).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).

Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571) 270-3754. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN MCCULLEY/
Primary Examiner, Art Unit 2611
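
For readers skimming the claim-1 mapping above, here is a minimal, hypothetical sketch of the module flow the claim recites (data collection, face swapping with segmentation, light-and-shadow transfer, adjustment, output). Every name below is an illustrative assumption; neither the application nor the cited references disclose this code:

```python
# Hypothetical sketch of the claim-1 pipeline; all names are illustrative
# stand-ins, not the applicant's or the cited references' implementation.
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: bytes  # placeholder for image data

def collect_training_media() -> list[str]:
    # data collection module: facial photos/videos for training the idol model
    return ["idol_photo_001.jpg", "idol_clip_001.mp4"]

def swap_face(source: Frame, target: Frame) -> Frame:
    # face swapping module: recognition, feature extraction, optical-mask
    # segmentation of facial areas, then synthesis of the replaced face
    return Frame(pixels=target.pixels)  # stub: trained-model output goes here

def transfer_light_and_shadow(target: Frame, swapped: Frame) -> Frame:
    # light and shadow module: capture lighting from the target frame and
    # paste it back onto the replaced face
    return swapped

def adjust(frame: Frame, sharpness: float = 1.0) -> Frame:
    return frame  # adjustment module: parameter corrections on the result

def export(frame: Frame, fmt: str = "mp4") -> None:
    print(f"writing replaced-face output as .{fmt}")  # output module

training_media = collect_training_media()  # training stage (offline)
idol = Frame(pixels=b"\x00")               # source: virtual idol face image
target = Frame(pixels=b"\x01")             # target: frame from the input video
export(adjust(transfer_light_and_shadow(target, swap_face(idol, target))))
```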

Prosecution Timeline

Aug 13, 2024: Application Filed
May 06, 2025: Non-Final Rejection (§103)
Jul 30, 2025: Response Filed
Aug 06, 2025: Final Rejection (§103)
Nov 07, 2025: Request for Continued Examination
Nov 15, 2025: Response after Non-Final Action
Dec 05, 2025: Non-Final Rejection (§103)
Mar 09, 2026: Response Filed
Mar 20, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602859: INFORMATION PROCESSING SYSTEM, RAY TRACE METHOD, AND PROGRAM FOR RADIO WAVE PROPAGATION SIMULATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12586290: TEMPORALLY COHERENT VOLUMETRIC VIDEO (granted Mar 24, 2026; 2y 5m to grant)
Patent 12555335: SYSTEMS AND METHODS FOR ENHANCING AND DEVELOPING ACCIDENT SCENE VISUALIZATIONS (granted Feb 17, 2026; 2y 5m to grant)
Patent 12548241: HIGH-FIDELITY THREE-DIMENSIONAL ASSET ENCODING (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541904: ELECTRONIC DEVICE, METHOD FOR PROMPTING FUNCTION SETTING OF ELECTRONIC DEVICE, AND METHOD FOR PLAYING PROMPT FILE (granted Feb 03, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 70%
With Interview: 99% (+29.7%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 493 resolved cases by this examiner. Grant probability derived from career allow rate.
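
A minimal sketch of how these projections appear to be derived from the examiner's career data, assuming (as the note above suggests) that the grant probability equals the career allow rate and that the interview lift adds percentage points, capped at 100%:

```python
# Assumptions: grant probability = career allow rate; interview lift is
# additive in percentage points, capped at 100%. Rounding gives 70% / 99%.
base = 344 / 493                         # 0.6978... -> shown as 70%
with_interview = min(base + 0.297, 1.0)  # 0.9948... -> shown as 99%
print(f"Base: {base:.0%}  With interview: {with_interview:.0%}")
```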
