Prosecution Insights
Last updated: April 19, 2026
Application No. 18/468,796

SYSTEMS AND METHODS FOR MULTI-TIERED GENERATION OF A FACE CHART

Non-Final OA: §101, §103, §112

Filed: Sep 18, 2023
Examiner: KOPPOLU, VAISALI RAO
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Perfect Mobile Corp.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 79%, above average (89 granted / 113 resolved; +16.8% vs TC avg)
Interview Lift: strong, +26.8% on resolved cases with interview
Avg Prosecution: 3y
Currently Pending: 22 applications
Career History: 135 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)

TC average figures are estimates. Based on career data from 113 resolved cases.
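The deltas above are internally consistent if each is read as the examiner's allowance rate minus a Tech Center average estimate, and the career rate matches the raw counts. A minimal sketch that back-computes those figures (the recovered 40.0% TC average is an inference from the numbers shown, not a stated value):

```python
# Reproduce the statute-specific deltas, assuming
# "vs TC avg" = examiner's rate minus the Tech Center average estimate.
examiner_rate = {"101": 10.4, "103": 49.2, "102": 13.3, "112": 25.5}  # percent
delta_vs_tc = {"101": -29.6, "103": +9.2, "102": -26.7, "112": -14.5}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # back out the implied TC average
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}% "
          f"(delta {delta_vs_tc[statute]:+.1f}%)")
# Each statute recovers the same implied TC average of 40.0%.

# Career allow rate from the counts shown above:
granted, resolved = 89, 113
print(f"Career allow rate: {granted / resolved:.1%}")  # 78.8%, displayed as 79%
```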

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1–20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites “matching facial features” in the fifth limitation. There is insufficient antecedent basis for this limitation in the claim: “facial features” is previously defined in the fourth limitation of claim 1, and it is unclear whether these facial features are different from the previously defined facial features. Claims 2–9 are rejected for being dependent on rejected claim 1.

Claim 10 recites “matching facial features” in the seventh limitation. There is insufficient antecedent basis for this limitation in the claim: “facial features” is previously defined in the sixth limitation of claim 10, and it is unclear whether these facial features are different from the previously defined facial features. Claims 11–15 are rejected for being dependent on rejected claim 10.

Claim 16 recites “matching facial features” in the fifth limitation. There is insufficient antecedent basis for this limitation in the claim: “facial features” is previously defined in the fourth limitation of claim 16, and it is unclear whether these facial features are different from the previously defined facial features. Claims 17–20 are rejected for being dependent on rejected claim 16.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including observation, evaluation, judgment, and opinion, as well as organizing human activity and mathematical concepts and calculations). The claims recite a method, system, and computer-readable storage medium configured to identify one or more regions in the image depicting the skin of the user, predict a skin tone, define facial features and insert predefined facial features into the face mask, generate a hair mask, extract hair regions, and insert the hair regions on top of the skin mask to generate a face chart.
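In concrete terms, the pipeline the rejection summarizes above looks roughly like the following numpy sketch. All shapes, coordinates, the box-shaped masks, the mean-pixel tone estimate, and the 5×5 pattern stamp are invented placeholders for illustration, not the application's or the examiner's actual method:

```python
import numpy as np

H, W = 64, 64
image = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)  # stand-in photo

# 1. Skin mask: regions identified as skin (here: a dummy box).
skin_mask = np.zeros((H, W), dtype=bool)
skin_mask[16:56, 12:52] = True

# 2. Predict a skin tone and populate the skin mask with it.
predicted_tone = image[skin_mask].mean(axis=0).astype(np.uint8)
chart = np.full((H, W, 3), 255, dtype=np.uint8)
chart[skin_mask] = predicted_tone

# 3. Feature points for facial features (hypothetical landmark coordinates).
feature_points = {"left_eye": (28, 24), "right_eye": (28, 40), "mouth": (46, 32)}

# 4./5. Insert pre-defined facial patterns at the feature points
#       (a real system might blend them in rather than stamping them).
pattern = np.zeros((5, 5, 3), dtype=np.uint8)  # stand-in 5x5 "feature" pattern
for (r, c) in feature_points.values():
    chart[r - 2:r + 3, c - 2:c + 3] = pattern

# 6. Hair mask: regions identified as hair (here: the band above the face box).
hair_mask = np.zeros((H, W), dtype=bool)
hair_mask[4:16, 8:56] = True

# 7. Extract the hair region from the photo and overlay it on the skin mask.
chart[hair_mask] = image[hair_mask]

print(chart.shape, chart.dtype)  # (64, 64, 3) uint8 face chart
```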
This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that would tie them to a particular technological problem to be solved. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception, because the steps of the claimed invention can be done mentally, and no additional features in the claims would preclude them from being performed as such, except for generic computer elements recited at a high level of generality (i.e., processor, memory).

According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter); or
STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using the two-step inquiry, claims 1–20 are directed to an abstract idea, as shown below.

STEP 1: Do the claims fall within one of the statutory categories? YES. Claims 1, 10, and 16 are directed to a method (i.e., a process), a system, and a computer-readable medium.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES, the claims are directed toward a mental process (i.e., an abstract idea). With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

- Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
- Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
- Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

The method of claim 1 (and the system of claim 10 and the CRM of claim 16) comprises a mental process that can be practicably performed in the human mind (or on generic computers or components configured to perform the method) and is, therefore, an abstract idea.
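Read as a decision procedure, the steps enumerated above chain strictly. Here is a schematic Python sketch in which the booleans stand in for the legal determinations (they are not computable facts), applied with the conclusions this action reaches for claims 1, 10, and 16:

```python
# Schematic encoding of the two-step eligibility inquiry as listed above.
def eligible(statutory_category: bool,
             recites_judicial_exception: bool,
             integrates_practical_application: bool,
             significantly_more: bool) -> bool:
    if not statutory_category:                 # STEP 1
        return False
    if not recites_judicial_exception:         # STEP 2A, PRONG 1
        return True
    if integrates_practical_application:       # STEP 2A, PRONG 2
        return True
    return significantly_more                  # STEP 2B

# The action's conclusions for claims 1, 10, and 16:
print(eligible(statutory_category=True,            # method / system / CRM
               recites_judicial_exception=True,    # mental process
               integrates_practical_application=False,
               significantly_more=False))          # -> False (ineligible)
```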
Regarding claims 1, 10, and 16: the method recites:

- identifying one or more regions in the image depicting skin of the user and generating a skin mask (a mental process, including observation and evaluation, that can be done mentally in the human mind);
- predicting a skin tone of the user’s face depicted in the image and populating the skin mask according to the predicted skin tone (a mental process, including observation and evaluation, that can be done mentally in the human mind or using a generic computer program);
- defining feature points corresponding to facial features on the user’s face depicted in the image (a mental process, including observation and evaluation, that can be done mentally in the human mind);
- extracting pre-defined facial patterns matching facial features depicted in the image (a mental process, including observation and evaluation, that can be done mentally in the human mind);
- inserting the extracted pre-defined facial patterns into the skin mask based on the feature points (a mental process, including observation and evaluation, that can be done mentally in the human mind, or that merely uses a computer as a tool to perform an abstract idea);
- generating a hair mask identifying one or more regions in the image depicting hair of the user (a mental process, including observation and evaluation, that can be done mentally in the human mind, or that merely uses a computer as a tool to perform an abstract idea); and
- extracting a hair region depicted in the image of the user based on the hair mask and inserting the hair region on top of the skin mask to generate a face chart (a mental process, including observation and evaluation, that can be done mentally in the human mind, or that merely uses a computer as a tool to perform an abstract idea).

These limitations, as drafted, describe a simple process that, under the broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, “methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all.” 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (“‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’” (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). As such, a person could look at an image, identify the regions depicting the user’s skin and hair, extract feature points corresponding to the facial features, generate a skin mask and a hair mask, and generate a face chart. The mere nominal recitation that the various steps are executed by or in a device (e.g., a processing unit) does not take the limitations out of the mental-process grouping. Thus, the claims recite a mental process.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application. With regard to STEP 2A (PRONG 2), the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

- an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
- an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
- an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
- an additional element effects a transformation or reduction of a particular article to a different state or thing; and
- an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

While the guidelines state that these exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

- an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
- an additional element adds insignificant extra-solution activity to the judicial exception; and
- an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Claims 1, 10, and 16 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Claim 16 recites the further limitation of obtaining an image depicting a user’s face (insignificant pre-solution extra activity of gathering data). Claim 10 recites the further limitations of a memory storing instructions and a processor coupled to the memory and configured by the instructions (generic computers or components configured to perform the method). Claim 16 recites the further limitation of a non-transitory computer-readable storage medium storing instructions to be implemented by a computing device having a processor, wherein the instructions, when executed by the processor, cause the computing device to perform the method (generic computers or components configured to perform the method). These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution actions, a form of insignificant extra-solution activity.
Further, the claims are claimed generically and the elements operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO, the claims do not recite additional elements that amount to significantly more than the judicial exception. With regard to STEP 2B, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:

- adds a specific limitation or combination of limitations that is not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
- simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Claims 1, 10, and 16 do not recite any additional elements that are not well-understood, routine, or conventional. The use of a computer for the obtaining, identifying, predicting, defining, extracting, inserting, and generating steps, as claimed in claims 1, 10, and 16, is a routine, well-understood, and conventional process performed by computers. Thus, since claims 1, 10, and 16 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1, 10, and 16 are not eligible subject matter under 35 U.S.C. 101.

Regarding claims 2–9, 11–15, and 17–20: the additional elements do not integrate the mental process into a practical application or add significantly more to the mental process. The limitations of these claims fall under a mental process (including observation and evaluation that can be done mentally in the human mind), generic computers or components configured to perform the method, or insignificant pre/post-solution extra activity of gathering data. The mere recitation that the functions are performed by a machine-learning algorithm or convolutional neural network does not demonstrate a technological improvement. No specific structure or improvement to the neural network is recited. There is no indication that the method improves the functioning of a computer, the training of the neural network, or the efficiency of image classification itself.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1–7 and 9–20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20170169285 A1; hereafter referred to as Chen) in view of Prasad et al. (US 20220028149 A1; hereafter referred to as Prasad).

Regarding Claim 1, Chen teaches: A method implemented in a computing device, comprising: obtaining an image depicting a user’s face (Fig. 3, 310 obtain a digital image; Fig. 4, 402 digital image; [0035] “the facial region analyzer 114 obtains a digital image 402 and identifies the general location of the facial region of an individual depicted in the digital image 402”); identifying one or more regions in the image depicting skin of the user and generating a skin mask ([0035] “The facial feature identifier 116 further identifies the general location of facial features (e.g., eyes, hair, nose, lips) within the facial region. The facial region analyzer 114 may also determine the skin color associated with the face, where the skin color is utilized by the facial feature identifier 116 to more accurately define the boundaries of the hair and eyebrows of the individual”; Fig. 5, [0037] “the skin color of the individual's face is determined and a skin mask is generated… From the pixels of the facial region, the values of pixels are averaged to determined skin color. In the example skin mask shown, the hair and eyebrow regions are white”); and defining feature points corresponding to facial features on the user’s face depicted in the image ([0023] “The facial region analyzer 114 analyzes attributes of each individual depicted in the digital images and identifies the general location of the individual's face in addition to the general location of facial features such as the individual's eyes, nose, mouth, and so on”).

However, Chen fails to explicitly teach: predicting a skin tone of the user’s face depicted in the image and populating the skin mask according to the predicted skin tone; extracting pre-defined facial patterns matching facial features depicted in the image; inserting the extracted pre-defined facial patterns into the skin mask based on the feature points; generating a hair mask identifying one or more regions in the image depicting hair of the user; and extracting a hair region depicted in the image of the user based on the hair mask and inserting the hair region on top of the skin mask to generate a face chart.

In the same field of endeavor, Prasad teaches: predicting a skin tone of the user’s face depicted in the image and populating the skin mask according to the predicted skin tone (Prasad, [0024] “a prediction model for automatically predicting a tone based on dynamic face tone calculation, such as, a skin tone, of the primary component from the frontal image. The avatar generation engine generates 107 a primary canvas comprising the predicted tone of the primary component. The avatar generation engine generates 108 a primary graphical image excluding the secondary component by merging the primary canvas with the graphically pronounced features of the target object”); extracting pre-defined facial patterns matching facial features depicted in the image (Prasad, [0032] “a cartoon template is considered for eyes and mouth and triangulation warp to match the shape of the extracted user face features...The avatar generation engine extracts facial feature from the segmented image such as, but not limited to, eyes, mouth, nose, and eyebrows. The avatar generation engine applies seamless cloning to merge extracted facial features on face tone canvas to produce avatar's face”); inserting the extracted pre-defined facial patterns into the skin mask based on the feature points (Prasad, [0032], as quoted above); generating a hair mask identifying one or more regions in the image depicting hair of the user (Prasad, [0028] “The method includes a step of subtracting ‘smooth face mask excluding hair portion’ from ‘smooth face mask including hair portion’ giving the hair mask 311. The method includes a further step of applying ‘hair mask’ to the cropped image 312 and thus returning an image of the hair portion of the face image”); and extracting a hair region depicted in the image of the user based on the hair mask and inserting the hair region on top of the skin mask to generate a face chart (Prasad, [0032] “At step 210, the avatar generation engine generates cartoonized hair or a colour quantized hair first by extracting the secondary component from the input image and then by converting the hair portion of the face to a cartoon form. In one embodiment, the avatar generation engine quantizes the segmented image using multiple image thresholding and bitwise operations to produce cartoonized secondary image or cartoonized hair. At step 211, the avatar generation engine merges the cartoonized hair and the cartoonized face to generate a cartoonized head or the avatar 212 with pronounced features based on normal blending and smoothening of the edges and adding a border around the avatar face”).

Chen and Prasad are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chen with the method of generating the image as taught by Prasad, to arrive at an invention that predicts the skin tone of the user’s face depicted in the image; extracts pre-defined facial patterns; generates a hair mask identifying one or more regions in the image; and extracts a hair region depicted in the image of the user based on the hair mask and inserts the hair region on top of the skin mask to generate a face chart. Doing so efficiently generates a face drawing (face chart) with features representing the user’s real appearance and characteristics from the user’s photographic image; thus one of ordinary skill in the art would have been motivated to combine the references.

Regarding Claim 2, Chen in view of Prasad teaches the method of claim 1, wherein inserting the hair region on top of the skin mask to generate the face chart comprises one of: extracting the hair region depicted in the image of the user and inserting the extracted hair region on top of the skin mask (Prasad, [0032], steps 210–211, as quoted above); or inserting a sketch drawing of the user’s hair on top of the skin mask (Prasad, [0032], step 211, as quoted above).

Regarding Claim 3, Chen in view of Prasad teaches the method of claim 1, wherein identifying the one or more regions in the image depicting the user’s skin and generating the skin mask is performed by executing a machine-learning algorithm based on other images of the user (Prasad, [0031] “after identifying the mid-point of the nose, center of left and right cheek, the avatar generation engine extracts three 10×10 skin regions on the right cheek, left cheek and middle of nose in primary image of the primary component based on the dlib points at step 403. Further, the avatar generation engine extracts RGB values of these 300 skin points. At 404, for the k-NN algorithm, the avatar generation engine uses the RGB (Red, Green, and Blue) value of the 5 skin tone values as known labels and apply KNN algorithm on the RGB values of the 300 skin points identified previously with k=1”).

Regarding Claim 4, Chen in view of Prasad teaches the method of claim 1, wherein generating the hair mask identifying the one or more regions in the image depicting the user’s hair is performed by executing a machine-learning algorithm based on other images of the user (Prasad, [0031], as quoted for claim 3).

Regarding Claim 5, Chen in view of Prasad teaches the method of claim 1, wherein predicting the skin tone of the user’s face depicted in the image of the user is performed by executing a machine-learning algorithm based on other images of the user and other individuals (Prasad, [0031], as quoted for claim 3).

Regarding Claim 6, Chen in view of Prasad teaches the method of claim 1, wherein defining the feature points corresponding to the facial features on the user’s face depicted in the image is performed by utilizing a convolutional neural network (Prasad, [0025] “The avatar generation engine performs face segmentation 203 on the input image. According to an embodiment herein, the avatar generation engine executes a model trained on user images, using feature extractors and neural networks”).

Regarding Claim 7, Chen in view of Prasad teaches the method of claim 1, wherein generating the face chart comprises one of: inserting a background into the face chart (Prasad, [0023] “the avatar generation engine performs a face segmentation on the input image to extract the user's face with hair from a background of the input image as disclosed in the detailed description of FIG. 2”); or superimposing the skin mask on the background (Prasad, [0025] “at step 203, the avatar generation engine extracts an image of a face including hair and an image of face excluding hair from a background of the input image using the model”).

Regarding Claim 9, Chen in view of Prasad teaches the method of claim 1, wherein the pre-defined facial patterns comprise one of: an eye, a mouth, a nose, or an eyebrow (Prasad, [0023] “The features comprise, eyes. a nose. lips, facial lines, distinguishing marks such as birth marks or beauty spots, facial hair such as a beard or a mustache, eyeglasses, etc”).
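The Prasad passage cited for claims 3–5 ([0031]) describes a concrete classification step: 300 sampled skin pixels (three 10×10 patches) matched against 5 reference skin tones with k=1 nearest neighbor. Below is a sketch under stated assumptions; the reference RGB values and the final majority vote are not given in the quoted text and are invented here:

```python
import numpy as np

# 5 hypothetical reference skin tones (RGB); Prasad's actual labels are unknown.
reference_tones = np.array([
    [255, 224, 196], [240, 200, 170], [210, 170, 140],
    [170, 130, 100], [120, 90, 70]], dtype=float)

# Stand-ins for the 300 sampled skin pixels (three 10x10 patches).
rng = np.random.default_rng(0)
skin_points = rng.integers(100, 256, size=(300, 3)).astype(float)

# k=1 nearest neighbour: label each pixel by its closest reference tone.
dists = np.linalg.norm(skin_points[:, None, :] - reference_tones[None, :, :],
                       axis=2)              # (300, 5) distance matrix
labels = dists.argmin(axis=1)               # nearest tone per pixel

# Aggregate the per-pixel votes into one predicted tone (majority vote assumed).
predicted = np.bincount(labels, minlength=5).argmax()
print("predicted skin tone index:", predicted)
```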
Regarding Claim 10, Chen teaches: A system, comprising: a memory storing instructions ([0005] “a memory storing instructions”); and a processor coupled to the memory and configured by the instructions ([0005] “a processor coupled to the memory and configured by the instructions to obtain a digital image depicting an individual”). The remaining limitations of claim 10 (obtain an image depicting a user’s face; identify one or more regions in the image depicting skin of the user and generate a skin mask; define feature points corresponding to facial features; predict a skin tone and populate the skin mask accordingly; extract pre-defined facial patterns and insert them into the skin mask based on the feature points; generate a hair mask; and extract the hair region and insert it on top of the skin mask to generate a face chart) are mapped to the same passages of Chen (Fig. 3; Fig. 4; [0023]; [0035]; [0037]) and Prasad ([0024], [0028], [0032]) as for claim 1, with the same analogous-art rationale and motivation to combine.

Regarding Claims 11–15, Chen in view of Prasad teaches the system of claim 10; the mappings parallel claims 2–6, respectively: inserting the hair region on top of the skin mask to generate the face chart (claim 11; Prasad [0032], steps 210–211, as quoted for claim 2); identifying the regions depicting the user’s skin and generating the skin mask by executing a machine-learning algorithm based on other images of the user (claim 12; Prasad [0031], as quoted for claim 3); generating the hair mask by executing a machine-learning algorithm based on other images of the user (claim 13; Prasad [0031]); predicting the skin tone by executing a machine-learning algorithm based on other images of the user and other individuals (claim 14; Prasad [0031]); and defining the feature points by utilizing a convolutional neural network (claim 15; Prasad [0025], as quoted for claim 6).

Regarding Claim 16, Chen teaches: A non-transitory computer-readable storage medium storing instructions to be implemented by a computing device having a processor, wherein the instructions, when executed by the processor, cause the computing device to perform the method ([0006] “a non-transitory computer-readable medium embodying a program executable in a computing device, comprising code for obtaining a digital image”). The remaining limitations of claim 16 are mapped to the same passages of Chen and Prasad as for claim 1, with the same analogous-art rationale and motivation to combine.
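The hair-mask step that the claim 1, 10, and 16 mappings all cite (Prasad [0028]) is plain mask arithmetic: subtract the face-only mask from the face-plus-hair mask, then apply the result to the cropped image. A sketch with invented box-shaped masks standing in for real segmentation outputs:

```python
import numpy as np

H, W = 64, 64
cropped = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)  # stand-in crop

mask_with_hair = np.zeros((H, W), dtype=bool)
mask_with_hair[4:56, 8:56] = True        # "smooth face mask including hair"
mask_without_hair = np.zeros((H, W), dtype=bool)
mask_without_hair[16:56, 12:52] = True   # "smooth face mask excluding hair"

# Boolean set-difference implements the quoted "subtracting" step.
hair_mask = mask_with_hair & ~mask_without_hair

# Apply the hair mask to the cropped image, returning the hair portion only.
hair_only = np.zeros_like(cropped)
hair_only[hair_mask] = cropped[hair_mask]
print(hair_mask.sum(), "hair pixels extracted")
```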
Regarding Claims 17 and 18, Chen in view of Prasad teaches the non-transitory computer-readable storage medium of claim 16; the mappings parallel claims 11 and 12, respectively: the processor is configured by the instructions to insert the hair region on top of the skin mask to generate the face chart, by extracting the hair region depicted in the image of the user and inserting it on top of the skin mask, or by inserting a sketch drawing of the user’s hair on top of the skin mask (claim 17; Prasad [0032], steps 210–211, as quoted for claim 2); and to identify the one or more regions in the image depicting the user’s skin and generate the skin mask by executing a machine-learning algorithm based on other images of the user (claim 18; Prasad [0031], as quoted for claim 3).
Read full office action

Prosecution Timeline

Sep 18, 2023
Application Filed
Nov 20, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586356
ARTIFICIAL IMAGE GENERATION WITH TRAFFIC SIGNS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579680
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579824
OCCUPANT DETECTION DEVICE AND OCCUPANT DETECTION METHOD
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12573210
PARKING ASSISTANCE DEVICE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573087
OBJECT THREE-DIMENSIONAL LOCALIZATIONS IN IMAGES OR VIDEOS
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.8% lift)
Median Time to Grant: 3y
PTA Risk: Low

Based on 113 resolved cases by this examiner. Grant probability is derived from the career allow rate.
