DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
The reply filed on 16 September 2025 has been entered. Applicant's arguments with respect to claims 1-3, 5-6, 8-10, 12-15, 17-18 and 20-21 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendments.
Of note, applicant has amended the claims to recite a “digital graphical element,” a term that does not appear in the specification. To advance prosecution, specification paragraph [0066] was relied upon, although the term “material” is used elsewhere in the claim. Applicant's comments as to how a “digital graphical element” is not new matter are invited.
Claims 1-3, 5-6, 8-10, 12-15, 17-18 and 20-21 are pending in this application and have been considered below. Claims 4, 7, 11, 16 and 19 are canceled by the applicant.
Priority
Receipt is acknowledged that this application is a National Stage entry of PCT Application No. PCT/CN2019/129119. Priority to CN201910073609.4, with a priority date of 25 January 2019, is acknowledged under 35 U.S.C. 365(b) and 37 CFR 1.55.
Information Disclosure Statement
The IDSs dated 26 July 2021, 7 February 2022, 27 June 2022, 29 November 2022 and 7 January 2025, which have been previously considered, remain considered and placed in the application file.
Specification - Drawings
Acknowledgement is made of the color drawings submitted on 23 July 2021 in this application. Applicant is reminded that, absent a successful petition, the black and white drawings submitted on 23 July 2021 will be used. No petition is currently on file.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“digital graphical element” in claim 1; see specification paragraph [0066], which states “a material required for the image processing is acquired, the material is rendered to a predetermined position of the face image of the animal according to the key points of the face image of the animal, to obtain an animal face image with the material. In this embodiment, the texture includes multiple materials. Storage addresses of the materials may be stored in the configuration file in step S103. Optionally, the material may be a pair of glasses. In this case, the key points of the face image of the animal are the position parameters in step S103, which may be eye positions of the animal in this specific example. The pair of glasses is rendered to the eye positions of the animal, to obtain an animal face image with the pair of glasses.”
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
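For clarity of the record only, the operation that paragraph [0066] attributes to this limitation may be sketched as follows. The sketch is illustrative Python; every identifier, file name, and coordinate (CONFIG, detect_keypoints, render, glasses.png) is a hypothetical stand-in and is not drawn from applicant's disclosure or from any cited reference.

```python
# Illustrative sketch only; all identifiers and values are hypothetical.
import json

# A configuration file of the kind described in paragraph [0066]: it stores
# the storage address of each material and the key-point group it anchors to.
CONFIG = json.loads("""
{
  "type": "texture",
  "materials": [
    {"address": "glasses.png", "anchor": "eyes", "offset": [0, -4]}
  ]
}
""")

def detect_keypoints(image):
    # Stand-in for a key-point detector; returns named parts of the face region.
    return {"eyes": (120, 80), "nose": (128, 110), "mouth": (128, 140)}

def render(image, address, position):
    # Stand-in compositor: records what would be drawn and where.
    print(f"render {address} at {position}")
    return image

def apply_materials(image, config):
    keypoints = detect_keypoints(image)
    for material in config["materials"]:
        x, y = keypoints[material["anchor"]]   # position parameter (e.g., eye positions)
        dx, dy = material["offset"]            # positional relationship to the key points
        image = render(image, material["address"], (x + dx, y + dy))
    return image

apply_materials(object(), CONFIG)  # prints: render glasses.png at (120, 76)
```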
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
[media_image1.png, greyscale, 642 × 439: Mochizuki Fig. 25, showing combining a digital graphical element into the face.]
Claims 1-3, 5-6, 8-10, 12-15, 17-18 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2011/0115786 (Mochizuki) in view of Chinese Patent Publication No. CN 108876713 A, published 23 November 2018 (Liu).
Claim 1
Regarding Claim 1, Mochizuki teaches a method for processing an animal face image to generate effects ("the transform image generator 22 reads out image data stored in the storage unit 21 and conducts a transform image generation process," paragraph [0049]), the method comprising:
reading a configuration file for image processing, wherein the configuration file was created in advance ("the user can select a face paint image with his or her preferred design, which can then be overlaid with a texture image generated from an image presenting the user's own face," paragraph [0172]), wherein the configuration file comprises parameters of the image processing ("The face paint image 110 illustrated in FIG. 26 depicts kumadori makeup," paragraph [0174]), wherein the parameters of the image processing comprise a type parameter indicating a type of effect to be generated by the image processing ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174] where kumadori makeup is a type of effect), wherein the type parameter indicates a texture type of effect ("The face paint image 110 illustrated in FIG. 26 depicts kumadori makeup," paragraph [0174]), wherein the parameters of the image processing comprise a position parameter indicating a position of the recognized face region where the image processing is to be implemented ("In the face model 104, the paint areas and the transparent areas are clearly separated as in the face paint images 101 and 110. This exhibits the effect of having painted the face with paint or similar substances," paragraph [0175]),
wherein the parameters of the image processing comprise a positional relationship parameter indicating a positional relationship between a material and at least one subset of the key points of the recognized face region of the at least one animal for the texture type effect ("In the face model 94B, a 3D hat shape has been combined with the face model 94 that was displayed in the face model display region 95 of the display screen 91 in FIG. 19," paragraph [0162]), and the key points represent parts of the recognized face region of the at least one animal ("with reference to FIG. 6, the supplementary feature point calculator 32 calculates supplementary feature points from a face appearing in an image," paragraph [0050]); and
wherein the processed face image of the at least one animal comprises the type of effects generated by the image processing ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174]), and wherein the processing the recognized face region of the at least one animal according to the parameters of the image processing comprises:
acquiring the material required for the texture type of effect, wherein the material comprises a digital graphical element representative of an object and configured to be rendered onto the at least one anchor point ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174]), and
generating the processed image by rendering the digital graphical element representative of the object onto the recognized face region of the at least one animal based on the at least one subset of key points of the recognized face region of the at least one animal ("in the image processing apparatus 11, three-dimensional shape data for objects such as hair parts and various accessories such as hats and glasses is stored in the storage unit 21, separately from the shape data for face shapes. The 3D processor 24 is then able to generate images wherein hair, accessories, and other parts have been combined with a generated face model," paragraph [0161]).
Mochizuki does not explicitly teach that the at least one face in the image is the face of an animal.
However, Liu teaches processing the recognized face region of the at least one animal according to the parameters of the image processing to obtain a processed image of the at least one animal ("When the detection object is a face, the detection object can be a human face or an animal face," page 12, paragraph 10),
acquiring an image comprising at least one animal ("Optionally, the two-dimensional template image may be a human face image, a human abdominal muscle image, or an animal face image," Page 3, Paragraph 5); and
recognizing a face region of the at least one animal in the image ("When the detection object is a face, the detection object can be a human face or an animal face," page 12, paragraph 10) and detecting key face points of the face region ("with reference to FIG. 5, the feature point detector 31 detects feature points from a face appearing in an image," paragraph [0050]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify “Image Processing Apparatus, Image Processing Method and Program” as taught by Mochizuki to use “Mapping Method and Device of Two-Dimensional Template Image, Terminal Equipment and Storage Medium” as taught by Liu.
The suggestion/motivation for doing so would have been that, “However, in the face image displayed in the above manner, a deformation occurs in the face, for example, when a facial makeup sticker is added to the face image, a person opens the mouth, and the facial makeup remains in a state of not opening the mouth” as noted by the Liu disclosure on page 5 in paragraph 7.
The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 12 and computer-readable storage medium claim 13, while noting that the rejection above cites both device and method disclosures. Claims 12 and 13 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.
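By way of illustration only, the combined claim 1 pipeline as mapped above (Liu's animal-face recognition feeding Mochizuki's configuration-driven texture rendering) may be sketched as follows; all function names and values are hypothetical stand-ins, not code from either reference.

```python
# Hedged sketch of the claim 1 combination as mapped above; all names hypothetical.

def recognize_animal_faces(image):
    # Stand-in detector in the manner of Liu: the detection object may be an
    # animal face; returns one face region with named key points.
    return [{"region": (40, 40, 200, 200),
             "keypoints": {"eyes": (120, 80), "mouth": (128, 140)}}]

def read_configuration():
    # Stand-in for reading a pre-created configuration file selecting a
    # texture-type effect, in the manner of Mochizuki's face paint images.
    return {"type": "texture",
            "material": {"address": "kumadori.png", "anchor": "eyes", "offset": (0, 0)}}

def process(image):
    config = read_configuration()                   # read the configuration file
    for face in recognize_animal_faces(image):      # recognize the face region(s)
        if config["type"] == "texture":             # type parameter
            m = config["material"]
            x, y = face["keypoints"][m["anchor"]]   # at least one subset of key points
            dx, dy = m["offset"]                    # positional relationship parameter
            print(f"render {m['address']} at ({x + dx}, {y + dy})")
    return image

process(object())  # prints: render kumadori.png at (120, 80)
```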
Claim 2
Regarding claim 2, Mochizuki teaches the method for processing an animal face image according to claim 1, wherein the acquiring an input image comprising at least one animal comprises: acquiring a video image comprising a plurality of video frames, wherein at least one of the plurality of video frames comprises at least one animal ("or a single frame from a video acquired by the imaging apparatus 12, for example," paragraph [0176]).
Claim 3
Regarding claim 3, Mochizuki teaches the method for processing an animal face image according to claim 2, wherein the recognizing a face image of the animal in the image comprises: recognizing a face image of an animal in a current video frame ("or a single frame from a video acquired by the imaging apparatus 12, for example," paragraph [0176]).
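Illustratively, claims 2 and 3 amount to applying the claim 1 processing to each video frame in turn; a minimal hypothetical sketch (with process standing in for the claim 1 sketch above) is:

```python
def process(frame):
    # Stand-in for the per-image processing sketched for claim 1 above.
    return frame

def process_video(frames):
    # Claims 2-3: treat the input as a plurality of video frames and process
    # the face recognized in each current frame independently.
    return [process(frame) for frame in frames]

process_video(["frame_0", "frame_1"])
```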
Claim 5
Regarding claim 5, Mochizuki teaches the method for processing an animal face image according to claim 1, wherein the position parameter is associated with the key points ("described above, in the image processing apparatus 11, feature points and supplementary feature points detected and calculated from an image presenting a user's face are utilized to generate a texture image by transforming an image," paragraph [0180]).
Claim 6
Regarding claim 6, Mochizuki teaches the method for processing an animal face image according to claim 5, wherein the processing the face image of the animal according to the parameters of the image processing to obtain a processed face image of the animal comprises: processing the face image of the animal according to the type parameter of the image processing and the key points of the face image of the animal to obtain the processed face image of the animal ("in the image processing apparatus 11, three-dimensional shape data for objects such as hair parts and various accessories such as hats and glasses is stored in the storage unit 21, separately from the shape data for face shapes. The 3D processor 24 is then able to generate images wherein hair, accessories, and other parts have been combined with a generated face model," paragraph [0161]).
Claim 8
Regarding claim 8, Mochizuki teaches the method for processing an animal face image according to claim 6, wherein the processing the face image of the animal according to the type parameter of the image processing and the key points of the face image of the animal to obtain the processed face image of the animal comprises:
in response to determining that the type parameter of the image processing is a deformation type, acquiring a key point related to the deformation type ("depending on the method used to generate the texture image, a texture image may be generated that is not deformed as compared an image presenting a frontal view of the user's face. The respective positions of facial features will still differ from person to person even when given a non-deformed texture image, however, and thus the processes executed by an image processing apparatus 11 in accordance with an embodiment of the present invention are still effective in correctly positioning features such as the eyes and mouth in the texture at the positions of the eyes and mouth in the 3D face shape," paragraph [0183], which teaches that deformation is typical but that the taught system may also operate without deformation); and
moving the key point related to the deformation type to a predetermined position to obtain a deformed face image of the animal ("FIG. 13B illustrates a texture image set with both feature points and supplementary feature points, as well as a face model wherein a face shape has been applied with a texture image generated by individually transforming triangular regions with both feature points and supplementary feature points as vertices," paragraph [0118]).
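By way of illustration, the deformation branch of claim 8 amounts to relocating the key points associated with the deformation type to predetermined positions; a minimal hypothetical sketch (names and coordinates invented, not from any cited reference) is:

```python
# Hypothetical sketch of claim 8's deformation branch.
def deform(keypoints, config):
    # If the type parameter is a deformation type, acquire the key points
    # related to that type and move each to its predetermined position.
    if config["type"] == "deformation":
        for name, target in config["targets"].items():
            keypoints[name] = tuple(target)
    return keypoints

points = {"mouth": (128, 140), "eyes": (120, 80)}
config = {"type": "deformation", "targets": {"mouth": [128, 150]}}
print(deform(points, config))  # {'mouth': (128, 150), 'eyes': (120, 80)}
```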
Claim 9
Regarding claim 9, Mochizuki teaches the method for processing an animal face image according to claim 1, as noted above.
Mochizuki does not explicitly teach recognizing face images of a plurality of animals in the image.
However, Liu et al. teach wherein the recognizing a face image of the animal in the image comprises: recognizing face images of a plurality of animals in the image, and assigning animal face IDs respectively for the face images of the animals according to a recognition order ("identify the detection object from the image to be mapped and obtain a plurality of first keypoints of the detection object by using a keypoint identification technique corresponding to the type of the detection object," page 6, paragraph 12, where the first keypoints are a plurality of faces).
Mochizuki and Liu et al. are combined as per claim 1.
Claim 10
Regarding claim 10, Mochizuki teaches the method for processing an animal face image according to claim 9, as noted above.
Mochizuki does not explicitly teach acquiring parameters of the image processing according to each of the animal face IDs.
However, Liu et al. teach wherein the reading a configuration file for image processing, the configuration file comprising parameters of the image processing comprises: reading the configuration file for the image processing, and acquiring, according to each of the animal face IDs, parameters of the image processing corresponding to each of the animal face IDs ("That is to say, in this embodiment, the image to be mapped may be detected under the detection instruction of the user, instead of blindly detecting all detection objects in the image to be mapped, which improves the efficiency of obtaining the first key point by detecting the terminal device, further improves the redrawing efficiency of the two-dimensional template image, and simultaneously reduces the overhead caused by blind detection of the terminal device," page 6, paragraph 12).
Mochizuki and Liu et al. are combined as per claim 1.
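Illustratively, claims 9 and 10 assign face IDs in recognition order and then look up per-ID parameters in the configuration file; the following hypothetical sketch (all names and entries invented) shows one such arrangement:

```python
# Hypothetical sketch of claims 9-10; not code from either cited reference.
def assign_face_ids(faces):
    # Claim 9: IDs are assigned in recognition order.
    return {face_id: face for face_id, face in enumerate(faces)}

def parameters_for(config, face_id):
    # Claim 10: acquire the parameters corresponding to each face ID,
    # falling back to a default entry when no per-ID entry exists.
    return config["per_face"].get(str(face_id), config["default"])

faces = assign_face_ids(["first_face", "second_face"])
config = {"default": {"type": "texture"},
          "per_face": {"1": {"type": "deformation"}}}
for face_id in faces:
    print(face_id, parameters_for(config, face_id))
# 0 {'type': 'texture'}
# 1 {'type': 'deformation'}
```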
Claim 12
Regarding claim 12, Mochizuki teaches an electronic device, comprising:
at least one processor ("a program constituting such software may be installed from a program recording medium onto a computer built into special-purpose hardware. Alternatively, the program may be installed onto an apparatus capable of executing a variety of functions by installing various programs thereon, such as a general-purpose personal computer," paragraph [0184]); and
at least one memory communicatively coupled to the at least one processor and storing instructions that upon execution by the at least one processor ("a program constituting such software may be installed from a program recording medium onto a computer built into special-purpose hardware. Alternatively, the program may be installed onto an apparatus capable of executing a variety of functions by installing various programs thereon, such as a general-purpose personal computer," paragraph [0184]) cause the device to:
detecting key points of the face region ("with reference to FIG. 5, the feature point detector 31 detects feature points from a face appearing in an image," paragraph [0050]);
reading a configuration file for image processing, wherein the configuration file was created in advance ("the user can select a face paint image with his or her preferred design, which can then be overlaid with a texture image generated from an image presenting the user's own face," paragraph [0172]), wherein the configuration file comprises parameters of the image processing ("The face paint image 110 illustrated in FIG. 26 depicts kumadori makeup," paragraph [0174]), wherein the parameters of the image processing comprise a type parameter indicating a type of effect to be generated by the image processing ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174] where kumadori makeup is a type of effect), wherein the type parameter indicates a texture type of effect ("The face paint image 110 illustrated in FIG. 26 depicts kumadori makeup," paragraph [0174]), wherein the parameters of the image processing further comprise a position parameter indicating a position of the recognized face region where the image processing is to be implemented ("In the face model 104, the paint areas and the transparent areas are clearly separated as in the face paint images 101 and 110. This exhibits the effect of having painted the face with paint or similar substances," paragraph [0175]),
wherein the parameters of the image processing comprise a positional relationship parameter indicating a positional relationship between a material and at least one subset of the key points of the recognized face region of the at least one animal for the texture type effect ("In the face model 94B, a 3D hat shape has been combined with the face model 94 that was displayed in the face model display region 95 of the display screen 91 in FIG. 19," paragraph [0162]), and the key points represent parts of the recognized face region of the at least one animal ("with reference to FIG. 6, the supplementary feature point calculator 32 calculates supplementary feature points from a face appearing in an image," paragraph [0050]); and
wherein the processed face image of the at least one animal comprises the type of effects generated by the image processing ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174]), and wherein the processing of the recognized face region of the at least one animal according to the parameters of the image processing comprises:
acquiring the material required for the texture type of effect, wherein the material comprises a digital graphical element representative of an object, and the material is not part of the recognized face region of the at least one animal ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174]), and
generating the processed image by rendering the digital graphical element representative of the object onto the recognized face region of the at least one animal based on the at least one subset of key points of the recognized face region of the at least one animal ("in the image processing apparatus 11, three-dimensional shape data for objects such as hair parts and various accessories such as hats and glasses is stored in the storage unit 21, separately from the shape data for face shapes. The 3D processor 24 is then able to generate images wherein hair, accessories, and other parts have been combined with a generated face model," paragraph [0161]).
Mochizuki does not explicitly teach recognizing a face region of an animal.
However, Liu et al. teach processing the recognized face region of the at least one animal according to the parameters of the image processing to obtain a processed image of the at least one animal ("When the detection object is a face, the detection object can be a human face or an animal face," page 12, paragraph 10),
acquiring an image comprising at least one animal ("Optionally, the two-dimensional template image may be a human face image, a human abdominal muscle image, or an animal face image," Page 3, Paragraph 5); and
recognizing a face region of the animal in the image ("When the detection object is a face, the detection object can be a human face or an animal face," page 12, paragraph 10).
Mochizuki and Liu et al. are combined as per claim 1.
Claim 13
Regarding claim 13, Mochizuki teaches a non-transitory computer readable storage medium having non-transitory computer readable instructions stored thereon, wherein when executed by a computer ("a program constituting such software may be installed from a program recording medium onto a computer built into special-purpose hardware. Alternatively, the program may be installed onto an apparatus capable of executing a variety of functions by installing various programs thereon, such as a general-purpose personal computer," paragraph [0184]), the non-transitory computer readable instructions cause the computer to perform operations comprising:
reading a configuration file for image processing, wherein the configuration file was created in advance ("the user can select a face paint image with his or her preferred design, which can then be overlaid with a texture image generated from an image presenting the user's own face," paragraph [0172]), wherein the configuration file comprises parameters of the image processing ("The face paint image 110 illustrated in FIG. 26 depicts kumadori makeup," paragraph [0174]), wherein the parameters of the image processing comprise a type parameter indicating a type of effect to be generated by the image processing ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174] where kumadori makeup is a type of effect),
wherein the type parameter indicates a texture type of effect ("The face paint image 110 illustrated in FIG. 26 depicts kumadori makeup," paragraph [0174]) wherein the parameters of the image processing further comprise a position parameter indicating a position of the recognized face region of the animal where the image processing is to be implemented ("In the face model 104, the paint areas and the transparent areas are clearly separated as in the face paint images 101 and 110. This exhibits the effect of having painted the face with paint or similar substances," paragraph [0175]) and
wherein the parameters of the image processing comprise a positional relationship parameter indicating a positional relationship between a material and at least one subset of the key points of the recognized face region of the at least one animal for the texture type effect ("In the face model 94B, a 3D hat shape has been combined with the face model 94 that was displayed in the face model display region 95 of the display screen 91 in FIG. 19," paragraph [0162]), and the key points represent parts of the recognized face region of the at least one animal; and
wherein the processed face image of the animal comprises the type of effects generated by the image processing ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174]), and wherein the processing of the recognized face region of the at least one animal based on the parameters of the image processing comprises:
acquiring the material required for the texture type of effect wherein the material comprises a digital graphical element representative of an object, and the material is not part of the recognized face region of the at least one animal ("This texture image 113 is then applied to the face shape to generate a face model 114 painted with kumadori makeup. In this way, by switching out face paint images, face painting with a variety of designs can be presented on a face model," paragraph [0174]), and
generating the processed image by rendering the digital graphical element representative of the object onto the recognized face region of the at least one animal based on the at least one subset of key points of the recognized face region of the at least one animal ("in the image processing apparatus 11, three-dimensional shape data for objects such as hair parts and various accessories such as hats and glasses is stored in the storage unit 21, separately from the shape data for face shapes. The 3D processor 24 is then able to generate images wherein hair, accessories, and other parts have been combined with a generated face model," paragraph [0161]).
Mochizuki does not explicitly teach recognizing a face region of an animal.
However, Liu et al. teach processing the recognized face region of the animal based on the parameters of the image processing to obtain a processed face image of the animal ("When the detection object is a face, the detection object can be a human face or an animal face," page 12, paragraph 10),
acquiring an image comprising an animal ("Optionally, the two-dimensional template image may be a human face image, a human abdominal muscle image, or an animal face image," Page 3, Paragraph 5); and
recognizing a face region of the animal in the image and detecting key points of the face region ("When the detection object is a face, the detection object can be a human face or an animal face," page 12, paragraph 10).
Mochizuki and Liu et al. are combined as per claim 1.
Claim 14
Regarding claim 14, Mochizuki teaches the electronic device according to claim 12, wherein the acquiring an input image comprising at least one animal comprises: acquiring a video image comprising a plurality of video frames, wherein at least one of the plurality of video frames comprises at least one animal ("or a single frame from a video acquired by the imaging apparatus 12, for example," paragraph [0176]).
Claim 15
Regarding claim 15, Mochizuki teaches the electronic device according to claim 14, wherein the recognizing a face image of the animal in the image comprises: recognizing a face image of an animal in a current video frame ("or a single frame from a video acquired by the imaging apparatus 12, for example," paragraph [0176]).
Claim 17
Regarding claim 17, Mochizuki teaches the electronic device according to claim 16, wherein the position parameter is associated with the key points ("described above, in the image processing apparatus 11, feature points and supplementary feature points detected and calculated from an image presenting a user's face are utilized to generate a texture image by transforming an image," paragraph [0180]).
Claim 18
Regarding claim 18, Mochizuki teaches the electronic device according to claim 17, wherein the processing the face image of the animal according to the parameters of the image processing to obtain a processed face image of the animal comprises: processing the face image of the animal according to the type parameter of the image processing and the key points of the face image of the animal to obtain the processed face image of the animal ("in the image processing apparatus 11, three-dimensional shape data for objects such as hair parts and various accessories such as hats and glasses is stored in the storage unit 21, separately from the shape data for face shapes. The 3D processor 24 is then able to generate images wherein hair, accessories, and other parts have been combined with a generated face model," paragraph [0161]).
Claim 20
Regarding claim 20, Mochizuki teaches the electronic device according to claim 18, wherein the processing the face image of the animal according to the type parameter of the image processing and the key points of the face image of the animal, to obtain the processed face image of the animal comprises:
in response to determining that the type parameter of the image processing is a deformation type, acquiring a key point related to the deformation type ("depending on the method used to generate the texture image, a texture image may be generated that is not deformed as compared an image presenting a frontal view of the user's face. The respective positions of facial features will still differ from person to person even when given a non-deformed texture image, however, and thus the processes executed by an image processing apparatus 11 in accordance with an embodiment of the present invention are still effective in correctly positioning features such as the eyes and mouth in the texture at the positions of the eyes and mouth in the 3D face shape," paragraph [0183], which teaches that deformation is typical but that the taught system may also operate without deformation); and
moving the key point related to the deformation type to a predetermined position to obtain a deformed face image of the animal ("FIG. 13B illustrates a texture image set with both feature points and supplementary feature points, as well as a face model wherein a face shape has been applied with a texture image generated by individually transforming triangular regions with both feature points and supplementary feature points as vertices," paragraph [0118]).
Claim 21
Regarding claim 21, Mochizuki teaches the electronic device according to claim 12, as noted above.
Mochizuki does not explicitly teach recognizing face images of a plurality of animals in the image.
However, Liu et al. teach wherein the recognizing a face image of the animal in the image comprises: recognizing face images of a plurality of animals in the image, and assigning animal face IDs respectively for the face images of the animals according to a recognition order ("identify the detection object from the image to be mapped and obtain a plurality of first keypoints of the detection object by using a keypoint identification technique corresponding to the type of the detection object," page 6, paragraph 12, where the first keypoints are a plurality of faces).
Mochizuki and Liu et al. are combined as per claim 1.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Non-patent literature “Robust Statistical Frontalization of Human and Animal Faces” to Sagonas et al. discloses that unconstrained acquisition of facial data in real-world conditions may result in face images with significant pose variations, illumination changes, and occlusions, affecting the performance of facial landmark localization and recognition methods. The paper proposes a novel method, robust to pose, illumination variations, and occlusions, for joint face frontalization and landmark localization.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.E.W./Examiner, Art Unit 2664
Date: 13 November 2025
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664