DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claims 1-20 have been interpreted to not invoke 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) claim interpretation.
It is noted that claim 14 recites various "code configured to" limitations, which are considered to connote sufficient structure and thus do not invoke 112(f) claim interpretation.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 at lines 6-9, claim 14 at lines 10-12, and claim 15 at lines 7-10 each recite the same conditional statement: "and in a case of issuing the voice content, the emotion driving information being configured for describing a target emotion of the to-be-adjusted object to drive the face pose of the original face to change according to the target emotion;". Each conditional statement is followed by three performing steps (in claim 14, three perform functions), and it is unclear to which condition each of these steps is responsive. Additionally, the second performing step, "performing feature interaction processing on the audio driving information and the emotion driving information to obtain a face local pose feature of the to-be-adjusted object issuing the voice content with the target emotion; and" (in claim 14, "perform feature interaction processing on the audio driving information and the emotion driving information, to obtain a face local pose feature of the to-be-adjusted object issuing the voice content with the target emotion; and"), recites "the emotion driving information," which is modified in the conditional statement. It is therefore unclear to which previously claimed, condition-dependent "the emotion driving information" the second performing step/perform function is responsive. The dependent claims inherit, and do not cure, this indefiniteness.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Challapali, US Patent Application Publication No. 2002/0194006, describes a text-to-animation system for generating a displayable animated face image that can reproduce facial movements corresponding to received word strings and received emoticon strings.
Guo, CN 114895817 A, describes in the translation:
Illustratively, emotional feature extraction is performed on the interactive input information and/or the interactive response information to obtain an interactive emotion feature; intention feature extraction is performed on the interactive input information and/or the interactive response information to obtain an interactive intention feature; and audio feature extraction is performed on the interactive input information and/or the interactive response information to obtain an interactive audio feature. According to at least one of the interactive emotion feature, the interactive intention feature, and the interactive audio feature, the facial driving parameters and the limb driving parameters of the virtual image model are determined.
Allowable Subject Matter
Claims 1-20 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 1-13:
The prior art of record fails to teach or suggest in the context of independent claim 1:
“performing spatial feature extraction on the original face image frame to obtain an original face spatial feature corresponding to the original face image frame;
performing feature interaction processing on the audio driving information and the emotion driving information to obtain a face local pose feature of the to-be-adjusted object issuing the voice content with the target emotion; and
performing, based on the original face spatial feature and the face local pose feature, face reconstruction processing on the to-be-adjusted object to generate a target face image frame."
Claim 14:
The prior art of record fails to teach or suggest in the context of independent claim 14:
“extraction code configured to cause the at least one processor to
perform spatial feature extraction on the original face image frame, to obtain an original face spatial feature corresponding to the original face image frame;
interaction code configured to cause the at least one processor to
perform feature interaction processing on the audio driving information and the emotion driving information, to obtain a face local pose feature of the to-be-adjusted object issuing the voice content with the target emotion; and
reconstruction code configured to cause the at least one processor to perform, based on the original face spatial feature and the face local pose feature, face reconstruction processing on the to-be-adjusted object, to generate a target face image frame."
Claims 15-20:
The prior art of record fails to teach or suggest in the context of independent claim 15:
“performing spatial feature extraction on the original face image frame to obtain an original face spatial feature corresponding to the original face image frame;
performing feature interaction processing on the audio driving information and the emotion driving information to obtain a face local pose feature of the to-be-adjusted object issuing the voice content with the target emotion; and
performing, based on the original face spatial feature and the face local pose feature, face reconstruction processing on the to-be-adjusted object to generate a target face image frame."
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEFFERY A BRIER whose telephone number is (571)272-7656. The examiner can normally be reached on Mon-Fri from 8:30am-3:00pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao M Wu, can be reached at telephone number 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/JEFFERY A BRIER/Primary Examiner, Art Unit 2613