DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 1 (and, by dependency, claims 2-6) is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim 1 recites “a conversion unit that converts” and “an emotion estimation unit that maps,” limitations expressed in purely functional terms.
A claim limitation expressed in means- (or step-) plus-function language "shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof." 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. If the specification fails to disclose sufficient corresponding structure, materials, or acts that perform the entire claimed function, then the claim limitation is indefinite because the applicant has in effect failed to particularly point out and distinctly claim the invention as required by 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. In re Donaldson Co., 16 F.3d 1189, 1195, 29 USPQ2d 1845, 1850 (Fed. Cir. 1994) (en banc). Such a limitation also lacks an adequate written description as required by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because an indefinite, unbounded functional limitation would cover all ways of performing a function and indicate that the inventor has not provided sufficient disclosure to show possession of the invention. See also MPEP § 2181.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 1 (and, by dependency, claims 2-6) is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. As with the written description rejection above, “a conversion unit that converts” and “an emotion estimation unit that maps” lack sufficient structure within the claim to perform the recited functions.
Examiner recommends amending the claims so that they no longer contain “means for” or equivalent generic placeholder language; removing the 35 U.S.C. 112(f)/sixth-paragraph interpretation will also obviate the 35 U.S.C. 112(a) and 112(b) rejections above.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites an image capturing unit that acquires a human facial expression; a conversion unit that converts the human facial expression acquired by the image capturing unit into a continuous value indicating a human emotion; and an emotion estimation unit that maps the continuous value converted by the conversion unit to estimate an emotion of a target person.
The limitation of “an image capturing unit that acquires a human facial expression” is insignificant pre-solution activity. The limitation of “a conversion unit that converts the human facial expression acquired by the image capturing unit into a continuous value indicating a human emotion”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. For example, “converts” in the context of this claim encompasses a user mentally assigning a value of 0 when the person is frowning or 1 when the person is smiling. Similarly, the limitation of “an emotion estimation unit that maps the continuous value converted by the conversion unit to estimate an emotion of a target person”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, “maps” in the context of this claim encompasses a user judging a frown as unhappy and a smile as happy. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. The claim recites no meaningful limits on practicing the abstract idea, such as controlling a robot in response to the output. The claim is therefore directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. For instance, the claim does not recite any way in which the computation is performed more effectively than in the prior art. The claim is not patent eligible.
Claims 7 and 8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more for the same reasons as claim 1.
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 2 merely provides further detail on the mapping and does not add significantly more.
Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The neural network is recited at a high level of generality; neither its structure nor its training is described in any detail.
Claims 5-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 5-6 recite pure mathematics and fall into the “Mathematical Concepts” grouping of abstract ideas.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Glasner et al. (US 20200251190 A1) in view of Shikii et al. (US 20170150930 A1).
Regarding claims 1, 7, and 8, Glasner et al. disclose an emotion acquisition device comprising, an emotion acquisition method comprising, and a computer readable non-transitory storage medium having a program stored therein, the program causing a computer of an emotion acquisition device to implement: an image capturing unit that acquires a human facial expression (for each frame of video captured (i.e., for each unit of subject response data), a facial analysis system (e.g., OpenFace) is used to detect and crop the face of the subject being exposed to stimuli, ensuring that the spatial positioning of the face remains consistent across frames, [0111]); a conversion unit that converts the human facial expression acquired by the image capturing unit into a continuous value indicating a human emotion (FACS defines a set of facial musculature movements—or action units—that collectively are able to describe nearly all possible facial expressions, and for each frame the computational analysis unit outputs whether or not an action unit is present as well as a 5-point score on the expressivity of that action unit, [0112]; in some implementations, to calculate emotion-related subject descriptors, for each frame of video, several initial subject descriptors are extracted: a binary variable of whether facial behavior associated with the emotion is detected, and a continuous variable indicating an intensity for each action unit, [0118]; the binary and continuous variables of each frame may be extracted using, for example, OpenFace, [0119]); and an emotion estimation unit that maps the continuous value converted by the conversion unit to estimate an emotion of a target person (sadness, for example, is detected by looking for an “inner brow raise,” a “brow lower,” or a “lip corner depress,” [0118]; if the binary variable for the frame indicates emotion detection, the continuous variables of each frame are summed and normalized to give a frame-wise subject descriptor of sadness intensity, [0119]; biomarkers may be derived based on the emotion-related subject descriptors, for example, the biomarker face_sad_exp_mean is the average value of face_sad_exp over all frames of data, and face_sad_exp_mean_posimg, face_sad_exp_mean_neuimg, and face_sad_exp_mean_negimg are the averages of face_sad_exp in response to positively, neutrally, and negatively valenced image stimuli, respectively, [0120]).
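As a non-limiting illustration of the cited mechanism of Glasner et al. ([0118]-[0120]), the following minimal Python sketch computes a frame-wise sadness descriptor and its biomarker average. The identifiers face_sad_exp and face_sad_exp_mean appear in the reference; the function signatures and the normalization choice here are assumptions made for illustration, not the reference's actual implementation.

```python
import numpy as np

def face_sad_exp(detected, au_intensities):
    """Frame-wise sadness intensity descriptor per [0118]-[0119].

    detected:       bool, whether sadness-related facial behavior is present
                    in the frame (the binary variable)
    au_intensities: continuous intensities of the sadness-related action
                    units (e.g., inner brow raise, brow lower, lip corner
                    depress) for the frame
    """
    if not detected:
        return 0.0
    # Sum the continuous variables and normalize (normalization by the
    # number of action units is an assumption; [0119] does not specify).
    return float(np.sum(au_intensities)) / len(au_intensities)

def face_sad_exp_mean(frames):
    """Biomarker per [0120]: average of face_sad_exp over all frames."""
    return float(np.mean([face_sad_exp(d, au) for d, au in frames]))

# Example: three frames, sadness-related behavior detected in the first two.
frames = [(True, [0.8, 0.6, 0.4]), (True, [0.2, 0.1, 0.0]), (False, [0.0, 0.0, 0.0])]
print(face_sad_exp_mean(frames))
```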
Glasner et al. do not specify that the emotion is an “estimation”; therefore, a secondary reference is provided.
Shikii et al. teach an emotion acquisition device comprising: an image capturing unit that acquires a human facial expression (The camera section 241 captures an image of the entirety of a person's face, [0152]); a conversion unit that converts the human facial expression acquired by the image capturing unit into a continuous value indicating a human emotion (physiological value, movement of a facial muscle, [0145], determining a position along a pleasure axis on the basis of the person's facial expression using Russell's circumplex model, [0147], “In Russell's circumplex model illustrated in FIG. 4, various emotions experienced by a person are mapped on a plane defined by a pleasure axis and an arousal axis. Russell's circumplex model illustrated in FIG. 4 indicates that various emotions experienced by a person can be mapped in a circle”, [0150]); and an emotion estimation unit that maps the continuous value converted by the conversion unit to estimate an emotion of a target person (Alternatively, the emotion estimation unit 24 may estimate a person's emotion by determining a position along a pleasure axis on the basis of the person's facial expression and a position along an arousal axis on the basis of the person's physiological value such as the blood flow, heartbeats, the respiration, the pulse wave, or the blood pressure using Russell's circumplex model illustrated in FIG. 4. This case will be described hereinafter, [0147], The emotion estimation processing section 243 may determine a person's emotion by determining a position along the pleasure axis on the basis of the person's facial expression estimated by the facial expression estimation section 242 and a position along the arousal axis on the basis of the blood flow volumes of the at least two body parts measured by the blood flow measuring unit 11, [0153]).
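As a non-limiting illustration of the Russell's circumplex model mapping cited from Shikii et al. ([0147], [0150]), the following minimal Python sketch maps a continuous (pleasure, arousal) pair onto the plane defined by a pleasure axis and an arousal axis and returns a coarse emotion label. The quadrant labels and angle boundaries are illustrative assumptions, not values disclosed by the reference.

```python
import math

def estimate_emotion(pleasure: float, arousal: float) -> str:
    """Map continuous pleasure/arousal values (roughly -1..1) to an emotion
    label by the angular position on the circumplex plane."""
    angle = math.degrees(math.atan2(arousal, pleasure)) % 360
    if angle < 90:
        return "excited/happy"     # high pleasure, high arousal
    elif angle < 180:
        return "angry/distressed"  # low pleasure, high arousal
    elif angle < 270:
        return "sad/depressed"     # low pleasure, low arousal
    else:
        return "relaxed/content"   # high pleasure, low arousal

print(estimate_emotion(0.7, -0.3))  # -> "relaxed/content"
```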
Glasner et al. and Shikii et al. are in the same art of detecting emotion based on a facial expression (Glasner et al., [0119]; Shikii et al., [0153]). The combination of Shikii et al. with Glasner et al. enables emotion estimation. It would have been obvious before the effective filing date to one of ordinary skill in the art to combine the estimation of Shikii et al. with the invention of Glasner et al.: both techniques were known at the time of filing, the combination would have predictable results, and Shikii et al. indicate that “In doing so, the person's emotion can be estimated more accurately than when the person's emotion is estimated only on the basis of the person's facial expression and physiological value” ([0150]), indicating a potential accuracy improvement when the inventions are combined.
Regarding claim 2, Glasner et al. and Shikii et al. disclose the emotion acquisition device according to claim 1. Shikii et al. further teach that the emotion estimation unit estimates an emotion of a target person by mapping continuous values using Russell's emotional circle model ([0147], [0150]).
Regarding claim 3, Glasner et al. and Shikii et al. disclose the emotion acquisition device according to claim 1. Glasner et al. further indicate that the conversion unit uses a CNN network to extract a feature amount of an image, which is a continuous value, from the image of the acquired human facial expression (deep neural networks and/or other machine learning methods are used to label facial musculature in each frame, allowing detection of action units according to the FACS, [0111]).
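As a non-limiting illustration of the claim 3 limitation as read on Glasner et al. ([0111]), the following minimal PyTorch sketch shows a small CNN converting a cropped face image into a continuous feature vector. The architecture and dimensions are assumptions for illustration and do not represent the reference's disclosed model.

```python
import torch
import torch.nn as nn

class FaceFeatureExtractor(nn.Module):
    """Toy CNN: face image in, continuous feature amounts out."""
    def __init__(self, feature_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling to a 32-dim summary
        )
        self.fc = nn.Linear(32, feature_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of cropped face images, shape (N, 3, H, W)
        h = self.conv(x).flatten(1)
        return self.fc(h)  # continuous feature amounts, shape (N, feature_dim)

features = FaceFeatureExtractor()(torch.randn(1, 3, 64, 64))
print(features.shape)  # torch.Size([1, 64])
```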
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Glasner et al. (US 20200251190 A1) and Shikii et al. (US 20170150930 A1) as applied to claim 3 above, and further in view of Seo et al. (US 20200104670 A1).
Regarding claim 4, Glasner et al. and Shikii et al. disclose the emotion acquisition device according to claim 3. Glasner et al. and Shikii et al. do not disclose that the emotion estimation unit inputs the continuous value indicating the human emotion converted by the conversion unit into an RNN network to obtain a reward depending on whether the facial expression of the target person is positive or negative.
Seo et al. teach that the emotion estimation unit inputs the continuous value indicating the human emotion converted by the conversion unit into an RNN network to obtain a reward depending on whether the facial expression of the target person is positive or negative (“The multimodal-based motion recognizer 110 of the electronic device 1 may apply at least a part of the obtained multimedia data 101 to each of a plurality of neural network models (e.g., deep-learning models) 111 through 113, for example, first through third neural network models 111 through 113. The neural network model may be a model learned according to a supervised learning scheme or an unsupervised learning scheme based on an AI algorithm. The neural network model may include a plurality of network nodes having weights, which are located at different depths (or layers) and may transmit and receive data according to a convolution connection relationship. For example, a model such as, but not limited to, a deep neural network (DNN), a recurrent neural network (RNN), a bidirectional recurrent deep neural network (BRDNN), or the like may be used as the neural network model”, [0048]; “The electronic device 1 may update an emotion recognition module by recognizing a user's feedback in operation 925. For example, in a case where the user shows an unsatisfactory facial expression or action, the electronic device 1 may recognize a user's face or action as a negative emotion. Thus, the electronic device 1 may update a weight module included in an emotion recognition module by using a result of the recognition”, [0155]; RNN, [0181]).
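As a non-limiting illustration of the claim 4 limitation as read on Seo et al. ([0048], [0155], [0181]), the following minimal PyTorch sketch feeds a sequence of continuous emotion values into an RNN and produces a scalar whose sign tracks whether the expression is positive or negative, usable as a reward. The architecture and dimensions are assumptions for illustration, not the reference's disclosed model.

```python
import torch
import torch.nn as nn

class ExpressionRewardRNN(nn.Module):
    """Toy RNN: sequence of continuous emotion values in, scalar reward out."""
    def __init__(self, input_dim: int = 64, hidden_dim: int = 32):
        super().__init__()
        self.rnn = nn.RNN(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (N, T, input_dim) per-frame continuous emotion values
        _, h_n = self.rnn(seq)
        # tanh squashes the output to (-1, 1): sign indicates a positive or
        # negative facial expression, magnitude the reward strength.
        return torch.tanh(self.head(h_n[-1]))

reward = ExpressionRewardRNN()(torch.randn(1, 10, 64))
print(reward.item())
```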
Glasner et al., Shikii et al., and Seo et al. are in the same art of detecting emotion (Glasner et al., [0119]; Shikii et al., [0153]; Seo et al., [0155]). The combination of Seo et al. with Glasner et al. and Shikii et al. enables the use of an RNN network. It would have been obvious before the effective filing date to one of ordinary skill in the art to combine the network of Seo et al. with the invention of Glasner et al. and Shikii et al.: the technique was known at the time of filing, the combination would have predictable results, and Seo et al. indicate that “Application of various forms of multimedia data such as a human voice, etc., as well as a human facial expression to a neural network model enables accurate identification of human emotion” ([0006]) and that “Use of a plurality of neural network models enables emotional recognition customized for human characteristics. For example, when a person (e.g., a user) having obtained a recognized emotion provides feedback, neural network models may be relearned to be personalized or customized for a particular person based on feedback information” ([0008]), thereby providing accuracy and customization benefits to the combination of inventions.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Glasner et al. (US 20200251190 A1) and Shikii et al. (US 20170150930 A1) as applied to claim 1 above, and further in view of Moerland et al. (“Emotion in reinforcement learning agents and robots: a survey”, 2018).
Regarding claim 5, Glasner et al. and Shikii et al. disclose the emotion acquisition device according to claim 1. Glasner et al. and Shikii et al. do not disclose that the emotion estimation unit updates a Q value in Q-learning using the following equation each time the human emotion is acquired:
Q(s_t, a_t) ← Q(s_t, a_t) + α[R_h + γ max_{a′} Q(s′, a′) − Q(s_t, a_t)]

where s_t and a_t are an emotional state detected by and an emotional action selected by a robot at time step t, respectively, α is a learning rate, γ is a discount factor, R_h is a reward and predicted implicit feedback, and s′ is a next state.
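As a non-limiting illustration of the claimed update, a minimal tabular Q-learning sketch in Python follows, with the human emotion-derived reward R_h standing in for the environment reward. The state/action space sizes, the discount factor, and the mapping from an acquired emotion to R_h are assumptions made for illustration.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA = 0.1, 0.9          # learning rate alpha, discount factor gamma
Q = np.zeros((N_STATES, N_ACTIONS))

def update_q(s_t: int, a_t: int, r_h: float, s_next: int) -> None:
    """Apply Q(s_t,a_t) <- Q(s_t,a_t) + alpha*[R_h + gamma*max_a' Q(s',a') - Q(s_t,a_t)]."""
    td_target = r_h + GAMMA * np.max(Q[s_next])
    Q[s_t, a_t] += ALPHA * (td_target - Q[s_t, a_t])

# Each time a human emotion is acquired, convert it to a reward and update;
# here a smile is read as positive implicit feedback (an illustrative choice).
update_q(s_t=0, a_t=1, r_h=+1.0, s_next=2)
print(Q[0, 1])
```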
Moerland et al. teach that the emotion estimation unit updates a Q value in Q-learning using the following equation each time the human emotion is acquired,

Q(s_t, a_t) ← Q(s_t, a_t) + α[R_h + γ max_{a′} Q(s′, a′) − Q(s_t, a_t)]

where s_t and a_t are an emotional state detected by and an emotional action selected by a robot at time step t, respectively, α is a learning rate, γ is a discount factor, R_h is a reward and predicted implicit feedback, and s′ is a next state (Moerland et al., temporal-difference and Q-learning update equations presented in the survey's reinforcement learning background).
Glasner et al., Shikii et al., and Moerland et al. are in the same art of detecting emotion (Glasner et al., [0119]; Shikii et al., [0153]; Moerland et al., abstract). The combination of Moerland et al. with Glasner et al. and Shikii et al. enables the use of Q-learning. It would have been obvious before the effective filing date to one of ordinary skill in the art to combine the Q-learning of Moerland et al. with the invention of Glasner et al. and Shikii et al.: the technique was known at the time of filing, the combination would have predictable results, and Moerland et al. indicate that “Studying emotions in RL-based agents is useful for three research fields. For machine learning (ML) researchers, emotion models may improve learning efficiency. For the interactive ML and human–robot interaction community, emotions can communicate state and enhance user investment. Lastly, it allows affective modelling researchers to investigate their emotion theories in a successful AI agent class” (abstract), providing an efficiency improvement when the inventions are combined.
Allowable Subject Matter
Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and upon correction of the relevant 35 U.S.C. 101 and 35 U.S.C. 112 rejections.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M ENTEZARI HAUSMANN whose telephone number is (571)270-5084. The examiner can normally be reached 10-7 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent M Rudolph can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE M ENTEZARI HAUSMANN/Primary Examiner, Art Unit 2671