DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
This communication is in response to the Application filed on 4/23/2024.
Claims 1-11 are pending.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 4/23/2024 and 2/14/2025 have been considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “first emotion estimation section”, “second emotion estimation section”, “emotion data generation section”, “first sensing section”, “second sensing section”, “communication section”, “analysis section”, “user interface section”, in claims 1-11.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 1, lines 9-10, recites “…to decide a combination ratio between the first emotion data and the second emotion data…”. It is not clear what a ‘combination ratio’ is here. Specification paragraphs [0029] and [0068] state:
[0029] Next, the context analysis section 23 sets a threshold TH, an arousal offset A, and a non-arousal offset B on the basis of a result of analyzing such context. As will be described later, these three parameters are used by the shared emotion generation section 25 to generate the shared emotion data DT3 through a combination process based on the external emotion data DT1 and the internal emotion data DT2. The threshold TH is a threshold of the parameter Ae included in the external emotion data DT1, and is a parameter to be used for determining which of two methods of generating the shared emotion data DT3 to use. The arousal offset A and the non-arousal offset B are parameters for adjusting a combination ratio to be used for performing the combination process based on the external emotion data DT1 and the internal emotion data DT2.
[0068] Specifically, for example, the context analysis section 23 increases the arousal offset A or the threshold TH to increase the ratio of the external emotion included in the shared emotion in a case where a stark atmosphere without laughing or an atmosphere with fake smile is detected. This allows the internal emotion to become less likely to be disclosed. For example, the context analysis section 23 increases the absolute value of the non-arousal offset B and decreases the threshold TH to increase the ratio of the internal emotion included in the shared emotion in a case where a casual atmosphere with jokes is detected.
It is further unclear from the specification whether the ‘combination ratio’ and ‘the ratio’ are the same. Does the ‘combination ratio’ mean that two emotions are combined in some proportion to generate a third emotion? If this is the case, it is not clear from the specification how the two emotions are combined to generate the third emotion. Appropriate correction is required. Claims 2-11 are rejected for the same reasons.
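For illustration only, one plausible reading of the combination process described in paragraphs [0029] and [0068] can be sketched as follows. The linear weighted-average form, and the manner in which the threshold TH and the offsets A and B enter the ratio, are assumptions made for illustration and are not disclosed by the specification; the indefiniteness noted above stems precisely from the absence of such details.

```python
# Illustrative sketch only: one plausible reading of the "combination
# ratio" of paragraphs [0029] and [0068]. The weighted-average form and
# the way TH, A, and B adjust the ratio are assumptions; the
# specification does not disclose these details.

def combine_emotions(external, internal, arousal, TH, A, B):
    """Combine external (DT1) and internal (DT2) emotion values into
    shared emotion data (DT3) using an adjustable combination ratio."""
    if arousal >= TH:
        # Aroused context: arousal offset A raises the external share,
        # making the internal emotion less likely to be disclosed.
        ratio = min(1.0, 0.5 + A)
    else:
        # Non-aroused context: non-arousal offset B raises the internal
        # share (casual atmosphere discloses more internal emotion).
        ratio = max(0.0, 0.5 - abs(B))
    # 'ratio' is the combination ratio: the fraction of DT1 in DT3.
    return ratio * external + (1.0 - ratio) * internal

# Example: stark atmosphere (large A) discloses mostly external emotion.
shared = combine_emotions(external=0.8, internal=0.2,
                          arousal=0.7, TH=0.5, A=0.3, B=0.1)
```

Under this assumed reading, ‘the ratio’ of paragraph [0068] and the claimed ‘combination ratio’ would coincide, but the specification does not confirm this.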
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 10-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea – a mental process) without significantly more. Claim 1 is used as an example. Claims 10 and 11 recite a device and a system, respectively. The two-part test to identify claims that are directed to a judicial exception (Step 2A) and then to evaluate whether additional elements of the claim provide an inventive concept (Step 2B) is:
(1) Are the claims directed to a process, machine, manufacture or composition of matter;
(2A) Prong One: Are the claims directed to a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea;
Prong Two: If the claims are directed to a judicial exception under Prong One, then is the judicial exception integrated into a practical application;
(2B) If the claims are directed to a judicial exception and do not integrate the judicial exception, do the claims provide an inventive concept.
Claim 1. An information processing device comprising: (a) a first emotion estimation section configured to generate first emotion data by estimating emotion of a user on a basis of a result of detection made by a first sensing section configured to detect behavior of the user, the behavior being used for communication; (b) a second emotion estimation section configured to generate second emotion data by estimating emotion of the user on a basis of a result of detection made by a second sensing section configured to detect movement or response of the user, the movement or response not being used for communication; (c) an emotion data generation section configured to decide a combination ratio between the first emotion data and the second emotion data, and to generate third emotion data by combining the first emotion data and the second emotion data with use of the decided combination ratio; and (d) a communication section configured to transmit the third emotion data. [emphasis added].
With regard to (1), the instant claims recite a device and a system; therefore, the answer is "yes".
With regard to (2A), Prong One: Yes. Under the broadest reasonable interpretation, the instant claims are directed to a judicial exception – an abstract idea belonging to the group of mental processes, i.e., concepts that can practically be performed in the human mind (including an observation, evaluation, judgment, or opinion). Steps (a), (b), and (c) (emphasized in claim 1 above) are generically recited, and nothing in these steps precludes them from practically being performed by a human equipped with an appropriate apparatus. They can be interpreted as merely looking at the data and determining an emotion of a subject in an image, for example. There is nothing in the claim that requires more than an operation that a human, armed with an appropriate apparatus, pen, and paper, can perform. The generating and deciding, under their broadest reasonable interpretation, cover performance of the limitations in the mind. The claim encompasses the user having a certain emotion or behavior for communication; once a sample is received, attributes such as a shape or an orientation/movement/response of a user/subject of the image/data can be determined/generated. In this way, one can essentially present/output information about the section of an image/data that represents that emotion/shape/orientation. Thus, these limitations are a mental process.
With regard to (2A), Prong Two: No. The instant claims do not apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on it; the additional element of (d), “transmit,” does not integrate the judicial exception into a practical application.
The claims use a generic communication section to transmit emotion data (i.e., “data”) at a high level of generality such that said “data” can be used in the operation of the recited judicial exception (the mental steps of “generating”/“deciding”). Supplying “data” does not provide for “integration” of the abstract idea into a practical application, as said data do not change the way in which the system operates. There are no specifics on how the data are transmitted, even if this step is performed by a “processor” coupled to, for example, a camera. A camera/sensor is well known in the field, and receiving data from a camera/sensor is also well known.
This limitation is no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. In conclusion, the claim as a whole does not provide for “integration” of the abstract idea into a practical application.
The claim is directed to the abstract idea.
With regard to (2B), as discussed with respect to Step 2A Prong Two, the additional element in the claim amounts to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here, i.e., mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. The pending claims do not present anything more than what is routine in the art, i.e., the additional elements are nothing more than routine and well-known steps. There is no improvement to technology here. There are only the steps of (a), (b), and (c) with the additional element of (d), and it has not been shown that the mental process allows the “technology” to do something that it previously was not able to do.
Therefore, the claims 1, 10, and 11 are ineligible.
With regard to dependent claims 2-9, a similar analysis applies: these claims do not integrate the judicial exception into a practical application and do not provide significantly more than the judicial exception. They are similarly rejected for the same reasons discussed with respect to the steps recited in claim 1, not repeated herein.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-5, and 7-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by JP2012059107A to Yosuke.
With regard to claim 1, Yosuke discloses an information processing device (paragraph [0025], Figure 1, for example) comprising: a first emotion estimation section configured to generate first emotion data by estimating emotion of a user on a basis of a result of detection made by a first sensing section configured to detect behavior of the user, the behavior being used for communication (facial expression recognition unit 12, starting with paragraphs [0025, 0027 to 0046-0049], Figures 1, 3, 4-5, etc.); a second emotion estimation section configured to generate second emotion data by estimating emotion of the user on a basis of a result of detection made by a second sensing section configured to detect movement or response of the user, the movement or response not being used for communication (a biological information analysis unit 14, Fig. 1, paragraphs [0025, 0029-0030, 0035, 0037, 0048-0057], etc.); an emotion data generation section configured to decide a combination ratio between the first emotion data and the second emotion data, and to generate third emotion data by combining the first emotion data and the second emotion data with use of the decided combination ratio (Figures 3, 6, and 9, step S15 paragraph [0049], step S27 paragraph [0057], step S37 paragraph [0073]); and a communication section configured to transmit the third emotion data (paragraphs [0076, 0082, 0084]).
With regard to claim 2, Yosuke discloses wherein the emotion data generation section is configured to decide the combination ratio on a basis of the first emotion data (paragraphs [0072-0075]).
With regard to claim 4, Yosuke discloses wherein the emotion data generation section further includes an analysis section configured to analyze atmosphere of communication on a basis of a result of detection made by the first sensing section, and the emotion data generation section is configured to decide the combination ratio on a basis of a result of analysis made by the analysis section (paragraphs [0068-0070]).
With regard to claim 5, Yosuke discloses a user interface section configured to display the third emotion data (estimation result output unit 17, step S17 for “adjustment estimation result received from the estimation result adjustment unit 16”, step S29, “The estimation result output unit 17 outputs the adjustment estimation result received from the estimation result adjustment unit 16”, step S38 for “estimation result output unit 17 outputs the emotion estimation result received from the estimation result selection unit 20”, display unit 35, Figures 1, 3, 6 and 7-10).
With regard to claim 7, Yosuke discloses wherein the emotion data generation section is configured to monitor change in emotion indicated by the third emotion data, and the user interface section is configured to change a display mode of the third emotion data on a basis of a result of monitoring done by the emotion data generation section (paragraphs [0018, 0031, 0073, 0076, 0081]).
With regard to claim 8, Yosuke discloses wherein the user interface section is configured to accept an operation input of the combination ratio from the user, and the emotion data generation section is configured to generate the third emotion data by combining the first emotion data and the second emotion data with use of the combination ratio accepted by the user interface section (paragraphs [0040-0041]).
With regard to claim 9, Yosuke discloses a user interface section, wherein the communication section is further configured to receive fourth emotion data transmitted from a communication partner, and the user interface section is configured to display the fourth emotion data (paragraphs [0082, 0086, 0088]).
With regard to claims 10 and 11, these claims are rejected on the same basis as claim 1, and the arguments presented above for claim 1 apply equally to claims 10-11. Yosuke discloses a second communication support device, the second communication support device including a second communication section configured to receive the third emotion data, and a user interface section configured to display the third emotion data received by the second communication section (estimation result output unit 17, step S17 for “adjustment estimation result received from the estimation result adjustment unit 16”, step S29, “The estimation result output unit 17 outputs the adjustment estimation result received from the estimation result adjustment unit 16”, step S38 for “estimation result output unit 17 outputs the emotion estimation result received from the estimation result selection unit 20”, display unit 35, Figures 1, 3, 6 and 7-10, paragraphs [0082, 0086 and 0088]). All of the other limitations, which parallel claim 1, are not repeated herein but are incorporated by reference.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over JP2012059107A to Yosuke in view of Imani et al. (“A survey of emotion recognition methods with emphasis on E-Learning environments,” Journal of Network and Computer Applications 147 (2019): 102423) (hereafter, “Imani”).
With regard to claim 3, Yosuke teaches the information processing device according to claim 2. However, Yosuke does not teach wherein the first emotion data includes a first component indicating an arousal level and a second component indicating an emotional valence, and the emotion data generation section is configured to decide the combination ratio on a basis of the first component in the first emotion data.
Imani teaches wherein the first emotion data includes a first component indicating an arousal level and a second component indicating an emotional valence, and the emotion data generation section is configured to decide the combination ratio on a basis of the first component in the first emotion data (page 15 right column first paragraph, page 28 left column first full paragraph, Figures 4-6).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Yosuke to represent emotions with the arousal and valence components taught by Imani. The suggestion/motivation for doing so would have been to obtain data that is useful for statistical processing, as suggested by Imani on page 10.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Imani with Yosuke to obtain the invention as specified in claim 3.
With regard to claim 6, Yosuke in combination with Imani discloses wherein the third emotion data includes a first component indicating an arousal level and a second component indicating an emotional valence, and the user interface section is configured to display the third emotion data by orienting the first component and the second component of the third emotion data to a first direction and a second direction on a display screen of the user interface section (Imani: page 15 right column first paragraph, page 28 left column first full paragraph, Figures 4-6).
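For illustration only, the two-axis display arrangement recited in claim 6 (orienting the arousal component and the valence component to two directions on a display screen, consistent with the circumplex-style layouts of Imani's Figures 4-6) can be sketched as follows. The value ranges, screen dimensions, and axis orientation are assumptions made for illustration, not disclosures of either reference:

```python
# Illustrative sketch only: mapping the two components of the third
# emotion data onto two display directions. The [-1, 1] value ranges,
# the 200x200 screen, and the top-equals-high-arousal orientation are
# assumptions for illustration.

def to_screen(valence, arousal, width=200, height=200):
    """Map valence in [-1, 1] to the horizontal axis and arousal in
    [-1, 1] to the vertical axis of a width x height display."""
    x = int((valence + 1.0) / 2.0 * (width - 1))
    # Invert the vertical axis so high arousal appears at the top.
    y = int((1.0 - (arousal + 1.0) / 2.0) * (height - 1))
    return x, y

# A neutral emotion (valence 0, arousal 0) lands at the screen center.
print(to_screen(0.0, 0.0))  # → (99, 99)
```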
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHEFALI D. GORADIA whose telephone number is (571)272-8958. The examiner can normally be reached Monday-Thursday 8AM-6PM, Friday 8AM-12PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
SHEFALI D. GORADIA
Primary Patent Examiner
Art Unit 2676
/SHEFALI D GORADIA/Primary Patent Examiner, Art Unit 2676