Prosecution Insights
Last updated: April 19, 2026
Application No. 17/847,137

FEEDBACK LOOP FOR EMOTION RECOGNITION SYSTEM

Non-Final OA: §101, §103, §112
Filed: Jun 22, 2022
Examiner: SZUMNY, JONATHON A
Art Unit: 3686
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Cephalgo SAS
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (143 granted / 247 resolved; +5.9% vs TC avg)
Interview Lift: strong, +60.6% for resolved cases with interview
Avg Prosecution: 3y 0m (typical timeline); 58 applications currently pending
Total Applications: 305 (career history, across all art units)

Statute-Specific Performance

§101: 32.5% (-7.5% vs TC avg)
§103: 30.8% (-9.2% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 247 resolved cases.
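The headline figures above are simple ratios, and the statute-specific deltas imply a common Tech Center baseline. A quick sketch of the arithmetic, assuming the dashboard computes each delta by subtracting a TC-average baseline (the back-computed baseline is an inference, not a stated figure):

```python
# Career allow rate: 143 granted out of 247 resolved cases.
granted, resolved = 143, 247
allow_rate = granted / resolved  # rounds to the displayed 58%

# Statute-specific allowance rates with their stated deltas vs the
# Tech Center average. The TC baseline is back-computed here as
# rate - delta (an assumption about how the dashboard derives deltas).
statutes = {
    "101": (0.325, -0.075),
    "103": (0.308, -0.092),
    "102": (0.099, -0.301),
    "112": (0.207, -0.193),
}
for name, (rate, delta) in statutes.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"§{name}: {rate:.1%} allowance (implied TC avg {tc_avg:.1%})")
```

Notably, every implied baseline works out to 40.0%, consistent with a single TC-wide estimate being used for all four statutes.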

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending in the present application, with claims 1 and 10 being independent.

Information Disclosure Statement

The information disclosure statement filed June 22, 2022 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein (i.e., the foreign and NPL citations) has not been considered.

Claim Objections

Claims 1, 9, and 20 are objected to because of the following informalities:
- In claim 1, line 9, "each the" should be changed to --each of the--.
- In claims 9 and 20, "VGG16" should be changed to --Visual Geometry Group (VGG)-16--.
Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7, 9, 16-18, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 7 recites the limitation "the Fourier transform" in line 1. There is insufficient antecedent basis for this limitation in the claim.

Regarding claim 9, the phrase "such as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d). Claim 9 also recites the limitation "the convolutional neural network" in line 2. There is insufficient antecedent basis for this limitation in the claim.

Claims 16-18 and 20 recite the limitation "The emotional recognition system" in line 1. There is insufficient antecedent basis for this limitation in the claims. The Examiner will assume Applicant intended --The method-- for these claims.

Claim 17 recites the limitation "the Fourier transform" in line 1. There is insufficient antecedent basis for this limitation in the claim.

Regarding claim 20, the phrase "such as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d). Claim 20 also recites the limitation "the convolutional neural network" in line 2. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.

Subject Matter Eligibility Criteria - Step 1:

Claims 1-9 are directed to a system (i.e., a machine) while claims 10-20 are directed to a method (i.e., a process). Accordingly, claims 1-20 are all within at least one of the four statutory categories. 35 USC §101.
Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2A - Prong One:

Regarding Prong One of Step 2A of the Alice/Mayo test (which collectively includes the guidance in the January 7, 2019 Federal Register notice and the October 2019 and July 2024 updates issued by the USPTO as incorporated into the MPEP, as supported by relevant case law), the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they "recite" a judicial exception or, in other words, whether a judicial exception is "set forth" or "described" in the claims. MPEP 2106.04(II)(A)(1). An "abstract idea" judicial exception is subject matter that falls within at least one of the following groupings: a) certain methods of organizing human activity, b) mental processes, and/or c) mathematical concepts. MPEP 2106.04(a).

Representative independent claim 1 includes limitations that recite at least one abstract idea. Specifically, independent claim 1 recites:

An emotion recognition system comprising:
a. a Valence-Arousal model comprising: i. a Valence factor comprising two endpoints; ii. an Arousal factor comprising two endpoints; iii. a plurality of points, one of said plurality of points being an origin;
b. an algorithm;
c. a user input acquisition device;
d. a database comprising training data, said training data comprising: i. actual measurements of user inputs assigned to the endpoints of each the Valence factor and the Arousal factor; ii. actual measurements of user inputs assigned to some of the plurality of points, said some of the plurality of points being in addition to the endpoints of the Valence factor and the Arousal factor; iii. emotions assigned to each actual measurement of user inputs;
e. a processor; and
f. a user device,
wherein the user input acquisition device collects actual measurements of user inputs, and
wherein the actual measurements of user inputs are transmitted to the database in the form of non-transitory computer-readable media, and
wherein the processor retrieves the actual measurements of user inputs from the database, and
wherein the processor uses the algorithm to assign the actual measurements of user inputs to one or more of the plurality of points of the Valence-Arousal model, and
wherein the processor uses the algorithm to recognize the closest corresponding emotions to the one or more of the plurality of points, and
wherein the processor uses the algorithm to assign user emotions based on the closest corresponding emotions to said one or more of the plurality of points, and
wherein the user emotions are transmitted to a user device in the form of non-transitory computer-readable media, and
wherein the user emotions are displayed on the user device in the form of human-readable information, and
wherein a user uses the user device to provide user's emotion feedback.

The Examiner submits that the foregoing underlined limitations recite: (a) "certain methods of organizing human activity" because using an algorithm to assign actual user input measurements (e.g., EEG/ECG readings, blood pressure values, etc.) to one or more points on a Valence-Arousal model, assigning corresponding emotions based on the points, and then receiving user feedback regarding the emotions relates to managing personal behavior or relationships or interactions between people (e.g., social activities, teaching, and following rules or instructions). These recitations, under their broadest reasonable interpretation, are similar to the concept of a mental process that a neurologist should follow when testing a patient for nervous system malfunctions. In re Meyer, 688 F.2d 789, 791-93, 215 USPQ 193, 194-96 (CCPA 1982). MPEP 2106.04(a)(2)(II)(C).
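At the level of generality the Examiner describes, the claimed assignment of measurements to points on the Valence-Arousal model and recognition of the "closest corresponding emotions" reduces to a nearest-neighbor lookup. A minimal sketch; the anchor coordinates and emotion labels below are hypothetical illustrations, not drawn from the application:

```python
import math

# Hypothetical emotion anchors on a valence-arousal plane:
# valence on the x-axis (displeasure -1 .. pleasure +1),
# arousal on the y-axis (low -1 .. high +1). Not from the application.
EMOTION_POINTS = {
    "happy":   (0.8, 0.5),
    "calm":    (0.6, -0.6),
    "angry":   (-0.7, 0.7),
    "sad":     (-0.6, -0.5),
    "neutral": (0.0, 0.0),  # the origin recited in element a.iii
}

def assign_emotion(valence: float, arousal: float) -> str:
    """Return the emotion whose anchor point is closest (Euclidean)."""
    return min(
        EMOTION_POINTS,
        key=lambda e: math.dist((valence, arousal), EMOTION_POINTS[e]),
    )

print(assign_emotion(0.7, 0.4))
print(assign_emotion(-0.5, -0.4))
```

The brevity of the sketch illustrates the Examiner's point: nothing in the claim language requires more than this kind of distance comparison.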
Furthermore, the foregoing underlined limitations recite (b) "mental processes" because they are observations/evaluations/judgments/analyses that can, at the currently claimed high level of generality, be practically performed in the human mind (e.g., with pen and paper). As an example, a medical professional could utilize any appropriate algorithm to assign actual user input measurements (e.g., EEG/ECG readings, blood pressure values, etc.) to one or more points on a Valence-Arousal model and assign corresponding emotions based on the points. For instance, if most of the assigned points hover around relaxed/calm emotions in the model, such emotions can be "assigned" to the user. A medical professional could also receive user feedback regarding the emotions, such as by listening to or mentally reviewing such feedback. These recitations, under their broadest reasonable interpretation, are similar to the concepts of collecting information, analyzing it, and displaying certain results of the collection and analysis found to be "mental processes" in Electric Power Group, LLC v. Alstom, 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016). MPEP 2106.04(a)(2)(III).

Accordingly, the claim recites at least one abstract idea. Furthermore, dependent claims 2, 3, 6-8, 11-13, and 16-19 further define the at least one abstract idea (and thus fail to make the abstract idea any less abstract) as set forth below:

-Claims 2, 3, 11, and 12 call for using a "credibility algorithm" to determine whether or not the user's emotion feedback includes outliers, discarding the feedback if it includes outliers, re-assigning the emotions to re-assigned points on the Valence-Arousal model if the feedback does not include outliers, and updating the algorithm based on the re-assigned points. All of these limitations are practically performable in the human mind with pen and paper ("mental processes") at the claimed high level of generality.
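The "credibility algorithm" of claims 2, 3, 11, and 12, as characterized above, could be any garden-variety outlier test. A sketch using a robust median/MAD (modified z-score) rule, offered purely to illustrate the level of generality at issue; the claims do not specify any particular test, and the threshold is an arbitrary convention:

```python
from statistics import median

def filter_feedback(values, z_threshold=3.5):
    """Split numeric feedback into (kept, discarded) lists.

    A generic stand-in for the claimed "credibility algorithm":
    feedback whose modified z-score (Iglewicz-Hoaglin, median/MAD
    based) exceeds the threshold is treated as an outlier.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # all values identical; nothing is an outlier
        return list(values), []
    kept, discarded = [], []
    for v in values:
        z = 0.6745 * abs(v - med) / mad
        (discarded if z > z_threshold else kept).append(v)
    return kept, discarded

kept, discarded = filter_feedback([0.1, 0.2, 0.15, 0.18, 9.9])
```

In the example call, the lone extreme value is discarded while the clustered feedback survives, which is all the claim language appears to require.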
-Claims 6, 7, 16, and 17 call for transforming the actual measurements into Hjorth parameters or applying a Fourier transform to the actual measurements, which recite "mathematical concepts" because they represent mathematical formulas/equations/calculations.

-Claims 8 and 19 call for using the algorithm to assign the actual measurements of user inputs to one or more of the plurality of points of the Valence-Arousal model by converting the actual measurements of user inputs into one or more scalograms (or other continuous wavelet transformation coefficient(s) in claim 8), which recite "mathematical concepts" because they represent mathematical formulas/equations/calculations.

-Claim 13 recites how the method is continuously repeated, which just further defines the at least one abstract idea.

-Claim 18 calls for denoising the other measurements of user inputs, which a person could practically perform in the mind, such as by reviewing and discarding certain measurements.

Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2A - Prong Two:

Regarding Prong Two of Step 2A of the Alice/Mayo test, it must be determined whether the claim as a whole integrates the abstract idea into a practical application. As noted at MPEP §2106.04(II)(A)(2), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application." MPEP §2106.05(I)(A).
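For context on the "mathematical concepts" characterization of claims 6, 7, 16, and 17 above: the Hjorth parameters are three closed-form variance-based descriptors of a signal. A minimal sketch of their standard definitions (Activity, Mobility, Complexity); the sinusoidal test signal is an arbitrary illustration:

```python
import math
from statistics import pvariance

def hjorth_parameters(signal):
    """Compute the Hjorth Activity, Mobility, and Complexity of a
    1-D signal, using the standard definitions:

      Activity   = var(x)
      Mobility   = sqrt(var(dx) / var(x))
      Complexity = mobility(dx) / mobility(x)

    where dx is the first difference of the signal.
    """
    dx = [b - a for a, b in zip(signal, signal[1:])]
    ddx = [b - a for a, b in zip(dx, dx[1:])]
    activity = pvariance(signal)
    mobility = (pvariance(dx) / activity) ** 0.5
    complexity = (pvariance(ddx) / pvariance(dx)) ** 0.5 / mobility
    return activity, mobility, complexity

# Example: a pure 5-cycle sinusoid sampled at 128 points, for which
# Complexity is approximately 1 (a sinusoid's derivative is another
# sinusoid of the same frequency).
x = [math.sin(2 * math.pi * 5 * t / 128) for t in range(128)]
act, mob, comp = hjorth_parameters(x)
```

Each parameter is a single formula over the samples, which is the sense in which the Office Action groups these limitations under "mathematical concepts."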
In the present case, the additional limitations beyond the above-noted at least one abstract idea recited in the claim are as follows (where the bolded portions are the "additional limitations" while the underlined portions continue to represent the at least one "abstract idea"):

An emotion recognition system comprising:
a. a Valence-Arousal model comprising: i. a Valence factor comprising two endpoints; ii. an Arousal factor comprising two endpoints; iii. a plurality of points, one of said plurality of points being an origin;
b. an algorithm;
c. a user input acquisition device (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f));
d. a database (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) comprising training data, said training data comprising: i. actual measurements of user inputs assigned to the endpoints of each the Valence factor and the Arousal factor; ii. actual measurements of user inputs assigned to some of the plurality of points, said some of the plurality of points being in addition to the endpoints of the Valence factor and the Arousal factor; iii. emotions assigned to each actual measurement of user inputs;
e. a processor (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)); and
f. a user device (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)),
wherein the user input acquisition device (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) collects actual measurements of user inputs, and
wherein the actual measurements of user inputs are transmitted (extra-solution activity (data gathering/transmitting data) as noted below, see MPEP § 2106.05(g)) to the database in the form of non-transitory computer-readable media (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)), and
wherein the processor (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) retrieves the actual measurements of user inputs from the database (extra-solution activity (data gathering) as noted below, see MPEP § 2106.05(g)), and
wherein the processor (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) uses the algorithm to assign the actual measurements of user inputs to one or more of the plurality of points of the Valence-Arousal model, and
wherein the processor (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) uses the algorithm to recognize the closest corresponding emotions to the one or more of the plurality of points, and
wherein the processor (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) uses the algorithm to assign user emotions based on the closest corresponding emotions to said one or more of the plurality of points, and
wherein the user emotions are transmitted to a user device in the form of non-transitory computer-readable media, and wherein the user emotions are displayed on the user device in the form of human-readable information (extra-solution activity (transmitting data) as noted below, see MPEP § 2106.05(g); using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)), and
wherein a user uses the user device (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) to provide user's emotion feedback.

For the following reasons, the Examiner submits that the above-identified additional limitations, when considered as a whole with the limitations reciting the at least one abstract idea, do not integrate the above-noted at least one abstract idea into a practical application.

Regarding the additional limitations of the user input acquisition device, processor, user device, and database including non-transitory computer-readable media, the Examiner submits that these limitations amount to merely using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).

Regarding the additional limitations of collecting/retrieving/transmitting actual user input measurements and transmitting/displaying user emotions, the Examiner submits that these additional limitations merely add insignificant extra-solution activity (data gathering; selecting data to be manipulated; transmitting/displaying data) to the at least one abstract idea in a manner that does not meaningfully limit the at least one abstract idea (see MPEP § 2106.05(g)).

Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application. Furthermore, looking at the additional limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. MPEP §2106.05(I)(A) and §2106.04(II)(A)(2).
For these reasons, representative independent claim 1 and analogous independent claim 10 do not recite additional elements that integrate the judicial exception into a practical application. Accordingly, representative independent claim 1 and analogous independent claim 10 are directed to at least one abstract idea.

The remaining dependent claim limitations not addressed above fail to integrate the abstract idea into a practical application as set forth below:

-Claims 4, 5, 14, and 15 recite how the user input device is an EEG or ECG device, which does no more than generally link use of the abstract idea to a particular technological environment or field of use without adding an inventive concept to the abstract idea (see MPEP § 2106.05(h)).

-Claim 8 recites how the scalograms or other continuous wavelet transformation coefficients are the input of ML/deep learning, which amounts to merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, which is equivalent to the words "apply it" (see MPEP § 2106.05(f)). "Claims drafted using largely (if not entirely) result-focused functional language, containing no specificity about how the purported invention achieves those results, are almost always found to be ineligible for patenting under Section 101." Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1356 (Fed. Cir. 2024). Claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437 (Fed. Cir. 2025), pp. 10, 14. An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id.
-Claims 9 and 20 recite how one or more pre-trained algorithms such as VGG16 is/are used within the convolutional neural network, which again amounts to merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, which is equivalent to the words "apply it" (see MPEP § 2106.05(f)). "Claims drafted using largely (if not entirely) result-focused functional language, containing no specificity about how the purported invention achieves those results, are almost always found to be ineligible for patenting under Section 101." Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1356 (Fed. Cir. 2024). Claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437 (Fed. Cir. 2025), pp. 10, 14. An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id.

When the above additional limitations are considered as a whole along with the limitations directed to the at least one abstract idea, the at least one abstract idea is not integrated into a practical application. Therefore, the claims are directed to at least one abstract idea.

Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2B:

Regarding Step 2B of the Alice/Mayo test, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons the same as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
Regarding the additional limitations of the user input acquisition device, processor, user device, and database including non-transitory computer-readable media, the Examiner submits that these limitations amount to merely using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).

Regarding the additional limitations directed to collecting/retrieving/transmitting actual user input measurements and transmitting/displaying user emotions, which the Examiner submits merely add insignificant extra-solution activity to the abstract idea (see MPEP § 2106.05(g)) as discussed above, the Examiner has reevaluated such limitations and determined them not to be unconventional, as they merely consist of receiving/transmitting data over a network. See Intellectual Ventures I v. Symantec Corp., 838 F.3d 1307, 1321, 120 USPQ2d 1353, 1362 (Fed. Cir. 2016); see MPEP 2106.05(d)(II).

The dependent claims also do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application.

-Claims 4, 5, 14, and 15 recite how the user input device is an EEG or ECG device, which does no more than generally link use of the abstract idea to a particular technological environment or field of use without adding an inventive concept to the abstract idea (see MPEP § 2106.05(h)).
-Claim 8 recites how the scalograms or other continuous wavelet transformation coefficients are the input of ML/deep learning, which amounts to merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, which is equivalent to the words "apply it" (see MPEP § 2106.05(f)). "Claims drafted using largely (if not entirely) result-focused functional language, containing no specificity about how the purported invention achieves those results, are almost always found to be ineligible for patenting under Section 101." Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1356 (Fed. Cir. 2024). Claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437 (Fed. Cir. 2025), pp. 10, 14. An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id.

-Claims 9 and 20 recite how one or more pre-trained algorithms such as VGG16 is/are used within the convolutional neural network, which again amounts to merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished, which is equivalent to the words "apply it" (see MPEP § 2106.05(f)). "Claims drafted using largely (if not entirely) result-focused functional language, containing no specificity about how the purported invention achieves those results, are almost always found to be ineligible for patenting under Section 101." Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1356 (Fed. Cir. 2024). Claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437 (Fed. Cir. 2025), pp. 10, 14.
An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id.

Therefore, claims 1-20 are ineligible under 35 USC §101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, 5, 10, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2022/0075450 to Deitcher et al. ("Deitcher") in view of U.S. Patent App. Pub. No. 2020/0383598 to Craik et al.
("Craik"): Regarding claim 1, Deitcher discloses an emotion recognition system (system 100 in Figure 1) comprising: a. a Valence-Arousal model (valence-arousal grid/model in Figure 4A and [0049]) comprising: i. a Valence factor (one of the axes in Figure 4A is the valence factor)…; ii. an Arousal factor (the other of the axes in Figure 4A is the arousal factor)…; iii. a plurality of points (points 406 in Figure 4A), one of said plurality of points being an origin (the intersection point at the middle is an origin); b. an algorithm (emotional-imaging composer/emotion prediction module 130 in Figure 1 and [0025]-[0026], [0032], [0044], [0048]-[0049]); c. a user input acquisition device (sensors 102 in Figure 1); d. a database comprising training data (training data 132 in Figure 1 which would be stored in a database/region/section/area of storage in the computing system 101), said training data comprising: …; ii. actual measurements of user inputs assigned to some of the plurality of points, said some of the plurality of points being in addition to the endpoints of the Valence factor and the Arousal factor ([0021] and [0025] disclose training data including training biosignals (actual measurements) with corresponding/assigned ground truth emotions (which would correspond to the emotion points on the valence-arousal grid/model); iii. emotions assigned to each actual measurement of user inputs (emotions are assigned to the measurements/signals as noted per [0025]; e. a processor (processor 103 in Figure 1); and f. 
a user device (end user device 150 in Figure 1)), wherein the user input acquisition device collects actual measurements of user inputs ([0014], [0015], [0018], [0019] disclose how the sensors collect biometric data from user(s)), and wherein the actual measurements of user inputs are transmitted to the database in the form of non-transitory computer-readable media ([0030] discloses how the sensor data sets (actual measurements) from sensors 102 are stored which requires transmission to the database/storage of computing system 101 in the form of non-transitory computer-readable media), and wherein the processor retrieves the actual measurements of user inputs from the database ([0021], [0025], [0044], [0049] discloses receiving the biosignals (actual measurements which would be from the storage/database as noted above) and determining corresponding emotions), and wherein the processor uses the algorithm to assign the actual measurements of user inputs to one or more of the plurality of points of the Valence-Arousal model ([0049] discusses how the emotional-imaging composer receives the sensor data (actual measurements of user inputs which can include data from various sensors over time per [0019], [0022], [0029], [0048]) and generates probabilities of each emotion such that each piece of sensor data (each of the actual measurements) would correspond/be assigned to one or more points on the valence-arousal grid/model)), and wherein the processor uses the algorithm to recognize the closest corresponding emotions to the one or more of the plurality of points ([0049] discloses generating probabilities of each emotion (which correspond to the various points on the valence-arousal grid/model as noted above) and determining the most probable emotions (recognizing the closest corresponding emotions to the points), and wherein the processor uses the algorithm to assign user emotions based on the closest corresponding emotions to said one or more of the plurality of points 
([0049] discloses how the most probable emotion (closest corresponding emotion) is determined to be (assigned as) the predicted emotion for each sensor data/actual measurements (points on the grid/model)), and wherein the user emotions are transmitted to a user device in the form of non- transitory computer-readable media, and wherein the user emotions are displayed on the user device in the form of human-readable information ([0004] discloses outputting a display feature corresponding to the determined emotion on a GUI (transmitting emotions to user device in formation of non-transitory CRM and displaying the emotions in the form of human-readable information); also see Figure 4A; also see end of [0025], [0027], and [0028] which disclose outputting the predicted emotion to an end user device in the form of visual information ), and wherein a user uses the user device to provide user's emotion feedback ([0021], [0024] disclose receiving feedback from a user regarding the emotions (which would be obtained via the user device)). While Deitcher discloses Valence/Arousal factors/axes in Figure 4A as noted above and also discloses ([0021] and [0025]) how the training data includes training biosignals (actual measurements) with corresponding/assigned ground truth emotions (which would correspond to the emotion points on the valence-arousal grid/model), Deitcher might be silent regarding each factor/axis specifically having two endpoints, where the actual measurements of user inputs in the training also being assigned to the endpoints of each of the Valence factor and the Arousal factor. 
Nevertheless, Craik teaches (Figure 20B and [0182]-[0190]) that it was known in the healthcare informatics art for each of the factors/axes of a Valence-Arousal grid/model to include two endpoints (e.g., Pleasure/Displeasure in the case of the Valence factor and High/Low in the case of the Arousal factor) and to store a data matrix of user electrode values (actual measurements) in association with valence-arousal values from the grid/model for use in training/validation/testing, to advantageously facilitate an understanding of a user's most basic, often subconscious feelings (e.g., high/low state of pleasure and high/low intensity/activation level) which can in combination lead to determination of complex directed emotions.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for each factor/axis to have two endpoints such that the actual measurements of user inputs in the training also are assigned to the endpoints of each of the Valence factor and the Arousal factor in the system of Deitcher, similar to as taught by Craik, to advantageously facilitate an understanding of a user's most basic, often subconscious feelings (e.g., high/low state of pleasure and high/low intensity/activation level) which can in combination lead to determination of complex directed emotions.

A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.
Regarding claim 4, the Deitcher/Craik combination discloses the emotion recognition system of claim 1, further including wherein the user input acquisition device is an EEG device ([0018]).

Regarding claim 5, the Deitcher/Craik combination discloses the emotion recognition system of claim 1, further including wherein the user input acquisition device is an ECG device ([0018]).

Claim 10 is rejected in view of the Deitcher/Craik combination as discussed above in relation to claim 1. In relation to using the processor to denoise the actual measurements of user inputs as recited in claim 10, [0020] of Deitcher discloses filtering the received sensor data (the actual measurements), such as by removing erroneous/unsuitable data (denoising the actual measurements). In relation to the user emotions specifically being displayed on the user device in the form of human-readable "text" in claim 10, as opposed to human-readable "information" as in claim 1, [0025], [0027], and [0028] of Deitcher disclose outputting the predicted emotion to an end user device in the form of visual information, which one of ordinary skill in the art would understand includes "text." To the extent that such visual information in Deitcher does not necessarily include "text," Craik teaches (Figures 32A-32C and [0287]) that it was known in the healthcare informatics art to display determined emotional states in the form of human-readable text, which advantageously provides a well-known manner of conveying important medical information to users. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the human-readable information of Deitcher to specifically be "text" as taught by Craik to advantageously provide a well-known manner of conveying important medical information to users.
A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Claims 14 and 15 are rejected in view of the Deitcher/Craik combination as respectively discussed above in relation to claims 4 and 5.

Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2022/0075450 to Deitcher et al. ("Deitcher") in view of U.S. Patent App. Pub. No. 2020/0383598 to Craik et al. ("Craik"), and further in view of U.S. Patent App. Pub. No. 2016/0042648 to Kothuri ("Kothuri"):

Regarding claim 2, the Deitcher/Craik combination discloses the emotion recognition system of claim 1, but appears to be silent regarding further including a credibility algorithm, wherein the credibility algorithm determines that the user's emotion feedback are outliers, and wherein the user's emotion feedback are discarded. Nevertheless, Kothuri teaches ([0068]) that it was known in the healthcare informatics art to remove/discard outlier user emotion feedback to advantageously improve the ability to accurately evaluate a user's emotional state and generate appropriate responses to enhance an overall experience ([0008]). As the system is computer implemented via software per [0002], there is necessarily a set of instructions (e.g., a "credibility algorithm") of the software to determine whether or not the user feedback includes outliers.
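For illustration, the kind of "credibility algorithm" the claim recites, a software check that flags and discards outlier feedback, could be as simple as a robust median/MAD test. The function name and threshold below are assumptions for the sketch, not anything disclosed by Kothuri.

```python
import statistics

def filter_feedback(values, thresh=3.5):
    """Toy credibility check: discard numeric feedback whose modified
    z-score (median/MAD based) exceeds `thresh`. Names and threshold
    are illustrative, not taken from the cited references."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return list(values)  # no spread: nothing can be flagged
    # 0.6745 rescales the MAD to be comparable to a standard deviation.
    return [v for v in values if 0.6745 * abs(v - med) / mad <= thresh]
```

A median/MAD rule is chosen over a plain z-score here because, with the small feedback samples a single user produces, one extreme value inflates the standard deviation enough to mask itself.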
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the system to further include a credibility algorithm that determines that the user's emotion feedback are outliers, wherein the user's emotion feedback are discarded, in the system of the Deitcher/Craik combination as taught by Kothuri, to advantageously improve the ability to accurately evaluate a user's emotional state and generate appropriate responses to enhance an overall experience. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Claim 11 is rejected in view of the Deitcher/Craik/Kothuri combination as discussed above in relation to claim 2.

Claims 3, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2022/0075450 to Deitcher et al. ("Deitcher") in view of U.S. Patent App. Pub. No. 2020/0383598 to Craik et al. ("Craik"), and further in view of U.S. Patent App. Pub. No. 2020/0104670 to Seo et al.
("Seo"):

Regarding claim 3, the Deitcher/Craik combination discloses the emotion recognition system of claim 1, but appears to be silent regarding further including a credibility algorithm, wherein the credibility algorithm determines that the user's emotion feedback are not outliers, and wherein the processor re-assigns the emotions to re-assigned points on the Valence-Arousal model based on the user's emotion feedback, and wherein the processor updates the algorithm based on the re-assigned points. Nevertheless, Seo teaches (Figure 1 and [0144]-[0147]) that it was known in the healthcare informatics and machine learning art for an electronic device 1 to analyze a user's voice/face/conversation (actual measurements of user inputs) with one or more NN/ML models, determine emotion information of the user based on the analysis, recognize from a user's feedback the user's real emotion (which would necessarily include determining that the feedback is not an outlier, or else the feedback would not represent the user's real emotion), and update a weight module in an emotion recognition model (which includes NN/ML models per [0048] and Figure 1) based on the user emotion feedback recognition to advantageously increase the accuracy of detected user emotions ([0058] and [0092]). Because the electronic device 1 is implemented via software per [0222]-[0224], there is necessarily a set of instructions (e.g., a "credibility algorithm") of the software to determine that the user feedback is "real" (i.e., that it does not include outliers).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the system of the Deitcher/Craik combination to further include a credibility algorithm that determines that the user's emotion feedback are not outliers, wherein the processor updates the algorithm based on the real emotions of the user as set forth in the feedback, as taught by Seo, to advantageously increase the accuracy of detected user emotions. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

As Deitcher already discloses ([0049]) how the emotional-imaging composer receives the sensor data (actual measurements of user inputs, which can include data from various sensors over time per [0019], [0022], [0029], and [0048]) and generates probabilities of each emotion, whereby the most probable emotion (closest corresponding emotion) is determined to be (assigned as) the predicted emotion for each sensor data/actual measurements (corresponding to points on the grid/model), and also discloses ([0021] and [0024]) updating/retraining an ML model based on a user's feedback indicating how the user's emotion correlates to the biosignal values (the actual measurements), such emotions in the feedback would necessarily be "re-assigned" to new points on the grid/model corresponding to the updated emotions in the real/actual/non-outlier feedback, and the algorithm would be updated based on the "re-assigned" points per the combination with Seo.
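The re-assignment/update loop of claim 3, as the rejection reads it onto the combination, might be sketched as follows. A nearest-centroid model stands in for the actual ML model, and every name here is hypothetical.

```python
def apply_feedback(assignments, measurement_id, real_label, real_point):
    """Feedback judged credible (non-outlier): overwrite the stored
    assignment with the user's real emotion and its grid point."""
    assignments[measurement_id] = (real_label, real_point)

def refit_centroids(assignments):
    """'Update the algorithm': recompute each emotion's centroid on the
    valence-arousal grid from the (re-assigned) points."""
    sums, counts = {}, {}
    for label, (v, a) in assignments.values():
        sv, sa = sums.get(label, (0.0, 0.0))
        sums[label] = (sv + v, sa + a)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sv / counts[lab], sa / counts[lab])
            for lab, (sv, sa) in sums.items()}
```

The point of the sketch: once credible feedback overwrites a stored (measurement, point) pair, any refit of the model is necessarily "based on the re-assigned points," which is the inference the rejection draws from Seo.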
Claim 12 is rejected in view of the Deitcher/Craik/Seo combination as discussed above in relation to claim 3.

Regarding claim 13, the Deitcher/Craik/Seo combination discloses the method of claim 12, further including wherein said method is continuously repeated ([0016] of Deitcher discloses generating real-time emotion predictions, which would amount to continuous repeating of the method).

Claims 6-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2022/0075450 to Deitcher et al. ("Deitcher") in view of U.S. Patent App. Pub. No. 2020/0383598 to Craik et al. ("Craik"), and further in view of NPL "Emotion Recognition From EEG Signal Focusing on Deep Learning and Shallow Learning Techniques" to Islam et al. ("Islam"):

Regarding claim 6, the Deitcher/Craik combination discloses the emotion recognition system of claim 1 but appears to be silent regarding wherein the actual measurements of user inputs are transformed into Hjorth parameters for further processing. Nevertheless, Islam teaches (bottom of the right column on page 94611 to the left column on page 94612) that it was known in the healthcare informatics art to calculate Hjorth parameters from EEG signals (actual measurements) of a user, as doing so is a popular manner of extracting features from the EEG signals to facilitate deep learning-based emotion recognition methods. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the actual measurements of user inputs of the Deitcher/Craik combination to be transformed into Hjorth parameters for further processing, as taught by Islam, because doing so is a popular manner of extracting features from the EEG signals to facilitate deep learning-based emotion recognition methods.
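Hjorth parameters are inexpensive time-domain EEG features; the standard definitions (not Islam's code) can be computed with numpy as below, using first differences as discrete derivatives.

```python
import numpy as np

def hjorth_parameters(x):
    """Activity, mobility, and complexity of a 1-D signal.

    Standard Hjorth definitions: activity = var(x);
    mobility = sqrt(var(x') / var(x));
    complexity = mobility(x') / mobility(x),
    with x' approximated by the first difference of x.
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = x.var()
    mobility = np.sqrt(dx.var() / x.var())
    complexity = np.sqrt(ddx.var() / dx.var()) / mobility
    return activity, mobility, complexity
```

For a pure sinusoid, mobility approximates the angular frequency in radians per sample and complexity approaches 1, which is a handy sanity check for an implementation.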
A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Regarding claim 7, the Deitcher/Craik combination discloses the emotion recognition system of claim 1 but appears to be silent regarding wherein the Fourier transform is applied to the actual measurements of user inputs to obtain other measurements of user inputs. Nevertheless, Islam teaches (right column on page 94609 to right column on page 94611) that it was known in the healthcare informatics art to perform a Fourier transform on EEG data (actual measurements) of a user, as doing so is a popular manner of extracting features (other measurements) from the EEG signals to facilitate deep learning-based emotion recognition methods. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the Fourier transform to the actual measurements of user inputs to obtain other measurements of user inputs in the Deitcher/Craik combination, as taught by Islam, because doing so is a popular manner of extracting features from the EEG signals to facilitate deep learning-based emotion recognition methods. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007).
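The Fourier-derived "other measurements" in this line of art are typically band powers. A minimal periodogram sketch follows; the band edges are the conventional delta/theta/alpha/beta ranges, an illustrative choice rather than Islam's exact parameters.

```python
import numpy as np

def band_powers(x, fs, bands=None):
    """Mean spectral power of an epoch in the classic EEG bands,
    via a plain FFT periodogram (illustrative, unwindowed)."""
    if bands is None:
        bands = {"delta": (1, 4), "theta": (4, 8),
                 "alpha": (8, 13), "beta": (13, 30)}
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}
```

Feeding a 10 Hz sinusoid through this function should make the alpha band dominate, which is a quick way to validate the band bookkeeping.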
Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Regarding claim 8, the Deitcher/Craik combination discloses the emotion recognition system of claim 1, further including wherein the processor uses the algorithm to assign the actual measurements of user inputs to one or more of the plurality of points of the Valence-Arousal model ([0049] discusses how the emotional-imaging composer receives the sensor data (actual measurements of user inputs, which can include data from various sensors over time per [0019], [0022], [0029], and [0048]) and generates probabilities of each emotion such that each piece of sensor data (each of the actual measurements) would correspond/be assigned to one or more points on the valence-arousal grid/model). However, the Deitcher/Craik combination appears to be silent regarding the assigning of the measurements being by converting the actual measurements of user inputs into one or more scalograms or other continuous wavelet transformation coefficient(s) as the input of machine learning/deep learning. Nevertheless, Islam teaches (Figure 8 on page 94606, section 3 on page 94611, and section IV on page 94614) that it was known in the healthcare informatics art to perform a wavelet transform on EEG data (actual measurements of user inputs) to obtain features (e.g., which would be a matrix of coefficients) and then input the features into a deep learning/ML model (see Figure 8) configured to recognize and classify the user's emotions, because doing so is the most suitable method of extracting features for non-stationary signals like EEG to facilitate deep learning-based emotion recognition methods.
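The wavelet-feature step can be approximated from scratch. This Morlet scalogram is a minimal numpy sketch (a real pipeline would likely use a library such as PyWavelets); the center frequency w0 and the scale sampling are assumptions of the example.

```python
import numpy as np

def morlet_scalogram(x, scales, w0=6.0):
    """|CWT|^2 of a 1-D signal using a Morlet wavelet, computed by
    direct convolution. Returns a (len(scales), len(x)) matrix of
    coefficients, i.e., the 'matrix of coefficients' fed to a model."""
    x = np.asarray(x, dtype=float)
    rows = []
    for s in scales:
        # Morlet wavelet sampled over +/- 4 scale widths.
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)
        # Correlate the signal with the wavelet at this scale.
        coeffs = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
        rows.append(np.abs(coeffs) ** 2)
    return np.array(rows)
```

For a Morlet wavelet the scale matching a frequency f (cycles/sample) is roughly w0 / (2 * pi * f), so a tone should light up the corresponding row of the scalogram, which is how such an implementation is usually sanity-checked.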
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the assigning of the measurements to be by converting the actual measurements of user inputs into one or more scalograms or other continuous wavelet transformation coefficient(s) as the input of machine learning/deep learning in the system of the Deitcher/Craik combination, as taught by Islam, because doing so is the most suitable method of extracting features for non-stationary signals like EEG to facilitate deep learning-based emotion recognition methods. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Claims 16 and 17 are rejected in view of the Deitcher/Craik/Islam combination as respectively discussed above in relation to claims 6 and 7.

Regarding claim 18, the Deitcher/Craik/Islam combination discloses the [method] of claim 17, but appears to be silent regarding denoising the other measurements of user inputs. Nevertheless, Deitcher already teaches ([0020]) that it was known in the healthcare informatics art to remove erroneous/unsuitable data from the sensor data (denoising the actual measurements), which would advantageously increase the accuracy of generated emotion determinations.
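Removal of "erroneous/unsuitable data" of the kind Deitcher's [0020] describes can be pictured as simple amplitude-based epoch rejection, a generic EEG-cleaning heuristic. The 100 microvolt threshold below is an illustrative assumption, not a value from Deitcher.

```python
import numpy as np

def drop_bad_epochs(epochs, max_abs_uv=100.0):
    """Discard any epoch (row) containing a sample beyond the amplitude
    threshold, a toy stand-in for removing unsuitable sensor data."""
    epochs = np.asarray(epochs, dtype=float)
    keep = np.abs(epochs).max(axis=1) <= max_abs_uv
    return epochs[keep]
```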
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have denoised the other measurements of user inputs in the system of the Deitcher/Craik/Islam combination, as already taught by Deitcher, to advantageously increase the accuracy of generated emotion determinations. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2022/0075450 to Deitcher et al. ("Deitcher") in view of U.S. Patent App. Pub. No. 2020/0383598 to Craik et al. ("Craik") and NPL "Emotion Recognition From EEG Signal Focusing on Deep Learning and Shallow Learning Techniques" to Islam et al. ("Islam"), and further in view of U.S. Patent App. Pub. No. 2021/0052215 to Mouton et al. ("Mouton"):

Regarding claim 9, the Deitcher/Craik/Islam combination discloses the emotion recognition system of claim 8, but appears to be silent regarding wherein one or more pre-trained algorithms such as VGG16 is used within the convolution neural network. Nevertheless, Mouton teaches ([0164]) that it was known in the healthcare informatics and ML art to utilize a pre-trained VGG-16 CNN architecture to successfully detect a wide range of emotions such as pain and the like.
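Using a pre-trained VGG-16 in this way amounts to standard transfer learning: freeze the pre-trained backbone and train only a new classification head. The numpy toy below captures that split under loud assumptions (a fixed random projection stands in for VGG-16's frozen convolutional features, and the data are synthetic); a real system would instead load an ImageNet-pretrained VGG-16 from a deep learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained backbone": a fixed feature extractor whose weights
# are never updated (a stand-in for VGG-16's convolutional stack).
W_frozen = rng.normal(size=(64, 16)) / 8.0

def features(x):
    return np.tanh(x @ W_frozen)

def train_head(x, y, steps=300, lr=1.0):
    """Train only the new classification head (logistic regression)
    on top of the frozen features."""
    f = features(x)
    w = np.zeros(f.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(f @ w)))   # sigmoid output
        w -= lr * f.T @ (p - y) / len(y)     # gradient of logistic loss
    return w

# Synthetic two-class inputs: the class shifts the mean of each sample.
y = rng.integers(0, 2, size=200).astype(float)
x = rng.normal(size=(200, 64)) + (2 * y[:, None] - 1) * 0.5
w_head = train_head(x, y)
accuracy = ((features(x) @ w_head > 0) == y.astype(bool)).mean()
```

The design point the sketch illustrates is the one Mouton relies on: only the small head is fit to the new emotion labels, so little task-specific data is needed.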
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize one or more pre-trained algorithms such as VGG16 within a convolution neural network in the system of the Deitcher/Craik/Islam combination as taught by Mouton, as doing so is a known manner of successfully detecting a wide range of emotions such as pain and the like. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Claim 20 is rejected in view of the Deitcher/Craik/Islam/Mouton combination as discussed above in relation to claim 9.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent App. Pub. No. 2022/0075450 to Deitcher et al. ("Deitcher") in view of U.S. Patent App. Pub. No. 2020/0383598 to Craik et al. ("Craik") and NPL "Emotion Recognition From EEG Signal Focusing on Deep Learning and Shallow Learning Techniques" to Islam et al. ("Islam"), and further in view of JP Patent No.
2003-531656 ("the '656 Patent"):

Regarding claim 19, the Deitcher/Craik/Islam combination discloses the emotion recognition system of claim 1, further including wherein the processor uses the algorithm to assign the actual measurements of user inputs to one or more of the plurality of points of the Valence-Arousal model ([0049] discusses how the emotional-imaging composer receives the sensor data (actual measurements of user inputs, which can include data from various sensors over time per [0019], [0022], [0029], and [0048]) and generates probabilities of each emotion such that each piece of sensor data (each of the actual measurements) would correspond/be assigned to one or more points on the valence-arousal grid/model). However, the Deitcher/Craik combination appears to be silent regarding the assigning of the measurements being by converting the actual measurements of user inputs into one or more scalograms. Nevertheless, Islam teaches (Figure 8 on page 94606, section 3 on page 94611, and section IV on page 94614) that it was known in the healthcare informatics art to perform a wavelet transform on EEG data (actual measurements of user inputs) to obtain features and then input the features into a deep learning/ML model (see Figure 8) configured to recognize and classify the user's emotions, because doing so is the most suitable method of extracting features for non-stationary signals like EEG to facilitate deep learning-based emotion recognition methods.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the assigning of the measurements to be by converting the actual measurements of user inputs into continuous wavelet transformation coefficient(s) as the input of machine learning/deep learning in the system of the Deitcher/Craik combination, as taught by Islam, because doing so is the most suitable method of extracting features for non-stationary signals like EEG to facilitate deep learning-based emotion recognition methods. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Furthermore, while Islam appears to be silent regarding such conversion of the actual measurements specifically being into one or more scalograms, the '656 Patent teaches (bottom of page 8 to top of page 9 of the translation) that it was known in the healthcare informatics art to generate a scalogram based on a wavelet transform performed on an EKG of a user to advantageously provide an efficient manner of visualizing the rich structure in the EKG not available by reviewing the original EKG.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have converted the actual measurements of the Deitcher/Craik/Islam combination into a scalogram representation of a wavelet transform of the actual measurements as taught by the '656 Patent to advantageously provide an efficient manner of visualizing the rich structure in the actual measurements not available by reviewing the original measurements. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The references cited on the attached PTO-892 disclose various systems for estimating user emotional levels based on analysis of physiological data and the like.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHON A. SZUMNY, whose telephone number is (303) 297-4376. The examiner can normally be reached Monday-Friday, 7-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Dunham, can be reached at 571-272-8109.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHON A. SZUMNY/
Primary Examiner, Art Unit 3686

Prosecution Timeline

Jun 22, 2022
Application Filed
Jan 26, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597508
COMPUTERIZED DECISION SUPPORT TOOL FOR POST-ACUTE CARE PATIENTS
2y 5m to grant Granted Apr 07, 2026
Patent 12586667
PSEUDONYMIZED STORAGE AND RETRIEVAL OF MEDICAL DATA AND INFORMATION
2y 5m to grant Granted Mar 24, 2026
Patent 12562277
METHOD OF AND SYSTEM FOR DETERMINING A PRIORITIZED INSTRUCTION SET FOR A USER
2y 5m to grant Granted Feb 24, 2026
Patent 12537102
SYSTEM AND METHOD FOR DETERMINING TRIAGE CATEGORIES
2y 5m to grant Granted Jan 27, 2026
Patent 12505912
METHODS AND SYSTEMS FOR RESTING STATE FMRI BRAIN MAPPING WITH REDUCED IMAGING TIME
2y 5m to grant Granted Dec 23, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+60.6%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 247 resolved cases by this examiner. Grant probability derived from career allow rate.
