Prosecution Insights
Last updated: April 19, 2026
Application No. 18/277,691

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM

Non-Final OA §112
Filed: Aug 17, 2023
Examiner: OGLES, MATTHEW ERIC
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Yume Cloud Japan Inc.
OA Round: 1 (Non-Final)

Grant Probability: 53% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 53% (51 granted / 97 resolved; -17.4% vs TC avg)
Interview Lift: +54.9% (strong)
Avg Prosecution: 3y 4m (typical timeline)
Currently Pending: 57
Total Applications: 154 (career, across all art units)

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§112: 36.7% (-3.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 97 resolved cases.

Office Action

§112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because it contains an embedded hyperlink and/or other form of browser-executable code. Applicant is required to delete the embedded hyperlink and/or other form of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01. In particular, paragraphs 0008, 0052, 0053, 0075, and 0088 include embedded hyperlinks.

The disclosure is objected to because of the following informalities: Paragraph 0093 recites that “The mood level is calculated by dividing the maximum mood index (3495.501) by a total value (4443.33572) of the mood indexes of the emotion expressions: 3495.501/4443.33572 = 44.43335,” but the recited division results in a value of 0.78668 rather than the recited 44.43335 (Fig. 16). Appropriate correction is required.
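The examiner's arithmetic can be checked directly; a short sketch with the values copied from the objection above confirms that the quotient is 0.78668 rather than the recited 44.43335:

```python
# Check the division flagged in paragraph 0093: the mood level is the
# maximum mood index divided by the total of all mood indexes.
max_mood_index = 3495.501
total_mood_indexes = 4443.33572

mood_level = max_mood_index / total_mood_indexes
print(round(mood_level, 5))  # 0.78668, not the recited 44.43335
```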
Claim Objections

Claims 35-39 and 45-49 are objected to because of the following informalities:

- Claims 35 and 45, line 5: it appears that “a point of coordinates” should read “the point of coordinates”.
- Claims 36 and 46, line 4: it appears that “a video call” should read “the video call”.
- Claims 37 and 47, line 4: it appears that “each subject” should read “the subject”.
- Claims 38 and 48, line 2: it appears that “a pulse wave” should read “the pulse wave of the subject”.
- Claim 39, lines 4 and 7, and claim 49, lines 6 and 9: it appears that “pulse interval PPI” should read “pulse interval (PPI)”.
- Claim 39, line 8, and claim 49, line 10: it appears that “time of day” and “PPI” should read “the time of day” and “the PPI”.
- Claim 39, lines 10-11, and claim 49, line 12: it appears that “the time domain-PPI graph” should read “the time-PPI graph”.
- Claim 39, line 11, and claim 49, line 13: it appears that “a fast Fourier transform FFT” should read “a fast Fourier transform (FFT)”.
- Claim 39, lines 13-14, and claim 49, line 15: it appears that “a power spectral density PSD” should read “a power spectral density (PSD)”.

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are:

- A step of dividing in claim 49
- A step of generating for each section in claim 49
- A step of generating between discrete values in claim 49

Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof. In particular, each of these limitations recites the corresponding acts to entirely perform the recited function. If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use(s) a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

- A data managing section in claim 32
- An emotion expression engine section in claim 32
- A three-axes processing section in claim 32
- A step of executing a cerebral activity index measurement algorithm in claim 47
- A step of interpolating in claim 49

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

A data managing section is interpreted as the particular structures and algorithm for carrying out the recited function of acquiring voice, facial expression image, and pulse wave data. Paragraphs 0079-0083 describe that the pulse wave is acquired from a pulse wave meter and that the voice and facial expression images are acquired from a recording of a video conference through a terminal device. The terminal device may be a personal computer or smartphone as described by paragraph 0050.
Thus the data managing structure is interpreted as a pulse wave meter and its equivalents for acquiring pulse wave data, and as a computer or smartphone for acquiring video and audio recordings of a video call. The voice data is interpreted as the subject reading predetermined phrases for a fixed timeframe, and the video data is interpreted as the subject’s facial expression for the same timeframe. The pulse wave data is further interpreted as being collected at the same time as the audio and visual data.

An emotion expression engine section is interpreted as the particular structures and algorithm for carrying out the recited function of calculating a brain fatigue level, mood level, and stress level from their associated parameters. Paragraphs 0086-0087 recite that the brain fatigue level is calculated using cerebral activity index (CEM) values from the user’s voice using the SICECA algorithm taught by Yuki Aoki et al. Paragraphs 0088-0093 recite that the mood level is calculated by extracting emotion from a facial expression using an open-source “face classification and detection algorithm,” then using the proportion of each recognized emotion and corresponding weighting factors to calculate a mood index for each emotion; the mood level is then obtained by dividing the highest mood index value by the total of all mood index values. Paragraphs 0094-0097 recite that the stress level is determined by sectioning pulse wave data into Hamming windows, generating a pulse to pulse interval (PPI) graph, performing linear or cubic spline interpolation on the PPI graph, applying a fast Fourier transform (FFT) on the interpolation, and integrating a power spectral density of the FFT into high and low frequency components, which are used in a ratio or independently to provide a value for stress level after being normalized.
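As an illustration only (not the applicant's code), the stress-level pipeline described in paragraphs 0094-0097 can be sketched with SciPy: synthetic pulse-to-pulse intervals stand in for real sensor data, Welch's method stands in for the Hamming-window FFT/PSD steps, and the band edges are the 0.04-0.15 Hz and 0.15-0.4 Hz definitions attributed to paragraph 0098.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import CubicSpline
from scipy.signal import welch

# Synthetic pulse-to-pulse intervals (seconds); real data would come
# from the pulse wave meter described in the specification.
rng = np.random.default_rng(0)
ppi = 0.8 + 0.05 * rng.standard_normal(120)
beat_times = np.cumsum(ppi)  # time axis of the time-PPI graph

# Interpolate the discrete time-PPI graph onto a uniform grid
# (cubic spline; the specification also allows linear interpolation).
fs = 4.0  # resampling rate in Hz, a common choice for HRV analysis
grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
ppi_uniform = CubicSpline(beat_times, ppi)(grid)

# FFT-based power spectral density over Hamming-windowed sections.
freqs, psd = welch(ppi_uniform - ppi_uniform.mean(), fs=fs,
                   window="hamming", nperseg=64)

# Integrate the PSD over the low- and high-frequency bands.
lf_mask = (freqs >= 0.04) & (freqs < 0.15)
hf_mask = (freqs >= 0.15) & (freqs < 0.40)
lf = trapezoid(psd[lf_mask], freqs[lf_mask])
hf = trapezoid(psd[hf_mask], freqs[hf_mask])

stress_ratio = lf / hf  # LF/HF ratio, one common stress proxy
```

The claim language leaves open whether the LF and HF components are used as a ratio or independently after normalization; the ratio above is just one of those options.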
The emotion expression engine section is interpreted as the above-described algorithms, each of which corresponds to one of the recited functions, and their respective equivalents.

A three-axes processing section is interpreted as the particular structures and algorithm for carrying out the recited function of displaying a graph of points plotted at coordinates corresponding to the brain fatigue level, mood level, and stress level in a three-dimensional space defined by an X, Y, and Z axis. Paragraphs 0068 and 0100-0102 describe that the system may plot the mood, stress, and fatigue levels in a three-dimensional coordinate system and display the graph on the terminal device, which may be a computer or smartphone. The three-axes processing section is interpreted as a display and the algorithm for generating the recited graph, and its equivalents.

A step of executing a cerebral activity index measurement algorithm is interpreted as the algorithm for calculating the CEM values. Paragraphs 0086-0087 recite that the brain fatigue level is calculated using cerebral activity index (CEM) values from the user’s voice using the SICECA algorithm taught by Yuki Aoki et al. The algorithm is interpreted as the SICECA algorithm.

A step of interpolating is interpreted as the particular algorithm for carrying out the recited function of interpolating between discrete values of the time domain-PPI graph. Paragraph 0095 recites that the interpolation is linear or cubic spline interpolation. The step of interpolating will be interpreted as linear or cubic spline interpolation and their equivalents.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claims 39 and 49 recite “low-frequency component” and “high-frequency component.” The term “low-frequency component” is interpreted as 0.04 Hz or higher and lower than 0.15 Hz, and “high-frequency component” is interpreted as 0.15 Hz or higher and lower than 0.4 Hz, as defined by paragraph 0098 of the specification.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 32-51 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 32 recites “calculates a stress level by performing a frequency analysis of the pulse wave by fast Fourier transform and extracting a high-frequency section and a low-frequency section,” but it is unclear if the FFT is being applied to the pulse wave or a parameter derived therefrom.
The claim language indicates that the “frequency analysis of the pulse wave” is performed by fast Fourier transform, but, as described in the above claim interpretation section, such language does not appear to match the calculation of stress level described in the specification, which involves performing the FFT on the interpolation of the PPI graph and subsequently performing frequency analysis on the product of the FFT. Thus it is unclear what this limitation is meant to convey. For the purposes of this examination, the limitation will be interpreted in accordance with the above claim interpretation. This rejection is further applied to the similar limitations of claim 42.

Claim 32 recites “coordinates corresponding to the brain fatigue level, the mood level, and the stress level in a three-dimensional space defined by an X-axis, a Y-axis, and a Z-axis,” but it is unclear if this limitation is meant to convey that each of “the brain fatigue level, the mood level, and the stress level” corresponds to one of the three axes, or that the points are merely plotted in a space defined by the three axes without the axes corresponding to the various levels. For the purposes of this examination, the limitation will be interpreted as each of the various levels corresponding to one of the three axes. This rejection is further applied to the similar limitations of claim 42.

Claim 32 recites “a video call” in lines 15 and 17, but it is unclear if these two limitations are the same as, related to, or different from each other. For the purposes of this examination, the limitations are interpreted as referring to the same video call. This rejection is further applied to the similar limitations of claim 42.
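Under the interpretation adopted for examination (each level mapped to its own axis), the claimed three-axes display reduces to an ordinary 3D scatter plot. A minimal matplotlib sketch, with made-up level values rather than anything from the application, looks like this:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# Hypothetical (brain fatigue, mood, stress) triples; the application's
# actual values are not reproduced here.
points = [(0.42, 0.79, 1.3), (0.55, 0.61, 0.9), (0.38, 0.70, 1.1)]
xs, ys, zs = zip(*points)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # space defined by X, Y, and Z axes
ax.scatter(xs, ys, zs)
ax.set_xlabel("Brain fatigue level")
ax.set_ylabel("Mood level")
ax.set_zlabel("Stress level")
fig.savefig("three_axes_graph.png")
```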
Claim 32 recites “the pulse wave is acquired via the terminal device from a pulse wave meter that measures a pulse wave of the subject,” but it is unclear if the pulse wave meter is part of the terminal device, in communication with the terminal device, or has some other form of structural relationship with the terminal device. For the purposes of this examination, the limitation is being interpreted as the pulse wave meter being separate from but in communication with the terminal device. This rejection is further applied to the similar limitations of claim 42.

Claims 33-41 and 51 are rejected by virtue of their dependency on claim 32. Claims 43-50 are rejected by virtue of their dependency on claim 42.

Claim 35 recites “an improvement plan to be proposed to the subject is determined for each of the plurality of per-type categories,” but it is unclear which element of the device is performing this function. For the purposes of this examination, the limitation will be interpreted as being performed by the emotion expression engine.

Claim 41 recites “the data related to the facial expression image is data acquired by making a continuous video recording of a moving image of a facial expression of the subject until at least a predetermined video recording time is reached during a video call with the subject via the terminal device,” but it is unclear if “a continuous video recording of a moving image of a facial expression of the subject” and “a video call with the subject” of claim 41 are the same as, related to, or a subset of “a video recording of at least a part of a video call with the subject” of claim 32. For the purposes of this examination, the limitations of claim 41 will be interpreted as further limiting the video of claim 32.

In claim 42, the limitations of “acquiring …”, “calculating …”, and “displaying …” have been evaluated under the three-prong test set forth in MPEP § 2181, subsection I, but the result is inconclusive.
Thus, it is unclear whether these limitations should be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The claim language of claim 42 does not invoke a 35 U.S.C. 112(f) interpretation, but claims 43-45, 47, and 49, which depend from claim 42, appear to refer back to these method steps as “the step of acquiring/calculating/displaying”; this language appears to imply that these limitations should be interpreted under 35 U.S.C. 112(f). The boundaries of this claim limitation are ambiguous; therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

In response to this rejection, applicant must clarify whether this limitation should be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Mere assertion regarding applicant’s intent to invoke or not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is insufficient. Applicant may:

(a) Amend the claim to clearly invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, by reciting “means” or a generic placeholder for means, or by reciting “step.” The “means,” generic placeholder, or “step” must be modified by functional language, and must not be modified by sufficient structure, material, or acts for performing the claimed function;
(b) Present a sufficient showing that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, should apply because the claim limitation recites a function to be performed and does not recite sufficient structure, material, or acts to perform that function;
(c) Amend the claim to clearly avoid invoking 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, by deleting the function or by reciting sufficient structure, material, or acts to perform the recited function; or
(d) Present a sufficient showing that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, does not apply because the limitation does not recite a function or does recite a function along with sufficient structure, material, or acts to perform that function.

Claim 51 recites “a terminal device,” but it is unclear if this limitation is the same as, related to, or different from “a terminal device” of claim 32. For the purposes of this examination, the limitations are interpreted as referring to the same device.

Claim 51 recites “the information processing device … and displays on the terminal device,” but it is unclear if the information processing device is controlling the terminal device in some manner to display the recited graph, or if the information processing device merely transmits the graph information and the terminal device displays the graph itself. For the purposes of this examination, the limitation will be interpreted as the information processing device transmitting the graph information and the terminal device displaying the graph.

Claim 51 recites “a graph of points plotted at coordinates corresponding to the brain fatigue level, the mood level, and the stress level in a three-dimensional space defined by an X-axis, a Y-axis, and a Z-axis,” but it is unclear if this limitation is meant to convey that each of “the brain fatigue level, the mood level, and the stress level” corresponds to one of the three axes, or that the points are merely plotted in a space defined by the three axes without the axes corresponding to the various levels. For the purposes of this examination, the limitation will be interpreted as each of the various levels corresponding to one of the three axes.

Claim Rejections - 35 USC § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 42 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 42 recites “calculating a brain fatigue level based on a frequency of the voice, calculating a mood level by extracting an emotion of the subject from the facial expression image, and calculating a stress level by performing a frequency analysis of the pulse wave by fast Fourier transform and extracting a high-frequency section and a low-frequency section,” but the specification does not support the full scope of the claim.
The claim encompasses any method of calculating a “brain fatigue level” based on a frequency of the user’s voice, but paragraphs 0086-0087 recite that a particular metric is utilized to determine brain fatigue level from the user’s voice. The claim encompasses any and all methods of calculating a “mood level” by extracting emotion from a facial expression, but paragraphs 0088-0093 recite that such a process is carried out using a particular facial expression algorithm and using the output of said algorithm in a particular manner to calculate a particular emotion level value. Finally, the claim encompasses any and all methods of calculating a “stress level” using an FFT of a pulse wave, but paragraphs 0094-0097 recite a particular method of processing the pulse wave to calculate a specific stress level value. In particular, each of the disclosed species of calculation methods in the specification is considered insufficient to support the claimed genus of calculations.

Prior Art

The closest prior art of record is considered to be:

US Patent Application Publication Number US 2013/0018837 A1 (hereinafter Lee) teaches an emotion recognition apparatus which acquires a first emotion factor and a second emotion factor of an emotion model. An emotional state of a user is estimated based on the first emotion factor and the second emotion factor. The emotion recognition apparatus may also acquire a third emotion factor of the emotion model (Abstract). Lee teaches a system and method for acquiring up to three emotional factors and graphing the resultant emotional model in a three-dimensional space (Paragraph 0110; Fig. 8). The emotional factors may include: introverted or extroverted (Paragraphs 0106-0107), an arousal level according to Russell’s emotional model, an intensity according to Watson-Telogen’s emotional model (Paragraph 0056), an intensity and/or amount of touch sensing data from a user typing (Paragraphs 0056-0057), or a device movement level (Paragraph 0058).
US Patent Application Publication Number US 2011/0022392 A1 (hereinafter Iwamoto) teaches a framework which performs location-based analysis using an individual feature such as a stress level obtained based on biological information: an information processing system including an acquisition unit which acquires frequency power information of a voice inputted at a mobile terminal having a voice communication function, and position information of a base station device that relayed voice communication of the mobile terminal when the voice was inputted; a storage unit which stores the acquired frequency power information and the acquired position information in association with each other; an acceptance unit which accepts designation of an area; and an output unit which identifies the position information related to the designated area, acquires the frequency power information associated with the identified position information with reference to the storage unit, obtains a stress level of a user of the mobile terminal in the designated area based on frequency power information of a frequency greater than or equal to a threshold value within the acquired frequency power information, and outputs the stress level in association with the designated area (Abstract). Iwamoto teaches the evaluation of stress based on a power spectral density analysis of the frequency of a user’s voice at a particular location (Paragraphs 0029-0030).

US Patent Number US 8204747 B2 (hereinafter Kato) teaches an emotion recognition apparatus which performs accurate and stable speech-based emotion recognition, irrespective of individual, regional, and language differences of prosodic information.
The emotion recognition apparatus includes: a speech recognition unit which recognizes types of phonemes included in the input speech; a characteristic tone detection unit which detects a characteristic tone that relates to a specific emotion in the input speech; a characteristic tone occurrence indicator computation unit which computes a characteristic tone occurrence indicator for each of the phonemes, based on the types of the phonemes recognized by the speech recognition unit, the characteristic tone occurrence indicator relating to an occurrence frequency of the characteristic tone; and an emotion judgment unit which judges an emotion of the speaker in a phoneme at which the characteristic tone occurs in the input speech, based on the characteristic tone occurrence indicator computed by the characteristic tone occurrence indicator computing unit (Abstract). Kato teaches that emotion may be identified according to characteristic tones in the user’s voice (Fig. 5; Col 13 line 53 - Col 14 line 48).

US Patent Number US 9928462 B2 (hereinafter Samsung) teaches an apparatus for determining a user’s mental state in a terminal. The apparatus includes a data collector configured to collect sensor data; a data processor configured to extract feature data from the sensor data; and a mental state determiner configured to provide the feature data to an inference model to determine the user’s mental state (Abstract). Samsung teaches that sensor data may be collected as a user inputs text at a terminal. The user’s text input speed may be used as a feature to determine the user’s mental state after training a machine learning model (Col 9 lines 44-58).

Yuki Aoki, “Development of an application for smartphones that estimates the degree of fatigue from voice,” pages 1-4, teaches that an algorithm for determining a brain activation measure, or CEM value, from a user’s voice may be used to estimate a degree of fatigue from the voice of the user.
The SiCECA algorithm detects fluctuations and calculates the CEM using a chaological method. The CEM value can then be associated with different degrees of fatigue (Yuki Aoki: pages 3-4: sections 2.1-2.2.2 and 3.3; Table 1).

Akiyama, “For QOL visualization system Stress State estimation method using pulse rate sensor,” pages 1-4, teaches a stress level estimation system using a pulse rate. The pulse to pulse interval over time is graphed and the low frequency and high frequency power spectral density is determined. The low frequency PSD is considered to represent the degree of activity of the sympathetic and parasympathetic nervous systems, while the high frequency component is considered to represent a degree of activity of the parasympathetic nervous system. The relationship between the high and low frequency components is considered to be indicative of stress level, as defined by regions in a two-dimensional representation of their relationship (Pages 3-4, sections 2-3.2).

Regarding claims 32 and 42, none of the prior art of record, alone or in combination, reasonably teaches or suggests: “an emotion expression engine section which calculates a brain fatigue level based on a frequency of the voice, which calculates a mood level by extracting an emotion of the subject from the facial expression image, and which calculates a stress level by performing a frequency analysis of the pulse wave by fast Fourier transform and extracting a high-frequency section and a low-frequency section; and a three-axes processing section which displays a graph of points plotted at coordinates corresponding to the brain fatigue level, the mood level, and the stress level in a three-dimensional space defined by an X-axis, a Y-axis, and a Z-axis” in combination with the other claimed elements.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW ERIC OGLES whose telephone number is (571)272-7313.
The examiner can normally be reached M-F 8:00AM - 5:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Sims, can be reached Monday-Friday from 9:00AM - 4:00PM at (571) 272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW ERIC OGLES/
Examiner, Art Unit 3791

/JASON M SIMS/
Supervisory Patent Examiner, Art Unit 3791

Prosecution Timeline

Aug 17, 2023: Application Filed
Dec 01, 2025: Non-Final Rejection — §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555683: EEG P-ADIC QUANTUM POTENTIAL IN NEURO-PSYCHIATRIC DISEASES
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12543991: ELECTROCARDIOGRAM GAIN ADJUSTMENT
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12495978: Dual Mode Non-Invasive Blood Pressure Management
Granted Dec 16, 2025 (2y 5m to grant)

Patent 12484852: METHODS AND DEVICES RELATED TO OPERATION OF AN IMPLANTABLE MEDICAL DEVICE DURING MAGNETIC RESONANCE IMAGING
Granted Dec 02, 2025 (2y 5m to grant)

Patent 12465224: BLOOD PRESSURE MEASUREMENT APPARATUS AND METHODS OF USE THEREOF
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 53%
With Interview: 99% (+54.9%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 97 resolved cases by this examiner. Grant probability derived from career allow rate.
