Prosecution Insights
Last updated: April 19, 2026
Application No. 18/569,122

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, AND DATA COLLECTING METHOD, AND DATA COLLECTING SYSTEM

Non-Final OA (§102, §103, §112)
Filed
Dec 11, 2023
Examiner
TRAN, CON P
Art Unit
2695
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
1 (Non-Final)
Grant Probability
69% (Favorable)
OA Rounds
1-2
To Grant
3y 7m
With Interview
92%

Examiner Intelligence

Career Allow Rate
69% (above average): 374 granted / 543 resolved, +6.9% vs TC avg
Interview Lift
+23.5% across resolved cases with an interview
Avg Prosecution
3y 7m typical; 14 applications currently pending
Total Applications
557 across all art units
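
The headline numbers above reconcile with one another. A minimal sketch of the arithmetic, assuming (as such dashboards typically do) that the allow rate is simply grants divided by resolved cases:

```python
# Reconcile the examiner stats shown above (all figures from this page).
granted, resolved, pending = 374, 543, 14

allow_rate = granted / resolved      # 0.6888... -> displayed as 69%
total_apps = resolved + pending      # 543 + 14 = 557 "Total Applications"

print(f"Career allow rate: {allow_rate:.1%}")   # 68.9%
print(f"Total applications: {total_apps}")      # 557
```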

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§112: 18.5% (-21.5% vs TC avg)
Tech Center averages are estimates, based on career data from 543 resolved cases.
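
Each statute's delta backs out the same baseline, which suggests the page compares every statute against a single ~40% Tech Center average estimate. A quick check under that assumption:

```python
# Back out the implied TC average from each statute's rate and its delta;
# rate - delta should equal the baseline if a single estimate is used.
stats = {"§101": (5.6, -34.4), "§102": (13.4, -26.6),
         "§103": (54.2, 14.2), "§112": (18.5, -21.5)}

for statute, (rate_pct, delta_pct) in stats.items():
    print(f"{statute}: implied TC avg = {rate_pct - delta_pct:.1f}%")  # 40.0% each
```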

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line numbers in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

Priority

2. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

3. The information disclosure statements filed on 12/11/2023, 11/14/2024, and 12/03/2024 have been considered and placed in the application file.

CLAIM INTERPRETATION

4. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

5. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

6. This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: a manipulation unit, a processing unit, and a sound output unit, in claims 15-17 and 20. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Specification

7. The disclosure is objected to because of the following informality: Applicant is requested to spell out the acronym “AI” (see Specification page 5, paragraph [0015]). Appropriate correction is required.

Claim Rejections - 35 USC § 112

8. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

9. Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Claim 13 is indefinite because it is unclear whether the limitation “notification” in line 4 is the same “notification” as recited in line 3 of claim 2.
If it is, the examiner suggests that applicant amend “notification” in line 4 of claim 13 to read “the notification” to overcome this problem.

Claim Rejections - 35 USC § 102

10. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

11. Claims 1-3, 6-9, 12-13, 15-16, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mishra et al., U.S. Patent Application Publication 20170150282 (hereinafter “Mishra”; cited in Applicant's IDS filed 12/11/2023, 1 page).

Regarding claim 1, Mishra teaches an information processing method of an information processing system (including mobile device 102, hearing assistant module 110, Fig. 1; The user device 102, which may be a mobile device such as a smartphone, tablet, laptop, wearable device, etc., is shown to include speaker 106, headphones 104, display element 108, microphone 114, various sensors 112 and a hearing assistant module 110, Fig. 1, par [0016], see Mishra), comprising: an adjustment manipulation accepting step of accepting adjustment manipulation of sound output (a gradual change in conversational behavior such as requesting others to repeat words, and raising the volume setting of the device, par [0014], see Mishra) from a sound output unit (including speaker 106, headphones 104, Fig. 1, par [0016], see Mishra; User volume setting profiler circuit 214 may be configured to monitor the volume level set by the user, over time, for various applications such as for phone calls, virtual assistant interaction, alert sounds and media playback, etc., Fig. 2, par [0022], see Mishra); and a feedback output step (Analysis results and suggestions may be provided to the user, par [0014], see Mishra) of estimating auditory capacity (i.e., hearing loss indicator data, par [0014], see Mishra) of a user corresponding to the adjustment manipulation (raising the volume setting of the device, par [0014]) from the adjustment manipulation (The analysis may detect hearing loss trends linked to each context over a period of time, par [0014], see Mishra) and signal data of the sound (in decibels (dB), par [0020], see Mishra) corresponding to the adjustment manipulation (User speech profiler circuit 208 may also be configured to monitor the volume levels of the user in response to selected keyphrases like “hello,” or “Ok Google,” which may serve as a convenient/stable benchmark for comparisons over relatively longer periods of time, par [0021], see Mishra), and outputting feedback (Analysis results and suggestions may be provided to the user, par [0014], see Mishra; The collected hearing loss indicator data from one or more of the user's devices, along with context information related to the usage environment, may be aggregated and analyzed, for example by a remote or cloud-based analysis system. The analysis may detect hearing loss trends linked to each context over a period of time, as will be described in greater detail below. Analysis results and suggestions may be provided to the user to enable early intervention to avoid or limit further hearing impairment, par [0014], see Mishra. The hearing loss indicator data generation circuit 220 may be configured to measure hearing loss indicators associated with use of the device by the user. Indicators of hearing loss may include, for example, ambient sound characteristics, user speech volume level and word rate, user adjustments of device volume, Fig. 2, par [0020], see Mishra). Mishra thus teaches all the claimed limitations.

Regarding claim 2, Mishra teaches the information processing method according to claim 1, wherein, in the feedback output step (Analysis results and suggestions may be provided to the user, par [0014], see Mishra), a notification based on an estimation result of the auditory capacity (i.e., hearing loss) of the user is presented to the user as output of the feedback (Example 17 includes the subject matter of Examples 15 and 16, further comprising sending a report to the user, the report comprising the estimated hearing loss and the recommended actions, par [0090], see Mishra).

Regarding claim 3, Mishra teaches the information processing method according to claim 1, wherein, in the feedback output step (Analysis results and suggestions may be provided to the user, par [0014], see Mishra), a parameter for adjusting a function of the sound output unit (raising the volume setting of the device, par [0014], see Mishra) is output to the sound output unit (including speaker 106, headphones 104, Fig. 1, par [0016], see Mishra) as the feedback based on an estimation result of the auditory capacity of the user (i.e., hearing loss; The hearing loss analysis system receives this data from one or more of the user's devices and performs statistical analysis to group the hearing loss indicators into clusters associated with each context and to identify trends, par [0015], see Mishra).

Regarding claim 6, Mishra teaches the information processing method according to claim 1, wherein, in the feedback output step (Analysis results and suggestions may be provided to the user, par [0014], see Mishra), the auditory capacity of the user is estimated from volume adjustment manipulation for adjusting volume of the sound (the hearing loss indicator data generation circuit 220 may be configured to measure hearing loss indicators associated with use of the device by the user. Indicators of hearing loss may include, for example, ambient sound characteristics, user speech volume level and word rate, user adjustments of device volume, Fig. 2, par [0020], see Mishra) or adjustment manipulation of an equalizer for adjusting sound quality of the sound.

Regarding claim 7, Mishra teaches the information processing method according to claim 1, wherein, in the feedback output step (Analysis results and suggestions may be provided to the user, par [0014], see Mishra), the auditory capacity of the user is estimated from a feature amount (in decibels (dB), par [0020]) of the signal data (the hearing loss indicator data generation circuit 220 may be configured to measure hearing loss indicators associated with use of the device by the user. Indicators of hearing loss may include, for example, ambient sound characteristics, user speech volume level and word rate, user adjustments of device volume, Fig. 2, par [0020], see Mishra).
Regarding claim 8, Mishra teaches the information processing method according to claim 1, wherein, in the feedback output step (Analysis results and suggestions may be provided to the user, par [0014], see Mishra), latest auditory capacity of the user is estimated based on an estimation result of the auditory capacity of the user estimated in time series (The remote analysis interface circuit 212 may be configured to collect the hearing loss indicator data and the context data over a selected time period and provide that data to a hearing loss analysis system at periodic intervals. In some embodiments, the collection period may be on the order of hours, days or weeks, Fig. 2, par [0025], see Mishra).

Regarding claim 9, Mishra teaches the information processing method according to claim 1, wherein, in the feedback output step (Analysis results and suggestions may be provided to the user, par [0014], see Mishra), future auditory capacity of the user is estimated based on an estimation result of the auditory capacity of the user estimated in time series (The trend identification circuit 406 may be configured to identify trends in the hearing loss indicator data for each of the generated clusters over a selected period of time. In some embodiments, the selected period of time may be on the order of weeks or months or more, Fig. 4, par [0036], see Mishra).

Regarding claim 12, Mishra teaches the information processing method according to claim 1, further comprising a sound output step of outputting the sound by the sound output unit (The user device 102, which may be a mobile device such as a smartphone, tablet, laptop, wearable device, etc., is shown to include speaker 106, headphones 104, Fig. 1, par [0016], see Mishra).

Regarding claim 13, Mishra teaches the information processing method according to claim 2, further comprising: a selection manipulation accepting step of accepting selection manipulation corresponding to notification based on an estimation result of the auditory capacity of the user (Example 3 includes the subject matter of Examples 1 and 2, further comprising receiving a report (corresponds to notification) from the hearing loss analysis system, the report comprising an estimate of user hearing loss (corresponds to the auditory capacity of the user) and recommended actions to reduce further loss, the report based on an analysis of the collected data provided by the device over a second selected time period, par [0076], see Mishra); a function enabling step of enabling a hearing aid function or a sound collecting function of the sound output unit based on the selection manipulation (At operation 730, the hearing loss indicator data and the context data is collected over a selected time period, for example hours or days, Fig. 7, par [0046], see Mishra); and a hearing step or a sound collecting step in which the sound output unit performs hearing or sound collecting processing (At operation 740, the collected data is provided to a hearing loss analysis system, for example a remote or cloud-based system, at periodic intervals, Fig. 7, par [0046], see Mishra).

Regarding claim 15, Mishra teaches an information processing system (including mobile device 102, hearing assistant module 110, Fig. 1; The user device 102, which may be a mobile device such as a smartphone, tablet, laptop, wearable device, etc., is shown to include speaker 106, headphones 104, display element 108, microphone 114, various sensors 112 and a hearing assistant module 110, Fig. 1, par [0016], see Mishra), comprising: a manipulation unit (this limitation invokes 112(f); parts of the terminal device 2, Fig. 1, Specification page 4, paragraph [0011]; including user volume setting profiler circuit 214, Fig. 2, par [0018], see Mishra) that accepts adjustment manipulation of sound output (a gradual change in conversational behavior such as requesting others to repeat words, and raising the volume setting of the device, par [0014], see Mishra) from a sound output unit (including speaker 106, headphones 104, Fig. 1, par [0016], see Mishra; User volume setting profiler circuit 214 may be configured to monitor the volume level set by the user, over time, for various applications such as for phone calls, virtual assistant interaction, alert sounds and media playback, etc., Fig. 2, par [0022], see Mishra); and a processing unit (this limitation invokes 112(f); parts of the terminal device 2, Fig. 1, Specification page 4, paragraph [0011]; corresponds to hearing assistant 110, Figs. 1, 2, par [0018], see Mishra) that estimates auditory capacity (i.e., hearing loss indicator data, par [0014], see Mishra) of a user corresponding to the adjustment manipulation (raising the volume setting of the device, par [0014]) from the adjustment manipulation (The analysis may detect hearing loss trends linked to each context over a period of time, par [0014], see Mishra) and signal data of the sound corresponding to the adjustment manipulation (User speech profiler circuit 208 may also be configured to monitor the volume levels of the user in response to selected keyphrases like “hello,” or “Ok Google,” which may serve as a convenient/stable benchmark for comparisons over relatively longer periods of time, par [0021], see Mishra), and outputs feedback (Analysis results and suggestions may be provided to the user, par [0014], see Mishra; The collected hearing loss indicator data from one or more of the user's devices, along with context information related to the usage environment, may be aggregated and analyzed, for example by a remote or cloud-based analysis system. The analysis may detect hearing loss trends linked to each context over a period of time, as will be described in greater detail below. Analysis results and suggestions may be provided to the user to enable early intervention to avoid or limit further hearing impairment, par [0014], see Mishra. The hearing loss indicator data generation circuit 220 may be configured to measure hearing loss indicators associated with use of the device by the user. Indicators of hearing loss may include, for example, ambient sound characteristics, user speech volume level and word rate, user adjustments of device volume, Fig. 2, par [0020], see Mishra). Mishra thus teaches all the claimed limitations.

Regarding claim 16, Mishra teaches the information processing system according to claim 15, further comprising a sound output unit (this limitation invokes 112(f); the sound output device 1 includes a sound output unit, Fig. 1, Specification page 4, paragraph [0012]; speaker 106, Fig. 1, par [0016], see Mishra) that outputs the sound (The reports may appear as visual and/or audio alerts through display 108 and/or speaker 106 and headphone 104, Fig. 1, par [0025], see Mishra).

Regarding claim 19, Mishra teaches a data collecting method (The remote analysis interface circuit 212 may be configured to collect the hearing loss indicator data, Fig. 2, par [0025], see Mishra) comprising: a manipulation data collecting step of collecting manipulation data of a manipulation unit (including user volume setting profiler circuit 214, Fig. 2, par [0018], see Mishra) by a user from the manipulation unit that accepts adjustment manipulation of sound output by a sound output unit (including speaker 106, headphones 104, Fig. 1, par [0016], see Mishra; User volume setting profiler circuit 214 may be configured to monitor the volume level set by the user (corresponds to accepts adjustment manipulation), over time, for various applications such as for phone calls, virtual assistant interaction, alert sounds and media playback, etc., Fig. 2, par [0022], see Mishra); a sound signal data (decibels (dB), par [0020], see Mishra) collecting step of collecting signal data of the sound (via volume of device) when the manipulation unit is manipulated (raising the volume setting of the device, par [0014], see Mishra; User volume setting profiler circuit 214 may be configured to monitor the volume level set by the user, over time, for various applications such as for phone calls, virtual assistant interaction, alert sounds and media playback, etc., Fig. 2, par [0022], see Mishra) from the sound output unit (including speaker 106, headphones 104, Fig. 1, par [0016], see Mishra); an event data (i.e., environment data, par [0024], see Mishra) generating step of generating event data by associating the manipulation data with the signal data (Audio context generation circuit 204 may be configured to estimate context or environment data associated with use of the device by the user. Examples of context may include, but not be limited to, a business meeting environment, a voice phone call, a work environment, a home environment, a factory environment and an entertainment environment, Fig. 2, par [0024], see Mishra); and a storing step (via a remote or cloud-based system, par [0046], see Mishra) of storing the event data in a storage unit (At operation 740, the collected data is provided to a hearing loss analysis system, for example a remote or cloud-based system, at periodic intervals. This is to allow the remote analysis system to aggregate the data with additional data (i.e., storing), provided from other devices or platforms of the user, over a relatively longer time frame (e.g., weeks or months) to estimate user hearing loss, Fig. 7, par [0046]; see also a storage system 970, Fig. 9, par [0054], see Mishra). Mishra thus teaches all the claimed limitations.

Regarding claim 20, Mishra teaches a data collecting system (The remote analysis interface circuit 212 may be configured to collect the hearing loss indicator data, Fig. 2, par [0025], see Mishra) comprising: a sound output unit (this limitation invokes 112(f); the sound output device 1 includes a sound output unit, Fig. 1, Specification page 4, paragraph [0012]) that outputs sound (including speaker 106, headphones 104, Fig. 1, par [0016], see Mishra); a manipulation unit (this limitation invokes 112(f); parts of the terminal device 2, Fig. 1, Specification page 4, paragraph [0011]; including user volume setting profiler circuit 214, Fig. 2, par [0018], see Mishra) that accepts adjustment manipulation of the sound output (a gradual change in conversational behavior such as requesting others to repeat words, and raising the volume setting of the device, par [0014], see Mishra) from the sound output unit (including speaker 106, headphones 104, Fig. 1, par [0016], see Mishra; User volume setting profiler circuit 214 may be configured to monitor the volume level set by the user (corresponds to accepts adjustment manipulation), over time, for various applications such as for phone calls, virtual assistant interaction, alert sounds and media playback, etc., Fig. 2, par [0022], see Mishra); and a data collecting unit (this limitation invokes 112(f); data collecting unit, Specification page 8, par [0026]; The remote analysis interface circuit 212 may be configured to collect the hearing loss indicator data, Fig. 2, par [0025], see Mishra) that collects manipulation data of the manipulation unit (including user volume setting profiler circuit 214, Fig. 2, par [0018], see Mishra) by a user of the sound output unit (raising the volume setting of the device, par [0014], see Mishra) and signal data of the sound (in decibels (dB), par [0020], see Mishra) when the manipulation unit is manipulated (User speech profiler circuit 208 may also be configured to monitor the volume levels of the user in response to selected keyphrases like “hello,” or “Ok Google,” which may serve as a convenient/stable benchmark for comparisons over relatively longer periods of time, par [0021], see Mishra). Mishra thus teaches all the claimed limitations.

Claim Rejections - 35 USC § 103

12. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

13. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

14. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

15. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Mishra et al., U.S. Patent Application Publication 20170150282 (hereinafter “Mishra”; cited in Applicant's IDS filed 12/11/2023, 1 page), in view of Huang, U.S. Patent Application Publication 20220225039.

Regarding claim 4, Mishra teaches the information processing method according to claim 3.
Mishra further teaches the feedback output step (the hearing loss indicator data generation circuit 220 may be configured to measure hearing loss indicators associated with use of the device by the user. Indicators of hearing loss may include, for example, ambient sound characteristics, user speech volume level and word rate, user adjustments of device volume, Fig. 2, par [0020], see Mishra). However, Mishra does not explicitly disclose wherein a notification for changing the parameter is presented in a case where a value indicating an estimated auditory capacity level of the user is equal to or less than a threshold, and the parameter is changed based on manipulation corresponding to the notification.

Huang teaches a self-fitting hearing aid having a built-in pure tone signal generator (see Title), in which (see par [0037]): the user connects the hearing aid in a wired or wireless manner through the fitting software of the corresponding App on the computer or mobile phone when there is a network. Firstly, the pure tone signal generator in the hearing aid emits pure tone signals of different frequencies and different gains (i.e., a value indicating an estimated auditory capacity level) through the hearing test software, and a speaker in the hearing aid emits sounds. Then, the test software respectively sends out corresponding sound hertz frequency commands, and the users can hear the pure tone signals in the hearing aid respectively. If the users cannot hear the pure tone signals (i.e., threshold), the system will automatically increase the volume within the corresponding time until the user can hear the pure tone signals (corresponds to the parameter is changed based on manipulation corresponding to the notification). After hearing the pure tone signals, the users can press a confirmation button on the software interface, and the system will automatically record the hearing threshold of the frequency point. Finally, the relevant parameters suitable for the users are calculated through the software programming system, and then the program settings in the voice processing system of the hearing aid are automatically rewritten in a wired or wireless manner. If the users feel that the hearing effects are not good, the retest button can be pressed for retesting. After the adaptation is completed, the users can press a clicking button after hearing on the software interface. The next time the user uses it, just turn on the switch. If the hearing loss degree has changed, a retesting is required again (par [0037], see Huang).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the self-fitting hearing aid having a built-in pure tone signal generator taught by Huang with the information processing method of Mishra to obtain wherein a notification for changing the parameter is presented in a case where a value indicating an estimated auditory capacity level of the user is equal to or less than a threshold, and the parameter is changed based on manipulation corresponding to the notification, in order to simplify fitting procedures and improve the accuracy of the hearing test and fitting, as suggested by Huang in the Abstract.

Regarding claim 14, Mishra teaches the information processing method according to claim 3.
Mishra teaches further comprising a hearing step or a sound collecting step in which the sound output unit performs hearing or sound collecting processing (The remote analysis interface circuit 212 may be configured to collect the hearing loss indicator data and the context data over a selected time period and provide that data to a hearing loss analysis system at periodic intervals, Fig. 2, par [0025], see Mishra). However, Mishra does not explicitly disclose this being based on a parameter for adjusting a function of the sound output unit.

Huang teaches a self-fitting hearing aid having a built-in pure tone signal generator (see Title), in which the user connects the hearing aid in a wired or wireless manner through the fitting software of the corresponding App on the computer or mobile phone when there is a network. Firstly, the pure tone signal generator in the hearing aid emits pure tone signals of different frequencies and different gains through the hearing test software, and a speaker in the hearing aid emits sounds. Then, the test software respectively sends out corresponding sound hertz frequency commands, and the users can hear the pure tone signals in the hearing aid respectively. If the users cannot hear the pure tone signals, the system will automatically increase the volume within the corresponding time until the user can hear the pure tone signals (par [0037], see Huang). If the user still cannot hear the pure tone signals, the system will continue to automatically increase the volume by 3-5 dB (i.e., parameter) until the user can hear the pure tone signals (par [0030], see Huang).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the self-fitting hearing aid having a built-in pure tone signal generator taught by Huang with the information processing method of Mishra to obtain the hearing or sound collecting processing being “based on a parameter for adjusting a function of the sound output unit,” in order to simplify fitting procedures and improve the accuracy of the hearing test and fitting, as suggested by Huang in the Abstract.

16. Claims 5, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mishra et al., U.S. Patent Application Publication 20170150282 (hereinafter “Mishra”; cited in Applicant's IDS filed 12/11/2023, 1 page), in view of Strelcyk et al., U.S. Patent Application Publication 20220218236 (hereinafter “Strelcyk”).

Regarding claim 5, Mishra teaches the information processing method according to claim 1. Mishra further teaches the feedback output step (Analysis results and suggestions may be provided to the user, par [0014], see Mishra; the hearing loss indicator data generation circuit 220 may be configured to measure hearing loss indicators associated with use of the device by the user. Indicators of hearing loss may include, for example, ambient sound characteristics, user speech volume level and word rate, user adjustments of device volume, Fig. 2, par [0020], see Mishra), wherein the auditory capacity of the user is estimated (i.e., hearing loss; the hearing loss indicator data generation circuit 220 may be configured to measure hearing loss indicators associated with use of the device by the user. Indicators of hearing loss may include, for example, ambient sound characteristics, user speech volume level and word rate, user adjustments of device volume, Fig. 2, par [0020], see Mishra; The hearing loss analysis system receives this data from one or more of the user's devices and performs statistical analysis to group the hearing loss indicators into clusters associated with each context and to identify trends, par [0015], see Mishra). However, Mishra does not explicitly disclose this estimation being based on a learning model for estimating the auditory capacity of the user.

Strelcyk teaches systems and methods for hearing evaluation (see Title), in which machine learning model 702 may be implemented by any supervised and/or unsupervised learning algorithms. For example, machine learning model 702 may be implemented by a supervised deep learning model, such as a neural network, a convolutional neural network, and/or a recurrent neural network (Fig. 7, par [0066], see Strelcyk). In some examples, the hearing profile may be estimated by estimation module 404 in terms other than air-conduction pure-tone hearing thresholds. For example, a logistic-regression model could be used to predict the likelihood of the presence of a hearing loss or hearing impairment (Fig. 4, par [0067], see Strelcyk).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the systems and methods for hearing evaluation taught by Strelcyk with the information processing method of Mishra to obtain estimation “based on a learning model for estimating the auditory capacity of the user,” for the purpose of providing remote hearing evaluation capability, as suggested by Strelcyk in paragraph [0036].

Regarding claim 10, Mishra in view of Strelcyk teaches the information processing method according to claim 5. Mishra in view of Strelcyk, as modified, teaches further comprising a data collecting step of collecting event data (via remote analysis interface circuit 212, Fig. 2, par [0025], see Mishra) which includes manipulation data of the adjustment manipulation by the user (The remote analysis interface circuit 212 may be configured to collect the hearing loss indicator data and the context data over a selected time period and provide that data to a hearing loss analysis system at periodic intervals, Fig. 2, par [0025], see Mishra), signal data of the sound when the adjustment manipulation is performed (Indicators of hearing loss may include, for example, ambient sound characteristics, user speech volume level and word rate, user adjustments of device volume, Fig. 2, par [0020], see Mishra) and situation data of the user when the adjustment manipulation is performed (Audio context generation circuit 204 may be configured to estimate context or environment data associated with use of the device by the user. Examples of context may include, but not be limited to, a business meeting environment, a voice phone call, a work environment, a home environment, a factory environment and an entertainment environment, par [0024], see Mishra), device-specific data related to sound output of the sound output unit (Devices and techniques do exist for coping with hearing impairment, such as medical-grade hearing aids, par [0002], see Mishra), and user-specific data (user volume setting of the device, par [0044], see Mishra), which includes physical data related to the auditory capacity of the user (As illustrated in FIG. 7, in one embodiment hearing loss detection method 700 commences by measuring, at operation 710, hearing loss indicator data associated with use of the device by the user.
The hearing loss indicator data may include, for example, ambient sound characteristics, user speech volume level and user volume setting of the device, Fig. 7, par [0044], see Mishra), and storing (via a remote or cloud-based system, par [0046], see Mishra) the event data, the device-specific data, and the user-specific data in association with each other (At operation 730, the hearing loss indicator data and the context data is collected over a selected time period, for example hours or days. At operation 740, the collected data is provided to a hearing loss analysis system, for example a remote or cloud-based system, at periodic intervals. This is to allow the remote analysis system to aggregate the data with additional data (i.e., storing), provided from other devices or platforms of the user, over a relatively longer time frame (e.g., weeks or months) to estimate user hearing loss, Fig. 7, par [0046]; see also a storage system 970, Fig. 9, par [0054], see Mishra). However, Mishra does not explicitly disclose wherein the learning model performs machine learning for an estimation method of the auditory capacity by using data collected in the data collecting step.

Strelcyk teaches systems and methods for hearing evaluation (see Title), in which machine learning model 702 may be implemented by any supervised and/or unsupervised learning algorithms. For example, machine learning model 702 may be implemented by a supervised deep learning model, such as a neural network, a convolutional neural network, and/or a recurrent neural network (Fig. 7, par [0066], see Strelcyk). In some examples, the hearing profile may be estimated by estimation module 404 in terms other than air-conduction pure-tone hearing thresholds. For example, a logistic-regression model could be used to predict the likelihood of the presence of a hearing loss or hearing impairment (corresponds to auditory capacity) (Fig. 4, par [0067], see Strelcyk).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the systems and methods for hearing evaluation taught by Strelcyk with the information processing method of Mishra to obtain “wherein the learning model performs machine learning for an estimation method of the auditory capacity by using data collected in the data collecting step,” for the purpose of providing remote hearing evaluation capability, as suggested by Strelcyk in paragraph [0036].

Regarding claim 17, Mishra teaches the information processing system according to claim 16, wherein the sound output unit is headphones (104, Fig. 1, par [0016], see Mishra) (including user volume setting profiler circuit 214, Fig. 2, par [0018], see Mishra) and the processing unit (corresponds to hearing assistant 110, Figs. 1, 2, par [0018], see Mishra) are mounted on the earphone or a terminal device (this limitation invokes 112(f); terminal device 2, Fig. 1, Specification page 4, paragraph [0010]; mobile device 102, Fig. 1, par [0016], see Mishra; The user device 102, which may be a mobile device such as a smartphone, tablet, laptop, wearable device, etc., is shown to include speaker 106, headphones 104, Fig. 1, par [0016], see Mishra) that outputs the signal data to the headphone. However, Mishra does not explicitly disclose the headphones being an earphone.
Strelcyk teaches systems and methods for hearing evaluation (see Title), in which the acoustic stimuli may be presented to user 204 by way of one or more sound transducers (e.g., loudspeakers, headphones, and/or earphones) connected to or included in computing system 202 (Fig. 2, par [0029], see Strelcyk).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the systems and methods for hearing evaluation taught by Strelcyk with the information processing method of Mishra to obtain the headphones being an earphone, for the purpose of providing remote hearing evaluation capability, as suggested by Strelcyk in paragraph [0036].

17. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Mishra et al., U.S. Patent Application Publication 20170150282 (hereinafter “Mishra”; cited in Applicant's IDS filed 12/11/2023, 1 page), in view of Strelcyk et al., U.S. Patent Application Publication 20220218236 (hereinafter “Strelcyk”), in view of Tran, U.S. Patent Application Publication 20200268260, and further in view of Morokawa, U.S. Patent 4337529.

Regarding claim 11, Mishra in view of Strelcyk teaches the information processing method according to claim 10. Mishra in view of Strelcyk, as modified, teaches wherein, in the data collecting step (The remote analysis interface circuit 212 may be configured to collect the hearing loss indicator data and the context data over a selected time period and provide that data to a hearing loss analysis system at periodic intervals, Fig. 2, par [0025], see Mishra), at least any one of the event data, the device-specific data or the user-specific data is collected (hearing loss indicator data, par [0025], see Mishra; user volume setting of the device, par [0044], see Mishra; i.e., hearing loss; The hearing loss analysis system receives this data from one or more of the user's devices and performs statistical analysis to group the hearing loss indicators into clusters associated with each context and to identify trends, par [0015], see Mishra), a genetic test result or a blood test result of the user. However, Mishra in view of Strelcyk does not explicitly disclose the event data including at least any one of an emotion of the user estimated from an image of the user captured by an imaging device and an environment around the user.

Tran teaches a hearing and monitoring system (see Title) in which, in one aspect, a method includes providing an in-ear device to a user anatomy; determine an audio response chart for a user based on a plurality of environments (restaurant, office, home, theater, party, concert, among others), determining a current environment, and updating the hearing aid parameters to optimize the amplifier response to the specific environment (par [0164], see Tran). In one embodiment, the camera captures facial expression and a code such as the Microsoft Emotion API takes a facial expression in an image as an input, and returns the confidence across a set of emotions for each face in the image, as well as bounding box for the face, using the Face API. The emotions detected are anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. These emotions are understood to be cross-culturally and universally communicated with particular facial expressions (par [0164], see Tran).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the hearing and monitoring system taught by Tran with the information processing method of Mishra in view of Strelcyk to obtain the event data including at least any one of an emotion of the user estimated from an image of the user captured by an imaging device and an environment around the user, in order to improve diagnosis, aid in better treatment, and allow for earlier detection of problems, as suggested by Tran in paragraph [0050].

However, Mishra in view of Strelcyk in view of Tran does not explicitly disclose the device-specific data including a level diagram of the sound output unit.

Morokawa teaches a pace timing device (see Title) in which it should be noted that a construction such as that of the embodiment of FIG. 32 is also suitable for other types of small electronic devices, such as hearing aids, due to the ease of battery replacement (col. 42, lines 8-11, see Morokawa). FIG. 26 is a graph showing the relationship between output sound level and frequency, for a miniature electromagnetic loudspeaker utilized in the embodiment of the present invention described above. The loudspeaker impedance was 100 ohms, and the drive voltage 3V peak to peak. A sharp resonance characteristic is indicated. When the loudspeaker is driven at a frequency close to the resonant frequency, high efficiency is obtained, and the resulting sound is pleasing and does not tend to be irritating (col. 32, line 61 - col. 33, line 2, see Morokawa).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the pace timing device taught by Morokawa with the information processing method of Mishra in view of Strelcyk in view of Tran to obtain the device-specific data including a level diagram of the sound output unit, in order to ensure that safe limits for these parameters are not exceeded, as suggested by Morokawa in column 13, lines 15-17.

18. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Mishra et al., U.S. Patent Application Publication 20170150282 (hereinafter “Mishra”), in view of Xu et al., U.S. Patent Application Publication 20220201404 (hereinafter “Xu”).

Regarding claim 18, Mishra teaches the information processing system according to claim 16, wherein the sound output unit is a speaker of a television (television, par [0053], see Mishra). Mishra further teaches the manipulation unit (including user volume setting profiler circuit 214, Fig. 2, par [0018], see Mishra). However, Mishra does not explicitly disclose the manipulation unit being a remote control of the television.

Xu teaches self-fit hearing instruments with self-reported measures of hearing loss and listening (see Title), in which computing system 108 may comprise one or more mobile devices, server devices, personal computer devices, handheld devices, wireless access points, smart speaker devices, smart televisions (Fig. 1, par [0027], see Xu). These hearing tests often require calibration of transducers (headphones or earbuds), which may be a potentially difficult process for older users. Moreover, fine adjustments to meet individual preferences typically require users to manipulate many aspects of sound, such as bass, treble, overall loudness, with a control interface (e.g., a remote control or a mobile app), par [0045], see Xu.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the self-fit hearing instruments with self-reported measures of hearing loss and listening taught by Xu with the information processing system of Mishra to obtain the manipulation unit being a remote control of the television, in order to improve the ability of hearing instruments to be fitted to the individual user, as suggested by Xu in paragraph [0047].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CON P TRAN, whose telephone number is (571) 272-7532. The examiner can normally be reached M-F (08:30 AM - 05:00 PM ET). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, VIVIAN C. CHIN, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.P.T/
Examiner, Art Unit 2695

/VIVIAN C CHIN/
Supervisory Patent Examiner, Art Unit 2695
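
For orientation while reading the claim mappings above: claim 19, as characterized in the rejection, recites a four-step data-collection flow (collect manipulation data, collect sound signal data at the moment of manipulation, associate the two into event data, and store the result). The sketch below is purely illustrative of that flow; every name in it (VolumeEvent, collect_event, the example dB figure) is hypothetical and not drawn from the application or the cited art.

```python
# Hypothetical sketch of the data-collection flow recited in claim 19, as
# characterized in the Office Action: manipulation data + sound signal data
# are associated into "event data," which is then stored.
from dataclasses import dataclass
import time

@dataclass
class VolumeEvent:          # stand-in for the claimed "event data"
    timestamp: float
    manipulation: dict      # e.g. {"control": "volume", "from": 6, "to": 9}
    signal_db: float        # sound signal level (dB) when the control was used

event_store: list[VolumeEvent] = []   # stand-in for the claimed "storage unit"

def collect_event(manipulation: dict, signal_db: float) -> VolumeEvent:
    """Associate manipulation data with signal data and store the event."""
    event = VolumeEvent(time.time(), manipulation, signal_db)
    event_store.append(event)          # the claimed "storing step"
    return event

# Example: the user raises playback volume while output is around 62 dB.
collect_event({"control": "volume", "from": 6, "to": 9}, signal_db=62.0)
```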

Prosecution Timeline

Dec 11, 2023
Application Filed
Jan 29, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597330
EAR BUD INTEGRATION WITH PROPERTY MONITORING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598438
SWAPPING ROLES BETWEEN UNTETHERED WIRELESSLY CONNECTED DEVICES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12568325
COMMUNICATION METHOD APPLIED TO BINAURAL WIRELESS HEADSET, AND APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12549881
AUDIO PLAYING METHOD, APPARATUS AND SYSTEM FOR IN-EAR EARPHONE
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12532116
MOVEABLE ELEMENT FOR A TRANSDUCER, TRANSDUCER, IN-EAR DEVICE AND METHOD FOR DETERMINING THE OCCURRENCE OF A CONDITION IN A TRANSDUCER
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds
1-2
Grant Probability
69%
With Interview (+23.5%)
92%
Median Time to Grant
3y 7m
PTA Risk
Low
Based on 543 resolved cases by this examiner. Grant probability derived from career allow rate.
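
A minimal sketch of how these projections appear to be derived, assuming the with-interview figure is simply the career allow rate plus the observed interview lift:

```python
# Projection arithmetic implied by this page (assumption: simple addition).
base = 374 / 543                         # career allow rate -> "69% Grant Probability"
with_interview = min(base + 0.235, 1.0)  # +23.5% interview lift, capped at 100%

print(f"Base: {base:.0%}, with interview: {with_interview:.0%}")  # 69%, 92%
```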
