Prosecution Insights
Last updated: April 19, 2026
Application No. 18/486,510

METHOD OF OBTAINING URINATION INFORMATION AND DEVICE THEREOF

Non-Final Office Action: §101, §103

Filed: Oct 13, 2023
Examiner: LEE, BYUNG RO
Art Unit: 2858
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Dain Technology Inc.
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 8m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 76% (82 granted / 108 resolved), above average (+7.9% vs TC avg)
Interview Lift: +18.9% in resolved cases with interview (strong)
Avg Prosecution: 2y 8m
Currently Pending: 35 applications
Career History: 143 total applications across all art units

Statute-Specific Performance

§101: 28.3% (-11.7% vs TC avg)
§103: 37.2% (-2.8% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 108 resolved cases

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) were submitted on 10/17/2023. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The current 35 USC 101 analysis is based on the current guidance (Federal Register vol. 79, No. 241, pp. 74618-74633). The analysis follows several steps. Step 1 determines whether the claim belongs to a valid statutory class. Step 2A, Prong One identifies whether an abstract idea is claimed. Step 2A, Prong Two determines whether any abstract idea is integrated into a practical application; if the abstract idea is integrated into a practical application, the claim is patent eligible under 35 USC 101. Last, Step 2B determines whether the claims contain something significantly more than the abstract idea. In most cases, the existence of a practical application predicates the existence of an additional element that is significantly more.
The 35 USC 101 analysis of each claim element and the combination of elements is presented below (Step 2A Prong One, then Step 2A Prong Two / Step 2B, per element):

Claim 1. Step 1: Yes, statutory class. Step 2A Prong Two: No / Step 2B: No.

- “obtaining one or more first feature data by using first sound data, wherein the first sound data reflect a sound of a urination process”: Step 2A Prong One: Yes, abstract idea (mathematical concept or mental process). Obtaining the first feature data is a mathematical process or data processing itself to collect feature data related to sound data.

- “obtaining a urine volume determination value by using the one or more first feature data and a pre-trained urine volume determination model, wherein the urine volume determination model is trained with a urine volume training data set, wherein the urine volume training data set comprises one or more second feature data generated based on second sound data recorded during a urination process and a value related to a urine volume corresponding to the second sound data”: abstract idea (mathematical concept or mental process). Obtaining the urine volume determination value is a mathematical or mental process (pages 27-32). The urine volume determination model is indicative of a mathematical relationship/concept, which is related to data processing itself or a mathematical algorithm.

- “obtaining a urine flow rate determination value by using the one or more first feature data and a pre-trained urine flow rate determination model, wherein the urine flow rate determination model is trained with a urine flow rate training data set, wherein the urine flow rate training data set comprises one or more third feature data generated based on third sound data recorded during a urination process and a value related to a urine flow rate corresponding to the third sound data”: abstract idea (mathematical concept or mental process). Obtaining the urine flow rate determination value is a mathematical process and/or data processing (pages 17-18). The urine flow rate determination model is indicative of a mathematical relationship/concept, which is related to data processing itself or a mathematical algorithm.

- “obtaining a urine flow rate information by reflecting a ratio of an estimated urine volume calculated based on the urine flow rate determination value and the urine volume determination value to the urine flow rate determination value”: abstract idea (mathematical concept). Obtaining the urine flow rate information is a mathematical or mental process (pages 14, 27-30).

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-14 are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as addressed below and presented in the table above.

Step 2A: Prong One

Regarding Claim 1, the limitations recited in Claim 1, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations as mathematical calculations and/or in the mind, as presented in the table above.
Nothing in the claim elements precludes the steps from practically being performed in the mind and/or as mathematical calculations. For example, “obtaining one or more first feature data by using first sound data, wherein the first sound data reflect a sound of a urination process”, “obtaining a urine volume determination value by using the one or more first feature data and a pre-trained urine volume determination model, wherein the urine volume determination model is trained with a urine volume training data set, wherein the urine volume training data set comprises one or more second feature data generated based on second sound data recorded during a urination process and a value related to a urine volume corresponding to the second sound data” and “obtaining a urine flow rate determination value by using the one or more first feature data and a pre-trained urine flow rate determination model, wherein the urine flow rate determination model is trained with a urine flow rate training data set, wherein the urine flow rate training data set comprises one or more third feature data generated based on third sound data recorded during a urination process and a value related to a urine flow rate corresponding to the third sound data” in the context of this claim may encompass manually calculating or inferring the urine volume determination value and the urine flow rate determination value based on the collected data, where the feature data and training data may be obtained by data processing itself performed by generic computer functions of a generic computer component (see at least pages 17-18 and 27-32). (MPEP 2106.04(a)(2)). The urine volume determination model and the urine flow rate determination model are indicative of a mathematical relationship/concept, which may be executed by a computer program to perform data processing itself or a mathematical algorithm.
For example, “obtaining a urine flow rate information by reflecting a ratio of an estimated urine volume calculated based on the urine flow rate determination value and the urine volume determination value to the urine flow rate determination value” in the context of this claim may encompass manually calculating or inferring the urine flow rate information based on the result of the mathematical calculations and the collected data (see at least pages 14 and 27-30). (MPEP 2106.04(a)(2)).

Step 2A: Prong Two

This judicial exception is the abstract idea itself and is not integrated into a practical application. In particular, the specification details use of a computer processor to perform the mathematical calculations or mental processes of “obtaining one or more first feature data by using first sound data, wherein the first sound data reflect a sound of a urination process”, “obtaining a urine volume determination value by using the one or more first feature data and a pre-trained urine volume determination model, wherein the urine volume determination model is trained with a urine volume training data set, wherein the urine volume training data set comprises one or more second feature data generated based on second sound data recorded during a urination process and a value related to a urine volume corresponding to the second sound data” and “obtaining a urine flow rate determination value by using the one or more first feature data and a pre-trained urine flow rate determination model, wherein the urine flow rate determination model is trained with a urine flow rate training data set, wherein the urine flow rate training data set comprises one or more third feature data generated based on third sound data recorded during a urination process and a value related to a urine flow rate corresponding to the third sound data”.
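Editorially, the ratio-based limitation quoted in this passage reduces to a simple rescaling: integrate the per-time-step flow-rate determination values to obtain an estimated voided volume, then scale the flow-rate curve by the ratio of the volume model's determination value to that estimate. The following is a minimal sketch; the function name, the rectangle-rule integration, and the direction of the ratio are our assumptions (the application supplies no code, and "reflecting a ratio" could be read the other way around):

```python
import numpy as np

def adjust_flow_rate(flow_rate_det, volume_det, dt=0.1):
    """Rescale a flow-rate determination curve so that its integral
    matches the urine volume determination value (illustrative sketch).

    flow_rate_det: per-time-step flow-rate determination values (mL/s)
    volume_det: scalar urine volume determination value (mL)
    dt: sampling interval between flow-rate values (s)
    """
    flow = np.asarray(flow_rate_det, dtype=float)
    # Estimated urine volume: integrate the flow-rate determination
    # value over time (simple rectangle rule).
    estimated_volume = float(flow.sum() * dt)
    # Ratio of the volume model's output to the flow-derived estimate,
    # "reflected" onto the flow-rate determination value.
    scale = volume_det / estimated_volume
    return flow * scale
```

Under this reading, the adjusted flow-rate curve integrates exactly to the volume model's output, which is one plausible interpretation of reflecting the ratio "to the urine flow rate determination value."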
Claim 1 does not present tangible or physical elements/components, or an integration of improvements, indicative of specific features/structures/acts as to how, or with what, the first to third feature data are obtained by using the first to third sound data, and the urine volume determination value, the urine flow rate determination value, and the urine flow rate information are obtained/calculated. (See MPEP 2106.04(d)). Claim 1 does not present a technical solution to a technical problem by providing an improvement to the functioning of a computer, or to any other technology or technical field, related to obtaining the first to third feature data by using the first to third sound data and obtaining/calculating the urine volume determination value, the urine flow rate determination value, and the urine flow rate information. (See MPEP 2106.04(d)). Therefore, there is no showing of integration into a practical application, such as an improvement to the functioning of a computer, or to any other technology or technical field, or use of a particular machine.

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, with respect to integration of the abstract idea into a practical application, using a computer system to perform “obtaining one or more first feature data by using first sound data, wherein the first sound data reflect a sound of a urination process”, “obtaining a urine volume determination value by using the one or more first feature data and a pre-trained urine volume determination model, wherein the urine volume determination model is trained with a urine volume training data set, wherein the urine volume training data set comprises one or more second feature data generated based on second sound data recorded during a urination process and a value related to a urine volume corresponding to the second sound data” and “obtaining a urine flow rate determination value by using the one or more first feature data and a pre-trained urine flow rate determination model, wherein the urine flow rate determination model is trained with a urine flow rate training data set, wherein the urine flow rate training data set comprises one or more third feature data generated based on third sound data recorded during a urination process and a value related to a urine flow rate corresponding to the third sound data” amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept and cannot confer statutory eligibility. Claim 1 is not patent eligible. Regarding Claims 2-7 and 12, the limitations are further directed to an abstract idea, as described in claim 1. The limitations of “obtaining a urination presence/absence determination value ...
obtaining one or more adjusted first feature data … obtaining the urine volume determination value …” in Claim 3 may encompass manually calculating or inferring the urination presence/absence determination value and the adjusted first feature data using a mathematical model (the urination presence/absence determination model and the urine volume determination model). (MPEP 2106.04(a)(2)). The limitations of “obtaining a urination presence/absence classification value … obtaining an adjusted urine flow rate determination value …” in Claim 4 may encompass manually calculating or inferring the urination presence/absence determination value and the adjusted first feature data using a mathematical model. (MPEP 2106.04(a)(2)). The limitations of “obtaining a plurality of segmented urine volume determination value … obtaining the urine volume determination value…” in Claim 6 may encompass manually calculating or inferring the segmented urine volume determination value and the urine volume determination value using a mathematical model. (MPEP 2106.04(a)(2)). Regarding Claim 8, it is a method type claim having similar limitations as of claim 1 above. Therefore, it is rejected under the same rationale as of claim 1 above. Regarding Claims 9-11, the limitations are further directed to an abstract idea, as described in claim 1. For the reasons described above with respect to Claims 1-7, the judicial exceptions are not meaningfully integrated into a practical application, or amount to significantly more than the abstract idea. Regarding Claim 13, it is a system type claim having similar limitations as of claim 1 above. Therefore, it is rejected under the same rationale as of claim 1 above. The additional elements of the memory and the processor are merely recited at a high-level of generality to perform a generic computer function. Mere nominal recitation of a generic “computer system” does not take the claim out of the mathematical concepts and the mental process grouping. 
Thus, the claim recites an abstract idea. Regarding Claim 14, it is a system type claim having similar limitations to those of claim 8 above. Therefore, it is rejected under the same rationale as claims 1 and 8 above. The additional elements of the memory and the processor are merely recited at a high level of generality to perform a generic computer function. Mere nominal recitation of a generic “computer system” does not take the claim out of the mathematical concepts and the mental process grouping. Thus, the claim recites an abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 1-3 and 7-14 are rejected under 35 U.S.C. 103 as being unpatentable over SONG et al. (US 20200054265 A1, hereinafter referred to as “SONG”) in view of KAMAI (WO 2021192475 A1, hereinafter referred to as “KAMAI”, cited in IDS dated 10/17/2023).
Regarding Claim 1, SONG teaches a method of obtaining urination information (Para 0016, “method for calculating information on urination”), comprising: obtaining one or more first feature data by using first sound data, wherein the first sound data reflect a sound of a urination process (Para 0016, “acquiring a sound signal related to urination of a user, calculating a urination parameter related to an effective value (or urination parameters related to an effective value and a spectral centroid) from the acquired sound signal”); obtaining a urine volume determination value by using the one or more first feature data and a pre-trained urine volume determination model, wherein the urine volume determination model is trained with a urine volume training data set (Para 0016, “calculating information on urination, comprising the steps of: acquiring a sound signal related to urination of a user; calculating a urination parameter related to an effective value (or urination parameters related to an effective value and a spectral centroid) from the acquired sound signal; and estimating a urine flow rate of the user with reference to the calculated urination parameter”; Para 0022, “estimate information on urination of a user using a urination information estimation model according to the sex of the user”; Para 0045-0046, “Next, the urination information estimation unit 130, according to one embodiment of the invention, may estimate a urine flow rate (e.g., a urine flow rate over time or a voided urine volume over time) of the user with reference to the urination parameter calculated by the urination parameter calculation unit 120. 
For example, when a urination parameter related to an effective (RMS) value is calculated for each of a plurality of sections (i.e., a plurality of frequency bands) of the acquired sound signal …,”), wherein … obtaining a urine flow rate determination value by using the one or more first feature data and a pre-trained urine flow rate determination model, wherein the urine flow rate determination model is trained with a urine flow rate training data set (Para 0016, “estimating a urine flow rate of the user with reference to the calculated urination parameter”; Para 0046, “estimate a urine flow rate of the user on the basis of at least one urine flow rate prediction model …”), wherein … obtaining a urine flow rate information by reflecting a ratio of an estimated urine volume calculated based on the urine flow rate determination value and the urine volume determination value to the urine flow rate determination value (Para 0016, “estimating a urine flow rate of the user with reference to the calculated urination parameter”; Para 0045-0046, “when a urination parameter related to an effective (RMS) value is calculated for each of a plurality of sections (i.e., a plurality of frequency bands) of the acquired sound signal …. estimate a urine flow rate of the user on the basis of at least one urine flow rate prediction model in which the urination parameter related to the effective value for each of the sections is used as a variable”). 
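The urination parameter SONG relies on, an effective (RMS) value computed for each of a plurality of frequency bands of the acquired sound signal, can be pictured with a short sketch. The function name, band edges, and the FFT-based implementation below are our assumptions for illustration; SONG's actual computation may differ:

```python
import numpy as np

def band_rms_features(signal, sample_rate, bands):
    """Compute an RMS 'effective value' per frequency band, in the
    spirit of SONG's per-band urination parameters (illustrative only).

    bands: list of (low_hz, high_hz) tuples defining frequency bands.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    feats = []
    for low, high in bands:
        mask = (freqs >= low) & (freqs < high)
        # RMS of the spectral magnitudes falling inside the band
        mag = np.abs(spectrum[mask])
        feats.append(float(np.sqrt(np.mean(mag ** 2))) if mask.any() else 0.0)
    return np.array(feats)
```

A per-band feature vector like this would then be the input to a trained flow-rate or volume model, which is consistent with SONG's description of urination parameters used "as a variable" in a prediction model.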
With respect to the limitations of “wherein the urine volume training data set comprises one or more second feature data generated based on second sound data recorded during a urination process and a value related to a urine volume corresponding to the second sound data” and “wherein the urine flow rate training data set comprises one or more third feature data generated based on third sound data recorded during a urination process and a value related to a urine flow rate corresponding to the third sound data”, SONG fails to explicitly disclose a plurality of features data from a plurality of sound data included in the training data set. However, KAMAI teaches “wherein the urine volume training data set comprises one or more second feature data generated based on second sound data recorded during a urination process and a value related to a urine volume corresponding to the second sound data … wherein the urine flow rate training data set comprises one or more third feature data generated based on third sound data recorded during a urination process and a value related to a urine flow rate corresponding to the third sound data” (“The identification model uses sound data indicating each of a plurality of levels of urination sound classified according to the momentum of urination as an input value, and corresponds to any level of the urination sound of the plurality of levels of urination sound. Machine learning is performed using whether or not urination is performed as an output value” in Claim 3 in page 14 of English machine translation). SONG and KAMAI are both considered to be analogous to the claimed invention because they are in the same field of calculating information on urination and identifying whether defecation, urination, and/or flatulence has occurred. 
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified SONG to incorporate the teachings of KAMAI by providing training data set including a plurality of features data from a plurality of sound data with which the urine flow rate information is calculated, taught by KAMAI at least at Claim 3 in page 14 of English machine translation. Regarding Claim 2, SONG teaches wherein the estimated urine volume is calculated by integrating the urine flow rate determination value over time (Para 0054, “the urination information estimation unit 130, according to one embodiment of the invention, may perform an operation on (e.g., take an integral of) the estimated urine flow rate in a section between the estimated start and end points of the urination to estimate a voided urine volume of the user.”). Regarding Claim 3, SONG teaches wherein the method comprises: obtaining a urination presence/absence determination value (Para 0008 teaches “estimating information on urination anytime or anywhere, using a urination parameter related to an effective (or root mean square (RMS)) value (or urination parameters related to an effective (RMS) value and a spectral centroid”) calculated from a sound signal related to urination of a user; Para 0016, “acquiring a sound signal related to urination of a user”) by using the first feature data and a pre-trained urination presence/absence determination model (Para 0022, “estimate information on urination of a user using a urination information estimation model according to the sex of the user”), wherein the urination presence/absence determination model is trained with a urination presence/absence training data set (Para 0016, “acquiring a sound signal related to urination of a user, calculating a urination parameter related to an effective value (or urination parameters related to an effective value and a spectral centroid) from the acquired sound 
signal”; Para 0008, “urination parameters”), wherein …, wherein the obtaining the urine volume determination value comprises: obtaining one or more adjusted first feature data by reflecting the urination presence/absence determination value to the one or more first feature data (Para 0008 teaches “estimating information on urination anytime or anywhere, using a urination parameter related to an effective (or root mean square (RMS)) value (or urination parameters related to an effective (RMS) value and a spectral centroid”) calculated from a sound signal related to urination of a user; Para 0016, “acquiring a sound signal related to urination of a user”); and obtaining the urine volume determination value by using the one or more adjusted first feature data and the urine volume determination model (At least paragraphs 0016, 0022, 0045-0046 teach calculating the urine volume determination value, as set forth above in Claim 1). Note that, under the broadest reasonable interpretation, the acquired sound signal and urination parameters in SONG are indicative of a value/parameter for determining the presence or absence of urination. With respect to the limitations of “wherein the urination presence/absence training data set comprises one or more fourth feature data generated based on fourth sound data recorded during a urination process and a value related to a urination presence/absence corresponding to the fourth sound data”, SONG fails to explicitly disclose a plurality of feature data from a plurality of sound data included in the training data set.
However, KAMAI teaches “wherein the urination presence/absence training data set comprises one or more fourth feature data generated based on fourth sound data recorded during a urination process and a value related to a urination presence/absence corresponding to the fourth sound data” (“The identification model uses sound data indicating each of a plurality of levels of urination sound classified according to the momentum of urination as an input value, and corresponds to any level of the urination sound of the plurality of levels of urination sound. Machine learning is performed using whether or not urination is performed as an output value” in Claim 3 in page 14 of English machine translation). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified SONG to incorporate the teachings of KAMAI by providing the training data set including a plurality of features data from a plurality of sound data with which the urination presence/absence determination value is calculated, taught by KAMAI at least at Claim 3 in page 14 of English machine translation. 2. Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over SONG in view of KAMAI and further in view of Doo et al. (KR 20210053977 A, hereinafter referred to as “Doo” cited in IDS dated 10/17/2023). Regarding Claim 5, SONG in view of KAMAI fails to explicitly disclose, but Doo teaches wherein the one or more first feature data is generated by transforming the first sound data into a spectrogram and segmenting the spectrogram into a plurality of segmented spectrograms having a preset time length (At least Fig. 4 and pages 4-5 in English machine translation teach a spectrogram of the sound data and segmenting the sound data to a plurality of sections in the time dimension, “perform a preprocessing process of segmenting the target sound data into a plurality of sections in the time dimension. 
For example, the data preprocessor 220 according to an embodiment of the present invention may segment the target sound data into many sections by shortening the time interval for segmenting the target sound data …. the data preprocessor 230 segments the target sound data into many sections by shortening the time interval for segmenting the target sound data, various waveforms (eg, urination) … segments the target sound data into many sections by shortening the time interval for segmenting the target sound data, and divides the segmented target sound data into many sections. When it is used as training data for a learning model” and “in the time dimension of the segmented sound data … RMS value in frequency dimension, RMS value in noise-removed data from acoustic data, centroid in frequency dimension, ratio between centroid and other features in frequency dimension”). Doo is considered to be analogous to the claimed invention because it is in the same field of analyzing urination-related sounds. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified SONG in view of KAMAI to incorporate the teachings of Doo by providing transforming the sound data into waveform in time domain and segmenting the sound data into a plurality of sections in a time or frequency domain, taught by Doo at least at Fig. 4 and pages 4-5 of English machine translation. Regarding Claim 6, SONG in view of KAMAI fails to explicitly disclose, but Doo teaches wherein the obtaining a urine volume determination value comprises: obtaining a plurality of segmented urine volume determination value for each of the plurality of segmented spectrograms by inputting each of the plurality of segmented spectrograms into the urine volume determination model (a learning model); and obtaining the urine volume determination value by adding the plurality of segmented urine volume determination value (At least Fig. 
4 and pages 4-5 in English machine translation teach a spectrogram of the sound data and segmenting the sound data to a plurality of sections in the time dimension, and a plurality of segmented features used as input value to determine a plurality of segmented sections, “perform a preprocessing process of segmenting the target sound data into a plurality of sections in the time dimension. For example, the data preprocessor 220 according to an embodiment of the present invention may segment the target sound data into many sections by shortening the time interval for segmenting the target sound data …. the data preprocessor 230 segments the target sound data into many sections by shortening the time interval for segmenting the target sound data, various waveforms (eg, urination) … segments the target sound data into many sections by shortening the time interval for segmenting the target sound data, and divides the segmented target sound data into many sections. When it is used as training data for a learning model” and “in the time dimension of the segmented sound data … the urination section determining unit 230 according to an embodiment of the present invention may determine whether a urination sound is recognized in the segmented target sound data using a learning model. The learning model according to an embodiment of the present invention may be utilized as a clue for determining a urination section from the corresponding acoustic data when data related to at least one feature extracted from the acoustic data segmented in the time dimension is input.”). 
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified SONG in view of KAMAI to incorporate the teachings of Doo by providing input values for a plurality of segmented features used to determine a plurality of segmented sections after the operations for transforming the sound data into waveform in time domain and segmenting the sound data into a plurality of sections in a time or frequency domain, taught by Doo at least at Fig. 4 and pages 4-5 of English machine translation. Regarding Claim 7, SONG in view of KAMAI fails to explicitly disclose, but Doo teaches when a time length of a last segmented spectrogram among the plurality of segmented spectrogram is shorter than the preset time length, padding on the last segmented spectrogram (At least Figs. 4, 5 and pages 6 in English machine translation teach a plurality of segmented sections in spectral time domain determining if a specific segment’s length is shorter than a predetermined level , “when a specific non-urination section (eg, section b1) is located between the urination sections (section a1, section a2) and the length of the specific non-urination section is less than or equal to a predetermined level, the specific ratio Since the urination section may be a urination state in an actual urination situation, the urination section determining unit 230 may determine that a discrimination error has occurred in a specific non-urination section … referring to FIG. 5 , the urination interval determining unit 230 includes a data point interval (section b1) of about 40 seconds and a data point interval of about 55 seconds (section b2) of urination in FIG. 4 … the two data point sections (section b1, section b2) were corrected from the non-urination section to the urination section, and the entire section (section a8) between about 30 seconds and about 60 seconds was single. can be identified by the urination interval”). 
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified SONG in view of KAMAI to incorporate the teachings of Doo by providing a plurality of segmented sections in spectral time domain determining if a specific segment’s length is shorter than a predetermined level, taught by Doo at least at Figs. 4, 5 and page 6 of English machine translation. Regarding Claim 8, it has similar limitations as of Claim 1 above. Therefore, it is rejected under the same rationale as of Claim 1 above. Regarding Claim 9, it has similar limitations as of Claim 2 above. Therefore, it is rejected under the same rationale as of Claim 2 above. Regarding Claim 11, it has similar limitations as of Claims 5 and 6 above. Therefore, it is rejected under the same rationale as of Claims 5 and 6 above. Regarding Claim 12, it is dependent on Claim 1 and recites “a computer-readable non-transitory recording medium”, which is taught by SONG at paragraphs 0034 and 0082 (“digital equipment having a memory means and a microprocessor for computing capabilities”). Therefore, it is rejected under the same rationale as of Claim 1 above. Regarding Claim 13, it is a system type claim and has similar limitations as of Claim 1 above. Therefore, it is rejected under the same rationale as of Claim 1 above. The additional elements of the memory and the processor is taught by SONG at paragraphs 0034 and 0082 (“digital equipment having a memory means and a microprocessor for computing capabilities”). Regarding Claim 14, it is a system type claim and has similar limitations as of Claims 1 and 8 above. Therefore, it is rejected under the same rationale as of Claims 1 and 8 above. The additional elements of the memory and the processor is taught by SONG at paragraphs 0034 and 0082 (“digital equipment having a memory means and a microprocessor for computing capabilities”). 
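The segmentation, padding, and per-segment summation at issue in Claims 5-7 can be pictured as follows. This is a sketch under assumed conventions (zero padding, a frequency-by-time spectrogram layout, and a hypothetical `volume_model` callable); none of these choices are taken from the record:

```python
import numpy as np

def segment_spectrogram(spec, seg_len):
    """Split a spectrogram (freq_bins x time_frames) into segments of a
    preset time length, padding the last segment when it is short
    (a sketch of the segmentation/padding recited in Claims 5 and 7)."""
    n_freq, n_time = spec.shape
    segments = []
    for start in range(0, n_time, seg_len):
        seg = spec[:, start:start + seg_len]
        if seg.shape[1] < seg_len:
            # Last segment shorter than the preset length: zero-pad it
            pad = np.zeros((n_freq, seg_len - seg.shape[1]))
            seg = np.concatenate([seg, pad], axis=1)
        segments.append(seg)
    return segments

def summed_volume(segments, volume_model):
    # Claim 6: feed each segmented spectrogram to the urine volume
    # determination model and add the per-segment values.
    return sum(volume_model(seg) for seg in segments)
```

With this structure, the per-segment determination values add up to a whole-recording volume determination value, matching the "obtaining the urine volume determination value by adding" language of Claim 6.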
Citation of Pertinent Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. BELOTSERKOVSKY (WO 2012121772 A1) teaches: "unique characteristic sounds produced as urine impacts the surface of the water are used to monitor men's urinary flow patterns and their dynamics. By detecting the intensity at selected acoustic frequencies, it is possible to accurately and precisely measure the urine flow rate. Techniques for analyzing urine flow and its dynamics employ sound levels that are detected with digital filters at two or more distinct frequency regions or channels of the sound spectrum. One frequency region that is designated the measurement channel is where the sound measurement intensity strongly depends on urine flow levels. Another frequency region that is designated the reference channel is where the sound measurement intensity is not dependent on urine flow levels. By using a combination of measurements from the measurement channel and the reference channel, the urine flow monitoring apparatus compensates for variations in operating conditions and other factors during use".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYUNG RO LEE, whose telephone number is (571) 272-3707. The examiner can normally be reached Monday-Friday, 8:30am-4:00pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lee Rodak, can be reached at (571) 270-5628. The fax phone number for the organization where this application or proceeding is assigned is 571-273-2555.
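The two-channel scheme quoted from BELOTSERKOVSKY in the pertinent-art citation above (a flow-dependent measurement band normalized by a flow-independent reference band) can be sketched as follows. The band edges, function names, and the use of a simple energy ratio are illustrative assumptions on my part, not values or formulas taken from the reference.

```python
import numpy as np

def band_energy(signal, fs, f_lo, f_hi):
    """Mean spectral energy of `signal` within [f_lo, f_hi) Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return spectrum[band].mean()

def flow_index(signal, fs, meas_band=(4000, 8000), ref_band=(100, 400)):
    """Ratio of measurement-channel to reference-channel energy.
    Dividing by the reference channel cancels gain and environmental
    variation that scale both bands equally."""
    return band_energy(signal, fs, *meas_band) / band_energy(signal, fs, *ref_band)

# A doubled overall gain leaves the index unchanged:
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
print(np.isclose(flow_index(sig, fs), flow_index(2 * sig, fs)))  # True
```

The final check illustrates the compensation property the reference describes: any factor that multiplies the whole recording (microphone gain, distance) cancels out of the ratio.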
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BYUNG RO LEE/
Examiner, Art Unit 2858

/LEE E RODAK/
Supervisory Patent Examiner, Art Unit 2858

Prosecution Timeline

Oct 13, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12576376
COATING COMPOSITION SCALE NETWORK DEVICE
2y 5m to grant; granted Mar 17, 2026
Patent 12548639
DETERMINING THE INTRINSIC REACTION COORDINATE OF A CHEMICAL REACTION BY NESTED PATH INTEGRALS
2y 5m to grant; granted Feb 10, 2026
Patent 12510403
SYSTEMS AND METHODS FOR MONITORING OF MECHANICAL AND ELECTRICAL MACHINES
2y 5m to grant; granted Dec 30, 2025
Patent 12480926
SYSTEMS, DEVICES, AND METHODS FOR ULTRASONIC AGITATION MEDIATED KINETIC RELEASE TESTING OF COMPOUNDS
2y 5m to grant; granted Nov 25, 2025
Patent 12471522
RICE AND WHEAT NITROGEN NUTRITION MULTISPECTRAL DIAGNOSIS METHOD FOR PRECISE FERTILIZATION BY UNMANNED AERIAL VEHICLES
2y 5m to grant; granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
95%
With Interview (+18.9%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
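The projection numbers above are internally consistent, and the arithmetic can be checked directly. This is a sketch assuming the dashboard simply divides grants by resolved cases and adds the interview lift as percentage points; the actual model behind the dashboard is not documented here.

```python
# Examiner career statistics shown above
granted, resolved = 82, 108
interview_lift = 0.189  # +18.9 percentage points, per the dashboard

base = granted / resolved            # career allow rate
with_interview = base + interview_lift

print(f"{base:.0%}")            # 76%
print(f"{with_interview:.0%}")  # 95%
```

Rounding 82/108 = 0.759 gives the 76% headline grant probability, and 0.759 + 0.189 = 0.948 rounds to the 95% with-interview figure.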
