Prosecution Insights
Last updated: April 19, 2026
Application No. 18/265,546

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM

Non-Final OA: §101, §103
Filed: Jun 06, 2023
Examiner: LAGOY, KYRA RAND
Art Unit: 3685
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Suntory Holdings Limited
OA Round: 3 (Non-Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 14 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 54 across all art units (40 currently pending)

Statute-Specific Performance

§101: 38.8% (-1.2% vs TC avg)
§103: 33.6% (-6.4% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Tech Center average is an estimate; based on career data from 14 resolved cases.
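If the "vs TC avg" figures are simple differences between the examiner's per-statute rate and the Tech Center average, the implied TC average can be recovered by subtraction. A quick sketch (the numbers come from the table above; the metric definitions are this dashboard's, not USPTO-official):

```python
# Recover the implied Tech Center average for each statute:
# implied TC avg = examiner rate - (examiner rate - TC avg) delta.
examiner = {"§101": 38.8, "§103": 33.6, "§102": 15.5, "§112": 11.3}
delta_vs_tc = {"§101": -1.2, "§103": -6.4, "§102": -24.5, "§112": -28.7}

implied_tc_avg = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(implied_tc_avg)
# Every statute implies the same 40.0% baseline, consistent with a single
# TC-wide "average estimate" rather than per-statute TC averages.
```

That all four deltas point back to one 40.0% baseline suggests the dashboard compares each statute against a single Tech Center estimate.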

Office Action

§101, §103
DETAILED CORRESPONDENCE

This is a non-final Office action on the merits in response to the arguments and/or amendments filed on 11/05/2025 and the request for continued examination filed on 12/10/2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 16 and 17 are cancelled. Amendments to claims 1, 18, and 19 are acknowledged and have been carefully considered. Claims 1-15 and 18-20 are pending and considered below.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Under Step 1, the analysis is based on MPEP 2106.03. Claims 1-15 and 20 are drawn to an information processing apparatus, claim 18 is drawn to an information processing method, and claim 19 is drawn to a non-transitory computer-readable recording medium. Thus, each claim, on its face, is directed to one of the statutory categories (i.e., process, machine, manufacture, or composition of matter) of 35 U.S.C. § 101.
Step 2A, Prong One

Claim 1 recites the limitations of acquiring a gut score related to a gut condition of the user, using input information containing the sound information acquired by the sound information acquiring unit and learning information prepared in advance; outputting the gut score acquired by the gut score acquiring unit; acquiring device identifying information for identifying the type of device used to acquire abdominal sounds corresponding to the sound information, wherein multiple pieces of learning information are each prepared in association with the device identifying information; selecting learning information corresponding to the device identifying information acquired by the device identifying information acquiring unit among the multiple pieces of learning information; and acquiring the gut score using the learning information corresponding to the device identifying information acquired by the device identifying information acquiring unit.

These limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind or with pen and paper. But for the “a gut score acquiring unit”, “a gut score output unit”, and “a device identifying information acquiring unit” language, the claim encompasses a user simply observing abdominal sound information, identifying the device type, selecting the appropriate evaluation criteria, evaluating the information using predetermined standards, and assigning a gut score in their mind or with pen and paper. The mere nominal recitation of “a gut score acquiring unit”, “a gut score output unit”, and “a device identifying information acquiring unit” does not take the claim limitations out of the mental processes grouping. Thus, the claim recites a mental process, which is an abstract idea.
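For concreteness, the device-keyed model selection the claim describes (identify the device type, select the matching "learning information", compute a score) can be sketched in a few lines. This is a hypothetical illustration only: the class, function, lookup table, and scoring rule below are invented for this note and appear nowhere in the application or the cited references.

```python
# Hypothetical sketch of the claimed pipeline: select "learning information"
# (a trained model prepared per device type) keyed by device identifying
# information, then compute a score from the sound features.
from dataclasses import dataclass


@dataclass
class LearningInfo:
    """Stand-in for learning information prepared for one device type."""
    gain: float
    offset: float

    def score(self, sound_features: list[float]) -> float:
        # Placeholder scoring rule; a real system would run model inference.
        return self.gain * sum(sound_features) / len(sound_features) + self.offset


# Multiple pieces of learning information, each associated with a device ID.
LEARNING_INFO_BY_DEVICE = {
    "mic-model-A": LearningInfo(gain=1.0, offset=0.0),
    "mic-model-B": LearningInfo(gain=0.8, offset=5.0),
}


def acquire_gut_score(device_id: str, sound_features: list[float]) -> float:
    """Select the learning information matching the device, then score."""
    info = LEARNING_INFO_BY_DEVICE[device_id]
    return info.score(sound_features)


print(acquire_gut_score("mic-model-B", [10.0, 20.0, 30.0]))  # 0.8 * 20 + 5 = 21.0
```

The examiner's point is that, absent the recited units, each of these steps could be carried out mentally or on paper; the sketch simply makes the claimed data flow explicit.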
Independent claims 18 and 19 recite identical or nearly identical steps with respect to claim 1 (and therefore also recite limitations that fall within this subject matter grouping of abstract ideas), and these claims are therefore determined to recite an abstract idea under the same analysis.

Step 2A, Prong Two

The claimed limitations of claim 1 include: one or more processor; a memory device comprising instructions that, when executed, cause the one or more processor to implement: a sound information acquiring unit that acquires sound information regarding abdominal sounds, which are sounds emanating from the abdomen, of a user; a gut score acquiring unit that acquires a gut score related to a gut condition of the user, using input information containing the sound information acquired by the sound information acquiring unit and learning information prepared in advance; a gut score output unit that outputs the gut score acquired by the gut score acquiring unit; and a device identifying information acquiring unit that acquires device identifying information for identifying the type of device used to acquire abdominal sounds corresponding to the sound information, wherein multiple pieces of learning information are each prepared in association with the device identifying information, and the gut score acquiring unit selects learning information corresponding to the device identifying information acquired by the device identifying information acquiring unit among the multiple pieces of learning information, and the gut score acquiring unit acquires the gut score using the learning information corresponding to the device identifying information acquired by the device identifying information acquiring unit, wherein the information processing apparatus further comprises: a microphone for recording the abdominal sounds; and a display unit that displays the gut score output by the gut score output unit.
Examiner Note: underlined elements indicate additional elements of the claimed invention identified as performing the steps of the claimed invention.

The judicial exception expressed in claim 1 is not integrated into a practical application. The claim as a whole merely describes how to generally “apply” the concept of evaluating physiological information and assigning a gut condition score based on that evaluation in a computer environment. The claimed computer components (i.e., one or more processor; a memory device comprising instructions that, when executed, cause the one or more processor to implement; a sound information acquiring unit; a gut score acquiring unit; a gut score output unit; a device identifying information acquiring unit) are recited at a high level of generality and are merely invoked as tools to perform an existing process of observing health-related data, selecting evaluation criteria, and assigning a score based on judgment and predetermined standards. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application.

The claim also recites the additional elements of acquiring sound information regarding abdominal sounds, which are sounds emanating from the abdomen, of a user; a microphone for recording the abdominal sounds; and a display unit that displays the gut score output by the gut score output unit. These limitations are recited at a high level of generality (i.e., as a general means of collecting input data and presenting an evaluation result) and amount to merely gathering data and displaying a result, which is a form of insignificant extra-solution activity.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea. Therefore, under Step 2A, the claims are directed to an abstract idea and require further analysis under Step 2B.

Step 2B

Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A, the claim as a whole merely describes how to generally “apply” the concept of evaluating physiological information and assigning a gut condition score based on that evaluation in a computer environment. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. For claim 1, under Step 2B, the additional elements of acquiring sound information regarding abdominal sounds, which are sounds emanating from the abdomen, of a user; a microphone for recording the abdominal sounds; and a display unit that displays the gut score output by the gut score output unit have been evaluated. The information processing apparatus comprising one or more processors performs a general function of receiving patient data for subsequent processing, which represents a well-understood, routine, and conventional activity in the field of health data collection and analysis. The specification discloses that the processor is used in its ordinary capacity as a data input device and does not describe any improvement to the computer itself or to the functioning of the overall computer system (see [0138]). As also noted in Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016), merely collecting information for analysis and displaying the result without a technological improvement does not add significantly more to an abstract idea.
The use of the information processing apparatus is no more than collecting information before evaluating and scoring, and does not integrate the abstract idea into a practical application. Therefore, the claim does not recite an inventive concept and is not patent eligible.

Claims 2-4, 6-7, 9-11, and 14 recite no further additional elements and only further narrow the abstract idea. The previously identified additional elements, individually and in combination, do not integrate the narrowed abstract idea into a practical application, and do not amount to significantly more than the narrowed abstract idea, for reasons similar to those explained above.

Claims 5, 8, 12-13, 15, and 20 recite the additional elements of the gut score acquiring unit further including an excretion score acquiring unit (claim 5), the gut score acquiring unit further including an eating-and-drinking score acquiring unit (claim 8), the gut score acquiring unit further including an activity status score acquiring unit (claim 12), the gut score acquiring unit (claim 13), the gut score acquiring unit including an element score acquiring unit (claim 15), and the gut score acquiring unit comprising a gut-related score acquiring unit (claim 20). However, these additional elements amount to implementing an abstract idea on a generic computing device. As such, these additional elements, when considered individually or in combination with the prior devices, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. Thus, as the dependent claims remain directed to a judicial exception, and as the additional elements of the claims do not amount to significantly more, the dependent claims are not patent eligible. Therefore, the claims fail to contain any additional element(s), or combination of additional elements, that can be considered significantly more, and the claims are rejected under 35 U.S.C.
101 for lacking eligible subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 6, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Spiegel et al. (International Publication No. WO2016112127A1), referred to hereinafter as Spiegel, in view of Singh et al. (U.S. Patent Publication 2021/0090734A1), referred to hereinafter as Singh, and Patel et al. (U.S. Patent Publication 2019/0371311A1), referred to hereinafter as Patel.
Regarding claim 1, Spiegel teaches an information processing apparatus (Spiegel [00132] “The abdominal statistics system 10 described herein may be readily implemented to include one or more computer processor devices (e.g., CPU, microprocessor, microcontroller, computer enabled ASIC, etc.) and associated memory (e.g., RAM, DRAM, NVRAM, FLASH, computer readable media, etc.) whereby programming stored in the memory and executable on the processor perform the steps of the various process methods described herein.”), comprising: one or more processor (Spiegel [00132] “The abdominal statistics system 10 described herein may be readily implemented to include one or more computer processor devices (e.g., CPU, microprocessor, microcontroller, computer enabled ASIC, etc.) and associated memory (e.g., RAM, DRAM, NVRAM, FLASH, computer readable media, etc.) whereby programming stored in the memory and executable on the processor perform the steps of the various process methods described herein.”); a memory device comprising instructions that, when executed, cause the one or more processor to implement (Spiegel [00132] “The abdominal statistics system 10 described herein may be readily implemented to include one or more computer processor devices (e.g., CPU, microprocessor, microcontroller, computer enabled ASIC, etc.) and associated memory (e.g., RAM, DRAM, NVRAM, FLASH, computer readable media, etc.) whereby programming stored in the memory and executable on the processor perform the steps of the various process methods described herein.”): a sound information acquiring unit that acquires sound information regarding abdominal sounds, which are sounds emanating from the abdomen, of a user (Spiegel [0022] “The abdominal statistics system of the present description includes multiple product configurations including a low profile rapidly deployable sensor element that can be conveniently attached to the abdomen of a patient by either a belt or adhesive attachment method. 
The system acquires acoustic signals as gastrointestinal (GI) sounds, processes these signals, and provides actionable data to patients and their providers.”); wherein the information processing apparatus further comprises: a microphone for recording the abdominal sounds (Spiegel [0046] “In the side view of FIG. 3, a sensor 12 is shown with housing 42 bonded (e.g., via adhesive 44) to a mounting flange 40, under which is an adhesive mounting ring 46 shown coupled to a bandage 48 (e.g. Tegaderm bandage). It is appreciated that other bandage types and brands may be utilized with the abdominal statistics system 10 for attachment to patient abdominal tissue 16. Held within the sensor housing 42 is a printed circuit board 50 (of any desired material in the art), upon which are attached a sensor electrical connection 56, a sensor microphone 52, and a sensor vibration actuator 54.”); and a display unit that displays (Spiegel [0064] “The abdominal statistics application software 32 preferably includes acoustic analog signal processing, digital signal processing, computation, scheduling, and data display systems along with user interactive systems including a touch screen display.”).
Spiegel fails to explicitly teach a score acquiring unit that acquires a score related to a condition of the user, using input information containing the information acquired by the information acquiring unit and learning information prepared in advance; a score output unit that outputs the score acquired by the score acquiring unit; a device identifying information acquiring unit that acquires device identifying information for identifying the type of device used to acquire sounds corresponding to the sound information; wherein multiple pieces of learning information are each prepared in association with the device identifying information; the score acquiring unit selects learning information corresponding to the device identifying information acquired by the device identifying information acquiring unit among the multiple pieces of learning information; the score acquiring unit acquires the score using the learning information corresponding to the device identifying information acquired by the device identifying information acquiring unit; the score output by the score output unit.

Singh teaches a score acquiring unit that acquires a score related to a condition of the user, using input information containing the information acquired by the information acquiring unit and learning information prepared in advance (Singh [0020] “An aspect of the present disclosure pertains to a system for early detection of valvular heart disorders in a patient. The system can include: a recording unit that can be configured to record a set of heart sounds of the patient and store the set of heart sounds in a database operatively coupled to the recording unit; and a control unit having processors and a memory that can be operatively coupled to the processors. The memory storing instructions can be executable by the processors to enable the control unit to: segment the set of heart sounds into a plurality of slices, each of a predetermined length, and each of the plurality of slices can include at least one audio slice; convert the at least one audio slice into corresponding spectrograms; obtain a feature vector corresponding to the spectrograms; compare the obtained feature vector with a predetermined set of feature vectors that can be stored in the database; and classify each of the spectrograms into any or a combination of a normal spectrogram and an abnormal spectrogram, based on the comparison of the obtained feature vector with the predetermined set of feature vectors, to obtain classification scores associated with the spectrograms.”, and Singh [0022] “In an aspect, the control unit can be configured to classify, using a deep convolutional neural network (CNN) trained model, each of the spectrograms into any or a combination of the normal spectrogram and the abnormal spectrogram.”); a score output unit that outputs the score acquired by the score acquiring unit (Singh [0092] “In an embodiment, the control unit 106 can be configured to compute any or a combination of a mean and standard deviation of the classification scores to remove any deviation (anomaly etc.), if present, in the classification scores. The control unit can be configured to store an audio slice corresponding to an obtained higher classification score in any or a combination of database 114 or in a CNN training database. The CNN training database can serve as a growing training database for re-training the CNN model for improved accuracy. Further, the heart signal classification along with scores can be transferred to mobile application installed on remote computing or mobile device.”); the score acquiring unit selects learning information (Singh [0020] “An aspect of the present disclosure pertains to a system for early detection of valvular heart disorders in a patient. The system can include: a recording unit that can be configured to record a set of heart sounds of the patient and store the set of heart sounds in a database operatively coupled to the recording unit; and a control unit having processors and a memory that can be operatively coupled to the processors. The memory storing instructions can be executable by the processors to enable the control unit to: segment the set of heart sounds into a plurality of slices, each of a predetermined length, and each of the plurality of slices can include at least one audio slice; convert the at least one audio slice into corresponding spectrograms; obtain a feature vector corresponding to the spectrograms; compare the obtained feature vector with a predetermined set of feature vectors that can be stored in the database; and classify each of the spectrograms into any or a combination of a normal spectrogram and an abnormal spectrogram, based on the comparison of the obtained feature vector with the predetermined set of feature vectors, to obtain classification scores associated with the spectrograms.”, and Singh [0022] “In an aspect, the control unit can be configured to classify, using a deep convolutional neural network (CNN) trained model, each of the spectrograms into any or a combination of the normal spectrogram and the abnormal spectrogram.”); the score acquiring unit acquires the score (Singh [0020] “An aspect of the present disclosure pertains to a system for early detection of valvular heart disorders in a patient. The system can include: a recording unit that can be configured to record a set of heart sounds of the patient and store the set of heart sounds in a database operatively coupled to the recording unit; and a control unit having processors and a memory that can be operatively coupled to the processors. The memory storing instructions can be executable by the processors to enable the control unit to: segment the set of heart sounds into a plurality of slices, each of a predetermined length, and each of the plurality of slices can include at least one audio slice; convert the at least one audio slice into corresponding spectrograms; obtain a feature vector corresponding to the spectrograms; compare the obtained feature vector with a predetermined set of feature vectors that can be stored in the database; and classify each of the spectrograms into any or a combination of a normal spectrogram and an abnormal spectrogram, based on the comparison of the obtained feature vector with the predetermined set of feature vectors, to obtain classification scores associated with the spectrograms.”, and Singh [0022] “In an aspect, the control unit can be configured to classify, using a deep convolutional neural network (CNN) trained model, each of the spectrograms into any or a combination of the normal spectrogram and the abnormal spectrogram.”); the score output by the score output unit (Singh [0092] “In an embodiment, the control unit 106 can be configured to compute any or a combination of a mean and standard deviation of the classification scores to remove any deviation (anomaly etc.), if present, in the classification scores. The control unit can be configured to store an audio slice corresponding to an obtained higher classification score in any or a combination of database 114 or in a CNN training database. The CNN training database can serve as a growing training database for re-training the CNN model for improved accuracy.
Further, the heart signal classification along with scores can be transferred to mobile application installed on remote computing or mobile device.”).

Patel teaches a device identifying information acquiring unit that acquires device identifying information for identifying the type of device used to acquire sounds corresponding to the sound information (Patel [0039] “Once the phrase interpreter 312 receives the speech audio and the metadata, the phrase interpreter 312 (or some other component of the overall system or platform that performs the speech recognition) can decide which acoustic model would be the best for extracting phonemes. Some embodiments use only the model number or device type of the washing machine 306, and the phrase interpreter 312 is able to select an acoustic model that has been created or tuned for that specific device type. The same goes for the other possibilities of metadata, as described above. Furthermore, if the user of the washing machine 406 can be identified, then an acoustic model that is tuned for that specific user's voice can be implemented.”); wherein multiple pieces of learning information are each prepared in association with the device identifying information (Patel [0039] “Once the phrase interpreter 312 receives the speech audio and the metadata, the phrase interpreter 312 (or some other component of the overall system or platform that performs the speech recognition) can decide which acoustic model would be the best for extracting phonemes. Some embodiments use only the model number or device type of the washing machine 306, and the phrase interpreter 312 is able to select an acoustic model that has been created or tuned for that specific device type. The same goes for the other possibilities of metadata, as described above.
Furthermore, if the user of the washing machine 406 can be identified, then an acoustic model that is tuned for that specific user's voice can be implemented.”); corresponding to the device identifying information acquired by the device identifying information acquiring unit among the multiple pieces of learning information (Patel [0039] “Once the phrase interpreter 312 receives the speech audio and the metadata, the phrase interpreter 312 (or some other component of the overall system or platform that performs the speech recognition) can decide which acoustic model would be the best for extracting phonemes. Some embodiments use only the model number or device type of the washing machine 306, and the phrase interpreter 312 is able to select an acoustic model that has been created or tuned for that specific device type. The same goes for the other possibilities of metadata, as described above. Furthermore, if the user of the washing machine 406 can be identified, then an acoustic model that is tuned for that specific user's voice can be implemented.”); using the learning information corresponding to the device identifying information acquired by the device identifying information acquiring unit (Patel [0039] “Once the phrase interpreter 312 receives the speech audio and the metadata, the phrase interpreter 312 (or some other component of the overall system or platform that performs the speech recognition) can decide which acoustic model would be the best for extracting phonemes. Some embodiments use only the model number or device type of the washing machine 306, and the phrase interpreter 312 is able to select an acoustic model that has been created or tuned for that specific device type. The same goes for the other possibilities of metadata, as described above. 
Furthermore, if the user of the washing machine 406 can be identified, then an acoustic model that is tuned for that specific user's voice can be implemented.”);

It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to modify the gastrointestinal acoustic monitoring system of Spiegel to incorporate the machine learning acoustic classification techniques taught by Singh. Spiegel teaches acquiring gastrointestinal (GI) acoustic signals from a patient’s abdomen, processing those signals, and providing data to users. Singh teaches converting physiological acoustic signals (heart sounds) into spectrograms, extracting feature vectors, and applying a trained convolutional neural network (CNN) model to classify the sounds and generate associated classification scores. Because both references relate to analyzing physiological acoustic signals to determine a health condition, applying Singh’s known machine learning spectrogram classification techniques to Spiegel’s GI acoustic signals would have been a predictable use of prior art elements according to their established functions, in order to improve automated diagnostic accuracy and provide quantitative scoring. The substitution of one known physiological acoustic signal (heart sounds) with another (gastrointestinal sounds) involves no change in underlying signal processing principles and would have been well within the level of ordinary skill in the art.

It further would have been obvious to incorporate device-dependent model selection, as taught by Patel, into the combined Spiegel and Singh system. Patel teaches selecting among multiple pre-trained acoustic models based on metadata identifying the recording device type, in order to account for device-specific acoustic characteristics and thereby improve classification accuracy. Acoustic signal characteristics are known to vary depending on microphone type, hardware configuration, and signal preprocessing.
A PHOSITA would have recognized that gastrointestinal acoustic monitoring systems, such as Spiegel’s system, may also employ different sensors, attachment configurations, or microphone types, each affecting signal characteristics. Therefore, applying Patel’s device-specific model selection approach to select among multiple trained acoustic models based on device identifying information would have been an obvious design choice yielding predictable improvements in reliability and robustness. The combination of Spiegel, Singh, and Patel merely applies known signal acquisition, machine-learning classification, and device-adaptive model selection techniques to a closely related physiological monitoring context. Each reference performs the same function after combination as it did separately: Spiegel acquires GI sounds, Singh classifies physiological acoustic signals using trained models and produces scores, and Patel selects an acoustic model based on device metadata. The resulting system is no more than the predictable use of prior art elements according to their established functions to improve automated health condition scoring accuracy. Accordingly, the claimed invention would have been obvious under 35 U.S.C. § 103 in view of Spiegel in combination with Singh and Patel.

Regarding claim 2, Spiegel, Singh, and Patel teach the invention of claim 1, as discussed above, and further teach wherein the input information further contains life information regarding a life state of the user (Spiegel [00110] “(a) Abdominal statistics monitoring before, during and after a period of meal ingestion type, quantity, and schedule may be varied to enable development of a diagnostic model for an individual subject.”). It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the apparatus of claim 1 to further include input information containing life information regarding a life state of the user, as recited in claim 2.
Spiegel teaches monitoring abdominal statistics before, during, and after meal ingestion, and varying meal type, quantity, and schedule to develop a diagnostic model for an individual subject. Meal timing, quantity, and type constitute lifestyle or behavioral information reflecting the user’s life state. Because gastrointestinal acoustic activity is known to be influenced by lifestyle factors, particularly dietary behavior, a PHOSITA would have recognized that incorporating such life information into the input data would predictably improve the personalization and diagnostic accuracy of the gut condition assessment. Accordingly, including life state information represents a predictable use of known health data to enhance physiological signal interpretation. Regarding claim 6, Spiegel, Singh, and Patel teach the invention in claim 2, as discussed above, and further teach wherein the life information contains eating-and-drinking information regarding an eating-and-drinking status of the user (Spiegel [00110] “(a) Abdominal statistics monitoring before, during and after a period of meal ingestion type, quantity, and schedule may be varied to enable development of a diagnostic model for an individual subject.”). It would have been obvious to a PHOSITA to further specify that the life information includes eating-and-drinking information regarding an eating-and-drinking status of the user, as recited in claim 6. Spiegel discloses collecting data regarding meal ingestion type, quantity, and schedule, which constitutes eating-related information. It is well established in the medical and health monitoring arts that digestive activity and gastrointestinal sounds are directly affected by dietary intake. Therefore, incorporating eating-and-drinking status information into the system to contextualize and refine gut condition scoring would have been an obvious design choice yielding predictable improvements in diagnostic modeling.
The modification applies known dietary intake tracking to a known gastrointestinal monitoring system in a manner consistent with established physiological principles. Claims 18 and 19 are analogous to claim 1; thus, claims 18 and 19 are analyzed and rejected in a manner consistent with the rejection of claim 1. Claims 3-5, 7-8, 13-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Spiegel et al. (International Publication No. WO2016112127A1), referred to hereinafter as Spiegel, in view of Singh et al. (U.S. Patent Publication 2021/0090734A1), referred to hereinafter as Singh, and Patel et al. (U.S. Patent Publication 2019/0371311A1), referred to hereinafter as Patel, and further in view of Masamori et al. (International Publication No. WO2020075627A1), referred to hereinafter as Masamori. Regarding claim 3, Spiegel, Singh, and Patel teach the invention in claim 2, as discussed above. Spiegel, Singh, and Patel fail to explicitly teach wherein the life information contains excretion-related information regarding an excretion status of the user. Masamori teaches wherein the life information contains excretion-related information regarding an excretion status of the user (Masamori, page 6, “The calculation unit 12 may determine information regarding the contents in the digestive tract based on the obtained activity score. The information about the contents includes, for example, the presence / absence of the contents, the position of the contents, the moving speed of the contents, and the high possibility that the contents are excreted (for example, the possibility of being excreted when the user steps on the toilet). Height, etc.) and the time until the contents are excreted.”).
It would have been obvious to a person having ordinary skill in the art (PHOSITA) at the time of the invention to modify the gastrointestinal acoustic monitoring system of Spiegel to further include excretion-related information regarding an excretion status of the user, as taught by Masamori. Masamori discloses determining information regarding digestive tract contents, including the likelihood of excretion and the time until contents are excreted, which constitutes excretion information reflecting a user’s excretion status. Gastrointestinal motility and abdominal acoustic activity are physiologically related to bowel movement and excretion events. A PHOSITA would have recognized that incorporating excretion status information into a system analyzing gastrointestinal sounds would improve the accuracy and contextual interpretation of gut condition assessment. Combining Spiegel’s abdominal sound monitoring with Masamori’s excretion status determination represents the predictable use of known physiological indicators to enhance diagnostic modeling, and therefore would have been obvious. Regarding claim 4, Spiegel, Singh, Patel, and Masamori teach the invention in claim 3, as discussed above, and further teach wherein the excretion-related information contains information indicated by Bristol Stool Form Scale input by the user (Masamori, page 6, “The information on excretion includes, for example, the time of excretion, the time of feeling feces, the amount of excrement (for example, a metaphorical expression based on the number of bananas), the hardness of excrement (for example, the classification of feces on the Bristol scale).”). It would have been obvious to a person having ordinary skill in the art (PHOSITA) at the time of the invention to further specify that the excretion-related information includes information indicated by the Bristol Stool Form Scale, as taught by Masamori. 
Masamori discloses that excretion-related information may include hardness of excrement classified according to the Bristol scale. The Bristol Stool Form Scale is a well-established medical tool for evaluating bowel condition and gastrointestinal function. Because gastrointestinal acoustic activity is physiologically correlated with bowel motility and stool characteristics, a PHOSITA would have recognized that incorporating standardized stool classification data into a gastrointestinal acoustic monitoring system would improve the accuracy and interpretation of gut condition assessment. The combination represents the predictable use of a known clinical indicator of bowel health with known gastrointestinal sound analysis techniques to enhance diagnostic reliability, and therefore would have been obvious. Regarding claim 5, Spiegel, Singh, Patel, and Masamori teach the invention in claim 3, as discussed above, and further teach wherein the gut score acquiring unit further includes an excretion score acquiring unit that acquires an excretion score based on the excretion-related information, and acquires the gut score using the excretion score acquired by the excretion score acquiring unit (Singh [0020] “An aspect of the present disclosure pertains to a system for early detection of valvular heart disorders in a patient. The system can include: a recording unit that can be configured to record a set of heart sounds of the patient and store the set of heart sounds in a database operatively coupled to the recording unit; and a control unit having processors and a memory that can be operatively coupled to the processors. 
The memory storing instructions can be executable by the processors to enable the control unit to: segment the set of heart sounds into a plurality of slices, each of a predetermined length, and each of the plurality of slices can include at least one audio slice; convert the at least one audio slice into corresponding spectrograms; obtain a feature vector corresponding to the spectrograms; compare the obtained feature vector with a predetermined set of feature vectors that can be stored in the database; and classify each of the spectrograms into any or a combination of a normal spectrogram and an abnormal spectrogram, based on the comparison of the obtained feature vector with the predetermined set of feature vectors, to obtain classification scores associated with the spectrograms.”, and Singh [0022] “In an aspect, the control unit can be configured to classify, using a deep convolutional neural network (CNN) trained model, each of the spectrograms into any or a combination of the normal spectrogram and the abnormal spectrogram.” and Spiegel [0022] “The abdominal statistics system of the present description includes multiple product configurations including a low profile rapidly deployable sensor element that can be conveniently attached to the abdomen of a patient by either a belt or adhesive attachment method.
The system acquires acoustic signals as gastrointestinal (GI) sounds, processes these signals, and provides actionable data to patients and their providers.”, and Masamori, page 6, “The information on excretion includes, for example, the time of excretion, the time of feeling feces, the amount of excrement (for example, a metaphorical expression based on the number of bananas), the hardness of excrement (for example, the classification of feces on the Bristol scale).”, and Masamori, page 12, “The extraction step of the computing device 32 extracts information about the activity of the peristaltic movement from the measurement information about the bioactivity acquired by the acquisition device 31. Further, the calculation step of the calculation device 32 obtains an activity score indicating the degree of activity of the peristaltic movement, based on the information regarding the activity of the peristaltic movement. The detailed steps of the acquisition device 31 and the calculation device 32 are the same as the detailed steps of the acquisition unit 11 and the calculation unit 12 of the peristaltic movement automatic measurement device 1 described above. The arithmetic unit 32 can be realized as a physical server or a virtual server, for example. The peristaltic movement automatic measurement system 30 may include a plurality of acquisition devices 31 or a plurality of arithmetic devices 32.”). It would have been obvious to a person having ordinary skill in the art (PHOSITA) at the time of the invention to modify the gastrointestinal acoustic monitoring system of Spiegel to further include an excretion score acquiring unit that acquires an excretion score based on excretion information and to acquire a gut score using the excretion score, as recited in claim 5.
Masamori teaches collecting excretion information, including time of excretion, amount of excrement, and stool hardness classified according to the Bristol Stool Form Scale, and further teaches extracting peristaltic movement information and calculating an activity score indicating the degree of peristaltic activity. This activity score constitutes a quantified physiological indicator related to bowel motility and excretion status. Singh teaches generating classification scores from physiological acoustic data using machine learning models, demonstrating that converting physiological signals into quantitative scores for diagnostic assessment was well known in the art. A PHOSITA would have recognized that assigning a quantitative score to excretion-related information, such as peristaltic activity or stool characteristics, and incorporating that score into an overall gut condition assessment would have been a predictable use of known scoring techniques to improve diagnostic interpretability and personalization. Combining Spiegel’s gastrointestinal acoustic monitoring with Masamori’s excretion scoring and Singh’s physiological scoring framework represents the predictable integration of multiple known physiological indicators into a composite health evaluation system, and therefore would have been obvious. Regarding claim 7, Spiegel, Singh, and Patel teach the invention in claim 6, as discussed above. Spiegel, Singh, and Patel fail to explicitly teach wherein the eating-and-drinking information contains at least one of information regarding the amount of water consumed, information regarding whether or not alcohol was consumed or the amount of alcohol consumed, information regarding whether or not a meal was taken or the content thereof, and information regarding whether or not a particular group of food was consumed or the amount thereof consumed. 
Masamori teaches wherein the eating-and-drinking information contains at least one of information regarding the amount of water consumed, information regarding whether or not alcohol was consumed or the amount of alcohol consumed, information regarding whether or not a meal was taken or the content thereof, and information regarding whether or not a particular group of food was consumed or the amount thereof consumed (Masamori, page 6, “Alternatively, the peristaltic movement automatic measurement device 1 may be provided with the input unit 17, and the user may be prompted to input information regarding the contents. The information input by the user includes, for example, information about food and drink put in the mouth, information about excretion, and the like. The information about the food and drink put in the mouth includes, for example, the time when the food and drink are put in the mouth, the type of food and drink (vegetables, meats, etc.), the amount of food and drink (for example, the user with respect to the entire food and drink provided. The ratio of eating and drinking) and the like.”). It would have been obvious to a person having ordinary skill in the art (PHOSITA) at the time of the invention to modify the gastrointestinal acoustic monitoring system of Spiegel to further include eating-and-drinking information such as the amount of water consumed, alcohol consumption, meal intake, food content, or specific food group consumption, as taught by Masamori. Masamori discloses prompting a user to input information regarding food and drink put in the mouth, including the time of ingestion, type of food (vegetables, meats), and amount of food and drink consumed. Such disclosures encompass meal content, food groups, and quantities of consumption, and reasonably include beverages such as water and alcohol. 
Because gastrointestinal acoustic activity and peristaltic movement are directly affected by dietary intake, a PHOSITA would have recognized that incorporating detailed eating-and-drinking information into a gastrointestinal monitoring system would predictably improve contextual interpretation and diagnostic accuracy of gut condition assessment. The modification represents the predictable integration of known dietary intake tracking with known gastrointestinal sound analysis techniques, and therefore would have been obvious. Regarding claim 8, Spiegel, Singh, and Patel teach the invention in claim 6, as discussed above, and further teach wherein the gut score acquiring unit further includes an eating-and-drinking score acquiring unit that acquires an eating-and-drinking score based on the eating-and-drinking information, and acquires the gut score using the eating-and-drinking score acquired by the eating-and-drinking score acquiring unit (Singh [0020] “An aspect of the present disclosure pertains to a system for early detection of valvular heart disorders in a patient. The system can include: a recording unit that can be configured to record a set of heart sounds of the patient and store the set of heart sounds in a database operatively coupled to the recording unit; and a control unit having processors and a memory that can be operatively coupled to the processors.
The memory storing instructions can be executable by the processors to enable the control unit to: segment the set of heart sounds into a plurality of slices, each of a predetermined length, and each of the plurality of slices can include at least one audio slice; convert the at least one audio slice into corresponding spectrograms; obtain a feature vector corresponding to the spectrograms; compare the obtained feature vector with a predetermined set of feature vectors that can be stored in the database; and classify each of the spectrograms into any or a combination of a normal spectrogram and an abnormal spectrogram, based on the comparison of the obtained feature vector with the predetermined set of feature vectors, to obtain classification scores associated with the spectrograms.”, and Singh [0022] “In an aspect, the control unit can be configured to classify, using a deep convolutional neural network (CNN) trained model, each of the spectrograms into any or a combination of the normal spectrogram and the abnormal spectrogram.” and Spiegel [0022] “The abdominal statistics system of the present description includes multiple product configurations including a low profile rapidly deployable sensor element that can be conveniently attached to the abdomen of a patient by either a belt or adhesive attachment method. The system acquires acoustic signals as gastrointestinal (GI) sounds, processes these signals, and provides actionable data to patients and their providers.”). Spiegel, Singh, and Patel fail to explicitly teach eating-and-drinking information. Masamori teaches eating-and-drinking information (Masamori, page 6, “Alternatively, the peristaltic movement automatic measurement device 1 may be provided with the input unit 17, and the user may be prompted to input information regarding the contents. The information input by the user includes, for example, information about food and drink put in the mouth, information about excretion, and the like.
The information about the food and drink put in the mouth includes, for example, the time when the food and drink are put in the mouth, the type of food and drink (vegetables, meats, etc.), the amount of food and drink (for example, the user with respect to the entire food and drink provided. The ratio of eating and drinking) and the like.”). It would have been obvious to a person having ordinary skill in the art (PHOSITA) at the time of the invention to modify the gastrointestinal acoustic monitoring system of Spiegel to further include an eating-and-drinking score acquiring unit that acquires an eating-and-drinking score based on eating-and-drinking information and to acquire the gut score using the eating-and-drinking score, as recited in claim 8. Masamori teaches prompting a user to input detailed information regarding food and drink consumption, including time, type (vegetables, meats), and amount of food and drink ingested. Singh teaches converting physiological input data into quantitative classification scores using computational models, demonstrating that generating numerical scores from health data for diagnostic evaluation was well known in the art. Because gastrointestinal activity and abdominal acoustic signals are directly influenced by dietary intake, a PHOSITA would have recognized that assigning a quantitative score to eating-and-drinking information and incorporating that score into an overall gut condition assessment would have been a predictable use of known scoring techniques to enhance personalization. The combination represents the predictable integration of known dietary intake tracking with known physiological scoring systems to generate a composite health metric, and therefore would have been obvious. 
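As a purely illustrative sketch of the scoring arrangement described above, in which an eating-and-drinking sub-score is acquired from intake information and folded into an overall gut score, the following uses hypothetical weights and rules drawn from no reference:

```python
# Toy heuristic: none of these constants come from Spiegel, Singh,
# Patel, or Masamori; they only illustrate sub-score blending.
def eating_drinking_score(meals_per_day, water_ml, alcohol_units):
    """Map self-reported intake to a 0-100 sub-score."""
    score = 50.0
    score += 10.0 * min(meals_per_day, 3)   # reward regular meals
    score += min(water_ml / 100.0, 20.0)    # reward hydration, capped
    score -= 5.0 * alcohol_units            # penalize alcohol intake
    return max(0.0, min(100.0, score))

def gut_score(acoustic_score, eat_drink_score, w_acoustic=0.7):
    """Blend the acoustic classification score (0-100) with the
    eating-and-drinking sub-score (0-100) into a composite gut score."""
    return w_acoustic * acoustic_score + (1.0 - w_acoustic) * eat_drink_score
```

For example, an acoustic score of 80 combined with a perfect intake sub-score of 100 yields a composite gut score of 86 under the 0.7/0.3 weighting.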
Regarding claim 13, Spiegel, Singh, and Patel teach the invention in claim 1, as discussed above, and further teach wherein the learning information is generated such that learning input information containing sound information is taken as information that is to be input, and the gut score acquiring unit acquires the gut score acquired using the learning information (Singh [0020] “An aspect of the present disclosure pertains to a system for early detection of valvular heart disorders in a patient. The system can include: a recording unit that can be configured to record a set of heart sounds of the patient and store the set of heart sounds in a database operatively coupled to the recording unit; and a control unit having processors and a memory that can be operatively coupled to the processors. The memory storing instructions can be executable by the processors to enable the control unit to: segment the set of heart sounds into a plurality of slices, each of a predetermined length, and each of the plurality of slices can include at least one audio slice; convert the at least one audio slice into corresponding spectrograms; obtain a feature vector corresponding to the spectrograms; compare the obtained feature vector with a predetermined set of feature vectors that can be stored in the database; and classify each of the spectrograms into any or a combination of a normal spectrogram and an abnormal spectrogram, based on the comparison of the obtained feature vector with the predetermined set of feature vectors, to obtain classification scores associated with the spectrograms.”, and Singh [0022] “In an aspect, the control unit can be configured to classify, using a deep convolutional neural network (CNN) trained model, each of the spectrograms into any or a combination of the normal spectrogram and the abnormal spectrogram.” and Spiegel [0022] “The abdominal statistics system of the present description includes multiple product configurations including a low profile
rapidly deployable sensor element that can be conveniently attached to the abdomen of a patient by either a belt or adhesive attachment method. The system acquires acoustic signals as gastrointestinal (GI) sounds, processes these signals, and provides actionable data to patients and their providers.”). Spiegel, Singh, and Patel fail to explicitly teach wherein a value of a predetermined output indicator regarding an activity state of the guts is taken as information that is to be output, and using the value of the output indicator. Masamori teaches wherein a value of a predetermined output indicator regarding an activity state of the guts is taken as information that is to be output, and using the value of the output indicator (Masamori, page 8, “Therefore, the calculation unit 12 may infer the position, the moving direction, or the like of the content based on the change over time in the activity score. For example, as shown in FIG. 7, the activity scores of "ascending colon, transverse colon, descending colon, and sigmoid colon" arranged in order from the anus are (1) (2) (3) (4). It is assumed that the order has changed. State (1) has the respective activity scores of “3, 1, 1, 1”, state (2) has the respective activity scores of “1, 3, 1, 1”, and the state (3) Assume that each has an activity score of "1, 1, 3, 1" and state (4) has each of an activity score of "1, 1, 1, 3". In FIG. 7, in the state (1), the activity score of the ascending colon far from the anus is 3 and is high, but the activity score of the sigmoid colon near the anus is 1 and is low. Since it is estimated that the content is present at a location with a high activity score, it can be inferred that the content is present at a position far from the anus. After that, as the state changes to (2), (3), and (4), the part with a high activity score approaches the anus. From this event it can be inferred that the contents are moving towards the anus.
Further, the calculation unit 12 may predict the likelihood of excretion of the content based on the estimated movement distance of the content, the time required for the movement, and the length of the digestive tract.”). A person of ordinary skill in the art would have found it obvious to generate learning information in which sound information is used as input and a gut activity indicator is used as output, and to acquire a gut score using the predicted output value. Singh teaches training a machine learning model using physiological sound data (heart sounds) as input and producing a physiological condition classification output using a trained CNN model, thereby demonstrating a supervised learning framework mapping acoustic features to a biological state indicator. Spiegel teaches acquiring gastrointestinal acoustic signals from a patient for physiological assessment, establishing the analogous signal domain, and Masamori teaches determining a quantitative indicator of gastrointestinal activity state, including movement of intestinal contents and likelihood of excretion, which provides the target output variable. Because the problem addressed in each reference is the interpretation of biological acoustic signals to determine a physiological condition, a PHOSITA would have been motivated to apply Singh’s known supervised learning mapping of acoustic features to physiological state to Spiegel’s gastrointestinal sounds using Masamori’s gut activity state indicator as the training output in order to automatically evaluate gut condition. This represents the predictable application of known machine learning signal classification techniques to a similar physiological signal and known target parameter, and therefore acquiring a gut score using a learned output indicator would have been obvious. 
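The supervised-learning framework described above, with sound-derived features as learning input and a gut-activity output indicator as learning output, can be illustrated with a toy model. Ordinary least squares on synthetic data stands in for Singh's trained CNN; nothing below is taken from the references:

```python
import numpy as np

# Synthetic stand-in data: feature vectors extracted from recordings
# (learning input) and output-indicator values such as peristalsis
# events per unit time (learning output).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                  # 200 recordings x 8 features
true_w = rng.normal(size=8)                    # hidden ground-truth mapping
y = X @ true_w + 0.05 * rng.normal(size=200)   # indicator values, with noise

# "Learning information": fit the input-to-output mapping.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_gut_indicator(features):
    """Acquire the output-indicator value for a new feature vector."""
    return float(features @ w)
```

A gut score would then be acquired from the predicted indicator value, mirroring the mapping of acoustic features to a physiological state that the rejection attributes to the combination.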
Regarding claim 14, Spiegel, Singh, Patel, and Masamori teach the invention in claim 13, as discussed above, and further teach wherein the output indicator is at least one of a bowel movement state and the number of peristalsis movements of the guts per unit time (Masamori, page 8, as reproduced above in the rejection of claim 13, and Masamori, page 11, “Examples of the information regarding the arithmetic unit 12 stored in the storage unit 13 include information regarding peristaltic movement, information regarding contents, information regarding the digestive tract, and the like.
The information about the peristaltic movement includes, for example, a threshold of the activity score, a history of the activity score, a time of the peristaltic movement, a weighting coefficient for each position to be measured, a pattern of a combination of one or more activity scores, and the like.”). A person of ordinary skill in the art would have found it obvious to express the output indicator as the bowel movement condition or number of peristaltic movements because such values are quantitative representations of Masamori’s determined intestinal motility behavior. Selecting and reporting these particular parameters from the set of Masamori’s disclosed gastrointestinal activity metrics would have been a routine design choice to improve usability of the monitoring result, yielding predictable results. Therefore, the claimed output indicator would have been obvious. Regarding claim 15, Spiegel, Singh, and Patel teach the invention in claim 1, as discussed above, and further teach wherein the gut score acquiring unit acquires the gut score (Singh [0020] “An aspect of the present disclosure pertains to a system for early detection of valvular heart disorders in a patient. The system can include: a recording unit that can be configured to record a set of heart sounds of the patient and store the set of heart sounds in a database operatively coupled to the recording unit; and a control unit having processors and a memory that can be operatively coupled to the processors.
The memory storing instructions can be executable by the processors to enable the control unit to: segment the set of heart sounds into a plurality of slices, each of a predetermined length, and each of the plurality of slices can include at least one audio slice; convert the at least one audio slice into corresponding spectrograms; obtain a feature vector corresponding to the spectrograms; compare the obtained feature vector with a predetermined set of feature vectors that can be stored in the database; and classify each of the spectrograms into any or a combination of a normal spectrogram and an abnormal spectrogram, based on the comparison of the obtained feature vector with the predetermined set of feature vectors, to obtain classification scores associated with the spectrograms.”, and Singh [0022] “In an aspect, the control unit can be configured to classify, using a deep convolutional neural network (CNN) trained model, each of the spectrograms into any or a combination of the normal spectrogram and the abnormal spectrogram.”, and Singh [0054] “In an aspect, the method can include steps of: computing, at the processors, any or a combination of a mean and standard deviation of the classification scores to remove any deviation, if present, in the classification scores; and storing, in the database, an audio slice corresponding to an obtained higher classification score.” and Spiegel [0022] “The abdominal statistics system of the present description includes multiple product configurations including a low profile rapidly deployable sensor element that can be conveniently attached to the abdomen of a patient by either a belt or adhesive attachment method. The system acquires acoustic signals as gastrointestinal (GI) sounds, processes these signals, and provides actionable data to patients and their providers.”).
Spiegel, Singh, and Patel fail to explicitly teach wherein the gut score acquiring unit includes an element score acquiring unit that acquires element scores respectively for two or more evaluation elements based on the input information, and acquires the gut score using the element scores acquired by the element score acquiring unit, and the score output unit further outputs a radar chart using the element scores acquired by the element score acquiring unit. Masamori teaches wherein the gut score acquiring unit includes an element score acquiring unit that acquires element scores respectively for two or more evaluation elements based on the input information, and acquires the gut score using the element scores acquired by the element score acquiring unit, and the score output unit further outputs a radar chart using the element scores acquired by the element score acquiring unit (Masamori, page 6, “The information on excretion includes, for example, the time of excretion, the time of feeling feces, the amount of excrement (for example, a metaphorical expression based on the number of bananas), the hardness of excrement (for example, the classification of feces on the Bristol scale).”, and Masamori, page 12, “The extraction step of the computing device 32 extracts information about the activity of the peristaltic movement from the measurement information about the bioactivity acquired by the acquisition device 31. Further, the calculation step of the calculation device 32 obtains an activity score indicating the degree of activity of the peristaltic movement, based on the information regarding the activity of the peristaltic movement. The detailed steps of the acquisition device 31 and the calculation device 32 are the same as the detailed steps of the acquisition unit 11 and the calculation unit 12 of the peristaltic movement automatic measurement device 1 described above. The arithmetic unit 32 can be realized as a physical server or a virtual server, for example.
The peristaltic movement automatic measurement system 30 may include a plurality of acquisition devices 31 or a plurality of arithmetic devices 32.”), Masamori, page 6, “Alternatively, the peristaltic movement automatic measurement device 1 may be provided with the input unit 17, and the user may be prompted to input information regarding the contents. The information input by the user includes, for example, information about food and drink put in the mouth, information about excretion, and the like. The information about the food and drink put in the mouth includes, for example, the time when the food and drink are put in the mouth, the type of food and drink (vegetables, meats, etc.), the amount of food and drink (for example, the user with respect to the entire food and drink provided. The ratio of eating and drinking) and the like. The information on excretion includes, for example, the time of excretion, the time of feeling feces, the amount of excrement (for example, a metaphorical expression based on the number of bananas), the hardness of excrement (for example, the classification of feces on the Bristol scale). Can be mentioned.” and Masamori, page 10, “FIG. 12 shows an example of a flowchart for determining the information attached to the graphic corresponding to the activity score. In this example, the activity score has five levels, and the graphic is colored. First, when the activity score is 5 (S15: Yes), it is determined that the color attached to the figure is "red" (S16). When the activity score is not 5 (S15: No) and is 4 (S16: Yes), the color attached to the figure is determined to be "orange" (S17). When the activity score is not 4 (S16: No) but 3 (S18: Yes), the color attached to the figure is determined to be "yellow" (S19). When the activity score is not 3 (S18: No) and is 2 (S20: Yes), the color attached to the figure is determined to be "green" (S21). 
When the activity score is not 2 (S20: No), the activity score is 1, so that the color attached to the graphic is determined to be "blue" (S22). The display step of the display unit 16 may combine a plurality of pieces of information and attach them to a figure. For example, if the activity score is high, the number “3” may be attached to the figure, and the number “3” may be displayed in red. Furthermore, the display step of the display unit 16 may change the display area of the graphic corresponding to the activity score based on the activity score. As shown in FIG. 13, the higher the activity score, the larger the area of the graphic can be displayed. Conversely, the lower the activity score, the smaller the area of the graphic may be displayed. It is intuitive to understand the difference in size of objects with concrete shapes, rather than the difference in numbers and letters that need to be understood. Therefore, the display step of the display unit 16 changes the display area of the graphic corresponding to the activity score, so that the user can intuitively understand the degree of activity. The display step of the display unit 16 may display not only the graphic corresponding to the activity score but also the graphic corresponding to the contents. For example, a graphic with colors, characters, alphanumeric characters, symbols, images, or the like may be displayed at the position of the contents.”). A person of ordinary skill in the art would have found it obvious to acquire element scores for multiple gastrointestinal evaluation elements, determine a composite gut score from those element scores, and display the element scores in a radar chart format. Singh teaches extracting physiological audio features and generating multiple classification scores from biological sound data, which teaches acquisition of element scores from sensed biological input. 
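The Masamori flowchart quoted above (steps S15 through S22) and the area scaling of FIG. 13 amount to a lookup from a five-level activity score to a display color, plus a monotonic size mapping. The sketch below is illustrative only; the linear area formula is an assumption, since Masamori only requires that a higher score yield a larger graphic.

```python
# Five-level score-to-color lookup, per the S15-S22 branches quoted from Masamori.
SCORE_COLORS = {5: "red", 4: "orange", 3: "yellow", 2: "green", 1: "blue"}

def color_for(activity_score):
    """Return the display color for a five-level activity score."""
    if activity_score not in SCORE_COLORS:
        raise ValueError("activity score must be an integer from 1 to 5")
    return SCORE_COLORS[activity_score]

def display_area(activity_score, base_area=100.0):
    # FIG. 13: higher activity score, larger graphic. The linear scaling here
    # is an assumption; the reference only requires the mapping to be monotonic.
    return base_area * activity_score / 5.0
```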
Masamori teaches deriving multiple gastrointestinal condition indicators, including peristaltic activity, excretion information, and digestive state, and evaluating bowel condition using activity scores, which teaches combining multiple evaluation elements into an overall physiological assessment. Masamori further teaches graphically presenting score magnitudes using colored and size-varying graphics to allow intuitive comparison of multiple condition parameters. Because multivariate health metrics are conventionally visualized using comparative graphical formats, a PHOSITA would have been motivated to implement a radar chart as a known graphical representation for simultaneously displaying multiple score magnitudes. The modification substitutes one known comparative visualization format for another to improve interpretability and would have yielded predictable results; therefore the claimed radar chart output would have been obvious. Claim 20 is analogous to claims 13-15; thus, claim 20 is similarly analyzed and rejected in a manner consistent with the rejection of claims 13-15. Claims 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Spiegel et al. (International Publication No. WO2016112127A1), referred to hereinafter as Spiegel, in view of Singh et al. (U.S. Patent Publication 2021/0090734A1), referred to hereinafter as Singh, and Patel et al. (U.S. Patent Publication 2019/0371311A1), referred to hereinafter as Patel, and further in view of Kinnunen et al. (U.S. Patent Publication 2018/0042540A1), referred to hereinafter as Kinnunen. Regarding claim 9, Spiegel, Singh, and Patel teach the invention in claim 2, as discussed above. Spiegel, Singh, and Patel fail to explicitly teach wherein the life information contains activity status information regarding an activity status of the user. 
Kinnunen teaches wherein the life information contains activity status information regarding an activity status of the user (Kinnunen [0068] “The ring or other device is configured to measure at least one biosignal of the user, and optionally the user's movements, which may be referred to as ‘raw data’ associated with the user. Further, the measured data is associated with the activity period and the rest period, as may be relevant. The term ‘activity period’ used herein refers to those periods of a day when the user is subjected to any physical activity, such as when the user is exercising, walking, playing or attending to normal day to day tasks. Further, the term ‘rest period’ used herein primarily relates to a sleeping period of the user in a day. However, the rest period may also include time period when the user is sitting or lying down to relax. The movements of the user are measured or obtained from a separate device, and used to determine whether the user is active or resting, i.e. to select the nature of the period.”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to configure the life information of the base reference to include activity status information as taught by Kinnunen. Kinnunen discloses determining whether a user is in an activity period or a rest period based on measured biosignals and movement data, which provides information regarding the user’s activity status. A PHOSITA would have recognized that activity state is a commonly used contextual parameter associated with physiological or user data because the interpretation and usefulness of such data depends on whether the user is active or at rest. Incorporating activity status information into life information therefore represents the predictable use of known contextual data to improve interpretation of collected user data, and involves applying a known technique to a known system to obtain predictable results. 
Accordingly, modifying the base reference to include activity status information as taught by Kinnunen would have been obvious. Regarding claim 10, Spiegel, Singh, Patel, and Kinnunen teach the invention in claim 9, as discussed above, and further teach wherein the activity status information contains at least one of sleep information regarding sleep and exercise information regarding exercise (Kinnunen [0068] “The ring or other device is configured to measure at least one biosignal of the user, and optionally the user's movements, which may be referred to as ‘raw data’ associated with the user. Further, the measured data is associated with the activity period and the rest period, as may be relevant. The term ‘activity period’ used herein refers to those periods of a day when the user is subjected to any physical activity, such as when the user is exercising, walking, playing or attending to normal day to day tasks. Further, the term ‘rest period’ used herein primarily relates to a sleeping period of the user in a day. However, the rest period may also include time period when the user is sitting or lying down to relax. The movements of the user are measured or obtained from a separate device, and used to determine whether the user is active or resting, i.e. to select the nature of the period.” and Kinnunen [0084] “In another example, the deep data analysis includes determining a sleeping pattern of the user. Specifically, the data from the motion sensor may be processed by the mobile communication device to determine the sleeping pattern of the user. For example, based on the data from the motion sensor when the user went to bed and woke up can be identified. Also, based on the data from the motion sensor how long the user slept can be determined. Therefore, the data (i.e. when the user went to bed, when the user woke up and how long the user slept) enables in defining the sleeping pattern of the user.”). 
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to configure the activity status information to include sleep information and/or exercise information as taught by Kinnunen. Kinnunen discloses determining whether a user is in an activity period, including exercising or other physical activities, and a rest period corresponding to sleeping, and further determining a sleeping pattern including sleep duration and timing. A PHOSITA would have recognized that distinguishing specific activity subclasses such as sleep and exercise is a routine refinement of general activity status because these states have distinct physiological and contextual significance and are commonly recorded separately to improve interpretation of user data. Incorporating such known activity subclasses therefore represents the predictable use of known techniques to improve data granularity and usability, and involves applying a known classification scheme to a known system to obtain predictable results. Accordingly, modifying the base reference to include sleep information and/or exercise information as taught by Kinnunen would have been obvious. Regarding claim 11, Spiegel, Singh, Patel, and Kinnunen teach the invention in claim 9, as discussed above, and further teach wherein the activity status information is information acquired by an activity tracker that acquires the level of activity of the user (Kinnunen [0068] “The ring or other device is configured to measure at least one biosignal of the user, and optionally the user's movements, which may be referred to as ‘raw data’ associated with the user. Further, the measured data is associated with the activity period and the rest period, as may be relevant. The term ‘activity period’ used herein refers to those periods of a day when the user is subjected to any physical activity, such as when the user is exercising, walking, playing or attending to normal day to day tasks. 
Further, the term ‘rest period’ used herein primarily relates to a sleeping period of the user in a day. However, the rest period may also include time period when the user is sitting or lying down to relax. The movements of the user are measured or obtained from a separate device, and used to determine whether the user is active or resting, i.e. to select the nature of the period.”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to obtain the activity status information using an activity tracker as taught by Kinnunen. Kinnunen discloses a wearable device (a ring) that measures user movement and biosignals to determine whether the user is in an activity period or rest period, which acquires a level of user activity. A PHOSITA would have recognized such a wearable movement monitoring device as an activity tracker because activity trackers conventionally measure motion and physiological signals to determine user activity level. Utilizing such a known device to acquire activity information represents the predictable use of known wearable sensing technology to gather contextual user data and would have been an obvious implementation choice for obtaining activity status information in the base system. Regarding claim 12, Spiegel, Singh, Patel, and Kinnunen teach the invention in claim 9, as discussed above, and further teach wherein the gut score acquiring unit further includes an activity status score acquiring unit that acquires an activity status score based on the activity status information, and acquires the gut score using the activity status score acquired by the activity status score acquiring unit (Singh [0020] “An aspect of the present disclosure pertains to a system for early detection of valvular heart disorders in a patient. 
The system can include: a recording unit that can be configured to record a set of heart sounds of the patient and store the set of heart sounds in a database operatively coupled to the recording unit; and a control unit having processors and a memory that can be operatively coupled to the processors. The memory storing instructions can be executable by the processors to enable the control unit to: segment the set of heart sounds into a plurality of slices, each of a predetermined length, and each of the plurality of slices can include at least one audio slice; convert the at least one audio slice into corresponding spectrograms; obtain a feature vector corresponding to the spectrograms; compare the obtained feature vector with a predetermined set of feature vectors that can be stored in the database; and classify each of the spectrograms into any or a combination of a normal spectrogram and an abnormal spectrogram, based on the comparison of the obtained feature vector with the predetermined set of feature vectors, to obtain classification scores associated with the spectrograms.”, and Singh [0022] “In an aspect, the control unit can be configured to classify, using a deep convolutional neural network (CNN) trained model, each of the spectrograms into any or a combination of the normal spectrogram and the abnormal spectrogram.”, Singh [0054] “In an aspect, the method can include steps of: computing, at the processors, any or a combination of a mean and standard deviation of the classification scores to remove any deviation, if present, in the classification scores; and storing, in the database, an audio slice corresponding to an obtained higher classification score.” and Spiegel [0022] “The abdominal statistics system of the present description includes multiple product configurations including a low profile rapidly deployable sensor element that can be conveniently attached to the abdomen of a patient by either a belt or adhesive attachment method. 
The system acquires acoustic signals as gastrointestinal (GI) sounds, processes these signals, and provides actionable data to patients and their providers.”, and Kinnunen [0068] “The ring or other device is configured to measure at least one biosignal of the user, and optionally the user's movements, which may be referred to as ‘raw data’ associated with the user. Further, the measured data is associated with the activity period and the rest period, as may be relevant. The term ‘activity period’ used herein refers to those periods of a day when the user is subjected to any physical activity, such as when the user is exercising, walking, playing or attending to normal day to day tasks. Further, the term ‘rest period’ used herein primarily relates to a sleeping period of the user in a day. However, the rest period may also include time period when the user is sitting or lying down to relax. The movements of the user are measured or obtained from a separate device, and used to determine whether the user is active or resting, i.e. to select the nature of the period.” and Kinnunen [0092] “In an embodiment, the mobile communication device is configured to calculate a readiness score for assessing readiness of the user. Specifically, based on long data, trends, cross-correlation analysis of the deep data analysis (i.e. heart rate variability, hypnogram, stress level and the like) the readiness score is calculated. Further, the long data, trends, cross-correlation analysis may be associated with a time period (for example a day, a week or a month) for which the deep data analysis is performed. Therefore, the measured user movements, and biosignals such as heart rate, sleep factor, heart rate variability and stress level for such time period are correlated to calculate the readiness score and thereby assessing readiness of the user.”). 
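The combination articulated in this rejection, an activity status score in the Kinnunen style feeding into an overall gut score alongside acoustic classification scores in the Singh/Spiegel style, might be sketched as a weighted blend. The weights, the 0-100 scaling, and the active-versus-rest ratio below are all illustrative assumptions, not taken from any cited reference.

```python
from statistics import mean, pstdev

def activity_status_score(active_minutes, rest_minutes):
    """Illustrative activity status score: share of tracked time spent in
    Kinnunen-style 'activity periods', scaled to 0-100."""
    total = active_minutes + rest_minutes
    return 100.0 * active_minutes / total if total else 0.0

def gut_score(classification_scores, activity_score,
              w_acoustic=0.8, w_activity=0.2):
    """Illustrative composite gut score: mean of the per-slice acoustic
    classification scores blended with the activity status score. The
    population standard deviation is also returned, echoing the deviation
    check of Singh [0054]."""
    acoustic = mean(classification_scores)
    spread = pstdev(classification_scores)
    return w_acoustic * acoustic + w_activity * activity_score, spread
```

For example, with per-slice acoustic scores of 80, 90, and 100 and an activity score of 50, the blend above yields 0.8 * 90 + 0.2 * 50 = 82.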
A person of ordinary skill in the art would have found it obvious to modify Spiegel’s gastrointestinal monitoring system, which processes abdominal acoustic signals to provide actionable physiological data, by incorporating a score derived from user activity status as taught by Kinnunen and by applying a classification and scoring framework as taught by Singh. Singh teaches generating quantitative classification scores from physiological signal features using machine learning analysis, while Kinnunen teaches calculating a physiological readiness score by correlating biosignals with activity period versus rest period information and user movement data. Because gastrointestinal motility and acoustic activity are well known to vary with physical activity state (rest versus active periods), a PHOSITA would have been motivated to incorporate an activity status score into Spiegel’s gut condition evaluation and use it as an input to the overall gut score in order to improve accuracy and contextual relevance of the physiological assessment. This represents the predictable use of known scoring techniques to enhance interpretation of physiological sensor data. Therefore, acquiring an activity status score and using it to determine a gut score would have been obvious. Response to Arguments Applicant’s arguments and amendments (see Remarks/Amendments submitted 11/05/2025) with respect to the rejection of claims 1-15 and 18-20 have been carefully considered and are addressed below. Claim Rejections - 35 USC § 101 Applicant’s arguments and amendments have been fully considered but are not persuasive. Applicant states that amended claim 1 is directed to an improvement in a specific technology because it now recites acquiring device identifying information and selecting learning information associated with the identified device type. 
The amended limitation adds the step of selecting one of multiple stored pieces of learning information based on the type of device used to record abdominal sounds. Under the broadest reasonable interpretation, this constitutes choosing an evaluation standard based on the source of input data. Claim 1 recites acquiring device identifying information, selecting learning information corresponding to the device type, applying the selected learning information to evaluate the sound information, and assigning a gut score. These steps describe observing data, selecting criteria, applying predetermined standards, and generating a score. These activities constitute evaluation and judgment, which fall within the mental processes grouping of abstract ideas. The addition of device specific model selection does not remove the claim from this grouping, as selecting among stored evaluation models remains a form of conditional reasoning that can be performed in the mind. Additionally, the amended limitations do not integrate the judicial exception into a practical application. The claim does not improve microphone technology, signal acquisition, signal processing techniques, or computer functionality. Instead, the processor and memory are used in their ordinary capacities to store multiple models. The additional elements of acquiring sound information, recording abdominal sounds using a microphone, and displaying a gut score amount to data gathering and result presentation and are forms of insignificant extra-solution activity. As explained in Electric Power Group, LLC v. Alstom S.A., collecting information, analyzing it, and displaying results, without improving the underlying technology, does not add significantly more to an abstract idea. Accordingly, even with the amendments, claim 1 remains directed to a mental process and does not recite additional elements that integrate the judicial exception into a practical application or amount to significantly more. 
The rejection under 35 U.S.C. § 101 is therefore maintained. Claim Rejections - 35 USC § 103 Applicant’s arguments traversing the prior art rejection in the previous Office Action have been fully considered. However, those arguments are rendered moot because the present rejection under 35 U.S.C. §103 relies on a different set of prior art references (Spiegel, Singh, Patel, Masamori, and Kinnunen), which teach or suggest the limitations of the claims. Accordingly, Applicant’s prior arguments are not responsive to the current grounds of rejection. The rejection of claims 1-15 and 18-20 under 35 U.S.C. §103 is therefore maintained. Conclusion The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. Tsai et al. (U.S. Patent Publication 2016/0354053 A1) teaches a physiological sound recognition system that receives a body sound, extracts features, classifies them into categories, and compares the results to normal or abnormal reference sounds to assess disease risk while filtering noise. Inoue et al. (International Publication WO 2020/202738 A1) teaches an intestinal flora analysis system that collects a fecal sample from a user for testing, uses the results to generate user-specific questions, evaluates the correlation between the test results and the user’s answers, and provides personalized feedback via the user’s smartphone to help improve intestinal health. Muir et al. (International Publication WO 2020/118372 A1) teaches a method of monitoring a subject’s gastrointestinal region by obtaining an abdominal signal of bowel sounds, identifying the individual bowel sounds, determining parameter values for each sound, and indicating the presence or absence of at least one GI symptom. Spiegel et al. 
(CN Publication 104736043 A) teaches a multisensory wireless abdominal monitoring system that continuously monitors gastrointestinal and abdominal wall function, and generates clinically interpretable information for immediate clinical action in various inpatient and outpatient settings. A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX MONTHS from the mailing date of this action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYRA R LAGOY whose telephone number is (703)756-1773. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kambiz Abdi, can be reached at (571)272-6702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.R.L./
Examiner, Art Unit 3685

/KAMBIZ ABDI/
Supervisory Patent Examiner, Art Unit 3685

Prosecution Timeline

Jun 06, 2023
Application Filed
Feb 21, 2025
Non-Final Rejection — §101, §103
May 08, 2025
Examiner Interview Summary
May 08, 2025
Applicant Interview (Telephonic)
Jul 09, 2025
Response Filed
Aug 08, 2025
Final Rejection — §101, §103
Sep 18, 2025
Applicant Interview (Telephonic)
Sep 18, 2025
Examiner Interview Summary
Nov 05, 2025
Response after Non-Final Action
Nov 07, 2025
Response after Non-Final Action
Dec 10, 2025
Request for Continued Examination
Dec 17, 2025
Response after Non-Final Action
Feb 17, 2026
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
