DETAILED ACTION
Applicant’s arguments, filed on 01/02/2026, have been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.
Applicant amended the claims in the reply filed on 01/02/2026; accordingly, the rejections newly made in the instant Office action have been necessitated by amendment.
Claims 11-19 are currently pending and under examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 11-12 and 16-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Harris (WO 2018107008).
Regarding independent claim 11, Harris teaches an auscultatory sound analysis system ([0002]: “The present invention relates generally to acquiring and analyzing the breath sounds of a subject and more particularly to a system and method for analyzing the breath sounds to determine if one or more conditions exist that may be signs of disease”) comprising:
(a) auscultatory sound signal acquisition means for acquiring an in-body auscultatory sound signal from a patient ([0026]: “Fig. 1 illustrates a block diagram of an exemplary auscultation device 100 in accordance with the present invention.”; [0028]: “the external system may include instructions to a subject for positioning the auscultation device 100 relative to anatomical structures, for recording one or more audio samples, etc.”; [0037]: “detection system 300 receives breath sound data from auscultation device 100”);
(b) auscultatory sound signal sampling means for digitally sampling and converting the in-body auscultatory sound signal into auscultatory sound discrete data ([0035]: “The audio sample is acquired for a predetermined duration of time, and according to a prescribed sample rate, under control of the processor 102. Preferably, the duration is sufficient to allow for multiple inspiration and expiration breathing cycles. In one embodiment, the predetermined duration is at least 10 seconds. Further, the sample rate may be selected to facilitate compression for streaming of audio in real time, to be high enough to meet signal processing requirements, and to be low enough to be compatible with BLE or other communications technologies. By way of example, a sampling rate in the range of about 8 kHz to about 12 kHz may be suitable for this purpose, although any suitable sampling rate may be used. Associated audio data are communicated for processing to identify one or more conditions, e.g., after conversion of the microphone-acquired audio signal to data in digital form.” The audio signal being sampled at a prescribed sample rate and converted to digital form indicates that the resulting audio data is discrete data.); and
(c) spectrogram conversion means for converting the auscultatory sound discrete data into an auscultatory sound spectrogram ([0043]: “Referring again to Fig. 5, at element 504, intensity mapping component 302 determines a time-frequency representation based on the breath sound data. A time-frequency representation may be determined for breath sound data obtained from each location on a user. In one or more embodiments, the time-frequency representation may be a 3D time-frequency representation or spectrogram.”; [0040]: “the detection system 300 includes computer-readable, processor-executable instructions 414 stored in the memory 306 for carrying out the methods described herein. For example, memory 306 comprises processor-executable instructions corresponding to one or more of intensity mapping component 302 and condition identifier component 304, as discussed in greater detail below”. The processor executes the intensity mapping component and the condition identifier component; the processor therefore serves as the claimed spectrogram conversion means.), wherein
on a basis of the auscultatory sound spectrogram acquired by the spectrogram conversion means, a measurement of strengths of a signal component in at least one predetermined frequency range is performed ([0045]-[0046]: “condition identifier component 304 analyzes the time-frequency representation to identify one or more of a line of high-intensity frequencies or a band of high-intensity frequencies. A line of high-intensity frequencies that satisfies one or more predetermined thresholds may be deemed to correspond to a wheeze and a band of high-intensity frequencies that satisfies one or more predetermined thresholds may be deemed to correspond to a crackle … An edge or line may be identified as a region of continuous high-amplitude signal for frequencies over a specified frequency range. For example, condition identifier component 304 may attempt to identify a set of continuous high-intensity frequencies between 100 Hz and 800 Hz, however other ranges may be used. A frequency may be determined to be high-intensity (e.g., a peak) when it exceeds a predetermined threshold amplitude. The threshold amplitude may be based on the intensities (e.g., amplitudes) of other adjacent frequencies within the representation”; [0040]: “the detection system 300 includes computer-readable, processor-executable instructions 414 stored in the memory 306 for carrying out the methods described herein. For example, memory 306 comprises processor-executable instructions corresponding to one or more of intensity mapping component 302 and condition identifier component 304, as discussed in greater detail below”. 
The processor executes the intensity mapping component and the condition identifier component; the processor therefore performs the claimed measurement of signal component strengths.), the measurement being executed a plurality of times at intervals corresponding to a plurality of auscultatory sound spectrograms obtained at the respective intervals ([0057]: “this may be performed by periodically or successively by processing microphone-captured audio signal in intervals, such as 10-second intervals”), the signal component exceeding a certain threshold value is extracted for each of the measurements ([0046]: “A frequency may be determined to be high-intensity (e.g., a peak) when it exceeds a predetermined threshold amplitude. The threshold amplitude may be based on the intensities (e.g., amplitudes) of other adjacent frequencies within the representation. Condition identifier component 304 may also be configured to identify one or more harmonics of each high intensity frequency”), and the strengths of the signal component are output along a time axis covering a period corresponding to the plurality of intervals ([0046]: “Fig. 7a illustrates a time-frequency representation, or spectrogram, generated from first breath sound data, e.g., breath sound data including patient wheezing. The time-frequency representation depicts the intensity of a frequencies over a period time that the audio signal was acquired”).
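By way of illustration only (forming no part of the rejection), the interval-based band-strength measurement quoted from Harris above may be sketched in hypothetical Python. The 8 kHz rate, 10-second intervals, and 100-800 Hz band are taken from the quoted passages; the 6 dB threshold relative to the median level of the other frequencies is an assumed simplification of Harris's adjacent-frequency threshold, and all function names are illustrative:

```python
import numpy as np

# Hypothetical sketch (no part of the record): approximates the cited
# technique -- spectrogram conversion, per-interval measurement of signal
# strength in a predetermined band, and threshold-based extraction.
FS = 8_000          # Hz; Harris: "about 8 kHz to about 12 kHz"
INTERVAL_S = 10     # Harris: "10-second intervals"
BAND = (100, 800)   # Hz; Harris: "between 100 Hz and 800 Hz"

def spectrogram(chunk, fs, nperseg=256):
    """Power spectrogram via a Hann-windowed short-time FFT."""
    n_frames = len(chunk) // nperseg
    frames = chunk[: n_frames * nperseg].reshape(n_frames, nperseg)
    power = np.abs(np.fft.rfft(frames * np.hanning(nperseg), axis=1)) ** 2
    return np.fft.rfftfreq(nperseg, 1.0 / fs), power.T  # freqs, [freq x time]

def band_strengths(audio, fs=FS, interval_s=INTERVAL_S, band=BAND, thresh_db=6.0):
    """One strength value per interval: mean in-band power of bins that
    exceed the per-frame median level by thresh_db (0.0 if none do)."""
    step = fs * interval_s
    strengths = []
    for start in range(0, len(audio) - step + 1, step):
        freqs, sxx = spectrogram(audio[start:start + step], fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        ref = np.median(sxx, axis=0) + 1e-12          # per-frame reference level
        peaks = sxx[in_band] > ref * 10 ** (thresh_db / 10.0)
        strengths.append(float(sxx[in_band][peaks].mean()) if peaks.any() else 0.0)
    return strengths  # strengths along a time axis covering all intervals

# Usage: 20 s of low-level noise with a 400 Hz tone ("wheeze") in the first 10 s.
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(2 * FS * INTERVAL_S)
t = np.arange(FS * INTERVAL_S) / FS
audio[: FS * INTERVAL_S] += np.sin(2 * np.pi * 400.0 * t)
print(len(band_strengths(audio)))  # prints 2 (one measurement per 10 s interval)
```

Consistent with the mapping above, the output is one strength value per interval, so the values form a series along a time axis covering the plurality of intervals.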
Regarding claim 12, Harris teaches the auscultatory sound analysis system according to claim 11, incorporating: a communication computation device provided with a display function ([0038]: “detection system 300 of Fig. 4 includes a general purpose processor 402 and a bus 404 employed to connect and enable communication between the processor 402 and the components of the detection system 300 in accordance with known techniques. The detection system 300 typically includes a user interface adapter 406, which connects the processor 402 via the communication bus 404 to one or more interface devices, such as a keyboard, mouse, and/or other interface devices, which can be any user interface device, such as a touch sensitive screen, digitized entry pad, etc. The bus 404 also connects a display device 408, such as an LCD screen or monitor, to the processor 402 via a display adapter.”).
Regarding independent claim 16, Harris teaches an auscultatory sound analysis system ([0002]: “The present invention relates generally to acquiring and analyzing the breath sounds of a subject and more particularly to a system and method for analyzing the breath sounds to determine if one or more conditions exist that may be signs of disease”) comprising:
(a) auscultatory sound signal acquisition means for acquiring an auscultatory sound signal ([0026]: “Fig. 1 illustrates a block diagram of an exemplary auscultation device 100 in accordance with the present invention.”; [0028]: “the external system may include instructions to a subject for positioning the auscultation device 100 relative to anatomical structures, for recording one or more audio samples, etc.”; [0037]: “detection system 300 receives breath sound data from auscultation device 100”);
(b) auscultatory sound signal sampling means for digitally sampling and converting the auscultatory sound signal into auscultatory sound discrete data ([0035]: “The audio sample is acquired for a predetermined duration of time, and according to a prescribed sample rate, under control of the processor 102. Preferably, the duration is sufficient to allow for multiple inspiration and expiration breathing cycles. In one embodiment, the predetermined duration is at least 10 seconds. Further, the sample rate may be selected to facilitate compression for streaming of audio in real time, to be high enough to meet signal processing requirements, and to be low enough to be compatible with BLE or other communications technologies. By way of example, a sampling rate in the range of about 8 kHz to about 12 kHz may be suitable for this purpose, although any suitable sampling rate may be used. Associated audio data are communicated for processing to identify one or more conditions, e.g., after conversion of the microphone-acquired audio signal to data in digital form.” The audio signal being sampled at a prescribed sample rate and converted to digital form indicates that the resulting audio data is discrete data.); and
(c) spectrogram conversion means for converting the auscultatory sound discrete data into an auscultatory sound spectrogram ([0043]: “Referring again to Fig. 5, at element 504, intensity mapping component 302 determines a time-frequency representation based on the breath sound data. A time-frequency representation may be determined for breath sound data obtained from each location on a user. In one or more embodiments, the time-frequency representation may be a 3D time-frequency representation or spectrogram.”; [0040]: “the detection system 300 includes computer-readable, processor-executable instructions 414 stored in the memory 306 for carrying out the methods described herein. For example, memory 306 comprises processor-executable instructions corresponding to one or more of intensity mapping component 302 and condition identifier component 304, as discussed in greater detail below”. The processor executes the intensity mapping component and the condition identifier component; the processor therefore serves as the claimed spectrogram conversion means.), wherein
on a basis of the auscultatory sound spectrogram acquired by the spectrogram conversion means, a measurement of strengths of a signal component in at least one predetermined frequency range is performed ([0045]-[0046]: “condition identifier component 304 analyzes the time-frequency representation to identify one or more of a line of high-intensity frequencies or a band of high-intensity frequencies. A line of high-intensity frequencies that satisfies one or more predetermined thresholds may be deemed to correspond to a wheeze and a band of high-intensity frequencies that satisfies one or more predetermined thresholds may be deemed to correspond to a crackle … An edge or line may be identified as a region of continuous high-amplitude signal for frequencies over a specified frequency range. For example, condition identifier component 304 may attempt to identify a set of continuous high-intensity frequencies between 100 Hz and 800 Hz, however other ranges may be used. A frequency may be determined to be high-intensity (e.g., a peak) when it exceeds a predetermined threshold amplitude. The threshold amplitude may be based on the intensities (e.g., amplitudes) of other adjacent frequencies within the representation”; [0040]: “the detection system 300 includes computer-readable, processor-executable instructions 414 stored in the memory 306 for carrying out the methods described herein. For example, memory 306 comprises processor-executable instructions corresponding to one or more of intensity mapping component 302 and condition identifier component 304, as discussed in greater detail below”. 
The processor executes the intensity mapping component and the condition identifier component; the processor therefore performs the claimed measurement of signal component strengths.), the measurement being executed a plurality of times at intervals corresponding to a plurality of auscultatory sound spectrograms obtained at the respective intervals ([0057]: “this may be performed by periodically or successively by processing microphone-captured audio signal in intervals, such as 10-second intervals”), the signal component exceeding a certain threshold value is extracted for each of the measurements ([0046]: “A frequency may be determined to be high-intensity (e.g., a peak) when it exceeds a predetermined threshold amplitude. The threshold amplitude may be based on the intensities (e.g., amplitudes) of other adjacent frequencies within the representation. Condition identifier component 304 may also be configured to identify one or more harmonics of each high intensity frequency”), and the strengths of the signal component are output along a time axis covering a period corresponding to a plurality of the intervals ([0046]: “Fig. 7a illustrates a time-frequency representation, or spectrogram, generated from first breath sound data, e.g., breath sound data including patient wheezing. The time-frequency representation depicts the intensity of a frequencies over a period time that the audio signal was acquired”).
Regarding claim 17, Harris teaches the auscultatory sound analysis system according to claim 16, incorporating: a communication computation device provided with a display function ([0038]: “detection system 300 of Fig. 4 includes a general purpose processor 402 and a bus 404 employed to connect and enable communication between the processor 402 and the components of the detection system 300 in accordance with known techniques. The detection system 300 typically includes a user interface adapter 406, which connects the processor 402 via the communication bus 404 to one or more interface devices, such as a keyboard, mouse, and/or other interface devices, which can be any user interface device, such as a touch sensitive screen, digitized entry pad, etc. The bus 404 also connects a display device 408, such as an LCD screen or monitor, to the processor 402 via a display adapter.”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Harris as applied to claim 11 above, and further in view of Sato (WO 2015170772). Citations to WO 2015170772 will refer to the English Machine Translation that accompanies this Office Action.
Regarding claim 13, Harris teaches the auscultatory sound analysis system according to claim 11, incorporating: a body temperature thermometer ([0027]: “processor 102 may also be coupled to an optional memory 110 and/or an optional temperature sensor 112 configured to receive a temperature reading from a subject”).
However, Harris does not teach the system incorporating an electrocardiograph.
Sato discloses a device for measuring breathing of a user. Specifically, Sato teaches the device incorporating an electrocardiograph (Page 5: “a detection unit equipped with a sound sensor and an electrocardiogram sensor on a surface that is pressed against the skin of a human body”). Harris and Sato are analogous arts as they are both related to devices that measure the breathing of a user to determine health conditions.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the electrocardiogram sensor of Sato into the system of Harris, as doing so allows the system to measure information about the user’s heart, thereby providing the user with additional information about their health status.
Claims 14-15 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Harris as applied to claims 11 and 16 above, and further in view of Magar (US 20200315537).
Regarding claim 14, Harris teaches the auscultatory sound analysis system according to claim 11, wherein data generated in the auscultatory sound analysis system is uploaded into a server on an Internet ([0039]: “The detection system 300 may be associated with such other computer systems in a local area network (LAN) or a wide area network (WAN), and operates as a server in a client/server arrangement with another computer”).
However, Harris is silent on what type of server is used.
Magar discloses a sensor to collect physiological data and communicate the data to a mobile device. Specifically, Magar teaches the server being a cloud server ([0084]: “A server (e.g., servers 250) may include a web server … In some instances a server, such as a cloud server, may be associated with and/or in communication with one or more user accounts (accessing or communicating with the cloud server via the one or more external devices 210, for example). The server may be configured to dispatch updates to the client software, such as by tracking the implementations of the client software on the one or more external devices 210 and/or communicating with the one or more user accounts.”). Harris and Magar are analogous arts as they are both related to systems that measure physiological data and communicate it with another device.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the server of Harris as the cloud server taught by Magar, because Harris is silent on the type of server used and Magar discloses a suitable, known server type in an analogous system.
Regarding claim 15, the Harris/Magar combination teaches the auscultatory sound analysis system according to claim 14, wherein data related to an analysis is downloaded from the cloud server on the Internet (Harris, [0039]: “The detection system 300 may be associated with such other computer systems in a local area network (LAN) or a wide area network (WAN), and operates as a server in a client/server arrangement with another computer”; Magar, [0084]: “A server (e.g., servers 250) may include a web server … In some instances a server, such as a cloud server, may be associated with and/or in communication with one or more user accounts (accessing or communicating with the cloud server via the one or more external devices 210, for example). The server may be configured to dispatch updates to the client software, such as by tracking the implementations of the client software on the one or more external devices 210 and/or communicating with the one or more user accounts.” The detection system of Harris communicates the analysis data to the cloud server, from which the data can be downloaded by the user.).
Regarding claim 18, Harris teaches the auscultatory sound analysis system according to claim 16, wherein data generated in the auscultatory sound analysis system is uploaded into a server on an Internet ([0039]: “The detection system 300 may be associated with such other computer systems in a local area network (LAN) or a wide area network (WAN), and operates as a server in a client/server arrangement with another computer”).
However, Harris is silent on what type of server is used.
Magar discloses a sensor to collect physiological data and communicate the data to a mobile device. Specifically, Magar teaches the server being a cloud server ([0084]: “A server (e.g., servers 250) may include a web server … In some instances a server, such as a cloud server, may be associated with and/or in communication with one or more user accounts (accessing or communicating with the cloud server via the one or more external devices 210, for example). The server may be configured to dispatch updates to the client software, such as by tracking the implementations of the client software on the one or more external devices 210 and/or communicating with the one or more user accounts.”). Harris and Magar are analogous arts as they are both related to systems that measure physiological data and communicate it with another device.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the server of Harris as the cloud server taught by Magar, because Harris is silent on the type of server used and Magar discloses a suitable, known server type in an analogous system.
Regarding claim 19, the Harris/Magar combination teaches the auscultatory sound analysis system according to claim 18, wherein data related to an analysis is downloaded from the cloud server on the Internet (Harris, [0039]: “The detection system 300 may be associated with such other computer systems in a local area network (LAN) or a wide area network (WAN), and operates as a server in a client/server arrangement with another computer”; Magar, [0084]: “A server (e.g., servers 250) may include a web server … In some instances a server, such as a cloud server, may be associated with and/or in communication with one or more user accounts (accessing or communicating with the cloud server via the one or more external devices 210, for example). The server may be configured to dispatch updates to the client software, such as by tracking the implementations of the client software on the one or more external devices 210 and/or communicating with the one or more user accounts.” The detection system of Harris communicates the analysis data to the cloud server, from which the data can be downloaded by the user.).
Response to Arguments
All of applicant’s arguments regarding the rejections and objections previously set forth have been fully considered and are persuasive unless directly addressed below.
Applicant’s arguments with respect to claims 11-19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIN K MCCORMACK whose telephone number is (703)756-1886. The examiner can normally be reached Mon-Fri 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Sims, can be reached at 571-272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.K.M./Examiner, Art Unit 3791
/MATTHEW KREMER/Primary Examiner, Art Unit 3791