Prosecution Insights
Last updated: April 19, 2026
Application No. 17/427,005

Systems and Methods for Sound Mapping of Anatomical and Physiological Acoustic Sources Using an Array of Acoustic Sensors

Final Rejection §103

Filed: Jul 29, 2021
Examiner: PORTILLO, JAIRO H
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: The Medical College of Wisconsin, Inc.
OA Round: 2 (Final)

Grant Probability: 54% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 6m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 54% (grants 54% of resolved cases; 181 granted / 335 resolved; -16.0% vs TC avg)
Interview Lift: +31.0% (strong lift in allowance rate for resolved cases with an interview versus without)
Typical Timeline: 4y 6m average prosecution; 42 applications currently pending
Career History: 377 total applications across all art units
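The headline figures above are internally consistent, which can be checked with a few lines of arithmetic (a sketch only; the 181/335 counts and the 85% with-interview rate are taken from the summary above, and the per-group case counts behind the interview split are not given):

```python
granted, resolved = 181, 335          # examiner's career totals (from the summary)
career_rate = granted / resolved      # career allow rate
with_interview_rate = 0.85            # allow rate for cases with an interview (summary figure)
interview_lift = with_interview_rate - career_rate

print(f"career allow rate: {career_rate:.1%}")      # ~54.0%
print(f"interview lift:    {interview_lift:+.1%}")  # ~+31.0%
```

In other words, the "+31.0% interview lift" is simply the gap between the 85% with-interview allowance rate and the 54% career baseline.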

Statute-Specific Performance

§101: 20.5% (-19.5% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 335 resolved cases
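Each statute's rate and its delta imply the same Tech Center baseline, which suggests the comparison line is a single TC-wide average rather than per-statute averages (a sketch; the rates are read from the table above):

```python
# (examiner rate, delta vs TC avg) per statute, read from the table above
stats = {
    "§101": (0.205, -0.195),
    "§103": (0.469, +0.069),
    "§102": (0.093, -0.307),
    "§112": (0.210, -0.190),
}

# implied Tech Center average = examiner rate minus delta
implied_tc_avg = {s: round(rate - delta, 3) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same ~40.0% baseline
```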

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kajbaf et al ("Acoustic Imaging of Heart Using Microphone Arrays") (“Kajbaf”) in view of Moghaddasi et al (“Imaging of heart acoustic based on the sub-space methods using a microphone array”) (“Moghaddasi”).

Regarding Claim 1, while Kajbaf teaches a method for generating a sound map that depicts a spatial distribution of acoustic sources within a subject (p738, Abstract, “A simultaneous multisensor recording is used to record the heart sound from human chest wall. By using beamforming techniques the recorded data are combined to form a two or three dimensional representation of heart sound distribution.
This map shows the acoustic energy of heart in different locations.” Fig. 2 shows a two-dimensional spatial distribution, Fig. 3 shows a three-dimensional spatial distribution), the steps of the method comprising: (a) acquiring acoustic signal data from a subject using an array of acoustic sensors coupled to a surface of the subject and arranged around an anatomical region-of-interest (p740-741, IV. Experimental Tests, “An experimental setup is prepared for acoustic imaging of heart. It consists of software and hardware units…Hardware has three major parts: microphone array is made up of Panasonic electret condenser microphones WM52B, preprocessing unit which amplifies and filters microphone signal in order to enhance sound signal to noise ratio and data acquisition card (DAQ) which is an Advantech PCI-1713 card playing the role of interfacing with a Pentium IV platform PC…The signals are recorded and saved on a PC, they are segmented using the described method and beamforming is done. Finally a two or three dimensional image is presented on the screen. Figure 5 shows a two dimensional image of heart. It can be seen that heart sound is generated from different locations in different time intervals.”); (b) providing relative position data that indicate a relative position of acoustic sensors in the array of acoustic sensors (p739, C. Array arrangement, Fig. 1, different relative positions of microphones tested to identify the optimal array arrangement, with the spacing preset. “The result of simulations for 3-by-3 array at the speed of 40 m/s is presented in figure 2. This array has the least total error and standard deviation of the error among other simulated arrays.” Figs.
2 and 3 show microphone relative position data included in the figures as red dots); and (c) reconstructing from the acoustic signal data and using the relative position data, a plurality of sound maps that depicts a spatial distribution of acoustic sources in the subject, the plurality of sound maps each corresponding to a different time point (p740-741, IV. Experimental Tests, “The signals are recorded and saved on a PC, they are segmented using the described method and beamforming is done. Finally a two or three dimensional image is presented on the screen. Figure 5 shows a two dimensional image of heart. It can be seen that heart sound is generated from different locations in different time intervals.” Multiple three dimensional sound maps are reconstructed; that the sound maps correspond to different time points is further supported by Fig. 4 showing multiple energy profiles generated in a 3 second span, where the energy profile is the basis of each sound map as noted in p740, “Figure 4 illustrates a PCG signal and its segmented energy profile. These labeled signals are used separately in the beamformer to form time-space maps. More details can be read in [7].”), Kajbaf fails to teach combining the plurality of sound maps to generate a four-dimensional sound map that depicts a spatiotemporal distribution of the acoustic sources in the subject, wherein reconstructing the four-dimensional sound map comprises combining the plurality of sound maps to generate the four-dimensional sound map. However Moghaddasi teaches an imaging method of heart acoustics (p133, Abstract) comprising generating multi-dimensional sound maps that vary over time (Fig. 8, p138-139, 3.2 Sound source localization in the heart, “According to Fig.
8, in a normal heart, four main sound sources are detected via the GD-MUSIC method, and in each temporal phase of the heart cycle two of them are present.” Time-based evaluation of localized sources, sources shown to vary over time in two-dimensional image); generating a four-dimensional sound map that depicts a spatiotemporal distribution of the acoustic sources in the subject (p140, Table 3, the sound source localization is compared to a reference four-dimensional echocardiography for validation, where one of ordinary skill in the art would recognize that with 4D-echocardiography “3-dimentional plots of the heart could be illustrated in a number of following time shots;” (p139, 3.2.1. Validation of the sound source localization in the heart via 4D-echocardigraphy). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to consider that the three-dimensional sound map of acquired sound sources, specifically multiple three-dimensional sound maps acquired over time, taught in Kajbaf is akin to the plotted multi-dimensional sound sources over time in Moghaddasi and thus would equivalently work as a comparison point to 4D echocardiography when adding the teachings of Moghaddasi. Correspondingly, if the time-based three-dimensional sound map of Kajbaf is being compared to a four-dimensional display of 4D echocardiography as taught by Moghaddasi, the sound maps of Kajbaf are in essence being combined to reconstruct a four-dimensional sound map by being compared to a four-dimensional sound map with respect to time. Regarding Claim 2, Kajbaf and Moghaddasi teach the method of claim 1, and Kajbaf teaches wherein the sound map is reconstructed using a source localization algorithm implemented with a hardware processor and a memory (p738, I. Introduction, A. Delay-and-sum beamformer, Abstract, sound data is saved and processed on a personal computer).
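For context on the delay-and-sum beamforming that Kajbaf is cited for (Claims 2-3), the technique can be sketched in a few lines. This is an illustrative reimplementation, not Kajbaf's code: the array geometry, candidate grid, and sampling rate are assumed values, and the ~40 m/s propagation speed mirrors the simulation speed quoted above.

```python
import numpy as np

def delay_and_sum(signals, sensor_xy, grid_pts, fs, c=40.0):
    """Minimal delay-and-sum beamformer (illustrative sketch).

    signals:   (n_sensors, n_samples) recorded channels
    sensor_xy: (n_sensors, 2) sensor coordinates in metres
    grid_pts:  (n_pts, 2) candidate source locations
    fs:        sampling rate in Hz
    c:         assumed propagation speed; Kajbaf simulates ~40 m/s
    """
    power = np.zeros(len(grid_pts))
    for i, p in enumerate(grid_pts):
        # time of flight from the candidate point to each sensor
        delays = np.linalg.norm(sensor_xy - p, axis=1) / c
        shifts = np.round((delays - delays.min()) * fs).astype(int)
        # undo each channel's relative delay, then sum coherently
        aligned = [np.roll(s, -k) for s, k in zip(signals, shifts)]
        power[i] = np.sum(np.sum(aligned, axis=0) ** 2)
    return power  # the highest-power grid point is the estimated source
```

When the candidate point coincides with the true source, the re-aligned channels add in phase and the summed power peaks there, which is the basis of the "acoustic energy in different locations" maps described above.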
Regarding Claim 3, Kajbaf and Moghaddasi teach the method of claim 2, and Kajbaf teaches wherein the source localization algorithm includes a beamforming algorithm (See Claim 2 Rejection). Regarding Claim 5, Kajbaf and Moghaddasi teach the method of claim 1, wherein the sound map depicts the spatiotemporal distribution of the acoustic sources as sound intensity in time and space being encoded by a spectrum of colors (See Claim 1 Rejection, Moghaddasi: Fig. 8, sound intensity of acoustic sources encoded by a spectrum of colors in time and space for a two-dimensional image over time, would be equivalently applied when the image in space is three-dimensional). Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kajbaf in view of Moghaddasi and further in view of Ari et al (“Detection of cardiac abnormality from PCG signal using LMS based least square SVM classifier”) (“Ari”). Regarding Claim 6, while Kajbaf and Moghaddasi teach the method of claim 1, their combined efforts fail to teach further comprising generating spectral data by applying a wavelet transform to the acoustic signal data and using the spectral data when reconstructing the sound map in order to guide determination of the acoustic sources. 
However Ari teaches a cardiac abnormality analyzer from PCG signals (p8019, Abstract) comprising generating spectral data by applying a wavelet transform to acoustic signal data and using the spectral data when reconstructing the sound map in order to guide determination of the acoustic sources (p8022-8023, 4.2 Feature extraction process, “In time–frequency domain wavelet based feature extraction technique has been successfully used to get meaningful features from non-stationary heart sound signals… The spectrum of the heart sound cycle is divided into sub-bands to extract the discriminating information from the normal and abnormal heart sounds… This forms 32-element initial feature vectors for the cases under study: aortic insufficiency, aortic stenosis, atrial septal defect, mitral regurgitation, mitral stenosis, normal heart sound, respectively.”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply the wavelet transform and wavelet analysis of Ari for the heart sounds of Kajbaf as a way to evaluate the heart sound signals for abnormalities and the source of their abnormalities, enabling a healthcare provider to provide appropriate treatment. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kajbaf in view of Moghaddasi and further in view of Ari and further in view of Jeong et al (US 2019/0150771) (“Jeong”). Regarding Claim 7, while Kajbaf, Moghaddasi, and Ari teach the method of claim 6, their combined efforts fail to teach wherein the spectral data is used to guide the determination of the acoustic sources by associating the spectral data with bandwidths of sound frequencies associated with different organs . 
However Jeong teaches a body sound analyzer (Abstract) that teaches there are bandwidths of sound frequencies associated with different organs ([0003], [0083]) and teaches that sounds and frequencies from sources will differ based on depth and the organ ([0088]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to consider the frequency-specific features of an organ and typical depth as taught by Jeong when doing the source localization of Kajbaf as a way to identify relevant sound data specific to the organ being analyzed. Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kajbaf in view of Moghaddasi and further in view of Mahajan et al (US 2017/0119255) (“Mahajan”) and further in view of Hwang et al (US 2017/0060298) (“Hwang”). Regarding Claim 8, while Kajbaf and Moghaddasi teach the method of claim 1, their combined efforts fail to teach wherein the relative position data are provided by a conductive elastic band coupled to the array of acoustic sensors. However Mahajan teaches an integrated cardio-respiratory system (Abstract) measuring cardiac data through an acoustic sensing array and respiratory data through an acoustic sensing array ([0014], [0036]) where relative position data is provided by a band coupled to the array of acoustic sensors ([0035] system may be mounted on a belt, position data provided by motion and orientation sensors, [0036] “thoracic motion array 72, motion and orientation sensors 74”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to provide the relative position data of Kajbaf with sensors mounted on a band as taught by Mahajan as a means to secure the microphone arrays with confidence to the subject. Yet their combined efforts fail to teach the relative position data are provided by a conductive elastic band.
However Hwang teaches a smart interaction device for measuring movement of a garment (Abstract, [0131], Fig. 7) where relative position data from chest movement is provided by a conductive elastic band ([0131]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the thoracic motion array of Mahajan may be accomplished by an integrated conductive elastic band as taught by Hwang as the application of known technique for measuring chest movement (Hwang: a conductive elastic band integrated into a wearable) to the known systems tracking movement of chest-mounted sensors (Kajbaf and Mahajan) ready for improvement to yield predictable results of providing accurate data of the microphone positions within the array of Kajbaf, for an optimally accurate sound map. Claim(s) 9-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kajbaf in view of Moghaddasi and further in view of Irish et al (US 9,924,921) (“Irish”). Regarding Claim 9, while Kajbaf and Moghaddasi teach the method of claim 1, their combined efforts fail to teach wherein the relative position data are provided by tracking positions of each acoustic sensor in the array of acoustic sensors. However Irish teaches a wearable acoustic array-based monitoring system (Abstract, Fig. 6, Col. 10, L. 58 – Col. 11, L. 8, a worn device with an acoustic sensor array, Col. 9, L. 14-40, acoustic sensors measuring snapping sound from a wearer’s joint) wherein the accuracy of the determined location of each acoustic sensor, and therefore the origin of measured sounds, is verified by tracking positions of each acoustic sensor (Col. 14, L. 32 – Col. 15, L. 28, to ensure accuracy of measured sounds, location is tracked for each acoustic sensor. This is done by a calibration transmitter that transmits acoustic pulses and radio-frequency pulses that transmit to each acoustic sensor that is paired with an RF detector.
The measurement of arrival of each pulse is used to identify the 3-dimensional location of the acoustic sensor). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the position of each acoustic sensor in the acoustic sensor array of Kajbaf is tracked as taught by Irish as Kajbaf teaches that the system should be set in an optimal configuration with specific placements of sensors (Kajbaf: p2, C. Array arrangement) and Irish’s teachings provide a way to verify said specific placement of sensors. Regarding Claim 10, Kajbaf, Moghaddasi, and Irish teach the method of claim 9, wherein tracking the positions of each acoustic sensor in the array of acoustic sensors comprises at least one of optical or radio frequency (RF) tracking (See Claim 9 Rejection). Claim(s) 11-12, 15, and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kajbaf in view of Mahajan et al (US 2017/0119255) (“Mahajan”) and further in view of Hwang et al (US 2017/0060298) (“Hwang”) and further in view of Brockway et al (US 2014/0213921) (“Brockway”). Regarding Claim 11, while Kajbaf teaches a sound map generating system (p738, Abstract, “A simultaneous multisensor recording is used to record the heart sound from human chest wall. By using beamforming techniques the recorded data are combined to form a two or three dimensional representation of heart sound distribution. This map shows the acoustic energy of heart in different locations.” Fig. 2 shows a two-dimensional spatial distribution, Fig. 3 shows a three-dimensional spatial distribution), comprising: a sensor array configured to be worn around an anatomical region-of-interest, comprising: a plurality of acoustic sensors (p740-741, IV. Experimental Tests, “An experimental setup is prepared for acoustic imaging of heart. 
It consists of software and hardware units…Hardware has three major parts: microphone array is made up of Panasonic electret condenser microphones WM52B, preprocessing unit which amplifies and filters microphone signal in order to enhance sound signal to noise ratio and data acquisition card (DAQ) which is an Advantech PCI-1713 card playing the role of interfacing with a Pentium IV platform PC…The signals are recorded and saved on a PC, they are segmented using the described method and beamforming is done. Finally a two or three dimensional image is presented on the screen. Figure 5 shows a two dimensional image of heart. It can be seen that heart sound is generated from different locations in different time intervals.”); a computing device in communication with the sensor array and being configured to: receive acoustic signal data from the plurality of acoustic sensors (p740-741, IV. Experimental Tests, “An experimental setup is prepared for acoustic imaging of heart. It consists of software and hardware units…Hardware has three major parts: microphone array is made up of Panasonic electret condenser microphones WM52B, preprocessing unit which amplifies and filters microphone signal in order to enhance sound signal to noise ratio and data acquisition card (DAQ) which is an Advantech PCI-1713 card playing the role of interfacing with a Pentium IV platform PC…The signals are recorded and saved on a PC, they are segmented using the described method and beamforming is done. Finally a two or three dimensional image is presented on the screen. Figure 5 shows a two dimensional image of heart. It can be seen that heart sound is generated from different locations in different time intervals.”), receive relative position data from the system (p739, C. Array arrangement, Fig. 1, different relative positions of microphones tested to identify the optimal array arrangement, with the spacing preset.
“The result of simulations for 3-by-3 array at the speed of 40 m/s is presented in figure 2. This array has the least total error and standard deviation of the error among other simulated arrays.” Figs. 2 and 3 show microphone relative position data included in the figures as red dots); and reconstruct from the acoustic signal data using the relative position data, a sound map that depicts a spatial distribution of acoustic sources in a subject wearing the sensor array (p740-741, IV. Experimental Tests, “The signals are recorded and saved on a PC, they are segmented using the described method and beamforming is done. Finally a two or three dimensional image is presented on the screen. Figure 5 shows a two dimensional image of heart. It can be seen that heart sound is generated from different locations in different time intervals.”), Kajbaf fails to teach wherein each acoustic sensor of the plurality of acoustic sensors comprise a dual sensor configured to measure both acoustic signals and cardiac electrical activity; and the sensor array comprising an elastic motion sensor coupling each of the acoustic sensors to form the sensor array. However Mahajan teaches an integrated cardio-respiratory system (Abstract) measuring cardiac data through an acoustic sensing array and respiratory data through an acoustic sensing array ([0014], [0036]) where relative position data is provided by a band coupled to the array of acoustic sensors ([0035] system may be mounted on a belt, position data provided by motion and orientation sensors, [0036] “thoracic motion array 72, motion and orientation sensors 74”); and the system further comprises multiple electrocardiogram sensors and the computing device is further configured to receive cardiac electrical signal data from each electrocardiogram sensor ([0035], [0046]-[0047] for data segmentation purposes, [0075] for patient health contextualization). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to provide the relative position data of Kajbaf with sensors mounted on a band as taught by Mahajan as a means to secure the microphone arrays with confidence to the subject. Also, it would have been obvious to include ECG sensing in the system of Kajbaf as taught by Mahajan as ECG provides context to aberrations in patient state and provides a consistent way to divide patient data into smaller segments (using, for example, R-waves as the segmentation points). Yet their combined efforts fail to teach the sensor array comprising an elastic motion sensor coupling each of the acoustic sensors to form the sensor array. However Hwang teaches a smart interaction device for measuring movement of a garment (Abstract, [0131], Fig. 7) where relative position data from chest movement is provided by a conductive elastic band ([0131]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the thoracic motion array of Mahajan may be accomplished by an integrated conductive elastic band as taught by Hwang as the application of known technique for measuring chest movement (Hwang: a conductive elastic band integrated into a wearable) to the known systems tracking movement of chest-mounted sensors (Kajbaf and Mahajan) ready for improvement to yield predictable results of providing accurate data of the microphone positions within the array of Kajbaf, for an optimally accurate sound map. Yet their combined efforts fail to teach wherein each acoustic sensor of the plurality of acoustic sensors comprise a dual sensor configured to measure both acoustic signals and cardiac electrical activity. However Brockway teaches a system for evaluating cardiac risk with acoustic sensing and ECG sensing (Abstract, [0063], Fig.
6) where the system may utilize multiple dual sensors, the dual sensors comprising an acoustic sensor and an ECG sensor paired together in a single patch (Fig. 6, [0063]-[0064] in Fig. 6, multiple patches are provided, supporting use in an array. Each patch device is described as a combined ECG and acoustical sensor 801 and note that the use of multiple sensors provides redundancy in the data). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the acoustic sensors of Kajbaf comprise ECG sensors as taught by Brockway as a simple substitution of one form of orienting ECG sensors and acoustic sensors on a subject (Mahajan: separately) for another (Brockway: together) to obtain predictable results of accurately measured patient ECG and sound data. Furthermore, it would be obvious to expand this structure to have all of the acoustic sensors of Kajbaf and Mahajan be dual acoustic and ECG sensors as taught in Brockway as this will provide redundancy in data, enabling both the capture of ECG if some ECG sensors fail to make proper contact and the cross-referencing of ECG to ensure the final utilized PQRST complexes are representative of the patient’s health. Regarding Claim 12, Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 11, and Mahajan further teaches wherein the system further comprises multiple electrocardiogram sensors and wherein the computing device is further configured to receive cardiac electrical signal data from each electrocardiogram sensor ([0035], [0046]-[0047] for time reference and data segmentation purposes, [0075] for patient health contextualization), Kajbaf teaches storing system data (p741, IV. Experimental Tests, recorded signals are stored on a PC), and Brockway teaches the plurality of acoustic sensors further comprise an electrocardiogram sensor (See Claim 11 Rejection).
Regarding Claim 15, Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 11, and Mahajan teaches wherein the elastic motion sensor is sized to be worn around a chest of a subject (Abstract, [0044]) and the computing device is further configured to process the relative position data to determine an expansion and contraction of the elastic motion sensor during respiration ([0070]-[0071] chest wall and diaphragmatic movement is also judged and relative changes associated with breathing profile are used to judge pulmonary condition), thereby generating respiration data that is stored by the computing device (Kajbaf: p741, IV. Experimental Tests, recorded signals are stored on a PC). Regarding Claim 17, Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 11, wherein each of the plurality of acoustic sensors comprises a microphone (See Claim 11 Rejection). Regarding Claim 18, Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 11, wherein: the computing device further comprises a display; and the computing device generates the display, the display comprising a visual depiction of the sound map (See Claim 11 Rejection), and Mahajan teaches a user may interact with the data through a graphical user interface of a computing device ([0039]-[0040] interaction through a browser program of a data transport device). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the display of Kajbaf be a graphical user interface as taught by Mahajan as a specific teaching on how the display should be configured, allowing consistency across applications of the invention. 
Regarding Claim 19, Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 11, and Mahajan teaches wherein the computing device comprises a mobile device that is in communication with the sensor array via a wireless connection (Fig. 4, [0016] data communication may be wireless, [0039]-[0042] data transport device 102 / mobile device in communication with the sensor array / cardiac respiratory sensor array 104). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the sound map generating system of Kajbaf, Mahajan, and Hwang in communication with a mobile device over a wireless connection as taught by Mahajan as a streamlined way to provide the final sound map to a healthcare provider for review. Regarding Claim 20, Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 11, further comprising a second sensor array configured to be worn around a second anatomical region-of- interest, comprising: a second plurality of acoustic sensors (See Claim 11 Rejection, Kajbaf a first array at the front of the chest, a second array at the back); and a second elastic motion sensor coupling each of the second plurality of acoustic sensors to form the second sensor array (See Claim 11 Rejection, a specific elastic motion sensor applied to the second array would be obvious for the same reasons given above). Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kajbaf in view of Mahajan and further in view of Hwang and further in view of Brockway and further in view of Cozic et al (“Development of a cardiac acoustic mapping system”) (“Cozic”). 
Regarding Claim 13, while Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 12, wherein: the computing device further comprises a display, the computing device generates a visual depiction of the sound map (See Claim 12 Rejection), and Mahajan teaches a user may interact with the data through a graphical user interface of a computing device ([0039]-[0040] interaction through a browser program of a data transport device). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the display of Kajbaf be a graphical user interface as taught by Mahajan as a specific teaching on how the display should be configured, allowing consistency across applications of the invention. Yet their combined efforts fail to teach the GUI comprising a visual depiction of the sound map and the cardiac electrical signal data. However Cozic teaches a cardiac acoustic mapping system (p431, Abstract) comprising a user interface (p432-433, 2.2 Cardiac acoustic mapping system) and displaying a visual depiction of the sound map and the cardiac electrical signal data (Fig. 7, p, 2.5 Cubic convolution interpolation, “Maps are displayed on the video monitor using a greyscale or a colour look-up table of 128 levels.” The figure shows the acoustic map along with ECG data, PCG data, and a maximum amplitude plot). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the display of Kajbaf display a visual depiction of the sound map and the cardiac electrical signal data as taught by Cozic as a way to track multiple forms of relevant health information simultaneously. Further, Mahajan notes that plots of ECG data and cardiac acoustic data may be considered in relation to one another as they both represent cardiac data ([0047]). Claim(s) 14 is/are rejected under 35 U.S.C.
103 as being unpatentable over Kajbaf in view of Mahajan and further in view of Hwang and further in view of Brockway and further in view of Oren et al (US 2019/0231267) (“Oren”). Regarding Claim 14, while Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 11, their combined efforts fail to teach wherein the elastic motion sensor comprises a graphene elastic motion sensor to which each of the plurality of acoustic sensors is coupled. However Oren teaches a method for creating wearable sensors (Abstract) comprising producing a graphene elastic motion sensor that may be applied to a variety of devices that may require tracking ([0085] graphene elastic tape acting as a motion sensor may be applied to a variety of materials such as a wearable glove or a plant surface, conforming to such materials for real-time tracking of motion behavior). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the elastic motion sensor of Hwang be a graphene elastic motion sensor as taught by Oren as a simple substitution of one form of configuring the flexible conductive material (Hwang: [0086] a conductive thread made with a carbon nanotube fiber) for another (Oren: a conductive tape made with graphene) to obtain predictable results of accurately assessing relative position changes in the acoustic sensors. Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kajbaf in view of Mahajan and further in view of Hwang and further in view of Brockway and further in view of Cozic.
Regarding Claim 16, while Kajbaf, Mahajan, Hwang, and Brockway teach the sound map generating system of claim 15, wherein: the computing device further comprises a display, the computing device generates a visual depiction of the sound map (See Claim 11 Rejection), and Mahajan teaches a user may interact with the data through a graphical user interface of a computing device ([0039]-[0040] interaction through a browser program of a data transport device). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the display of Kajbaf be a graphical user interface as taught by Mahajan as a specific teaching on how the display should be configured, allowing consistency across applications of the invention. Yet their combined efforts fail to teach the GUI comprising a visual depiction of the sound map and the respiration data. However Cozic teaches a cardiac acoustic mapping system (p431, Abstract) comprising a user interface (p432-433, 2.2 Cardiac acoustic mapping system) and displaying a visual depiction of the sound map and secondary signal data (Fig. 7, p, 2.5 Cubic convolution interpolation, “Maps are displayed on the video monitor using a greyscale or a colour look-up table of 128 levels.” The figure shows the acoustic map along with ECG data, PCG data, and a maximum amplitude plot). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the display of Kajbaf display a visual depiction of the sound map and secondary data as taught by Cozic as a way to track multiple forms of relevant health information simultaneously.

Response to Arguments

Applicant’s amendments and arguments filed 8/11/2025 with respect to the 35 USC 112(b) rejections have been fully considered and are persuasive. The rejection(s) is/are withdrawn.
Applicant’s amendments and arguments filed 8/11/2025 with respect to the 35 USC 102(a)(1) rejection under Sapsanis have been fully considered and are persuasive. The rejection(s) is/are withdrawn. Applicant’s amendments filed 8/11/2025 with respect to the 35 USC 102(a)(1) rejection of Claim 1 under Kajbaf have been fully considered and are persuasive. The rejection(s) is/are withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Kajbaf and Moghaddasi as argued below.

Applicant’s arguments filed 8/11/2025 with respect to the prior art rejection of Claim 1 under Kajbaf with the incorporated teachings of Claim 4 have been fully considered, but are not persuasive. Applicant argues on pages 7-8 that the references fail to disclose a process for generating a 4D sound map, let alone one that depicts a spatiotemporal distribution of acoustic sources, and that Moghaddasi, cited to disclose a process of generating a four-dimensional sound map, is merely validating its acoustic localization against 4D echocardiography data. Examiner agrees that Moghaddasi does not directly teach a four-dimensional sound map from acoustic sources and recognizes a lack of clarity in the rejection. Examiner will now explain further.

Examiner’s rejection of Claim 4 dated 2/10/2025 was intended to highlight two teachings: one, that the multi-dimensional sound map has visuals generated over time for the S1 and S2 segments; two, that this data/visualization is directly comparable to a four-dimensional sound map. Kajbaf shows three-dimensional sound maps, shows in Fig. 4 that S1 and S2 heart sounds occur within a one-second time scale, and teaches “Figure 4 illustrates a PCG signal and its segmented energy profile. These labeled signals are used separately in the beamformer to form time-space maps. More details can be read in [7].” (p740) and “Finally a two or three dimensional image is presented on the screen.
Figure 5 shows a two dimensional image of heart. It can be seen that heart sound is generated from different locations in different time intervals.” Thus multiple three-dimensional sound maps are being generated with respect to time, within a scale of under one second for the S1 and S2 heart sounds. However, this is not explicitly taught as “reconstructing” a four-dimensional sound map. Examiner’s addition of Moghaddasi supports the consideration that, in a system visualizing a three-dimensional sound map that changes over time, one is “reconstructing” a four-dimensional sound map that depicts a spatiotemporal distribution of the acoustic sources in the subject, by enabling a comparison of a three-dimensional technique to a four-dimensional technique over time, the addition of time supplying the final dimension. In the current claim, the term “reconstruction” does not necessitate retention of the four-dimensional data. Examiner notes that this specifically renders the statement “Moghaddasi is conspicuously silent with regard to any combination of sound maps obtained at different time for the purpose of generating a single 4D sound map that depicts a spatiotemporal distribution of acoustic sources” (Emphasis Added) on page 8 of the submitted Remarks moot, as this is not the claim language.

Applicant argues on page 8 that Moghaddasi’s acoustic sources were localized in temporal segments of a phonocardiogram, S1 and S2, not singular points in time during which different sounds are recorded. Examiner respectfully disagrees. The sound maps of Kajbaf and Moghaddasi represent heart sound segments that Kajbaf shows in Fig. 4 to represent events occurring on a time scale of a tenth of a second. Examiner considers this scale to represent “a specific moment in time at which an event or measurement occurs” (Collins Dictionary), the event being a heart sound.
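The “three-dimensional maps over time” reading above can be sketched numerically. The following is an illustrative sketch only; the array names, grid size, and random data are assumptions for illustration and are not taken from the cited references:

```python
import numpy as np

# Illustrative only: two per-segment 3-D acoustic intensity maps
# (e.g., one for the S1 heart sound, one for S2), each on an
# assumed 16 x 16 x 16 spatial grid, filled with placeholder data.
rng = np.random.default_rng(0)
map_s1 = rng.random((16, 16, 16))  # 3-D map at time segment t0 (S1)
map_s2 = rng.random((16, 16, 16))  # 3-D map at time segment t1 (S2)

# Stacking the time-ordered 3-D maps along a new leading axis yields a
# single 4-D array indexed (time, x, y, z): a spatiotemporal sound map,
# with time supplying the fourth dimension.
sound_map_4d = np.stack([map_s1, map_s2], axis=0)
print(sound_map_4d.shape)  # (2, 16, 16, 16)
```

The sketch shows only the dimensional relationship at issue: a sequence of three-dimensional maps indexed by time is, structurally, a four-dimensional representation.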
Applicant argues on page 8 that Moghaddasi fails to motivate a four-dimensional sound map because the acoustic localization data of Moghaddasi is not directly compared to the 4D echocardiography data. Examiner respectfully disagrees. While matching of the coordinate axes might be stated as impossible, a distance representing a three-dimensional separation is generated from identified peaks over two time segments. That is a comparison of spatiotemporal changes.

Applicant argues on page 8 that the combination of Kajbaf and Moghaddasi would not instruct a person skilled in the art to reconstruct a “four-dimensional sound map” as described in amended claim 1. Examiner respectfully disagrees in view of the arguments above. Thus the rejection stands.

Applicant’s amendments and arguments filed 8/11/2025 with respect to the 35 USC 103 rejection of Claim 8 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Kajbaf, Mahajan, Hwang, and Brockway.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAIRO H PORTILLO whose telephone number is (571)272-1073. The examiner can normally be reached M-F 9:00 am - 5:15 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jacqueline Cheng, can be reached at (571)272-5596. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAIRO H. PORTILLO/
Examiner, Art Unit 3791

/JACQUELINE CHENG/
Supervisory Patent Examiner, Art Unit 3791

Prosecution Timeline

Jul 29, 2021
Application Filed
Feb 03, 2025
Non-Final Rejection — §103
Aug 11, 2025
Response Filed
Nov 23, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593996
PULSE WAVE TRANSIT TIME MEASUREMENT DEVICE AND LIVING BODY STATE ESTIMATION DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12569148
BLOOD-VISCOSITY MEASUREMENT METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12557997
PROXIMITY SENSOR CIRCUITS AND RELATED SENSING METHODS
2y 5m to grant Granted Feb 24, 2026
Patent 12543998
Conductive Instrument
2y 5m to grant Granted Feb 10, 2026
Patent 12539043
LESION VISUALIZATION USING DUAL WAVELENGTH APPROACH
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
85%
With Interview (+31.0%)
4y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 335 resolved cases by this examiner. Grant probability derived from career allow rate.
