DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 10-13, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kako (WO 2022/153359 A1; citations are made to an English machine translation) in view of Ohgi et al. (US 2025/0254489 A1, with a foreign priority date of 10/25/2022), hereinafter “Ohgi.”
As to claim 1, Kako discloses a method comprising:
at a device including a display, an environmental sensor, a non-transitory memory and one or more processors coupled with the display, the environmental sensor and the non-transitory memory (p. 2 ¶05-07, Fig. 1. “The microphone position presentation system includes an acquisition unit 110, a microphone position presentation device 120, and a presentation unit 150.” “The microphone position presenting device 120 is configured by loading a special program into a known or dedicated computer having, for example, a central processing unit (CPU: Central Processing Unit), a main storage device (RAM: Random Access Memory), or the like.” “The acquisition unit 110 includes a spatial sensing unit 111.”):
obtaining an acoustic model of an environment, wherein the acoustic model indicates a set of one or more acoustical properties of the environment (p. 3 ¶07 and p. 6 ¶04, Fig. 5. “Sound wave propagation is simulated from the shape of space (room model), and the FDTD method (finite-difference time-domain method) is used to predict the incoming sound at a virtual listening position, and the spatial transmission function is calculated. do. In addition, so far, we have explained to obtain acoustic characteristics by using the space and the shape of the object existing in the space in the simulated space, but further, the reflection considering the object constituting the space and the material of the object. Factors may be considered. As the reflection coefficient, for example, the object may be estimated from a camera image or the like, and the reflection coefficient corresponding to the estimated object may be used, or the reflection coefficient may be directly given from the outside. The reflectance coefficient may be obtained by other methods.”);
determining a placement location for a microphone within the environment based on the set of one or more acoustical properties of the environment (p. 4 ¶03, Fig. 5. “The microphone position calculation unit 123 receives the point cloud data R indicating the shape of the space, calculates where in the shape of the space the microphone is to be installed so as to satisfy the desired acoustic conditions (S123), and calculates the microphone. Outputs the installation position N of [the microphone].”); and
displaying, on the display, a representation of the environment and a visual indicator that is overlaid onto the representation of the environment in order to indicate the placement location for the microphone (p. 7 ¶04-05, Fig. 5. “The output unit 125 receives the shape of the space after complementation and the microphone position N, generates information indicating the microphone position N in the shape of the space after complementation, for example, image data, and outputs the information to the presentation unit 150 (S125).” “The presentation unit 150 receives information indicating the microphone position N in the shape of the space after complementation and presents it to the user (S150). The presentation unit 150 comprises display means such as a display.”).
Kako does not expressly disclose the placement based on a pickup pattern of the microphone.
Ohgi discloses the placement based on a pickup pattern of the microphone (Ohgi, ¶0046, ¶0056 and claim 5. “Receiving information related to directivity of the speakers or the microphones, wherein the placement distribution is calculated further based on the information related to the directivity.”).
Kako and Ohgi are analogous art because they are from the same field of endeavor with respect to determining microphone placement.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Kako's placement determination to account for the pickup pattern (directivity) of the microphone, as taught by Ohgi. The motivation would have been to improve placement accuracy by taking into account the direction in which the microphone is designed to pick up sound.
As to claim 2, Kako in view of Ohgi discloses wherein obtaining the acoustic model comprises: obtaining a visual mesh that indicates dimensions of the environment and placement of objects in the environment (p. 3 ¶07. “The space model coupling unit 115 receives a plurality of point cloud data, combines the plurality of point cloud data, restores the shape of the space (S115), and outputs the point cloud data R indicating the shape of the space. For example, while rotating LiDAR, multiple point cloud data with different elevation angles are obtained, and a three-dimensional scene is reconstructed by combining multiple point cloud data using an iterative closest point (ICP) algorithm. As the spatial model coupling technique, various conventional techniques can be used.”); and
generating the acoustic model based on the visual mesh of the environment (p. 3 ¶07 and p. 6 ¶04, Fig. 5. “Sound wave propagation is simulated from the shape of space (room model), and the FDTD method (finite-difference time-domain method) is used to predict the incoming sound at a virtual listening position, and the spatial transmission function is calculated.”).
As to claim 3, Kako in view of Ohgi discloses wherein the set of one or more acoustical properties indicates how sound propagates within the environment (p. 6 ¶04, Fig. 5. “Sound wave propagation is simulated from the shape of space (room model), and the FDTD method (finite-difference time-domain method) is used to predict the incoming sound at a virtual listening position, and the spatial transmission function is calculated. do. In addition, so far, we have explained to obtain acoustic characteristics by using the space and the shape of the object existing in the space in the simulated space, but further, the reflection considering the object constituting the space and the material of the object.”); and
wherein determining the placement location for the microphone comprises selecting the placement location based on how the sound propagates within the environment (p. 4 ¶03 and p. 6 ¶05, Fig. 5. “The microphone position calculation unit 123 receives the point cloud data R indicating the shape of the space, calculates where in the shape of the space the microphone is to be installed so as to satisfy the desired acoustic conditions (S123), and calculates the microphone. Outputs the installation position N of [the microphone].” “The sound pressure probability distribution estimation unit 123E uses the virtual sound source position S and the calculated spatial transmission function as inputs, and uses the spatial transmission function to distribute the sound pressure probability distribution at each position (assumed listening position) with respect to the virtual sound source position. (S123E), and using the sound pressure probability distribution, the position of the microphone satisfying the desired sound condition is obtained and output.”).
As to claim 4, Kako in view of Ohgi discloses wherein the set of one or more acoustical properties indicates respective locations of objects that are capable of generating sounds (Kako, p. 2 ¶03 and p. 7 ¶02. “The acoustic characteristics are influenced by the positional relationship between an object that reflects or absorbs sound, a sound source, and a microphone existing in the space.”); and
wherein determining the placement location for the microphone comprises selecting the placement location based on the respective locations of the objects that are capable of generating the sounds in order to capture the sounds that the objects are capable of generating (Kako, p. 2 ¶03 and p. 7 ¶02. “The acoustic characteristics are influenced by the positional relationship between an object that reflects or absorbs sound, a sound source, and a microphone existing in the space. Point cloud data is used to construct an object that reflects or absorbs sound in virtual space. In other words, a virtual space may be constructed using point cloud data, acoustic characteristics may be simulated in the virtual space, and the microphone installation position may be determined.” “The sound pressure probability distribution estimation unit 123E obtains the installation position of the microphone so as to satisfy the conditions relating to the desired sound.”).
As to claim 5, Kako in view of Ohgi discloses wherein the set of one or more acoustical properties indicates a location of a musical instrument within the environment (Kako, p. 3 ¶10 and p. 6 ¶05. “The microphone position presenting device 120 receives the virtual sound source position S.” “The sound pressure probability distribution estimation unit 123E obtains the installation position of the microphone so as to satisfy the conditions relating to the desired sound.” It would have been obvious that the sound source could be a musical instrument.); and
wherein determining the placement location for the microphone comprises selecting the placement location based on the location of the musical instrument within the environment in order to capture sounds being generated by the musical instrument (Kako, p. 3 ¶10 and p. 6 ¶05. “The microphone position presenting device 120 receives the virtual sound source position S.” “The sound pressure probability distribution estimation unit 123E obtains the installation position of the microphone so as to satisfy the conditions relating to the desired sound.” It would have been obvious that the sound source could be a musical instrument.).
As to claim 6, Kako in view of Ohgi discloses wherein the set of one or more acoustical properties indicates a location of a second microphone with a second pickup pattern (Kako, p. 2 ¶03 and p. 7 ¶03. “The installation positions of a plurality of microphones may be determined in order to operate as a microphone array instead of a single microphone.” “The sound pressure probability distribution estimation unit 123E may obtain a predetermined number of microphone installation positions, or may receive the number of microphones to be installed as an input and obtain the microphone installation positions according to the number of microphones.”); and
wherein determining the placement location for the microphone comprises selecting the placement location based on the location of the second microphone in order to reduce an overlap between the pickup pattern of the microphone and the second pickup pattern of the second microphone (Kako, p. 2 ¶03 and p. 7 ¶03. “The installation positions of a plurality of microphones may be determined in order to operate as a microphone array instead of a single microphone.” “The sound pressure probability distribution estimation unit 123E may obtain a predetermined number of microphone installation positions, or may receive the number of microphones to be installed as an input and obtain the microphone installation positions according to the number of microphones.” Ohgi, ¶0046 and ¶0056. “The processor 12 according to the third modified example receives information related to the directivity of [microphones] and calculates the [microphone] placement distribution based on the information related to the directivity of [microphones].”).
The motivation is the same as claim 1 above.
As to claim 7, Kako in view of Ohgi discloses wherein the set of one or more acoustical properties are a function of material properties of the environment (Kako, p. 2 ¶03. “The acoustic characteristics are influenced by the positional relationship between an object that reflects or absorbs sound, a sound source, and a microphone existing in the space. Point cloud data is used to construct an object that reflects or absorbs sound in virtual space. In other words, a virtual space may be constructed using point cloud data, acoustic characteristics may be simulated in the virtual space, and the microphone installation position may be determined.”); and
wherein determining the placement location for the microphone comprises determining the placement location for the microphone based on the material properties of the environment (Kako, p. 2 ¶03. “The acoustic characteristics are influenced by the positional relationship between an object that reflects or absorbs sound, a sound source, and a microphone existing in the space. Point cloud data is used to construct an object that reflects or absorbs sound in virtual space. In other words, a virtual space may be constructed using point cloud data, acoustic characteristics may be simulated in the virtual space, and the microphone installation position may be determined.”).
As to claim 8, Kako in view of Ohgi discloses wherein the placement location allows the microphone to detect sounds from a threshold portion of the environment (Ohgi, ¶0030, Fig. 5. “However, the application program can also receive a different sound pressure for each position in the acoustic space interface 101… The application program divides an acoustic space set in the acoustic space interface 101 into a plurality (nine in the example of FIG. 5) of sound pressure setting regions 102A. The user inputs a target sound pressure for each of the plurality of sound pressure setting regions 102A.”).
The motivation would have been to indicate which regions of the acoustic space the user wants to target.
As to claim 10, Kako in view of Ohgi discloses wherein the placement location results in a detected sound quality that satisfies a threshold sound quality (Kako, p. 6 ¶05. “For example, when "high SNR with respect to sound source S₁" is set as the condition related to the desired sound, FIG. 10A shows the simulated SNR distribution, and FIG. 10B shows the sum of the log of the probability distribution, and the power distribution is shown. It is a visualization of the joint distribution as a probability distribution. It can be seen that the values of the parts surrounded by the broken lines are large. When the SNR of the sound source S₁ is increased, the sound source S₂ becomes noise.”).
As to claim 11, Kako in view of Ohgi discloses wherein the pickup pattern indicates a directivity pattern of the microphone (Ohgi, ¶0046, ¶0056 and claim 5. “Receiving information related to directivity of the speakers or the microphones, wherein the placement distribution is calculated further based on the information related to the directivity.”).
The motivation is the same as claim 1 above.
As to claim 12, Kako in view of Ohgi discloses determining an identifier that identifies the microphone (Ohgi, ¶0046, ¶0056 and claim 5); and
retrieving the pickup pattern associated with the identifier from a datastore that stores pickup patterns for a plurality of microphones (Ohgi, ¶0046, ¶0056 and claim 5. “The processor 12 according to the third modified example receives information related to the directivity of [microphones].” Receiving data from a datastore which stores the microphone data is well known, routine and conventional and would have been obvious to one of ordinary skill in the art.).
The motivation is the same as claim 1 above.
As to claim 13, Kako in view of Ohgi discloses wherein determining the placement location comprises: automatically selecting the placement location from a plurality of candidate locations for placing the microphone based on respective scores associated with the plurality of candidate locations (Kako, p. 3 ¶10. “The microphone position presenting device 120 receives the virtual sound source position S and the point cloud data R indicating the shape of the space, and places the microphone in the space shape estimated from the point cloud data R so as to satisfy the desired acoustic conditions.”).
As to claim 17, Kako in view of Ohgi discloses wherein the representation of the environment includes a pass-through representation of the environment and the visual indicator is overlaid onto the pass-through representation of the environment (Kako, p. 5 ¶11 – p. 6 ¶01. “For example, the sound source information input unit 123C receives the shape of the complemented space, which is the output of the defect complementing unit 123B (indicated by a broken line in FIG. 5), and displays the shape of the complemented space via a display device such as a display. It may be presented to the user, and the user may specify where to place the sound source in the complemented space by using a mouse or the like. Alternatively, the point cloud data R acquired from LiDAR may be automatically input by using the object recognition as in Reference 4.”).
Claims 18 and 20 are directed towards substantially the same subject matter as claim 1 and are therefore rejected using the same rationale and motivation as claim 1 above.
As to claim 19, which depends from claim 18, it is rejected using the same rationale and motivation as claim 2 above.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kako in view of Ohgi, as applied to claim 1 above, and further in view of Bryan (US 2021/0125629 A1).
As to claim 9, Kako in view of Ohgi does not expressly disclose wherein the placement location results in a direct-to-reverberant ratio (DRR) that is less than a threshold DRR.
Kako in view of Ohgi as modified by Bryan discloses wherein the placement location results in a direct-to-reverberant ratio (DRR) that is less than a threshold DRR (Bryan, ¶0096. “the acoustic improvement system can utilize the pop noise detection model (e.g., a plosive estimator model) and/or the DRR model to generate the microphone distance/placement metric… Indeed, the acoustic improvement system can weight and/or scale the output of the DRR model and the pop noise detection model to determine the microphone distance/placement metric.”).
Kako, Ohgi and Bryan are analogous art because they are from the same field of endeavor with respect to acoustical device improvement.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to account for DRR, as taught by Bryan. The motivation would have been to improve the acoustic quality of the recording (Bryan, ¶0040).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kako in view of Ohgi, as applied to claim 1 above, and further in view of Fujiwara (US 2020/0084366 A1).
As to claim 14, Kako in view of Ohgi discloses determining a plurality of candidate locations for placing the microphone (Ohgi, ¶0056. “calculate a microphone placement distribution corresponding to the target sound pressure distribution that was received in the acoustic space that was received based on a prescribed model, and display the calculated placement distribution on the display unit 15.”);
displaying, on the display, respective visual indicators of the plurality of candidate locations for placing the microphone (Ohgi, ¶0056. “calculate a microphone placement distribution corresponding to the target sound pressure distribution that was received in the acoustic space that was received based on a prescribed model, and display the calculated placement distribution on the display unit 15.”).
The motivation would have been to provide multiple microphone locations for the acoustic space.
Kako in view of Ohgi does not expressly disclose detecting a user selection of one of the respective visual indicators of the plurality of candidate locations that are displayed on the display.
Fujiwara discloses detecting a user selection of one of the respective visual indicators of the plurality of candidate locations that are displayed on the display (Fujiwara, ¶0111 and ¶0113. “the system controlling unit 3002 determines recommended candidate positions for the selected number of microphones.” “The user can select the arrangement position of the microphone from among the intersection points of the grid 8000 based on the display of FIGS. 15A to 15C.”).
Kako, Ohgi and Fujiwara are analogous art because they are from the same field of endeavor with respect to optimizing audio recording.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have the user select the microphone position from among the candidates, as taught by Fujiwara. The motivation would have been to provide the user with different options for where the microphone can be located.
Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Kako in view of Ohgi, as applied to claim 1 above, and further in view of Klinke et al. (US 2021/0306782 A1), hereinafter “Klinke.”
As to claim 15, Kako in view of Ohgi does not expressly disclose generating a simulation that simulates sound capture by the microphone at the placement location and providing an option to play a sound recording that indicates a result of the simulation.
Kako in view of Ohgi as modified by Klinke discloses generating a simulation that simulates sound capture by the microphone at the placement location and providing an option to play a sound recording that indicates a result of the simulation (Klinke, ¶0114. “The resulting output audio signal or audio stream is the simulated output 1770 that can be used as the output audio signal as if the audio device was placed in an audio room specific to the output signal and when audio emitted from external speakers was captured on the microphones of the audio device.”).
Kako, Ohgi and Klinke are analogous art because they are from the same field of endeavor with respect to optimizing audio device performance.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to output a simulated audio signal, as taught by Klinke. The motivation would have been to allow the user to hear the sound as if it were captured from a microphone placed in a specific location.
As to claim 16, Kako in view of Ohgi as modified by Klinke discloses playing back a sound recording that simulates sound capture by the microphone from the placement location (Klinke, ¶0114. “The resulting output audio signal or audio stream is the simulated output 1770 that can be used as the output audio signal as if the audio device was placed in an audio room specific to the output signal and when audio emitted from external speakers was captured on the microphones of the audio device.”).
The motivation is the same as claim 15 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES K MOONEY whose telephone number is (571) 272-2412. The examiner can normally be reached Monday-Friday, 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES K MOONEY/Primary Examiner, Art Unit 2695