DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the application filed on 10/5/2023. Claims 1-20 are pending and have been considered below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 is directed to an abstract idea (mental processes) without significantly more, for the following reasons:
Step 1: the claim recites a process/method, which falls within a statutory category.
Step 2A, Prong 1: the limitations, “determine whether a driving event sound is real or artificial using (i) the set of driving event sound characteristics corresponding to each driving event sound, and (ii) the indication of whether each driving event sound is from the real or artificial source” and “wherein each set of driving event sound characteristics is classified according to whether the set corresponds to one of the plurality of driving event sounds from the real source or from the artificial source” are Mental Processes (observation, evaluation, judgment, and/or opinion).
Step 2A, Prong 2: the additional elements individually or as a whole do not integrate the judicial exception into a practical application.
The additional element, “one or more processors,” merely applies the abstract idea in a computer environment (i.e., “apply it,” MPEP 2106.05(f)). It invokes a generic computer (one or more processors) merely as a tool to perform the judicial exception or an existing process, using the computer or other machinery in its ordinary capacity.
The additional elements, “obtaining, a set of driving event sound characteristics for each of a plurality of driving event sounds” and “for each driving event sound in the plurality of driving event sounds, obtaining, an indication of whether the driving event sound is from a real or artificial source,” are merely data gathering, i.e., insignificant extra-solution (pre-solution) activity (MPEP 2106.05(g)).
The additional element, “training a machine learning model,” generally links the use of the judicial exception to a particular technological environment or field of use (machine learning) (MPEP 2106.05(h)).
When considered as a whole, the claimed invention fails to recite any improvement in any technology or technical field (MPEP 2106.05(a)) or any meaningful limitations (MPEP 2106.05(e)). The limitations amount to no more than mere automation of a mental process for determining a driving event sound.
Step 2B: the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea when considered both individually and as a whole.
Under Step 2B, limitations found to be insignificant extra-solution activity under Step 2A, Prong 2, must be re-evaluated to determine whether they are well-understood, routine, and conventional activities.
Specifically, the limitations, “obtaining, a set of driving event sound characteristics for each of a plurality of driving event sounds” and “obtaining, an indication of whether the driving event sound is from a real or artificial source,” merely amount to receiving/obtaining data, which is a judicially recognized well-understood, routine, and conventional activity (MPEP 2106.05(d)(II)).
When considered as a whole, the claimed invention still fails to amount to significantly more than applying a judicial exception in a field of use (determining a driving event sound) using a generic computer.
The other independent claims (claim 9 [a server] and claim 17 [computer-readable memory]) recite similar claim language and are thus rejected for the same reasons as claim 1.
Dependent claims 2-8, 10-16, and 18-20 [which merely recite obtaining/collecting data, comparing, and determining, without significantly more] fail to recite additional elements that could integrate the judicial exception into a practical application or amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4-7, 12-15, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 4 uses the term “the audio”; claim 4 depends from claim 3, which depends from claim 2. Claims 2 and 3 recite three [3] different instances/recitations of audio [see below]. It is not clear whether the applicant intends to refer to:
(a) audio playback data from an application executing on a client device; or
(b) audio playback data from a device communicatively coupled to the client device; or
(c) ambient audio.
Therefore, this recitation of “the audio” in claim 4 is unclear and thus indefinite [claims 5-7 are rejected based on their dependency from claim 4].
Claims 12 and 20 are rejected for reasons similar to those given for claim 4 [claims 13-15 are rejected based on their dependency from claim 12].
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Choi et al. (US 2020/0042285).
Regarding claim 1:
Choi discloses a system and method for training a machine learning model to determine whether a driving event sound is real or artificial (abstract; see figures), the method comprising:
obtaining, by one or more processors (para 49,102,132), a set of driving event sound characteristics for each of a plurality of driving event sounds (figures 1-3; para 195, partially reproduced herein with emphasis {collect in-vehicle acoustic signals generated in the travelling vehicle. … may collect in-vehicle acoustic signals such as a sound outputted from the speaker 3, a sound generated inside the vehicle, a sound generated outside the vehicle, a sound including speech of a user, and a sound including speech of a passenger other than the user…}[collecting/obtaining acoustic/sound signals of driving/traveling events]; para 10-25);
for each driving event sound in the plurality of driving event sounds, obtaining, by the one or more processors, an indication of whether the driving event sound is from a real or artificial source (para 220 {…the acoustic signal in the vehicle may include, for example, a sound outputted from the speaker 3, a sound generated inside the vehicle, a sound generated outside the vehicle, a spoken utterance of the user, and a sound including speech of another passenger}); and
training, by the one or more processors, a machine learning model to determine whether a driving event sound is real or artificial using (i) the set of driving event sound characteristics corresponding to each driving event sound, and (ii) the indication of whether each driving event sound is from the real or artificial source (para 221 {… analyze the acoustic signal in the vehicle collected by the collector 141… to thereby determine whether the acoustic signal is normal noise or abnormal noise. In addition, the determiner 142 may analyze the features of the acoustic signal and use a first deep neural network model that has been trained to determine whether the acoustic signal is normal noise or abnormal noise generated in the vehicle to determine whether the noise in the vehicle is abnormal or abnormal [sic]. In this case, the first deep neural network model may be trained through training data…}; see para 223,241; para 225 {… acoustic events including normal noise and abnormal…}; Note: [Applicant is reminded that the Examiner is entitled to give the claim language its broadest reasonable interpretation. The Examiner therefore considers “normal and abnormal noise” to read on the claimed “real and artificial” sound within the broad meaning of those terms. The Examiner is not limited to Applicant’s definition, which is not specifically set forth in the claims. In re Tanaka, 193 USPQ 139 (CCPA 1977).]),
wherein each set of driving event sound characteristics is classified according to whether the set corresponds to one of the plurality of driving event sounds from the real source or from the artificial source (para 224; figures 6-14; para 241 {..acoustic event[s]}; para 259 {..analyzing the characteristics of the acoustic signal and using the first deep neural network model that has been trained to determine whether the acoustic signal is normal noise or abnormal noise generated in the vehicle, it may be determined whether the noise in the vehicle is abnormal or abnormal [sic]…}; see para 232, 272; and see throughout the disclosure).
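For illustration of the training step as mapped above, the following is a minimal sketch under the Examiner's reading; it is not Choi's implementation, and every value and identifier below (the feature rows, the labels, the choice of a logistic-regression model) is hypothetical:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical driving event sound characteristics: one feature row per
    # driving event sound (e.g., spectral energy, duration, peak frequency).
    sound_characteristics = np.array([
        [0.82, 1.3, 440.0],  # siren from a real source
        [0.79, 1.2, 452.0],  # siren played back through a speaker (artificial)
        [0.31, 0.4, 95.0],   # engine noise, real source
        [0.33, 0.5, 90.0],   # engine noise, artificial source
    ])

    # Indication of whether each driving event sound is real (1) or artificial (0).
    labels = np.array([1, 0, 1, 0])

    # Train a machine learning model using (i) the characteristics and
    # (ii) the real/artificial indications.
    model = LogisticRegression()
    model.fit(sound_characteristics, labels)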
Regarding claims 9 and 17:
Choi discloses all of the subject matter as described above in claim 1, and further discloses a server device comprising one or more processors (figure 1; para 84,95; para 103-105,125,132) and a non-transitory computer-readable memory storing instructions (para 52,285; figures; and throughout) for performing the functions described above in claim 1; claims 9 and 17 are therefore rejected under a similar rationale based on the teachings of the prior art.
Regarding claims 2, 10, and 18:
Choi discloses all of the subject matter as described above, and further discloses (a) obtaining, by the one or more processors, audio playback data from an application executing on a client device; (b) obtaining, by the one or more processors, audio playback data from a device communicatively coupled to the client device; or (c) obtaining, by the one or more processors, ambient audio [NOTE: the claim language recites three elements in the alternative with an “or” [see also claim 3]; the examiner therefore interprets this optional recitation as requiring only one of (a), (b), or (c), so prior art reading on any one of the elements anticipates the claim {i.e., claim 1+2}. The examiner here relies on option (c) [ambient audio] to reject claim 2 and further dependent claims 3-7] (see Choi, paragraph 31 [environmental noise generated in the vehicle]; and throughout).
Regarding claims 3, 11, and 19:
Choi discloses all of the subject matter as described above and applying, by the one or more processors, the audio playback data from the application or the device or the ambient audio to the machine learning model to determine whether a driving event sound in the audio is artificial ([see Note above]; Choi para 96,99 [machine learning]; para 224,241,259; and throughout).
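Continuing the sketch above (and reusing the hypothetical model from the claim 1 sketch), the application step mapped for claims 3, 11, and 19 reduces to extracting characteristics from incoming audio and passing them to the trained model; extract_characteristics is a hypothetical stand-in for whatever feature extraction the disclosure contemplates:

    import numpy as np

    def extract_characteristics(audio_samples, sample_rate=16000):
        # Hypothetical features matching those the model was trained on:
        # mean spectral magnitude, duration in seconds, dominant frequency bin.
        spectrum = np.abs(np.fft.rfft(audio_samples))
        return [spectrum.mean(), len(audio_samples) / sample_rate, float(spectrum.argmax())]

    # Ambient audio (or playback data from an application or a coupled device),
    # faked here as one second of random samples for the sketch.
    ambient_audio = np.random.randn(16000)

    # Apply the audio to the trained model: 1 = real source, 0 = artificial.
    is_real = model.predict([extract_characteristics(ambient_audio)])[0]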
Regarding claims 4, 12, and 20:
Choi discloses all of the subject matter as described above and determining, by the one or more processors, that the audio includes the driving event sound (see Note above; Choi, para 27-34 [acoustic event, as audio for driving]; para 225; and throughout).
Regarding claims 5 and 13:
Choi discloses all of the subject matter as described above and comparing, by the one or more processors, the audio playback data from the application or the device or audio fingerprints included in the ambient audio to one or more audio fingerprints of predetermined driving event sounds (see Note above; Choi, para 124,239; para 254 [comparing the feature information of the spoken utterance of the user with the feature information of a plurality of voice actors' speech stored…]; para 255; and throughout).
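One conventional way to realize the fingerprint comparison mapped for claims 5 and 13 (an assumption for illustration, not a teaching of Choi) is to reduce audio to a coarse, normalized spectral fingerprint and score it against stored fingerprints of predetermined driving event sounds by cosine similarity:

    import numpy as np

    def fingerprint(audio_samples, bins=32):
        # Coarse spectral fingerprint: binned, unit-normalized FFT magnitudes.
        spectrum = np.abs(np.fft.rfft(audio_samples))
        fp = np.array([chunk.mean() for chunk in np.array_split(spectrum, bins)])
        return fp / (np.linalg.norm(fp) + 1e-9)

    def matches_known_sound(audio_samples, known_fingerprints, threshold=0.9):
        # Compare against fingerprints of predetermined driving event sounds.
        fp = fingerprint(audio_samples)
        return any(float(fp @ known) >= threshold for known in known_fingerprints)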
Regarding claims 6 and 14:
Choi discloses all of the subject matter as described above, wherein the machine learning model is a first machine learning model, and further comprising: training a second machine learning model using (i) a set of audio streams, and (ii) an indication of the driving event sound corresponding to at least some of the audio streams in the set of audio streams (Note: see the Note above in claim 2; claim 6 does not recite the branch including “ambient audio” and is thus anticipated as described above).
Regarding claims 7 and 15:
Choi discloses all of the subject matter as described above and applying, by the one or more processors, the audio playback data from the application or the device or the ambient audio to the second machine learning model to determine whether the audio includes the driving event sound (para 24 [first model]; para 25 [second deep neural network model]; para 40,235,263; and throughout).
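The two-model arrangement mapped for claims 6/14 and 7/15 may be sketched as a simple pipeline: a hypothetical second model first decides whether the audio includes a driving event sound, and only then does the first model (from the claim 1 sketch above) classify the sound as real or artificial; the training data below are random stand-ins, not values from Choi or the claims:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Second model: trained on (i) a set of audio streams and (ii) an indication
    # of whether a driving event sound is present in each stream.
    stream_features = np.random.randn(20, 3)    # stand-in audio-stream features
    contains_event = np.array([0, 1] * 10)      # 1 = driving event sound present
    second_model = LogisticRegression().fit(stream_features, contains_event)

    def classify(audio_features):
        # Apply the second model first, then the first model, per the claimed arrangement.
        if second_model.predict([audio_features])[0] == 0:
            return "no driving event sound"
        return "real" if model.predict([audio_features])[0] == 1 else "artificial"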
Regarding claims 8 and 16:
Choi discloses all of the subject matter as described above and providing, by the one or more processors, the trained machine learning model to a client device for the client device to determine whether a driving event sound is artificial (para 64 [acoustic control apparatus as a personalized device]; para 95 [AI server 20 may include a web server or an application server that enables remote control of the operation of the acoustic control apparatus 100 using the acoustic control system operating application or the acoustic control system operating web browser installed in the user terminal 30d]; and throughout).
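In conventional practice (again an assumption for illustration, not something Choi is relied on for), the claim 8/16 step of providing the trained model to a client device amounts to serializing the model server-side and restoring it client-side:

    import pickle

    # Server side: serialize the trained model for delivery to the client device.
    payload = pickle.dumps(model)

    # Client side: restore the model and use it to determine whether a
    # driving event sound is artificial.
    client_model = pickle.loads(payload)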
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Seifert (US 20180211528) discloses a system and method for vehicle acoustic-based emergency vehicle detection.
Lutter (US 6778073) discloses method and apparatus for managing audio devices.
Shin (US 20200051566) discloses artificial intelligence device for providing notification to user using audio data and method for the same.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HIRDEPAL SINGH whose telephone number is (571)270-1688. The examiner can normally be reached 8:00-5:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hannah S Wang can be reached on (571) 272-9018. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HIRDEPAL SINGH/Primary Examiner, Art Unit 2631