Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This communication is responsive to the application filed 2/22/2024.
2. Claims 11 and 16-34 are pending in this application. Claims 11, 25 and 34 are independent claims. This action is made Non-Final.
Claim Rejections - 35 USC § 102
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
4. Claims 11 and 16-34 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shi et al. (“Shi,” US 2021/0373670).
Regarding claim 11, Shi discloses a computer-implemented method comprising:
obtaining audio components of media content (see fig. 9, S302; e.g., obtain audio file contained in multimedia file);
selecting, by a classifier and from among the audio components of the media content (see paragraphs [0035], [0041], [0051] and [0077]-[0078]; e.g., neural network selects abrupt change in audio based on pitch), a most prominent audio component that is representative of a scene in the media content (see paragraphs [0045], [0060] and [0073]; e.g., abrupt change in scene audio based on pitch change), and
generating an output haptics signal based at least on the most prominent audio component that was selected by the classifier from among the audio components of the media content (see paragraphs [0077]-[0078]; e.g., “obtaining a feature parameter of each of the audio segments; inputting the feature parameter of each of the audio segments to a trained deep neural network model; and determining the target audio segment from the multiple audio segments according to an output result of the deep neural network model…detecting an audio power of each of the audio segments; and controlling the vibration element in the computer device to perform a second type of vibration operation according to the audio power of each of the audio segments.”).
Regarding claim 16, Shi further discloses:
performing sound source separation on the audio components to generate sub-components for each audio component, wherein the classifier selects the most prominent audio component based at least on the audio components and the generated sub-components (see fig. 9, S302, S304 and S900; also see paragraph [0041]; e.g., divide target audio into multiple audio segments and detect highest pitch).
Regarding claim 17, Shi further discloses:
assigning each of the one or more audio components a respective prominence value, and wherein the output haptics signal is generated based at least on a prominence value of the selected most prominent audio component (see paragraphs [0042]-[0046]; e.g., pitch change range).
Regarding claim 18, Shi further discloses: transmitting the output haptics signal for output by one or more haptics actuators concurrent to the media content (see the citation for claim 11 above; e.g., “vibration operation according to the audio power of each of the audio segments”).
Regarding claim 19, Shi discloses wherein the audio components are contained in one or more separate audio channels (see paragraphs [0036] and [0048]; e.g., “the content of the multimedia file includes but is not limited to a music type, a commentary type, and so on.”).
Regarding claim 20, Shi discloses wherein the classifier selects the most prominent audio component based at least on audio channel information indicating a category of sound in the respective audio channel (see paragraph [0036]; e.g., “the content of the multimedia file includes but is not limited to a music type, a commentary type, and so on.”).
Regarding claim 21, Shi discloses wherein the classifier selects the most prominent audio component based at least on input media data representative of the media content (see paragraph [0036]; e.g., “the content of the multimedia file includes but is not limited to a music type, a commentary type, and so on.”).
Regarding claim 22, Shi discloses wherein:
the media content is video game content, and the input media data comprises data relating to the state of the video game content (see paragraph [0048]; e.g., game content).
Regarding claim 23, Shi discloses wherein the output haptics signal is a multichannel haptics signal (see paragraphs [0042]-[0046]; e.g., vibration associated with music, video or a game).
Regarding claim 24, Shi further discloses: selecting, by the classifier and from among the audio components of the media content, a next-most prominent audio component that is representative of the scene in the media content, wherein the output haptics signal is generated based at least on the most prominent audio component and the next-most prominent audio component that were selected by the classifier from among the audio components of the media content (see paragraphs [0042]-[0046]; e.g., pitch change range).
Claims 25-33 are similar in scope to claims 11 and 16-24, respectively, and are therefore rejected under similar rationale.
Claim 34 is similar in scope to claim 11 and is therefore rejected under similar rationale.
Conclusion
5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yoo (US 2016/0175718).
6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RASHAWN N TILLERY whose telephone number is (571)272-6480. The examiner can normally be reached M-F 9:00a - 5:30p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L Bashore can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RASHAWN N TILLERY/Primary Examiner, Art Unit 2174