DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 31 is rejected under 35 U.S.C. 101 as not falling within one of the four statutory categories of invention and as being directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because claim 31 recites “[a] program for causing…” and is not limited to the statutory categories of invention; it is therefore rejected as being software per se. In other words, “a program” is neither a process, a machine, a manufacture, nor a composition of matter. A non-limiting example of acceptable language would be: “A non-transitory computer-readable medium storing a computer program executable by at least one processor…”. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-8, 10-11, 13-15, 17-20, and 30-31 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Crockett et al. (US Pub. 2018/0020310).
Regarding claim 1, Crockett discloses an information processing apparatus comprising a control unit that determines an output parameter forming metadata of an object of content on a basis of the content or one or a plurality of pieces of attribute information of the object (para 0045 – “An adaptive audio pre-processor may include source separation and content type detection functionality that automatically generates appropriate metadata through analysis of input audio. For example, positional metadata may be derived from a multi-channel recording through an analysis of the relative levels of correlated input between channel pairs. Detection of content type, such as ‘speech’ or ‘music’”; also see 0054 – “Such attributes may include content type (dialog, music, effect, Foley, background/ambience, etc.) as well as audio object information such as spatial attributes (3D position, object size, velocity, etc.) and useful rendering information (snap to speaker location, channel weights, gain, bass management information, etc.)”).
Regarding claim 2, Crockett discloses wherein the content is 3D audio content (para 0033, 0054).
Regarding claim 3, Crockett discloses wherein the output parameter is at least any of three-dimensional position information (para 0045 – “positional metadata may be derived from a multi-channel recording through an analysis of the relative levels of correlated input between channel pairs”; para 0033, 0042, 0054) and a gain of the object (para 0054 – gain; 0090, 0108).
Regarding claim 4, Crockett discloses wherein the control unit calculates the attribute information on a basis of audio data of the object (para 0045 – “An adaptive audio pre-processor may include source separation and content type detection functionality that automatically generates appropriate metadata through analysis of input audio. For example, positional metadata may be derived from a multi-channel recording through an analysis of the relative levels of correlated input between channel pairs. Detection of content type, such as ‘speech’ or ‘music’, may be achieved, for example, by feature extraction and classification.”).
Regarding claim 5, Crockett discloses wherein the attribute information is a content category indicating a type of the content, an object category indicating a type of the object, or an object feature amount indicating a feature of the object (see para 0054, 0071).
Regarding claim 6, Crockett discloses wherein the attribute information is indicated by a character or a numerical value that is understandable by a user (para 0054 – “Such attributes may include content type (dialog, music, effect, Foley, background/ambience, etc.) as well as audio object information such as spatial attributes (3D position, object size, velocity, etc.) and useful rendering information (snap to speaker location, channel weights, gain, bass management information, etc.)” – characters such as “dialog” or “music,” and numerical values such as 3D position or gain).
Regarding claim 7, Crockett discloses wherein the content category is at least any of a genre, a tempo, a tonality, a feeling, a recording type (para 0052 – “broadcast (TV and set-top box), music, gaming, live sound, user generated content (“UGC”)”), and presence or absence of a video (para 0053 – “content formats including cinema, TV, live broadcast (and sound), UGC, games and music” – broadcast content includes the presence of a video, while music can be in the absence of a video).
Regarding claim 8, Crockett discloses wherein the object category is at least any of an instrument type, a reverb type, a tone type, a priority (para 0081 – “Sources may be pre-assigned a priority during manufacturing based on their classification, for example, a telecommunications source may have a higher priority than an entertainment source”), and a role.
Regarding claim 10, Crockett discloses wherein the control unit determines the output parameter for each of the objects on a basis of a mathematical function having the object feature amount as an input (para 0004 – such as a panning law).
Regarding claim 11, Crockett discloses wherein the control unit determines the mathematical function on a basis of at least any one of the content category or the object category (para 0039, 0046).
Regarding claim 13, Crockett discloses wherein the control unit displays a user interface for adjusting or selecting an internal parameter to be used for determination of the output parameter based on the attribute information, and adjusts the internal parameter or selects the internal parameter in accordance with an operation on the user interface by a user (para 0045, 0055).
Regarding claim 14, Crockett discloses wherein the internal parameter is a parameter of a mathematical function for determining the output parameter with an object feature amount indicating a feature of the object as the attribute information as an input, or a parameter for adjusting the output parameter of the object on the basis of a determination result of the output parameter on the basis of the mathematical function (para 0064).
Regarding claim 15, Crockett discloses wherein the control unit optimizes an internal parameter to be used for determination of the output parameter based on the attribute information on a basis of audio data of each of the objects of a plurality of pieces of the content designated by a user and the output parameter of each of the objects of the plurality of pieces of the content determined by the user (para 0054 - 0055).
Regarding claim 17, Crockett discloses wherein the control unit causes the attribute information to be displayed on a display screen of a tool configured to produce or edit the content (para 0045, 0054, 0086).
Regarding claim 18, Crockett discloses wherein the control unit causes the display screen to display a determination result of the output parameter (para 0054, 0090).
Regarding claim 19, Crockett discloses wherein the control unit causes the display screen to display an object feature amount indicating a feature of the object as the attribute information (para 0054).
Regarding claim 20, Crockett discloses wherein the display screen is provided with a user interface for selecting the object feature amount to be displayed (para 0045).
Regarding claim 30, see rejection of claim 1.
Regarding claim 31, see rejection of claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Crockett et al. (US Pub. 2018/0020310) in view of LeBoeuf et al. (US Pub. 2011/0075851).
Regarding claim 9, Crockett discloses the information processing apparatus of claim 5.
Crockett does not disclose wherein the object feature amount is at least any of a rise, duration, a sound pitch, a note density, a reverb intensity, a sound pressure, a time occupancy rate, a tempo, and a Lead index.
LeBoeuf discloses wherein the object feature amount is at least any of a rise, duration, a sound pitch, a note density, a reverb intensity, a sound pressure, a time occupancy rate, a tempo, and a Lead index (para 0009 – tempo; para 0021 – pitch tracking; para 0020 – amplitude detection, which can be sound pressure).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of LeBoeuf in order to use a multi-stage analysis system that delivers high-level metadata features, sound object identifiers, stream labels, or other symbolic metadata to the application (LeBoeuf, abstract).
Claims 12, 16, and 21-29 are rejected under 35 U.S.C. 103 as being unpatentable over Crockett et al. (US Pub. 2018/0020310) in view of Terrell et al. (US Patent 9,304,988).
Regarding claim 12, Crockett discloses the information processing apparatus of claim 10.
Crockett does not disclose wherein the control unit adjusts the output parameter of the object on a basis of determination results of the output parameter based on the mathematical function obtained for a plurality of the objects.
Terrell discloses wherein the control unit adjusts the output parameter of the object on a basis of determination results of the output parameter based on the mathematical function obtained for a plurality of the objects (col. 8, line 60-col. 9, line 7).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Regarding claim 16, Crockett discloses the information processing apparatus of claim 5.
Crockett does not disclose wherein a range of the output parameter is defined in advance for each of the object categories, and the control unit determines the output parameter of the object in the object category in such a manner that the output parameter has a value within the range.
Terrell discloses wherein a range of the output parameter is defined in advance for each of the object categories, and the control unit determines the output parameter of the object in the object category in such a manner that the output parameter has a value within the range (col. 8, lines 50-67).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Regarding claim 21, Crockett discloses the information processing apparatus of claim 5.
Crockett does not disclose wherein the display screen is provided with a user interface for adjusting an internal parameter to be used for determination of the output parameter based on the attribute information.
Terrell discloses wherein the display screen is provided with a user interface for adjusting an internal parameter to be used for determination of the output parameter based on the attribute information (col. 18, lines 12-22, col. 9, lines 33-49).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Regarding claim 22, Terrell discloses wherein the control unit determines the output parameter again on a basis of the adjusted internal parameter in accordance with an operation on the user interface for adjusting the internal parameter, and updates display of a determination result of the output parameter on the display screen (col. 9, lines 13-32).
Regarding claim 23, Terrell discloses wherein the display screen is provided with a user interface for storing the adjusted internal parameter (col. 9, lines 33-49).
Regarding claim 24, Crockett discloses the information processing apparatus of claim 17.
Crockett does not disclose wherein the display screen is provided with a user interface for selecting an internal parameter to be used for determination of the output parameter based on the attribute information.
Terrell discloses wherein the display screen is provided with a user interface for selecting an internal parameter to be used for determination of the output parameter based on the attribute information (col. 1, lines 59-60; col. 8, lines 3-15).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Regarding claim 25, Crockett discloses the information processing apparatus of claim 17.
Crockett does not disclose wherein the display screen is provided with a user interface for adding a new internal parameter to be used for determination of the output parameter based on the attribute information.
Terrell discloses wherein the display screen is provided with a user interface for adding a new internal parameter to be used for determination of the output parameter based on the attribute information (col. 9, lines 8-32; col. 8, lines 3-15).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Regarding claim 26, Crockett discloses the information processing apparatus of claim 17.
Crockett does not disclose wherein the display screen is provided with a user interface for selecting an algorithm for determination of the output parameter based on the attribute information.
Terrell discloses wherein the display screen is provided with a user interface for selecting an algorithm for determination of the output parameter based on the attribute information (col. 9, lines 33-49).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Regarding claim 27, Crockett discloses the information processing apparatus of claim 17.
Crockett does not disclose wherein the display screen is provided with a user interface for adding a new algorithm for determination of the output parameter based on the attribute information.
Terrell discloses wherein the display screen is provided with a user interface for adding a new algorithm for determination of the output parameter based on the attribute information (col. 9, lines 33-49 – “It can be appreciated that the use of five processors is purely illustrative and the principles described herein may be implemented using any suitable audio effect or audio processor.”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Regarding claim 28, Crockett discloses the information processing apparatus of claim 17.
Crockett does not disclose wherein the display screen is provided with a user interface for designating whether to replace a specific output parameter among a plurality of the output parameters with an output parameter newly determined on a basis of the attribute information.
Terrell discloses wherein the display screen is provided with a user interface for designating whether to replace a specific output parameter among a plurality of the output parameters with an output parameter newly determined on a basis of the attribute information (col. 8, lines 34-49).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Regarding claim 29, Crockett discloses the information processing apparatus of claim 17.
Crockett does not disclose wherein the display screen is provided with a user interface for presenting a recommended algorithm or a recommended internal parameter as the algorithm for determination of the output parameter based on the attribute information or the internal parameter used for the determination of the output parameter based on the attribute information.
Terrell discloses wherein the display screen is provided with a user interface for presenting a recommended algorithm or a recommended internal parameter as the algorithm for determination of the output parameter based on the attribute information or the internal parameter used for the determination of the output parameter based on the attribute information (col. 2, lines 9-26).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Crockett with the teachings of Terrell in order to enable intelligent, content-aware processing in audio.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAFIZ E HOQUE whose telephone number is (571)270-1811. The examiner can normally be reached on M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar can be reached on (571)272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAFIZ E HOQUE/Primary Examiner, Art Unit 2652