DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 09/23/2022, 11/27/2023, and 09/09/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hoe (KR 20190105805 A, September 18, 2019), hereinafter Hoe, in view of Koshizen et al. (US 20200312323 A1, October 1, 2020), hereinafter Koshizen.
Regarding claim 1, Hoe teaches acquiring a plurality of datasets each of which is formed by a combination of first performance data of a first performance by a performer (Hoe ¶0012: "A first storage unit that stores individual instrument performance sounds of the player terminals transmitted through the communication unit by type of instrument"), second performance data of a second performance (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound") performed together with the first performance (Hoe ¶0021: "The performer terminal is characterized in that it includes a third step of playing the accompaniment sound provided from the first generation unit or the second generation unit together with the individual instrument performance sound"), and a satisfaction label configured to indicate a degree of satisfaction of the performer (Hoe ¶0080: "The performer terminal (10) provides the result of playing with the provided accompaniment sound as a score to the artificial intelligence server (20) for evaluation thereof. The artificial intelligence server (20) provides points in return for receiving evaluation scores.").
Hoe does not explicitly disclose executing machine learning of a satisfaction estimation model, by using the plurality of datasets, the machine learning being configured by training the satisfaction estimation model such that, for each of the datasets, a result of estimating a degree of satisfaction of the performer from the first performance data and the second performance data matches the degree of satisfaction indicated by the satisfaction label.
However, Koshizen suggests executing machine learning of a satisfaction estimation model, by using the plurality of datasets (Koshizen ¶0045: "The model generation part 42 generates the prediction model through learning using the history of the conversation with the user stored in the conversation history DB 33 and stores the prediction model in the prediction model DB 32."), the machine learning being configured by training the satisfaction estimation model such that, for each of the datasets (Koshizen ¶0052: "The model generation part 42 learns the prediction model again by classifying the history of the conversation into some clusters using the beacon ID, the advancing direction distance, the keyword/field, and the satisfaction as parameters, for example."), a result of estimating a degree of satisfaction of the performer from the first performance data and the second performance data (Koshizen ¶0048: "In this manner, it is possible to estimate the satisfaction P related to the details S of the speech in advance using the prediction model θ.") matches the degree of satisfaction indicated by the satisfaction label (Koshizen ¶0069: "The conversation registration part 45 registers the situation of the conversation at the point when the conversation with the user is made, the details of the conversation, and the user's satisfaction with the conversation in an associated manner in the conversation history DB 33, for example.").
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Hoe by adding the satisfaction estimation of Koshizen in order to predict data in which the user is interested (Koshizen ¶0007).
Regarding claim 2, Hoe (in view of Koshizen) teaches the trained model establishment method comprising the features of claim 1 as discussed above.
Hoe further teaches that the second performance is a performance by a performance agent that performs together with the performer (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound"); and estimating the degree of satisfaction of the performer from a collaborative performance feature amount (Hoe ¶0080: "The performer terminal (10) provides the result of playing with the provided accompaniment sound as a score") calculated based on the first performance data and the second performance data (Hoe ¶0080: "The performer terminal (10) provides the result of playing with the provided accompaniment sound as a score to the artificial intelligence server (20) for evaluation thereof. The artificial intelligence server (20) provides points in return for receiving evaluation scores.").
Koshizen further suggests that the machine learning is configured by training the satisfaction estimation model such that, for each of the datasets (Koshizen ¶0052: "The model generation part 42 learns the prediction model again by classifying the history of the conversation into some clusters using the beacon ID, the advancing direction distance, the keyword/field, and the satisfaction as parameters, for example."), the result of the estimating the degree of satisfaction of the performer (Koshizen ¶0048: "In this manner, it is possible to estimate the satisfaction P related to the details S of the speech in advance using the prediction model θ.") matches the degree of satisfaction indicated by the satisfaction label (Koshizen ¶0069: "The conversation registration part 45 registers the situation of the conversation at the point when the conversation with the user is made, the details of the conversation, and the user's satisfaction with the conversation in an associated manner in the conversation history DB 33, for example.").
Regarding claim 3, Hoe (in view of Koshizen) teaches the trained model establishment method comprising the features of claim 2 as discussed above.
Hoe further teaches that the second performance is automatically performed by the performance agent (Hoe ¶0020: "the second generation unit generates… the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound") based on first performer data pertaining to the first performance of the performer (Hoe ¶0020: "the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound.").
Regarding claim 4, Hoe (in view of Koshizen) teaches the trained model establishment method comprising the features of claim 3 as discussed above.
Hoe further teaches that the first performer data include at least one or more of a performance sound, the first performance data, or an image for the first performance by the performer (Hoe ¶0020: "the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound.").
Regarding claim 5, Hoe (in view of Koshizen) teaches the trained model establishment method comprising the features of claim 1 as discussed above.
Koshizen further suggests that the satisfaction label is configured to indicate the degree of satisfaction estimated from at least one reaction of the performer by using an emotion estimation model (Koshizen ¶0057: "A method of calculating the satisfaction of the user is not particularly limited, and the satisfaction calculation part 44 prepares happy, angry, sad, and fun image patterns of the user, for example, in advance and compares the image patterns with a facial expression of the user. Also, the satisfaction calculation part 44 prepares happy, angry, sad, and fun voice patterns in advance and compares the voice patterns with user's voice. Note that the satisfaction calculation part 44 may prepare both the happy, angry, sad, and fun image patterns and the voice patterns of the user and calculate satisfaction from the facial expression and the voice of the user.").
Regarding claim 6, Hoe (in view of Koshizen) teaches the trained model establishment method comprising the features of claim 5 as discussed above.
Koshizen further suggests that the at least one reaction of the performer includes at least one or more of a voice, an image, or biological information for the performer during a collaborative performance with the second performance (Koshizen ¶0057: "A method of calculating the satisfaction of the user is not particularly limited, and the satisfaction calculation part 44 prepares happy, angry, sad, and fun image patterns of the user, for example, in advance and compares the image patterns with a facial expression of the user. Also, the satisfaction calculation part 44 prepares happy, angry, sad, and fun voice patterns in advance and compares the voice patterns with user's voice. Note that the satisfaction calculation part 44 may prepare both the happy, angry, sad, and fun image patterns and the voice patterns of the user and calculate satisfaction from the facial expression and the voice of the user.").
Regarding claim 7, Hoe (in view of Koshizen) teaches the trained model establishment method comprising the features of claim 1 as discussed above.
Hoe further teaches that the second performance is a performance by a performance agent that performs together with the performer (Hoe ¶0020: "a second step in which the second generation unit… provides the performance sound to the player terminal as an accompaniment sound"), and the second performance is automatically performed by the performance agent based on first performer data pertaining to the first performance of the performer (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound").
Regarding claim 8, Hoe teaches acquiring first performance data of a first performance by a performer (Hoe ¶0012: "A first storage unit that stores individual instrument performance sounds of the player terminals transmitted through the communication unit by type of instrument") and second performance data of a second performance (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound") performed together with the first performance (Hoe ¶0021: "The performer terminal is characterized in that it includes a third step of playing the accompaniment sound provided from the first generation unit or the second generation unit together with the individual instrument performance sound"); estimating a degree of satisfaction of the performer from the first performance data and the second performance data that have been acquired (Hoe ¶0080: "The performer terminal (10) provides the result of playing with the provided accompaniment sound as a score to the artificial intelligence server (20) for evaluation thereof. The artificial intelligence server (20) provides points in return for receiving evaluation scores.").
Hoe does not explicitly disclose using a trained satisfaction estimation model generated by machine learning; and outputting information pertaining to a result of the estimating the degree of satisfaction.
However, Koshizen suggests using a trained satisfaction estimation model generated by machine learning (Koshizen ¶0045: "The model generation part 42 generates the prediction model through learning using the history of the conversation with the user stored in the conversation history DB 33 and stores the prediction model in the prediction model DB 32."); and outputting information pertaining to a result (Koshizen ¶0052: "The model generation part 42 learns the prediction model again by classifying the history of the conversation into some clusters using the beacon ID, the advancing direction distance, the keyword/field, and the satisfaction as parameters, for example.") of the estimating the degree of satisfaction (Koshizen ¶0048: "In this manner, it is possible to estimate the satisfaction P related to the details S of the speech in advance using the prediction model θ.").
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Hoe by adding the satisfaction estimation of Koshizen in order to predict data in which the user is interested (Koshizen ¶0007).
Regarding claim 9, Hoe (in view of Koshizen) teaches an estimation method comprising the features of claim 8 as discussed above.
Hoe further teaches that the second performance is a performance by a performance agent configured to perform together with the performer (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound"), and the estimating includes estimating the degree of satisfaction from a collaborative performance feature amount (Hoe ¶0080: "The performer terminal (10) provides the result of playing with the provided accompaniment sound as a score") calculated based on the first performance data and the second performance data (Hoe ¶0080: "The performer terminal (10) provides the result of playing with the provided accompaniment sound as a score to the artificial intelligence server (20) for evaluation thereof. The artificial intelligence server (20) provides points in return for receiving evaluation scores.").
Koshizen further suggests using the trained satisfaction estimation model (Koshizen ¶0052: "The model generation part 42 learns the prediction model again by classifying the history of the conversation into some clusters using the beacon ID, the advancing direction distance, the keyword/field, and the satisfaction as parameters, for example."; Koshizen ¶0048: "In this manner, it is possible to estimate the satisfaction P related to the details S of the speech in advance using the prediction model θ.").
Regarding claim 10, Hoe (in view of Koshizen) teaches an estimation method comprising the features of claim 9 as discussed above.
Hoe further teaches that the second performance is automatically performed by the performance agent (Hoe ¶0020: "the second generation unit generates… the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound") based on first performer data pertaining to the first performance of the performer (Hoe ¶0020: "the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound.").
Regarding claim 11, Hoe (in view of Koshizen) teaches an estimation method comprising the features of claim 10 as discussed above.
Hoe further teaches that the first performer data include at least one or more of a performance sound, the first performance data, or an image for the first performance by the performer (Hoe ¶0020: "the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound.").
Regarding claim 12, Hoe (in view of Koshizen) teaches an estimation method comprising the features of claim 8 as discussed above.
Hoe further teaches that the first performance data are performance data of an actual performance of the performer, or performance data that include features extracted from the actual performance of the performer (Hoe ¶0012: "A first storage unit that stores individual instrument performance sounds of the player terminals transmitted through the communication unit by type of instrument").
Regarding claim 13, Hoe (in view of Koshizen) teaches an estimation method comprising the features of claim 8 as discussed above.
Hoe further teaches that the second performance is a performance by a performance agent that performs together with the performer (Hoe ¶0020: "a second step in which the second generation unit… provides the performance sound to the player terminal as an accompaniment sound"), and the second performance is automatically performed by the performance agent based on first performer data pertaining to the first performance of the performer (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound").
Regarding claim 14, Hoe (in view of Koshizen) teaches an estimation method comprising the features of claim 8 as discussed above.
Hoe further teaches supplying first performer data pertaining to the first performance to each of a plurality of performance agents that include the performance agent, and generating, at the plurality of performance agents, a plurality of pieces of second performance data for a plurality of second performances that includes the second performance (Hoe ¶0083: "In addition, the first generation unit (231) can separate the sounds from each of the different instrumental sounds for the same song by measure, key name, and chord, and then combine them to create a new accompaniment sound. For example, this is a method of creating accompaniment sounds by selecting the first measure from the first drum sound of the same song and combining the second measure from the second drum sound of the same song. Of course, you can create new accompaniment sounds in the same way with instruments other than drums."); estimating the degree of satisfaction of the performer with respect to each of the plurality of performance agents (Hoe ¶0084: "In the above-described manner, the artificial intelligence server (20) can independently analyze the sound of each musical instrument and create an optimized accompaniment sound."), according to the estimation method (Hoe ¶0080: "The artificial intelligence server (20) provides points in return for receiving evaluation scores."); and selecting one performance agent to be recommended from among the plurality of performance agents based on the degree of satisfaction estimated for each of the plurality of performance agents (Hoe ¶0080: "For accompaniment notes with high evaluation scores, when a request for accompaniment notes with the same or similar individual instrument sounds comes in, it is given priority as the accompaniment note to be provided.").
Koshizen further suggests using the trained satisfaction estimation model (Koshizen ¶0052: "The model generation part 42 learns the prediction model again by classifying the history of the conversation into some clusters using the beacon ID, the advancing direction distance, the keyword/field, and the satisfaction as parameters, for example."; Koshizen ¶0048: "In this manner, it is possible to estimate the satisfaction P related to the details S of the speech in advance using the prediction model θ.").
Regarding claim 15, Hoe (in view of Koshizen) teaches an estimation method comprising the features of claim 8 as discussed above.
Hoe further teaches supplying first performer data pertaining to the first performance to the performance agent (Hoe ¶0012: "A first storage unit that stores individual instrument performance sounds of the player terminals transmitted through the communication unit by type of instrument"), and generating the second performance data of the second performance at the performance agent (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound"); estimating the degree of satisfaction of the performer with respect to the performance agent (Hoe ¶0084: "In the above-described manner, the artificial intelligence server (20) can independently analyze the sound of each musical instrument and create an optimized accompaniment sound."), according to the estimation method (Hoe ¶0080: "The artificial intelligence server (20) provides points in return for receiving evaluation scores."); and modifying an internal parameter value of the performance agent that is used to generate the second performance data, the generating, the estimating, and the modifying being iteratively executed to adjust the internal parameter value so as to raise the degree of satisfaction (Hoe ¶¶ 0083-0084: "In addition, the first generation unit (231) can separate the sounds from each of the different instrumental sounds for the same song by measure, key name, and chord, and then combine them to create a new accompaniment sound. For example, this is a method of creating accompaniment sounds by selecting the first measure from the first drum sound of the same song and combining the second measure from the second drum sound of the same song. Of course, you can create new accompaniment sounds in the same way with instruments other than drums. In the above-described manner, the artificial intelligence server (20) can independently analyze the sound of each musical instrument and create an optimized accompaniment sound.").
Koshizen further suggests using the trained satisfaction estimation model (Koshizen ¶0052: "The model generation part 42 learns the prediction model again by classifying the history of the conversation into some clusters using the beacon ID, the advancing direction distance, the keyword/field, and the satisfaction as parameters, for example."; Koshizen ¶0048: "In this manner, it is possible to estimate the satisfaction P related to the details S of the speech in advance using the prediction model θ.").
Regarding claim 16, Hoe teaches at least one processor resource; and at least one memory resource that contains at least one program that is executed by the at least one processor resource (Hoe ¶0026: "The performer terminal (10) is composed of multiple units (several to several million units). The types of performer terminals (10) include smartphones, computers, smartphones, tablets, and musical instruments themselves"), the at least one processor resource being configured to, by executing the at least one program, acquire a plurality of datasets each of which is formed by a combination of first performance data of a first performance by a performer (Hoe ¶0012: "A first storage unit that stores individual instrument performance sounds of the player terminals transmitted through the communication unit by type of instrument"), second performance data of a second performance (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound") performed together with the first performance (Hoe ¶0021: "The performer terminal is characterized in that it includes a third step of playing the accompaniment sound provided from the first generation unit or the second generation unit together with the individual instrument performance sound"), and a satisfaction label configured to indicate a degree of satisfaction of the performer (Hoe ¶0080: "The performer terminal (10) provides the result of playing with the provided accompaniment sound as a score to the artificial intelligence server (20) for evaluation thereof. The artificial intelligence server (20) provides points in return for receiving evaluation scores.").
Hoe does not explicitly disclose: execute machine learning of a satisfaction estimation model, by using the plurality of datasets, the machine learning being configured by training the satisfaction estimation model such that, for each of the datasets, a result of estimating a degree of satisfaction of the performer from the first performance data and the second performance data matches the degree of satisfaction indicated by the satisfaction label.
However, Koshizen suggests: execute machine learning of a satisfaction estimation model, by using the plurality of datasets (Koshizen ¶0045: "The model generation part 42 generates the prediction model through learning using the history of the conversation with the user stored in the conversation history DB 33 and stores the prediction model in the prediction model DB 32."), the machine learning being configured by training the satisfaction estimation model such that, for each of the datasets (Koshizen ¶0052: "The model generation part 42 learns the prediction model again by classifying the history of the conversation into some clusters using the beacon ID, the advancing direction distance, the keyword/field, and the satisfaction as parameters, for example."), a result of estimating a degree of satisfaction of the performer from the first performance data and the second performance data (Koshizen ¶0048: "In this manner, it is possible to estimate the satisfaction P related to the details S of the speech in advance using the prediction model θ.") matches the degree of satisfaction indicated by the satisfaction label (Koshizen ¶0069: "The conversation registration part 45 registers the situation of the conversation at the point when the conversation with the user is made, the details of the conversation, and the user's satisfaction with the conversation in an associated manner in the conversation history DB 33, for example.").
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Hoe by adding the satisfaction estimation of Koshizen in order to predict data in which the user is interested (Koshizen ¶0007).
Regarding claim 17, Hoe (in view of Koshizen) teaches the trained model establishment system comprising the features of claim 16 as discussed above.
Hoe further teaches that the second performance is a performance by a performance agent that performs together with the performer (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound"), and the second performance is automatically performed by the performance agent (Hoe ¶0020: "the second generation unit generates… the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound") based on first performer data pertaining to the first performance of the performer (Hoe ¶0020: "the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound.").
Regarding claim 18, Hoe teaches at least one processor resource; and at least one memory resource that contains at least one program that is executed by the at least one processor resource (Hoe ¶0026: "The performer terminal (10) is composed of multiple units (several to several million units). The types of performer terminals (10) include smartphones, computers, smartphones, tablets, and musical instruments themselves"), the at least one processor resource being configured to, by executing the at least one program, acquire first performance data of a first performance by a performer (Hoe ¶0012: "A first storage unit that stores individual instrument performance sounds of the player terminals transmitted through the communication unit by type of instrument") and second performance data of a second performance (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound") performed together with the first performance (Hoe ¶0021: "The performer terminal is characterized in that it includes a third step of playing the accompaniment sound provided from the first generation unit or the second generation unit together with the individual instrument performance sound"); estimate a degree of satisfaction of the performer from the first performance data and the second performance data that have been acquired (Hoe ¶0080: "The performer terminal (10) provides the result of playing with the provided accompaniment sound as a score to the artificial intelligence server (20) for evaluation thereof. The artificial intelligence server (20) provides points in return for receiving evaluation scores.").
Hoe does not explicitly disclose using a trained satisfaction estimation model generated by machine learning, and outputting information pertaining to a result of the estimating the degree of satisfaction.
However, Koshizen suggests using a trained satisfaction estimation model generated by machine learning (Koshizen ¶0045: "The model generation part 42 generates the prediction model through learning using the history of the conversation with the user stored in the conversation history DB 33 and stores the prediction model in the prediction model DB 32."), and outputting information pertaining to a result (Koshizen ¶0052: "The model generation part 42 learns the prediction model again by classifying the history of the conversation into some clusters using the beacon ID, the advancing direction distance, the keyword/field, and the satisfaction as parameters, for example.") of the estimating the degree of satisfaction (Koshizen ¶0048: "In this manner, it is possible to estimate the satisfaction P related to the details S of the speech in advance using the prediction model θ.").
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Hoe by adding the satisfaction estimation of Koshizen in order to predict data in which the user is interested (Koshizen ¶0007).
Regarding claim 19, Hoe (in view of Koshizen) teaches an estimation system comprising the features of claim 18 as discussed above.
Hoe further teaches that the second performance is a performance by a performance agent that performs together with the performer (Hoe ¶0020: "a second step in which the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound"), and the second performance is automatically performed by the performance agent (Hoe ¶0020: "the second generation unit generates… the accompaniment sound and provides the performance sound to the player terminal as an accompaniment sound") based on first performer data pertaining to the first performance of the performer (Hoe ¶0020: "the second generation unit generates a performance sound of another instrument that matches the individual instrument performance sound of the player terminal that requested the accompaniment sound.").
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHILIP SCOLES whose telephone number is (703) 756-1831. The examiner can normally be reached Monday-Friday, 8:30-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dedei Hammond, can be reached on 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHILIP G SCOLES/
Examiner, Art Unit 2837
/JEFFREY DONELS/Primary Examiner, Art Unit 2837