DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
Figures 1A and 1B should be designated by a legend such as --Prior Art-- because only that which is old is illustrated. See MPEP § 608.02(g). Corrected drawings in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. The replacement sheet(s) should be labeled “Replacement Sheet” in the page header (as per 37 CFR 1.84(c)) so as not to obstruct any portion of the drawing figures. If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
In Figure 2, reference numbers 1-12 are shown with corresponding elements, but the specification does not mention these reference numbers in connection with Figure 2.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Much of the indefiniteness throughout the claims stems from the issues underlying the 35 U.S.C. 101 rejection outlined below, along with a lack of connection between the steps and elements within the claimed embodiments.
Claim 1 recites playing a video and displaying a virtual musical instrument. Please clarify whether the video and virtual instrument are played/displayed by the same means or on the same device, and also whether these steps occur at the same time.
Further in claim 1, please clarify whether the matching step is automatic or driven by a user action.
Still further, please clarify whether the same display means for playing and displaying also outputs the audio, and whether the display means is a graphical user interface or some other interactive means allowing for interactions with the graphic element.
Claim 3 recites the limitation “the respective image” in line 4. There is insufficient antecedent basis for this limitation in the claim, given there is no previous mention of a respective image or even a singular image, only a plurality of images.
Further, please clarify the correlation between the plurality of images and the graphic element, given both are in the video and displayed with the virtual musical instrument, as well as the correlation of the region of the image to the video.
Still further, please clarify if the same identifier relates to both the virtual musical instrument and graphic element.
Claim 4, please clarify how the components are simultaneously displayed and matched while a positional relationship between said components is also displayed.
Further, please clarify what is intended by the positional relationship of the components (i.e. their positions relative to each other within the video, the display, the region, the image, the virtual instrument, etc.).
Claim 7, the recitation that “when no musical instrument graphic element is recognized” is indefinite, given there is no previous mention of a recognition step.
Claim 8 recites the limitation "the interactions of the virtual musical instrument and a player" in lines 5-6. There is insufficient antecedent basis for this limitation in the claim, given there is no previous mention of interaction of the virtual musical instrument and a player.
Further, please clarify whether the interactions occur between the virtual instrument and the player, or whether the interactions are separate for the virtual instrument and the player.
Similarly, claim 8 recites the limitation "the interactions of the plurality of components" in lines 9-10. There is insufficient antecedent basis for this limitation in the claim, given there is no previous mention of interactions of the components.
Claim 9, please clarify whether the first and second components are related to the one or plurality of components recited in preceding claim 8.
Further, please clarify whether these components are displayed elements or metadata attached to the virtual instrument, given that movement trajectories of the interactions of such components are determined and used for outputting the audio.
Still further, claim 9 recites the limitation "the real-time relative movement trajectories" in lines 6-7. There is insufficient antecedent basis for this limitation in the claim, given there is no previous mention of real-time relative movement trajectories.
Lastly, in claim 9, please clarify for which element (i.e. one of the components?) the simulated pressure, real-time volume, real-time pitch and real-time tempo are determined.
Claim 10, please clarify what the Applicant intends by “in a different optical ranging layer from a first camera and a second camera” (lines 2-3), “determining a real-time binocular ranging difference according to the first real-time imaging position and the second real-time imaging position” (lines 11-12), and between which two elements this difference is determined, and lastly, “determining a binocular ranging result of the first component based on a negative correlation with the real-time binocular ranging difference and a positive correlation with the focal length and an inter-camera distance, and the inter-camera distance being between the first camera and the second camera” (lines 13-16).
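For context, the quoted correlations track the conventional stereo (binocular) ranging relation, in which distance = (focal length × inter-camera distance) / disparity. A minimal illustrative sketch of that conventional relation follows (the Examiner's shorthand under standard stereo-geometry assumptions, not Applicant's disclosed method):

```python
# Illustrative only: standard binocular/stereo ranging. The distance estimate
# is negatively correlated with the imaging-position difference (disparity)
# and positively correlated with the focal length and the inter-camera
# (baseline) distance, matching the correlations quoted from claim 10.

def binocular_range(x_first: float, x_second: float,
                    focal_length_px: float, baseline_m: float) -> float:
    """Estimate the distance to a component imaged by two cameras.

    x_first, x_second: real-time imaging positions (pixel x-coordinates)
    of the same component in the first and second camera images.
    """
    disparity = abs(x_first - x_second)  # real-time binocular ranging difference
    if disparity == 0:
        return float("inf")  # zero disparity: effectively infinite distance
    # Larger focal length or baseline -> larger result; larger disparity -> smaller
    return focal_length_px * baseline_m / disparity

print(binocular_range(310.0, 290.0, focal_length_px=800.0, baseline_m=0.06))  # 2.4 m
```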
Claim 12, please clarify what is intended by “a posting operation”.
Claim 14, please clarify from where the background audio is obtained.
Claims 15-17, please clarify what is intended by “volume weight(s)”, given an audio volume does not have a weight like the measurable volume of an object.
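For context only, one plausible reading, which the Examiner does not attribute to Applicant, treats a “volume weight” as a per-instrument gain coefficient applied when mixing the played audio; a hypothetical sketch under that assumption:

```python
# Hypothetical illustration of "volume weight" read as a per-instrument gain
# coefficient in an audio mix; the names and values are assumptions, not
# Applicant's disclosed method.

def mix_with_volume_weights(tracks: dict[str, list[float]],
                            volume_weights: dict[str, float]) -> list[float]:
    """Sum per-instrument sample streams, scaling each by its volume weight."""
    length = max((len(t) for t in tracks.values()), default=0)
    mixed = [0.0] * length
    for name, samples in tracks.items():
        gain = volume_weights.get(name, 1.0)
        for i, sample in enumerate(samples):
            mixed[i] += gain * sample
    return mixed

print(mix_with_volume_weights(
    {"piano": [0.2, 0.4], "drums": [0.5, 0.1]},
    {"piano": 1.0, "drums": 0.5},
))  # -> [0.45, 0.45]
```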
Claim 15, please clarify whether the played audio and the plurality of virtual musical instruments are related to the previously outputted played audio and the virtual musical instrument, and if so, how the played audio is output based on both interactions with the graphic elements and the indefinite volume weights.
Claim 17, please clarify whether the candidate musical style is related to the candidate instruments previously recited.
Claim 17, last lines, please clarify how a volume weight is determined according to a music style.
Claims 19 and 20 are rejected for similar reasons as those outlined above in claim 1.
The remaining claims, not specifically addressed, depend from, and therefore include, the rejected limitations outlined above.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-18 recite playing, displaying and outputting data. These limitations constitute a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind, and the claims include no recitation of computer/processing components. That is, nothing in the claim elements precludes the steps from practically being performed in the mind.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, even if the claims were to recite an additional element such as using a processor to perform the above steps, the processor would be recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of playing, displaying, and outputting data) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element would not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the addition of an element of using a processor to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
Claims 19 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite playing, displaying and outputting data.
These limitations, as drafted, are instructions and an apparatus that, under their broadest reasonable interpretation, cover performance or functionality of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting “by a processor” or “processing circuitry”, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “by a processor” language, “outputting” in the context of the claims encompasses the user manually calculating or constructing elements.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims recite only one additional element, the use of a computer or processing circuitry. The computer and circuitry are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer or circuitry amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 19 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the Chinese publication to Wang (CN 111679742 A) (English Translation provided by the Examiner).
As for claims 1, 19 and 20, Wang teaches an AR-based interaction control method, apparatus, electronic device, and storage medium, and specifically discloses (see paragraphs 56-144 and Figs. 1-4): capturing images of a real scene in successive frames by capturing a video with a camera in an AR device (equivalent to playing a video); when the image of the real scene is compared to the preset image of the physical instrument for similarity and the similarity is greater than a threshold, exhibiting an AR effect of a virtual musical instrument model corresponding to a physical musical instrument in combination with a real scene image (equivalent to displaying a virtual musical instrument in the video, when the virtual musical instrument is matched with at least one musical instrument graphic element in the video); and receiving a triggering operation for a virtual musical instrument model, where the triggering operation may be an instrument tapping action by a user presented in front of the AR effect screen captured by the camera, or another interactive action, or the like, and controlling the AR device to play sound effects corresponding to the triggering operation (equivalent to outputting played audio of the virtual instrument according to interactions with the at least one musical instrument graphic element matched with the virtual musical instrument in the video).
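To summarize the mapping above, the following schematic sketch restates Wang's disclosed flow as the Examiner reads it; the similarity measure, threshold value, and trigger handling below are the Examiner's shorthand, not Wang's code:

```python
# Schematic restatement of Wang's disclosed flow (Examiner's shorthand).

SIMILARITY_THRESHOLD = 0.8  # assumed value; Wang discloses only "greater than a threshold"

def similarity(frame, preset_image):
    """Toy similarity: fraction of matching pixels (stand-in for Wang's comparison)."""
    matches = sum(1 for a, b in zip(frame, preset_image) if a == b)
    return matches / max(len(preset_image), 1)

def ar_interaction_step(frame, preset_instrument_image, trigger_position=None):
    # Compare the captured real-scene image to the preset physical-instrument image
    if similarity(frame, preset_instrument_image) > SIMILARITY_THRESHOLD:
        print("exhibit virtual musical instrument model over the real scene")
        # A triggering operation (e.g., a tapping action) causes a sound effect
        if trigger_position is not None:
            print(f"play sound effect for trigger at {trigger_position}")

# Example: a captured frame matching the preset image, tapped at position (3, 7)
ar_interaction_step([1, 1, 0, 0], [1, 1, 0, 0], trigger_position=(3, 7))
```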
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-8, 15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of that which is well-known in the art.
As for claim 2, given Wang teaches matching the virtual musical instrument with at least one musical graphic element, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to perform such a step for each of a plurality of images, since it has been held that mere duplication of the essential working steps or parts of a device involves only routine skill in the art. In re Harza, 274 F.2d 669, 124 USPQ 378 (CCPA 1960).
Further, as recited in the international Office Action for parent application CN 202110618725, Wang teaches: A preset three-dimensional scene model can be used to characterize a real scene and is presented at an equal scale in the same coordinate system as the real scene; for example, using a real scene that is a certain gallery, where the exhibition center comprises a plurality of display areas, the pre-set three-dimensional scene model characterizing the real scene may also include the gallery and the various presentation areas in the gallery, and the pre-set three-dimensional scene model is presented 1:1 in the same coordinate system as the real scene, i.e., if the pre-set three-dimensional scene model is placed in the world coordinate system in which the real scene is located, the pre-set three-dimensional scene model coincides with the real scene. The presentation pose data of the virtual musical instrument model in the preset three-dimensional scene model herein may include, but is not limited to, at least one of position data, pose data, and look data of the virtual musical instrument model when presented in the preset three-dimensional scene model, among other data, such as position data, pose data, and look data of virtual clocks such as those mentioned above when presented in a real scene; while achieving image overlays in video on a frame-by-frame basis is a commonly used technical means to those skilled in the art.
As for claim 3, a duplication of steps would have again been obvious for the reasons cited above, and Wang again discloses the preset three-dimensional scene model and presentation pose data discussed above with respect to claim 2; while implementing image overlays in video from within or outside of an image frame is a conventional technique of a person skilled in the art.
In addition, Wang teaches displaying specific areas or regions, and displaying a display/visual effect, such that displaying respective or additional markers, indicators, icons, etc. for images in AR/VR would have been obvious to one of ordinary skill.
As for claim 4, Wang teaches the virtual musical instrument including multiple types of virtual components (see the Summary of the Invention section), wherein specific content of the components is related to the display, such as shooting positions, coordinates and angles, and trigger positions determining the components.
As for claim 5, Wang teaches displaying a respective virtual instrument based on one real instrument, thereby making it obvious to display a plurality of respective virtual instruments based on a plurality of real instruments; additionally, Wang teaches the presence of feature information or introduction information (feature information can be either location information or image information), allowing for the selection of instruments (see paragraph starting with “In some implementations”).
As for claims 6 and 7, Wang teaches displaying and determining a respective virtual instrument based on a real instrument and feature information, such as location information or image information. Additionally, providing a plurality of candidate virtual elements selected for display without obtaining a corresponding virtual element is a commonly used technique by those of ordinary skill in the art.
As for claim 8, Wang teaches: [79] S104, based on the trigger operation, controlling the AR device to play a sound effect corresponding to the trigger operation. [80] For Example, considering that physical instruments include different types of physical components, and different types of physical components can produce different sound effects, in order to better simulate the playing effect of virtual musical instrument models, the corresponding virtual instrument models obtained based on physical instruments may also include multiple types of virtual components, and different types of virtual components can present different sound effects when triggered. For example, types of virtual components can be divided according to the types of sound effects produced, and this disclosure does not limit the way virtual component types are divided. [81] The sound effects corresponding to different virtual components in the virtual musical instrument model can be pre-configured and stored locally or in the cloud. When any virtual component is detected to be triggered, the sound effect corresponding to the triggered virtual component can be obtained and played. [82] In some embodiments of this disclosure, the method of controlling the AR device to play the sound effect corresponding to the trigger operation based on the trigger operation can be as follows: detect the trigger position of the trigger operation on the displayed virtual musical instrument model, and then, based on the trigger position, determine the virtual component that was triggered on the virtual musical instrument model, and then control the AR device to play the sound effect corresponding to the triggered virtual component; while adjusting the corresponding real-time pitch, real-time volume, and real-time sound speed/tempo of a musical instrument based on the relative motion trajectory of the musical instrument image with respect to the player is a common technique using known characteristics of music.
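The per-component sound-effect selection in paragraphs [81]-[82] can be sketched as a simple lookup; the component regions and effect names below are the Examiner's assumptions for illustration, not Wang's disclosure:

```python
# Sketch of Wang's paragraphs [81]-[82]: sound effects are pre-configured per
# virtual component, and the trigger position selects the triggered component.

VIRTUAL_COMPONENTS = {
    # component name: ((x_min, x_max) region on the displayed model, sound effect)
    "drum_head": ((0, 50), "boom.wav"),
    "cymbal": ((50, 80), "crash.wav"),
}

def sound_effect_for_trigger(x: float):
    """Determine the triggered virtual component from the trigger position,
    then return its pre-configured sound effect (None if no component is hit)."""
    for (lo, hi), effect in VIRTUAL_COMPONENTS.values():
        if lo <= x < hi:
            return effect  # the AR device would play this effect
    return None

print(sound_effect_for_trigger(62.0))  # -> crash.wav
```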
As for claim 15, outputting audio at differing volumes is well known in the art.
As for claim 18, the display of a respective music score according to a plurality of virtual instruments to facilitate use of the instruments is well-known and a commonly used technique in the art (see for example Fu (CN 112752149 A) (description of Figure 5) and Wang (CN 209543642 U) (Figure 2)).
Claims 9-11, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of that which is well-known as applied to claims 1, 8 and 15 above, and further in view of the Chinese publication to Tajik (CN 111713090 A).
As for claim 9, the additional features are not disclosed by Wang. However, Tajik discloses a mixed reality instrument, and specifically discloses (see paragraphs 24-99, FIGS. 1A-10C) that a mixed reality system may utilize one or more cameras to detect images of real objects; the audio parameters may be determined based on collision parameters (e.g., impact projection time, impact point, impact force vector, impact object mass, impact speed, etc.) between virtual objects or between a virtual object and the user; audio parameters may include pitch, speed, and timbre; audio of the virtual object is output; the pitch of the audio depends on the location of the user strike; the timbre depends on the direction vector of the user touching the virtual musical instrument; and the amplitude of the audio signal depends on the speed at which the user strikes the virtual instrument. These technical features play the same role in Tajik as the distinguishing technical features play in the present invention, namely outputting the audio of the virtual instrument. Applying the above-described additional technical features of Tajik to Wang, to further solve its technical problem, would therefore have been obvious to one of ordinary skill in the art; the particular manner of implementing a particular audio output in conjunction with the above-described actions may be set based on conventional technical selections.
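The collision-to-audio mapping Tajik discloses (pitch from strike location, amplitude from strike speed, timbre from the contact direction vector) can be sketched as follows; the specific formulas are the Examiner's assumptions, as Tajik discloses the dependencies rather than particular equations:

```python
# Sketch of Tajik's mapping from collision parameters to audio parameters.
# The constants and formulas below are illustrative assumptions only.
import math

def audio_parameters(strike_position: float, strike_speed: float,
                     direction: tuple) -> dict:
    pitch_hz = 220.0 * 2 ** (strike_position / 12.0)  # strike location -> pitch
    amplitude = min(1.0, 0.1 * strike_speed)          # strike speed -> amplitude
    dx, dy, dz = direction                            # contact direction -> timbre
    norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    brightness = abs(dx) / norm  # e.g., more glancing contact -> brighter timbre
    return {"pitch_hz": pitch_hz, "amplitude": amplitude, "brightness": brightness}

print(audio_parameters(7.0, 3.0, (0.5, -1.0, 0.2)))
```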
As for claim 10, Tajik discloses that in some examples, wearable head device 2102 may include a left brace 2130 and a right brace 2132, where left brace 2130 includes a left speaker 2134 and right brace 2132 includes a right speaker 2136. The quadrature coil electromagnetic receivers 2138 may be positioned in the left brace, or in another suitable location in the wearable head device 2102. An inertial measurement unit (IMU) 2140 may be positioned in the right brace 2132, or in another suitable location in the wearable head device 2102. The wearable head device 2102 may also include a left depth (e.g., time of flight) camera 2142 and a right depth camera 2144; the depth cameras 2142, 2144 may be suitably oriented in different directions so as to together cover a wider field of view. The audio parameters may be determined at stage 960 using any suitable techniques. Some audio parameters may be determined based on relative positions and orientations of a user of the MRE and sources of audio signals in the MRE (e.g., real or virtual objects to which the audio signals correspond). For example, an audio parameter corresponding to the total volume of the audio signal may be determined based on the distance between the user and the virtual object (reflecting that the perceived volume of the audio signal decreases as the distance between the listener and the source increases). The audio parameters will typically be determined so as to simulate audio signals that will be heard by a listener at a user position and orientation in the MRE relative to the source of the audio signals in the MRE; while measuring the distance between an object and a user based on binocular range differences is a technique commonly known to those of ordinary skill in the art.
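The distance-dependent volume Tajik discloses (perceived volume decreasing as listener-to-source distance increases) can be sketched with a simple inverse-distance law; the attenuation law and reference distance are the Examiner's assumptions:

```python
# Sketch of distance-dependent volume: gain falls off with distance from the
# listener to the (real or virtual) audio source. Illustrative assumptions only.

def volume_at_distance(source_volume: float, distance_m: float,
                       reference_m: float = 1.0) -> float:
    """Attenuate a source volume with distance (simple 1/d law beyond 1 m)."""
    return source_volume * reference_m / max(distance_m, reference_m)

print(volume_at_distance(1.0, 4.0))  # a source 4 m away plays at 0.25 gain
```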
As for claim 11, Wang again teaches displaying specific areas or regions, and displaying a display/visual effect, such that displaying respective or additional markers, identifiers, indicators, icons, etc. for musical effect characteristics would have been obvious to one of ordinary skill.
As for claim 16, Tajik discloses that audio parameters may also be generated from collision parameters. For example, an audio parameter specifying a start time of an audio signal should correspond to a time at which a collision is predicted to occur. Further, the amplitude of the audio signal may depend on the speed at which the user strikes the virtual musical instrument (e.g., as determined by sensors such as cameras 142 and 144); similarly, the pitch of the audio signal may depend on where the user touches the virtual instrument (e.g., location 1030 in FIG. 10C); and the timbre of the audio signal may depend on the direction vector of the user's contact with the virtual musical instrument. Some collision parameters may be identified from sensors (e.g., cameras) of the mixed reality system 612 as described above. For example, the camera may be used as an input to a machine vision algorithm to determine the location, direction, and speed of a collision with the virtual musical instrument. In some embodiments, some collision parameters may be identified from sensors external to the mixed reality system 612. Thus, audio parameters (e.g., amplitude, pitch, timbre) may be generated from these values. In some examples, the audio parameters may be generated from instrument parameters that depend on the collision parameters; for example, the pitch of the tone from the virtual piano may depend on the particular location being struck by the user (e.g., the particular piano key), and may be determined, for example, based on contact sensors, finger sensors, etc. The audio parameters may be determined at stage 960 using any suitable techniques. Some audio parameters may be determined based on relative positions and orientations of a user of the MRE and sources of audio signals in the MRE (e.g., real or virtual objects to which the audio signals correspond). For example, an audio parameter corresponding to the total volume of the audio signal may be determined based on the distance between the user and the virtual object (reflecting that the perceived volume of the audio signal decreases as the distance between the listener and the source increases). The audio parameters will typically be determined so as to simulate audio signals that will be heard by a listener at a user position and orientation in the MRE relative to the source of the audio signals in the MRE. Determining the volume of the instrument according to the relative distance of the instrument from the center of the picture of said video, in combination with Tajik, is also a conventional technique to one of ordinary skill in the art. It would have been obvious to one of ordinary skill in the art to combine Tajik and the above common general knowledge on the basis of Wang in order to obtain the solution claimed in this claim, so that the solution claimed in this claim does not have outstanding essential features and significant advancements.
As for claim 17, Tajik teaches determining and displaying an energy and immersion type of an effect (corresponding to style); therefore, in combination with the well-known differing output volumes, and given Wang teaches the display of feature information, it would have been obvious to allow Wang to display additional information, such as a selected style.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Tajik.
Tajik discloses that at stage 960, audio parameters may be determined based on the collision parameters identified at stage 950 and based on parameters associated with the virtual object being collided (e.g., parameters 700 described above). These audio parameters may be used at stage 970 to generate an audio signal for the virtual object. The audio parameters determined at stage 960 may include any parameters related to the generation of an audio signal; the particular audio parameters used will depend on the apparatus for which the audio signal will be generated. For example, such audio parameters may include pitch, speed, and timbre (e.g., for examples where audio signals are generated using a sound engine); an identity of one or more base tones and envelope parameters (e.g., for examples where an audio signal is generated using a waveform synthesizer); and an identity of one or more audio samples (e.g., for examples where an audio signal is generated by playing the samples). The audio parameters may additionally include various parameters for processing the audio signal, such as gain and attenuation parameters for performing gain-based signal processing; an equalization curve for performing frequency-based signal processing; reverberation parameters for applying artificial reverberation and echo effects; and a voltage controlled oscillator (VCO) parameter for applying a time-based modulation effect. At stage 970, an audio signal may be generated from the audio parameters determined at stage 960. Any suitable technique may be used to generate the audio signal. In some examples, a sound engine may be used to generate audio signals, e.g., from audio parameters for pitch and speed and audio parameters identifying MIDI instruments to be used in signal generation. In some examples, the waveform synthesis engine may be used to generate an audio signal using conventional audio synthesis techniques based on audio parameters such as pitch, envelope parameters, and the identity of one or more underlying tones. In some examples, the audio signal may be generated by playing one or more pre-recorded audio samples based on audio parameters (e.g., pitch, timbre) that may be used as an index into a database of audio samples. It is also a conventional technical selection for a person skilled in the art to autonomously select the corresponding synthesized audio; additionally, searching a music library for similar songs to play is also a routine technical choice for those skilled in the art. It follows that it would have been obvious to one of ordinary skill in the art to combine Tajik and the above common general knowledge on the basis of Wang in order to obtain the solution claimed in this claim, so that the solution claimed in this claim does not have outstanding essential features and significant advancements.
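Tajik's stage-970 alternatives (sound engine, waveform synthesizer, or sample playback driven by the same audio parameters) can be sketched as a simple dispatch; the dispatch below is the Examiner's shorthand, not Tajik's implementation:

```python
# Sketch of generating an audio signal from audio parameters by one of the
# alternative techniques Tajik describes. Illustrative shorthand only.

def generate_audio(params: dict, method: str) -> str:
    if method == "sound_engine":  # e.g., MIDI-style synthesis from pitch/speed
        return f"MIDI note at pitch {params['pitch_hz']:.0f} Hz"
    if method == "waveform_synth":  # synthesis from base tones and an envelope
        return f"synthesized tone with envelope {params.get('envelope', 'ADSR')}"
    if method == "samples":  # parameters index into a database of audio samples
        return f"play sample indexed by pitch {params['pitch_hz']:.0f}"
    raise ValueError(f"unknown method: {method}")

print(generate_audio({"pitch_hz": 440.0}, "sound_engine"))
```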
Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of the Chinese publication to Zhu et al. (CN 109462776 A).
As for claim 13, the additional features are not disclosed by Wang. However, Zhu et al. discloses a video special effects adding method, apparatus, terminal device and storage medium, and specifically discloses (see paragraphs 75-170, and Figures 1A-5): rendering an image of an instrument matching a set instrument type at a video position associated with a target image frame in the video; simultaneously finding music matching the set instrument type from the preset music library and playing the music during display of the target image frame; and, when it is determined that the user triggers a preset stop action because the distance between the hands is less than a preset threshold, causing the image of the instrument to disappear. This technical feature plays the same role in Zhu et al. as the corresponding technical feature plays in the present invention, namely stopping the output of the audio of the virtual instrument. Zhu et al. thus provides insight as to how to apply the above-described additional technical features to Wang to further solve its technical problem; the combination of Wang and Zhu et al. would have been obvious, given one of ordinary skill would have thought to stop the output of the audio of the virtual instrument when the distance between the components of the instrument exceeds the threshold.
As for claim 14, Zhu et al. discloses: In one particular example, a user records a video of an instrument that is not physically playing, the pre-set starting joint motion is holding the arms at an angle (e.g., 30°), and rendering of the instrument begins in the recorded video from an image frame where the user is identified to make the pre-set motion, at which point an image of the instrument (e.g., an accordion) is rendered between the hands of the user. Subsequently, by identifying a tensioning action of both arms of the user, the image of the musical instrument is resized and repositioned, and different sound effects can even be triggered depending on the tensioning action; the tempo of the action can adjust the tempo of the music, and when the user's hands are in full contact or the distance between the hands is less than a preset distance threshold, it is determined that the user triggers a preset stop action, and the image of the instrument disappears, during which an optional background sound effect can be added for the purpose of cooperating with the instrument sound effect. As another example, the user records a video of dancing without a background, the pre-set starting joint motion is a bowing motion, and video special effects are added in the recorded video starting from the image frames where the user was identified to make the pre-set motion, e.g., rendering a stage background map behind the user, or rendering a viewer background map in front of the user. Subsequently, by identifying the user's dancing action, music matching the dancing action is constantly triggered; e.g., the user makes a dancing action in Waltz, corresponding to playing a composition in Waltz. The user's dancing motion is thus constantly identified so that the entire dance continues to be accompanied by the playing of music, and in addition, the user can choose sound effects that match the dancing motion according to his/her preferences. It follows that it would have been obvious to one of ordinary skill to combine Zhu et al. with Wang in order to obtain the solution claimed in this claim, and therefore the solution claimed in this claim does not have outstanding essential features and significant advancements.
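Zhu et al.'s stop condition (hands fully in contact, or the distance between the hands below a preset threshold) can be sketched as follows; the threshold value is the Examiner's assumption, as Zhu et al. discloses only “a preset distance threshold”:

```python
# Sketch of Zhu et al.'s preset stop action: when the distance between the
# user's hands falls below a threshold, the instrument image disappears and
# the audio output stops. The threshold value is an illustrative assumption.

STOP_DISTANCE_M = 0.05

def should_stop(left_hand: tuple, right_hand: tuple) -> bool:
    dx = left_hand[0] - right_hand[0]
    dy = left_hand[1] - right_hand[1]
    return (dx * dx + dy * dy) ** 0.5 < STOP_DISTANCE_M

if should_stop((0.40, 1.10), (0.42, 1.11)):
    print("stop rendering the instrument image and stop the audio output")
```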
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the Notice of References Cited provided by the Examiner.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christina Schreiber whose telephone number is (571) 272-4350. The examiner can normally be reached M-F, 7:00 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dedei Hammond can be reached at 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTINA M SCHREIBER/Primary Examiner, Art Unit 2837 01/10/2026