DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,075,193 (hereinafter referred to as Patent ‘193) in view of Hamilton (US Patent Application Publication No. 2009/0276707).
Regarding claim 1, it recites limitations functionally similar to the limitations recited in claim 1 of Patent ‘193, except that claim 1 of Patent ‘193 recites “wherein the sound is adjusted based on a distance of the avatar to the virtual camera of the second user within the three-dimensional virtual environment” (emphasis added) instead of “wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment” (emphasis added) as recited in claim 1 of the present application.
However, in a similar field, Hamilton teaches wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment (Paragraphs 0014-0017, 0022, 0024-0029, 0041: adjusting volume based on facial direction, i.e., orientation of avatars).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify claim 1 of Patent ‘193 to include sound adjustment based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment, as taught by Hamilton, so that “the adjustment of volume based on the relative location, facial direction, etc., of avatars may be conducted, for example, by the software that generates the virtual environment” (Hamilton, Paragraph 0024).
Claims 2-8 are rejected against claims 2-8 of Patent ‘193, respectively.
Regarding claim 9, it recites limitations functionally similar to the limitations recited in claim 9 of Patent ‘193, except that claim 9 of Patent ‘193 recites “wherein the sound is adjusted based on a distance of the avatar to the virtual camera of the second user within the three-dimensional virtual environment” (emphasis added) instead of “wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment” (emphasis added) as recited in claim 9 of the present application.
However, in a similar field, Hamilton teaches wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment (Paragraphs 0014-0017, 0022, 0024-0029, 0041: adjusting volume based on facial direction, i.e., orientation of avatars).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify claim 9 of Patent ‘193 to include sound adjustment based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment, as taught by Hamilton, so that “the adjustment of volume based on the relative location, facial direction, etc., of avatars may be conducted, for example, by the software that generates the virtual environment” (Hamilton, Paragraph 0024).
Claims 10-16 are rejected against claims 10-16 of Patent ‘193, respectively.
Regarding claim 17, it recites limitations functionally similar to the limitations recited in claim 17 of Patent ‘193, except that claim 17 of Patent ‘193 recites “wherein the sound is adjusted based on a distance of the avatar to the virtual camera of the second user within the three-dimensional virtual environment” (emphasis added) instead of “wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment” (emphasis added) as recited in claim 17 of the present application.
However, in a similar field, Hamilton teaches wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment (Paragraphs 0014-0017, 0022, 0024-0029, 0041: adjusting volume based on facial direction, i.e., orientation of avatars).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify claim 17 of Patent ‘193 to include sound adjustment based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment, as taught by Hamilton, so that “the adjustment of volume based on the relative location, facial direction, etc., of avatars may be conducted, for example, by the software that generates the virtual environment” (Hamilton, Paragraph 0024).
Claims 18-20 are rejected against claims 18-20 of Patent ‘193, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yerli (US Patent Application Publication No. 2022/0070235) in view of Lee (US Patent Application Publication No. 2012/0162350), and further in view of Hamilton (US Patent Application Publication No. 2009/0276707).
Regarding claim 1, Yerli teaches a computer-implemented method for videoconferencing in a three-dimensional virtual environment (Paragraph 0040), comprising:
receiving a video stream captured from a camera on a first device of a first user (Abstract, Paragraphs 0007-0008, 0010-0011, 0013, 0015, 0239, 0241, 0270);
mapping the video stream onto a three-dimensional model of an avatar (Paragraphs 0161, 0164-0165, 0174-0175, 0234, 0272-0275: user graphical representation corresponding to the user in the received video); and
from a perspective of a virtual camera of a second user, rendering for display to the second user through a second device the three-dimensional virtual environment including: (i) the mapped three-dimensional model of the avatar (Paragraphs 0170-0171, 0174, 0187-0190, 0279-0282, 0286-0299) (see Paragraphs 0160-0348 for complete details).
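For illustration only, the following is a minimal sketch of the recited flow of receiving a video stream, mapping it onto an avatar model, and rendering from a second user's virtual camera. The class and method names (Environment, receive_video_frame, render, and so on) are hypothetical and are not drawn from Yerli, the claims, or the present application.

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    user_id: str
    texture: bytes = b""  # latest video frame used as the avatar's texture

@dataclass
class VirtualCamera:
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class Environment:
    avatars: dict = field(default_factory=dict)

    def receive_video_frame(self, user_id: str, frame: bytes) -> None:
        # Receive a video stream captured on the first user's device and
        # map the latest frame onto the corresponding avatar's 3-D model.
        avatar = self.avatars.setdefault(user_id, Avatar(user_id))
        avatar.texture = frame

    def render(self, camera: VirtualCamera) -> list:
        # Render the environment, including the textured avatars, from the
        # perspective of the second user's virtual camera (placeholder output).
        return [f"draw {a.user_id} ({len(a.texture)} texture bytes) for camera at {camera.position}"
                for a in self.avatars.values()]

env = Environment()
env.receive_video_frame("user-1", b"\x00" * 1024)  # stand-in for a video frame
print(env.render(VirtualCamera()))
```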
Yerli teaches sending emoticons (Paragraphs 0219, 0227), but Yerli does not explicitly teach receiving a specification of an emote, the specification being input by the first user through the first device; and rendering the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user, and wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment.
However, in a similar field, Lee teaches receiving a specification of an emote, the specification being input by the first user through the first device (Paragraphs 0071-0075, 0079-0081: sender specifying audiocons); and rendering the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user (Paragraphs 0076-0082) (see also Paragraphs 0043-0057, 0082-0091 for further details).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yerli to receive a specification of an emote, the specification being input by the first user through the first device, and to render the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user, as taught by Lee, in order to enable “an interactive communication stream including one or more messages containing streaming voice and/or video, as well as other time-based media, in addition to text messages, in order to aid in expressing emotions or other audio and/or video content” (Lee, Paragraph 0071).
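Purely as an illustration of what such an emote specification might look like in practice, a minimal sketch follows; the field names (emote_id, sound_file, sound_url) and the helper function are hypothetical and are not taken from Lee, Yerli, or the present application.

```python
# Hypothetical emote specification as it might be sent from the first user's device.
emote_spec = {
    "emote_id": "applause",
    "attach_to": "avatar-123",     # avatar of the sending (first) user
    "sound_file": "applause.ogg",  # a sound file specified directly ...
    "sound_url": None,             # ... or a link to a sound file to retrieve
    "animation": "clap_loop",      # static image, animation, or video
}

def resolve_emote_sound(spec: dict):
    """Return the sound source the receiving (second) device should play."""
    # Prefer a directly specified sound file; otherwise fall back to a link.
    return spec.get("sound_file") or spec.get("sound_url")

print(resolve_emote_sound(emote_spec))  # -> applause.ogg
```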
Yerli and Lee do not explicitly teach wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment.
However, in a similar field, Hamilton teaches wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment (Paragraphs 0014-0017, 0022, 0024-0029, 0041: adjusting volume based on facial direction, i.e., orientation of avatars).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yerli and Lee to include sound adjustment based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment, as taught by Hamilton, so that “the adjustment of volume based on the relative location, facial direction, etc., of avatars may be conducted, for example, by the software that generates the virtual environment” (Hamilton, Paragraph 0024).
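For illustration only, a minimal sketch of orientation-based volume adjustment follows, assuming simple three-dimensional vectors and a linear falloff; the function name, parameters, and falloff curve are hypothetical and are not drawn from Hamilton, Yerli, Lee, or the present application.

```python
import math

def orientation_volume_factor(avatar_pos, avatar_facing, camera_pos, min_factor=0.2):
    """Scale volume by how directly an avatar faces the listener's virtual camera.

    avatar_facing is a unit vector giving the avatar's facing direction; the
    returned factor falls from 1.0 (facing the camera) toward min_factor
    (facing directly away). All names and the falloff are hypothetical.
    """
    # Unit vector from the avatar toward the listener's virtual camera.
    to_camera = [c - a for c, a in zip(camera_pos, avatar_pos)]
    norm = math.sqrt(sum(v * v for v in to_camera)) or 1.0
    to_camera = [v / norm for v in to_camera]

    # Cosine of the angle between the facing direction and the direction to the camera.
    cos_angle = sum(f * v for f, v in zip(avatar_facing, to_camera))

    # Map cos_angle in [-1, 1] linearly onto [min_factor, 1.0].
    return min_factor + (1.0 - min_factor) * (cos_angle + 1.0) / 2.0

# Example: an avatar two units in front of the camera and facing it directly.
print(orientation_volume_factor((0.0, 0.0, 2.0), (0.0, 0.0, -1.0), (0.0, 0.0, 0.0)))  # 1.0
```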
Regarding claim 2, Lee teaches that the specification of the emote specifies the sound to emit (Paragraphs 0072, 0082-0084). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 1.
Regarding claim 3, Lee teaches the specification specifies a sound file or a link to a sound file to retrieve (Paragraphs 0072, 0082-0084). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 1.
Regarding claim 4, Lee teaches the sound is emitted through the second device contemporaneously with display of the emote through the second device (Paragraphs 0072, 0079-0083). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 1.
Regarding claim 5, Lee teaches the emote is static (Paragraphs 0072, 0079, 0081: audiocon as a picture). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 1.
Regarding claim 6, Lee teaches the emote contains an animation or video (Paragraphs 0072, 0079, 0081: audiocon as an animation or video). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 1.
Regarding claim 7, Lee teaches the specification of the emote is transmitted from the first device in response to at least one of the first user: (i) pressing a button on a screen of the first device (audiocon input function selection); (ii) selecting from a list presented on the screen (selecting from a library); (iii) pressing a certain key on the first device (Fig. 7A item 70, Fig. 7B); (iv) entering a command on a command palette; (v) entering the command on a chat window; (vi) speaking a voice command captured using a microphone of the first device; (vii) making a gesture captured using the camera of the first device; (viii) making a mouse gesture; and (ix) using another device separate from the first device (loading the client from another device) (Paragraphs 0079, 0081). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 1.
Regarding claim 8, Lee teaches the displayed emote comprises at least one of: (i) a three-dimensional model; (ii) an image placed on a surface of the three-dimensional model; (iii) at least a partially transparent 2D or 3D texture; (iv) text (Fig. 8 items 82, 94, 96); (v) a particle effect; (vi) a lighting effect; and (vii) a postprocessing effect. The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 1.
Regarding claim 9, Yerli teaches a non-transitory computer-readable device having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations (Abstract, Paragraphs 0007, 0025, 0164, 0348) comprising:
receiving a video stream captured from a camera on a first device of a first user (Abstract, Paragraphs 0007-0008, 0010-0011, 0013, 0015, 0239, 0241, 0270);
mapping the video stream onto a three-dimensional model of an avatar (Paragraphs 0161, 0164-0165, 0174-0175, 0234, 0272-0275: user graphical representation corresponding to the user in the received video); and
from a perspective of a virtual camera of a second user, rendering for display to the second user through a second device the three-dimensional virtual environment including: (i) the mapped three-dimensional model of the avatar (Paragraphs 0170-0171, 0174, 0187-0190, 0279-0282, 0286-0299) (see Paragraphs 0160-0348 for complete details).
Yerli teaches sending emoticons (Paragraphs 0219, 0227), but Yerli does not explicitly teach receiving a specification of an emote, the specification being input by the first user through the first device; and rendering the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user, and wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment.
However, in a similar field, Lee teaches receiving a specification of an emote, the specification being input by the first user through the first device (Paragraphs 0071-0075, 0079-0081: sender specifying audiocons); and rendering the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user (Paragraphs 0076-0082) (see also Paragraphs 0043-0057, 0082-0091 for further details).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yerli to receive a specification of an emote, the specification being input by the first user through the first device, and to render the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user, as taught by Lee, in order to enable “an interactive communication stream including one or more messages containing streaming voice and/or video, as well as other time-based media, in addition to text messages, in order to aid in expressing emotions or other audio and/or video content” (Lee, Paragraph 0071).
Yerli and Lee do not explicitly teach wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment.
However, in a similar field, Hamilton teaches wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment (Paragraphs 0014-0017, 0022, 0024-0029, 0041: adjusting volume based on facial direction, i.e., orientation of avatars).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yerli and Lee to include sound adjustment based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment, as taught by Hamilton, so that “the adjustment of volume based on the relative location, facial direction, etc., of avatars may be conducted, for example, by the software that generates the virtual environment” (Hamilton, Paragraph 0024).
Regarding claim 10, Lee teaches that the specification of the emote specifies the sound to emit (Paragraphs 0072, 0082-0084). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 9.
Regarding claim 11, Lee teaches the specification specifies a sound file or a link to a sound file to retrieve (Paragraphs 0072, 0082-0084). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 9.
Regarding claim 12, Lee teaches the sound is emitted through the second device contemporaneously with display of the emote through the second device (Paragraphs 0072, 0079-0083). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 9.
Regarding claim 13, Lee teaches the emote is static (Paragraphs 0072, 0079, 0081: audiocon as a picture). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 9.
Regarding claim 14, Lee teaches the emote contains an animation or video (Paragraphs 0072, 0079, 0081: audiocon as an animation or video). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 9.
Regarding claim 15, Lee teaches the specification of the emote is transmitted from the first device in response to at least one of the first user: (i) pressing a button on a screen of the first device (audiocon input function selection); (ii) selecting from a list presented on the screen (selecting from a library); (iii) pressing a certain key on the first device (Fig. 7A item 70, Fig. 7B); (iv) entering a command on a command palette; (v) entering the command on a chat window; (vi) speaking a voice command captured using a microphone of the first device; (vii) making a gesture captured using the camera of the first device; (viii) making a mouse gesture; and (ix) using another device separate from the first device (loading the client from another device) (Paragraphs 0079, 0081). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 9.
Regarding claim 16, Lee teaches the displayed emote comprises at least one of: (i) a three-dimensional model; (ii) an image placed on a surface of the three-dimensional model; (iii) at least a partially transparent 2D or 3D texture; (iv) text (Fig. 8 items 82, 94, 96); (v) a particle effect; (vi) a lighting effect; and (vii) a postprocessing effect. The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 9.
Regarding claim 17, Yerli teaches a device (Figs. 7A-7C items 702, 704) for videoconferencing (Paragraphs 0199, 0277-0281) in a three-dimensional virtual environment (Figs. 1, 9, 10A, 10B, 14), comprising:
a processor (Fig. 1 item 104); a memory (Fig. 1 item 106) (Paragraphs 0164, 0171, 0191-0192);
a network interface (Fig. 1 interface from item 118 of client device A to item 110, Figs 7A-7C interface from item 702 peer device A to item 704 peer device B) configured to (i) receive a video stream captured from a camera on a first device of a first user (Abstract, Paragraphs 0007-0008, 0010-0011, 0013, 0015, 0239, 0241, 0270);
a texture mapper configured to map the video stream onto a three-dimensional model of an avatar (Paragraphs 0161, 0164-0165, 0174-0175, 0234, 0239-0267: either device processing the received video to map it to the user graphical representation; 0272-0275); and
a renderer configured to, from a perspective of a virtual camera of a second user, render for display to the second user through a second device the three-dimensional virtual environment including the mapped three-dimensional model of the avatar (Paragraphs 0170-0171, 0174, 0187-0190, 0279-0282, 0286-0299) (see Paragraphs 0160-0348 for complete details).
Yerli teaches sending emoticons (Paragraphs 0219, 0227), but Yerli does not explicitly teach receiving a specification of an emote, the specification being input by the first user through the first device; and rendering the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user, and wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment.
However, in a similar field, Lee teaches receiving a specification of an emote, the specification being input by the first user through the first device (Paragraphs 0071-0075, 0079-0081: sender specifying audiocons); and rendering the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user (Paragraphs 0076-0082) (see also Paragraphs 0043-0057, 0082-0091 for further details).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yerli to receive a specification of an emote, the specification being input by the first user through the first device, and to render the emote attached to the model of the avatar, wherein the emote emits sound played by the second device to the second user, as taught by Lee, in order to enable “an interactive communication stream including one or more messages containing streaming voice and/or video, as well as other time-based media, in addition to text messages, in order to aid in expressing emotions or other audio and/or video content” (Lee, Paragraph 0071).
Yerli and Lee do not explicitly teach wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment.
However, in a similar field, Hamilton teaches wherein the sound is adjusted based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment (Paragraphs 0014-0017, 0022, 0024-0029, 0041: adjusting volume based on facial direction, i.e., orientation of avatars).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yerli and Lee to include sound adjustment based on an orientation of the avatar relative to the virtual camera of the second user within the three-dimensional virtual environment, as taught by Hamilton, so that “the adjustment of volume based on the relative location, facial direction, etc., of avatars may be conducted, for example, by the software that generates the virtual environment” (Hamilton, Paragraph 0024).
Regarding claim 18, Lee teaches that the specification of the emote specifies the sound to emit (Paragraphs 0072, 0082-0084). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 17.
Regarding claim 19, Lee teaches the sound is emitted through the second device contemporaneously with display of the emote through the second device (Paragraphs 0072, 0079-0083). The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 17.
Regarding claim 20, Lee teaches the displayed emote comprises at least one of: (i) a three-dimensional model; (ii) an image placed on a surface of the three-dimensional model; (iii) at least a partially transparent 2D or 3D texture; (iv) text (Fig. 8 items 82, 94, 96); (v) a particle effect; (vi) a lighting effect; and (vii) a postprocessing effect. The motivation to combine Yerli and Lee is similar to that set forth in the rejection of claim 17.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEMANT PATEL whose telephone number is (571)272-8620. The examiner can normally be reached M-F 8:00 AM - 4:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan Tsang can be reached at 571-272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
HEMANT PATEL
Primary Examiner
Art Unit 2694
/HEMANT S PATEL/ Primary Examiner, Art Unit 2694