DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
5. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Crampton Stephen James (WO 03/058518 A2) in view of Edry et al. (US 2018/0332422 A1).
6. With reference to claim 1, Crampton Stephen James teaches A method performed by a computer system (“Apparatus for an Avatar User Interface System (261) is provided comprising Personal Computers (3), an Avatar Hosting Server (4) and a Session Server (1) all connected by a Network (2).” Abstract, lines 1-2 “In accordance with this aspect of the present invention there is provided a method of communication between a plurality of users via an avatar user interface system comprising the steps of: joining a plurality of computing appliance means and a server means for serving the communications session to start a communication session by means of a network;” page 4, lines 11-16) Crampton Stephen James also teaches the method comprising: implementing a 3D virtual environment configured to be accessed by a plurality of client devices each having a corresponding user graphical representation within the 3D virtual environment, (“Figure 11 is a block diagram of a personal computer 3 with an avatar user interface 260 in an environmental location 273. The personal computer 3 includes a display device 264, a webcam 29, a headset 11 comprising microphone 12 and headphones 13, a keyboard 14 and a mouse 15 in a cabinet 16 running an operating system 20 which in this embodiment is the Microsoft Windows XP operating system, an avatar user interface software application 262 as a plug-in to the browser 263 in which the displayed avatar user interface 260 is seen by the user 17 in the browser window 21 on the desktop 423.” page 25, lines 14-22 “During an avatar user interface session (avatar conference call), those participating in the session will communicate via information flowing between the personal computers 3 and the session server 1.
This information can be in different media formats including: voice, music, video, avatar animation, 3D models, presentation images, text, office application sharing, spreadsheets, word processor documents and whiteboard annotation.” page 25, lines 31-37 “The Meeting Room media window 50 of the avatar conference is a metaphor for an actual meeting that is being video-cast live. An example might be a group discussion broadcast from a television studio. By using photo-realistic 3D avatars, a photo-realistic 3D meeting room, anima-realistic animations of the avatars and good camera direction, it is possible to suspend the disbelief of the viewer on the session such that he thinks it is an actual meeting where he is the only person who is not in the room.” page 30, line 36-page 31, line 5) Crampton Stephen James further teaches the 3D virtual environment includes positions for the user graphical representations arranged in a geometry and a virtual camera positioned within the 3D virtual environment; (“The Meeting Room media window 50 of the avatar conference is a metaphor for an actual meeting that is being video-cast live. An example might be a group discussion broadcast from a television studio. By using photo-realistic 3D avatars, a photo-realistic 3D meeting room, anima-realistic animations of the avatars and good camera direction, it is possible to suspend the disbelief of the viewer on the session such that he thinks it is an actual meeting where he is the only person who is not in the room. … Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. 
Camera 77 shows the whiteboard 54. Other cameras may be positioned at any location and oriented at any orientation.” page 30, line 36-page 31, line 21 “This mode M1 uses the Meeting Room metaphor. An overview from a single virtual camera of: the table 51, all the avatars around it 5, the whiteboard 54 and the presentation screen 53.” page 32, lines 32-34) Crampton Stephen James teaches moving the virtual camera while maintaining a distance between the virtual camera and the positions arranged in the geometry for the user graphical representations; (“Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. Camera 77 shows the whiteboard 54.” page 31, lines 12-19 “the view presented is from a virtual camera position. In each mode there are one or more virtual cameras. A virtual camera can have camera controls such as zoom and pan in addition to spatial movement.” page 31, lines 27-30 “By inputting to the PC 3 with a user input device such as a keyboard 14 or a mouse 15, the user 17 can move the virtual camera 71 such that the desktop 423 on the virtual computing appliance 421 is larger or smaller in the display device 264.” page 109, lines 6-9) Crampton Stephen James also teaches capturing a video stream from the perspective of the virtual camera, wherein the video stream includes video of the user graphical representations in the positions arranged in the geometry. (“Figure 15 is a representation of an example of a meeting room media window 50 during an avatar user interface session. There are 5 participants on the session.
Each participant in the avatar user interface session is represented by their avatar 5 sitting around a meeting table 51.” page 28, lines 25-29 “The Meeting Room media window 50 of the avatar conference is a metaphor for an actual meeting that is being video-cast live. An example might be a group discussion broadcast from a television studio. By using photo-realistic 3D avatars, a photo-realistic 3D meeting room, anima-realistic animations of the avatars and good camera direction, it is possible to suspend the disbelief of the viewer on the session such that he thinks it is an actual meeting where he is the only person who is not in the room. … Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. Camera 77 shows the whiteboard 54.” page 30, line 36-page 31, line 19 “The video (or streaming webcast) 336 can come from a webcam 29 situated on the display device 264 of a user 17. Alternatively, the video 336 can come from any other type of video camera 29 connected to a personal computer 3 on the network 2. … To maintain the metaphor, the avatar walks out of the room before the webcast 336 starts and walks back in when it finishes. The streaming video webcast 336 from the webcam 29 is shown on the screen 53.” page 64, line 19-page 65, line 2)
[Image: media_image1.png, 620 × 548, greyscale]
Crampton Stephen James does not explicitly teach a predetermined path. Edry, however, teaches this feature (“the virtual camera 191 can follow a camera path 161 that follows the path 160 of the participant object 180. Video data can be recorded from the perspective 192 of the participant object 180 or from the perspective 197 of the virtual camera 191. The location of the virtual camera 191, and thus the virtual camera perspective 197, can be within a threshold distance from the path 160 of the participant object 180.” [0033]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Edry with Crampton Stephen James, in order to provide high fidelity video clips of salient activity of a virtual reality environment.
7. With reference to claim 2, Crampton Stephen James teaches the location or orientation of the movable virtual camera is configured to be controlled by at least one of the client devices. (“Figure 11 is a block diagram of a personal computer 3 with an avatar user interface 260 in an environmental location 273. The personal computer 3 includes a display device 264, a webcam 29, a headset 11 comprising microphone 12 and headphones 13, a keyboard 14 and a mouse 15 in a cabinet 16 running an operating system 20 which in this embodiment is the Microsoft Windows XP operating system, an avatar user interface software application 262 as a plug-in to the browser 263 in which the displayed avatar user interface 260 is seen by the user 17 in the browser window 21 on the desktop 423.” page 25, lines 14-22 “the view presented is from a virtual camera position. In each mode there are one or more virtual cameras. A virtual camera can have camera controls such as zoom and pan in addition to spatial movement.” page 31, lines 27-30 “By inputting to the PC 3 with a user input device such as a keyboard 14 or a mouse 15, the user 17 can move the virtual camera 71 such that the desktop 423 on the virtual computing appliance 421 is larger or smaller in the display device 264.” page 109, lines 6-9)
8. With reference to claim 3, Crampton Stephen James teaches the geometry comprises a circle, an oval, a polygon, a linear geometry, an arcuate geometry, or a curvilinear geometry. (“Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. Camera 77 shows the whiteboard 54.” page 31, lines 12-19 “This mode M1 uses the Meeting Room metaphor. An overview from a single virtual camera of: the table 51, all the avatars around it 5, the whiteboard 54 and the presentation screen 53.” page 32, lines 32-34)
9. With reference to claim 4, Crampton Stephen James teaches the geometry is a circle, (“Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. Camera 77 shows the whiteboard 54.” page 31, lines 12-19 “This mode M1 uses the Meeting Room metaphor. An overview from a single virtual camera of: the table 51, all the avatars around it 5, the whiteboard 54 and the presentation screen 53.” page 32, lines 32-34)
Crampton Stephen James does not explicitly teach the predetermined path is a circular path within the circle. Edry, however, teaches this feature (“A generated video can be from a perspective following a customized path through a series of events, such as activity of a video game session.” [0005] “the virtual camera 191 can follow a camera path 161 that follows the path 160 of the participant object 180. Video data can be recorded from the perspective 192 of the participant object 180 or from the perspective 197 of the virtual camera 191. The location of the virtual camera 191, and thus the virtual camera perspective 197, can be within a threshold distance from the path 160 of the participant object 180.” [0033]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Edry with Crampton Stephen James, in order to provide high fidelity video clips of salient activity of a virtual reality environment.
10. With reference to claim 5, Crampton Stephen James teaches the geometry is a circle, and the moving of the movable virtual camera comprises rotating the virtual camera about an axis corresponding to the fixed point. (“Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. Camera 77 shows the whiteboard 54.” page 31, lines 12-19 “the view presented is from a virtual camera position. In each mode there are one or more virtual cameras. A virtual camera can have camera controls such as zoom and pan in addition to spatial movement.” page 31, lines 27-30 “This mode M1 uses the Meeting Room metaphor. An overview from a single virtual camera of: the table 51, all the avatars around it 5, the whiteboard 54 and the presentation screen 53.” page 32, lines 32-34 “By inputting to the PC 3 with a user input device such as a keyboard 14 or a mouse 15, the user 17 can move the virtual camera 71 such that the desktop 423 on the virtual computing appliance 421 is larger or smaller in the display device 264.” page 109, lines 6-9)
Crampton Stephen James does not explicitly teach the predetermined path is a fixed point within the circle. Edry, however, teaches this feature (“A generated video can be from a perspective following a customized path through a series of events, such as activity of a video game session.” [0005] “the virtual camera 191 can follow a camera path 161 that follows the path 160 of the participant object 180. Video data can be recorded from the perspective 192 of the participant object 180 or from the perspective 197 of the virtual camera 191. The location of the virtual camera 191, and thus the virtual camera perspective 197, can be within a threshold distance from the path 160 of the participant object 180.” [0033]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Edry with Crampton Stephen James, in order to provide high fidelity video clips of salient activity of a virtual reality environment.
11. With reference to claim 6, Crampton Stephen James teaches the at least one virtual environment comprises one or more additional movable virtual cameras configured to be moved (“Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. Camera 77 shows the whiteboard 54.” page 31, lines 12-19 “the view presented is from a virtual camera position. In each mode there are one or more virtual cameras. A virtual camera can have camera controls such as zoom and pan in addition to spatial movement.” page 31, lines 27-30 “By inputting to the PC 3 with a user input device such as a keyboard 14 or a mouse 15, the user 17 can move the virtual camera 71 such that the desktop 423 on the virtual computing appliance 421 is larger or smaller in the display device 264.” page 109, lines 6-9)
Crampton Stephen James does not explicitly teach that the cameras are moved on the predetermined path or on different paths. Edry, however, teaches this feature (“A generated video can be from a perspective following a customized path through a series of events, such as activity of a video game session.” [0005] “the virtual camera 191 can follow a camera path 161 that follows the path 160 of the participant object 180. Video data can be recorded from the perspective 192 of the participant object 180 or from the perspective 197 of the virtual camera 191. The location of the virtual camera 191, and thus the virtual camera perspective 197, can be within a threshold distance from the path 160 of the participant object 180.” [0033]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Edry with Crampton Stephen James, in order to provide high fidelity video clips of salient activity of a virtual reality environment.
12. With reference to claim 7, Crampton Stephen James teaches maintaining a constant viewing orientation angle between the virtual camera and the positions arranged in the geometry for the user graphical representations. (“Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. Camera 77 shows the whiteboard 54.” page 31, lines 12-19 “the view presented is from a virtual camera position. In each mode there are one or more virtual cameras. A virtual camera can have camera controls such as zoom and pan in addition to spatial movement.” page 31, lines 27-30 “This mode M1 uses the Meeting Room metaphor. An overview from a single virtual camera of: the table 51, all the avatars around it 5, the whiteboard 54 and the presentation screen 53.” page 32, lines 32-34)
Crampton Stephen James does not explicitly teach the predetermined path. Edry, however, teaches this feature (“the virtual camera 191 can follow a camera path 161 that follows the path 160 of the participant object 180. Video data can be recorded from the perspective 192 of the participant object 180 or from the perspective 197 of the virtual camera 191. The location of the virtual camera 191, and thus the virtual camera perspective 197, can be within a threshold distance from the path 160 of the participant object 180.” [0033]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Edry with Crampton Stephen James, in order to provide high fidelity video clips of salient activity of a virtual reality environment.
13. With reference to claim 8, Crampton Stephen James teaches the positions arranged in the geometry comprise defined seating positions for the user graphical representations at a virtual conference. (“The Meeting Room media window 50 of the avatar conference is a metaphor for an actual meeting that is being video-cast live. An example might be a group discussion broadcast from a television studio. By using photo-realistic 3D avatars, a photo-realistic 3D meeting room, anima-realistic animations of the avatars and good camera direction, it is possible to suspend the disbelief of the viewer on the session such that he thinks it is an actual meeting where he is the only person who is not in the room. … Figure 18 is a plan view of the virtual meeting room illustrating possible virtual camera positions. Camera 71 is the overview camera and will show the view illustrated in Figure 15. Camera 71 is positioned at the eye position of the Avatar called Bert who is seeing the Meeting room media window 50 in Figure 15 on his personal computer 3. Cameras 72, 73, 74 and 75 view avatars 5 labelled Ted, Jill, Andy and Pam respectively. Camera 76 shows the presentation screen 53. Camera 77 shows the whiteboard 54.” page 30, line 36-page 31, line 19 “the view presented is from a virtual camera position. In each mode there are one or more virtual cameras. A virtual camera can have camera controls such as zoom and pan in addition to spatial movement.” page 31, lines 27-30 “This mode M1 uses the Meeting Room metaphor. An overview from a single virtual camera of: the table 51, all the avatars around it 5, the whiteboard 54 and the presentation screen 53.” page 32, lines 32-34)
14. Claim 9 is similar in scope to claim 1, and thus is rejected under similar rationale. Crampton Stephen James additionally teaches A non-transitory computer readable medium having stored thereon instructions configured to cause at least one computer comprising a processor and memory to perform steps (“the apparatus comprises two or more personal computers 3 with memory 345, display devices 264 and displayed avatar user interfaces 260 that are connected by a network 2 to a session server 1 with memory 346 using a standard avatar interface protocol 300 and an avatar hosting server 4 containing a plurality of avatars 5 and memory 344.” page 11, lines 24-29 “Figure 11 is a block diagram of a personal computer 3 with an avatar user interface 260 in an environmental location 273. The personal computer 3 includes a display device 264, a webcam 29, a headset 11 comprising microphone 12 and headphones 13, a keyboard 14 and a mouse 15 in a cabinet 16 running an operating system 20 which in this embodiment is the Microsoft Windows XP operating system, an avatar user interface software application 262 as a plug-in to the browser 263 in which the displayed avatar user interface 260 is seen by the user 17 in the browser window 21 on the desktop 423.” page 25, lines 14-22 “A computing appliance may be very powerful with a processor running at speeds in excess of 2 GHz, more than 512 MB of memory 345, a display device 264 with more than 1 million pixels and a specialist 3D graphics chip such as an Nvidia GeForce 3 from Nvidia Inc (USA).” page 72, line 37-page 73, line 2)
15. Claim 10 is similar in scope to claim 2, and thus is rejected under similar rationale.
16. Claims 11-15 are similar in scope to claims 4-8, and they are rejected under similar rationale.
17. Claim 16 is similar in scope to claim 1, and thus is rejected under similar rationale. Crampton Stephen James additionally teaches A computer system comprising one or more computers having at least one processor and memory, wherein the computer system is programmed to perform steps (“Apparatus for an Avatar User Interface System (261) is provided comprising Personal Computers (3), an Avatar Hosting Server (4) and a Session Server (1) all connected by a Network (2).” Abstract, lines 1-2 “the apparatus comprises two or more personal computers 3 with memory 345, display devices 264 and displayed avatar user interfaces 260 that are connected by a network 2 to a session server 1 with memory 346 using a standard avatar interface protocol 300 and an avatar hosting server 4 containing a plurality of avatars 5 and memory 344.” page 11, lines 24-29 “Figure 11 is a block diagram of a personal computer 3 with an avatar user interface 260 in an environmental location 273. The personal computer 3 includes a display device 264, a webcam 29, a headset 11 comprising microphone 12 and headphones 13, a keyboard 14 and a mouse 15 in a cabinet 16 running an operating system 20 which in this embodiment is the Microsoft Windows XP operating system, an avatar user interface software application 262 as a plug-in to the browser 263 in which the displayed avatar user interface 260 is seen by the user 17 in the browser window 21 on the desktop 423.” page 25, lines 14-22 “A computing appliance may be very powerful with a processor running at speeds in excess of 2 GHz, more than 512 MB of memory 345, a display device 264 with more than 1 million pixels and a specialist 3D graphics chip such as an Nvidia GeForce 3 from Nvidia Inc (USA).” page 72, line 37-page 73, line 2)
18. Claims 17-20 are similar in scope to claims 4-7, and they are rejected under similar rationale.
Conclusion
19. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Chin, whose telephone number is (571)270-3697. The examiner can normally be reached on Monday-Friday 8:00 AM-4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached on (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE CHIN/
Primary Examiner, Art Unit 2614