DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission has been entered.
This action is in response to the claims filed 4/9/2025.
Claims pending in the case: 1, 4-6, 11-12, 23, 25-30
Cancelled claims: 2-3, 7-10, 13-22, 24
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-6, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Gefen (US 20110063415) in view of Vats (US 20140208272).
Gefen was not used in the prior Office action.
Regarding claim 1, Gefen teaches a method for providing interaction with a virtual object in a virtual space (Gefen: [32]: user interacts with 3D objects in a video), the method comprising:
- providing a panoramic video or a real-time video of the virtual space inside 3D graphics environment that is configured for display and manipulation of three-dimensional objects (Gefen: [23, 25]: interactive virtual environment that may be produced using graphics generators), wherein one or more portions of one or more frames of the … video are clickable (Gefen: [28-29]: virtual objects burned into the video frames that may be selected by viewer);
- receiving a first user input over at least one of the portions of at least one of the frames of the … video (Gefen: [29, 31]: virtual objects burned into the video frames selected by viewer); and
- generating and displaying a first view of a 3 dimensional model of the virtual object which is predefined for and associated with the clicked portion of the one or more frames of the … video for which the first user input is received (Gefen: [37-38]: generating a model of the selected object associated with the clicked portion for user interaction), the three-dimensional model being displayed using the same three-dimensional graphics environment through which the … video is provided so that a continuous interactive display of the panoramic video and the three-dimensional model is presented to the user (Gefen: [37-38]: object displayed in the same environment as if the user is present in the scene).
Although Gefen does not specifically recite a panoramic video, it would have been obvious to one skilled in the art that the functions taught in Gefen may be implemented in a panoramic video as well.
Nonetheless, Vats teaches a panoramic video (Vats: Figs. 18-19, [26-27, 45-46]: panoramic view of a place in a 3D graphics environment). It would have been obvious to provide the features taught in Gefen in the panoramic video of Vats.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gefen and Vats because the combination would enable using 3D graphics for displaying content in a 3D virtual environment that the user can interact with. One of ordinary skill in the art would have been motivated to combine the teachings because the combination would give the user an improved experience by providing enhanced digital object viewing using 3D graphics and an interaction experience using an assistant as is possible in the physical environment (see Vats [1-2]).
Regarding claim 4, Gefen and Vats teach the invention as claimed in claim 1 above, and further teach,
- displaying a virtual avatar of a representative of the virtual space along with the panoramic video of the virtual space in the 3D graphics environment (Vats: Figs. 18-19, [7, 33, 45-46]: assistant in virtual environment); and
- enabling conversation of a user with the representative through video conferencing or through audio conferencing (Vats: [34, 44, 48-49]: conversation with assistant in virtual environment), wherein the virtual avatar of the representative is shown when the audio conferencing is used for conversation, such that the virtual avatar is shown with facial and/or body expression (Vats: [45]: human like virtual assistant with facial expressions).
Regarding claim 5, Gefen and Vats teach the invention as claimed in claim 4 above, and further, wherein the virtual avatar is a 3 dimensional model which renders in synchronization with input audio (Vats: [45-46]: avatar with synchronized lip movement and expressions).
Regarding claim 6, Gefen and Vats teach the invention as claimed in claim 4 above, and further, wherein the virtual avatar is a 2 dimensional image whose facial expression changes using image processing in synchronization with input audio of the representative (Vats: [45-46]: avatar with synchronized lip movement and expressions; [46]: “virtual assistant can also be an image or 3D model, where the virtual assistant (1901') is shown moving lips in response to a query.").
Regarding claim 29, Gefen and Vats teach the invention as claimed in claim 1 above, and further,
- receiving a second user input, wherein the second user input comprises one or more interaction commands, including interactions for understanding the functionality of different parts of the 3D model (Vats: [32, 48]: user input commands);
- identifying one or more interaction commands (Vats: [32-33, 48]: interact as in a real setup);
- in response to the identified one or more commands, rendering the corresponding interaction to the 3D model of the object using texture data, computer graphics data, and selectively using sound data of the 3D model of the object (Vats: [32-33, 48]: interact as in a real setup); and
- displaying the corresponding interaction to the 3D model, wherein the interaction can emulate the interaction with mechanical and/or electronic and/or light emitting parts similar to the real object (Vats: [32-33, 48]: interact as in a real setup).
Claims 11-12, 23, 25-28, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Gefen (US 20110063415), Vats (US 20140208272) and Arfvidsson (US 2008/0244648).
Regarding claim 11, Gefen teaches a method for providing an interactive … view (Gefen: [32]: user interacts with 3D objects in a video) comprising:
- showing a panoramic video … inside a 3D graphics environment configured for display and manipulation of three-dimensional objects or premises (Gefen: [23, 25]: interactive virtual environment that may be produced using graphics generators), wherein one or more virtual premises shown in one or more frames of the panoramic video are clickable (Gefen: [28-29]: virtual objects burned into the video frames that may be selected by viewer);
- receiving a first user input over at least one of the virtual premises shown in at least one of the frames of the panoramic video (Gefen: [29, 31]: virtual objects burned into the video frames selected by viewer);
- loading a video or a panoramic image of the virtual premises in the 3D graphics environment for which the first user input is received (Gefen: [23, 25]: loading and providing an interactive virtual environment that may be produced using graphics generators);
- providing a panoramic video or a real-time video of the virtual premises in the 3D graphics environment, wherein one or more portions of one or more frames of the panoramic video are clickable (Gefen: [28-29]: virtual objects burned into the video frames that may be selected by viewer);
- receiving a second user input over at least one of the portions of at least one of the frames of the panoramic video (Gefen: [29, 31]: virtual objects burned into the video frames selected by viewer); and
- generating and displaying a first view of a 3 dimensional model of the virtual object which is predefined for and associated with the particular portion of one or more frames for which the second user input is received (Gefen: [37-38]: generating a model of the selected object associated with the clicked portion for user interaction), the 3 dimensional model of the virtual object being displayed using the same 3D graphics environment through which the … video is provided, thereby maintaining a continuous interactive display to the user (Gefen: [37-38]: object displayed in the same environment as if the user is present in the scene);
Although Gefen does not specifically recite a 3D graphics environment, it is obvious that the graphics generators (Gefen [25]) used to generate the video may use 3D graphics.
Nonetheless, Vats teaches a 3D graphics environment (Vats: Figs. 18-19, [26-27, 46]: panoramic view of a place in a 3D graphics environment).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gefen and Vats because the combination would enable displaying content in a 3D virtual environment that the user can interact with. One of ordinary skill in the art would have been motivated to combine the teachings because the combination would give the user an improved experience by providing an enhanced digital object viewing and interaction experience using an assistant as is possible in the physical environment (see Vats [1-2]).
Arfvidsson further teaches a street view and showing a panoramic video of a street (Arfvidsson: Figs. 2-7, [6, 20]: street views; Figs. 5-7 illustrate various premises labeled in the virtual space (e.g., Union Bank of California, Paul Hastings Tower, Los Angeles Central Library)).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gefen, Vats and Arfvidsson because the combination would enable displaying a panoramic view of a street. One of ordinary skill in the art would have been motivated to combine the teachings because the combination would enable the user to update the displayed panoramic video in real time by interacting with a map, and vice versa (see Arfvidsson [25]).
Regarding claim 12, Gefen, Vats and Arfvidsson teach the invention as claimed in claim 11 above, and further,
- receiving user input for a geo location (Arfvidsson: [26]: Browser window 200 comprises a text-box 210 for a user to enter a street address);
- loading a 2 dimensional or 3 dimensional map of a virtual space around the geo location (Arfvidsson: [25-27]: street view; Figs. 5-7 illustrate various premises labeled in the virtual space (e.g., Union Bank of California, Paul Hastings Tower, Los Angeles Central Library); [26]: Browser window 200 comprises a text-box 210 for a user to enter a street address);
- further showing the virtual space in the map representing the desired geographical location (Arfvidsson: [26]: display street); and
- loading panoramic video of the street (Arfvidsson: [26]: displays as per view point).
Regarding claim 23, Gefen, Vats and Arfvidsson teach the invention as claimed in claim 11 above, and further, - displaying a virtual avatar of a representative of the virtual space along with the panoramic video of the virtual space in the 3D graphics environment (Vats: Figs. 18-19, [7, 33, 45-46]: assistant in virtual environment); and
- enabling conversation of a user with the representative through video conferencing or through audio conferencing (Vats: [34, 44, 48-49]: conversation with assistant in virtual environment), wherein the virtual avatar of the representative is shown when the audio conferencing is used for conversation, such that the virtual avatar is shown with facial and/or body expression (Vats: [45]: human like virtual assistant with facial expressions).
Regarding claim 25, Gefen, Vats and Arfvidsson teach the invention as claimed in claim 23 above, and further, wherein the virtual avatar is a 3 dimensional model which renders in synchronization with input audio (Vats: [45-46]: avatar with synchronized lip movement and expressions).
Regarding claim 26, Gefen, Vats and Arfvidsson teach the invention as claimed in claim 23 above, and further, wherein the virtual avatar is a 2 dimensional image whose facial expression changes using image processing in synchronization with input audio of the representative (Vats: [45-46]: avatar with synchronized lip movement and expressions; [46]: “virtual assistant can also be an image or 3D model, where the virtual assistant (1901') is shown moving lips in response to a query.").
Regarding claim 27, Gefen, Vats and Arfvidsson teach the invention as claimed in claim 23 above, and further, - loading a simulated representation of the user, generated from one or more photographs, or a 3D avatar of the user (Vats: [34]: "user can upload his own photograph to generate a virtual simulation of himself," wherein the "simulated human 3D-model can walk . . . to a showroom or directly visit a product.").
Regarding claim 28, Gefen, Vats and Arfvidsson teach the invention as claimed in claim 27 above, and further, wherein the simulated representation or the 3D avatar of the user is enabled to move within the street and/or the virtual premises and further enabled to interact with the virtual premises and/or other virtual avatars present in space (Vats: [33-35, 40-41]: a simulated 3D model avatar walks and interacts with the objects as phone, computer, light, and steering a car).
Regarding claim 30, Gefen, Vats and Arfvidsson teach the invention as claimed in claim 11 above, and further,
- receiving a third user input, wherein the third user input comprises one or more interaction commands, including interactions for understanding the functionality of different parts of the 3D model (Vats: [32, 48]: user interaction with a 3D object model, wherein “polygons along with associated texture of said 3D-model moves as per user command, and movement of 3D-model or its parts is achieved and displayed in real time … based on user input commands.”);
- identifying one or more interaction commands (Vats: [32-33, 48]: interact as in a real setup);
- in response to the one or more identified commands, rendering the corresponding interaction to the 3D model of the object using texture data, computer graphics data, and selectively using sound data of the 3D model of the object (Vats: [32-33, 48]: interact as in a real setup); and
- displaying the corresponding interaction to the 3D model, wherein the interaction can emulate the interaction with mechanical and/or electronic and/or light emitting parts similar to the real object (Vats: [32-33, 48]: interact as in a real setup).
Response to Arguments
Applicants’ prior art arguments have been fully considered and are moot in view of the new ground of rejection presented above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANDRITA BRAHMACHARI whose telephone number is (571)272-9735. The examiner can normally be reached Monday to Friday, 11 am to 8 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Mandrita Brahmachari/Primary Examiner, Art Unit 2176