DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA .
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
Claim 3 is provisionally rejected under 35 U.S.C. 101 as claiming the same invention as that of claim 1 of copending Application No. 17/719,976 (reference application). This is a provisional statutory double patenting rejection since the claims directed to the same invention have not in fact been patented.
Detailed Comparison Table
Current Application | Reference Application No. 17/719,976
Claim 1. A method to provide a virtual interactive environment, the method comprising: receiving a computer-generated three-dimensional (3D) model representing a virtual 3D space; converting the computer-generated 3D model into a plurality of cube maps corresponding to a plurality of viewpoints in the virtual 3D space, the plurality of viewpoints creating an incremental path in the virtual 3D space; generating, at a viewpoint of the plurality of viewpoints, a mapping plane or a mapping point associated with coordinates in the virtual 3D space; mapping one or more product models to the mapping plane or the mapping point; generating an interactive plane for the one or more product models, the interactive plane being associated with product information related to the one or more product models such that an interaction with the interactive plane causes retrieval of the product information; and presenting, at a display, the virtual interactive environment as a scene from the viewpoint, the scene being defined by a cube map of the plurality of cube maps corresponding to the viewpoint, the one or more product models being visible in the scene, and the interactive plane being at least partly layered over the one or more product models in the scene.
Claim 1: A method to provide a virtual interactive environment, the method comprising: receiving a computer-generated three-dimensional (3D) model representing a virtual 3D space; converting the computer-generated 3D model into a plurality of cube maps corresponding to a plurality of viewpoints in the virtual 3D space, the plurality of viewpoints creating an incremented path in the virtual 3D space; generating, at a viewpoint of the plurality of viewpoints, a mapping plane or a mapping point associated with coordinates in the virtual 3D space; mapping one or more product models to the mapping plane or the mapping point; generating an interactive plane for the one or more product models, the interactive plane being associated with product information related to the one or more product models such that an interaction with the interactive plane causes retrieval of the product information; and presenting, at a display, the virtual interactive environment as a scene from the viewpoint, the scene being defined by a cube map of the plurality of cube maps corresponding to the viewpoint, the one or more product models being visible in the scene, and the interactive plane being at least partly layered over the one or more product models in the scene
Claim 3: The method of claim 1, wherein:
the cube map includes a plurality of (2D) cube side images; and presenting the virtual interactive environment includes rendering a front 2D cube side image of the plurality of 2D cube side images before rendering other 2D cube side images of the plurality of 2D cube side images.
Claim 1 of 17/719,976 (continued): the cube map including a plurality of (2D) cube side images, wherein presenting the virtual interactive environment includes rendering a front 2D cube side image of the plurality of 2D cube side images before rendering other 2D cube side images of the plurality of 2D cube side images.
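For illustration only, the front-face-first rendering recited in claim 3 (and in the corresponding portion of claim 1 of 17/719,976) can be sketched in a few lines of Python. This is a minimal sketch of the concept, not either applicant's implementation; the face names and the draw() placeholder are assumptions.

# Illustrative sketch only: render the front 2D cube side image before the
# other cube side images. Face names and draw() are hypothetical placeholders.

CUBE_FACES = ["front", "back", "left", "right", "top", "bottom"]

def draw(face_name):
    # Stand-in for an actual 2D image draw call.
    print(f"rendering {face_name} cube side image")

def present_scene(faces=CUBE_FACES):
    # Render the front face first, then the remaining five faces.
    ordered = ["front"] + [f for f in faces if f != "front"]
    for face in ordered:
        draw(face)

present_scene()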
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 2, and 4-17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, and 4-15 of copending Application No. 17/719,976 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other. In this case, claims 1, 2, and 4-17 are anticipated by claims 1, 2, and 4-15 of copending Application No. 17/719,976.
Claim 1 is rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claim 1 of U.S. Patent Application No. 17/719,976. The conflicting claims are not identical because claim 1 of 17/719,976 requires the additional elements of claim 3, which are not required by claim 1 of the instant application. The elements of claim 1 of the instant application are fully anticipated by claim 1 of 17/719,976, and anticipation is “the ultimate or epitome of obviousness” (In re Kalm, 154 USPQ 10 (CCPA 1967); see also In re Dailey, 178 USPQ 293 (CCPA 1973) and In re Pearson, 181 USPQ 641 (CCPA 1974)). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim mapping between the current application and U.S. Patent Application No. 17/719,976

Current application claim -> Claim of 17/719,976
1 -> 1
2 -> 2
4 -> 4
5 -> 5
6 -> 6
7 -> 7
8 -> 8
9 -> 9
10 -> 10
11 -> 11
12 -> 12
13 -> 13
14 -> 14
16 -> 12
17 -> 12
Current Application | Reference Application No. 17/719,976
Claim 12. A method to provide a virtual interactive environment, the method comprising: receiving a computer-generated three-dimensional (3D) model representing a virtual 3D space; converting the computer-generated 3D model into a plurality of cube maps corresponding to a plurality of viewpoints in the virtual 3D space, the plurality of viewpoints creating an incremental path in the virtual 3D space; generating, at a viewpoint of the plurality of viewpoints, a plurality of mapping locators;
Claim 12: A method to provide a virtual interactive environment, the method comprising: receiving a computer-generated three-dimensional (3D) model representing a virtual 3D space; converting the computer-generated 3D model into a plurality of cube maps corresponding to a plurality of viewpoints in the virtual 3D space, the plurality of viewpoints creating an incremented path in the virtual 3D space; generating, at a viewpoint of the plurality of viewpoints, a plurality of mapping locators;
Claim 16: The method of claim 12, wherein the plurality of mapping locators are placed in the virtual 3D space to provide a drag-and-drop placement for a product model in the virtual 3D space.
Claim 12 of 17/719,976 (continued): the plurality of mapping locators including a 3D mapping shape placed in the virtual 3D space,
Claim 17: The method of claim 16, wherein the plurality of mapping locators include a 3D mapping box and generating the plurality of mapping locators includes defining boundaries or corners of the 3D mapping box.
Claim 12 of 17/719,976 (continued): the 3D mapping shape including a 3D mapping box, wherein generating the plurality of mapping locators includes defining at least one of boundaries or corners of the 3D mapping box;
Claim 12 (continued):
mapping a product model to a first mapping locator of the plurality of mapping locators; mapping an animation layer to a second mapping locator of the plurality of mapping locators; and presenting, at a display, the virtual interactive environment as a scene from the viewpoint, the scene being defined by a cube map of the plurality of cube maps, the product model being visible in the scene with the animation layer.
Claim 12 of 17/719,976 (continued): mapping a product model to a first mapping locator of the plurality of mapping locators; mapping an animation layer to a second mapping locator of the plurality of mapping locators; and presenting, at a display, the virtual interactive environment as a scene from the viewpoint, the scene being defined by a cube map of the plurality of cube maps, the product model being visible in the scene, and the animation layer causing a repeating color event in the scene.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A).
Regarding claim 1, Mabrey49 discloses a method to provide a virtual interactive environment, the method comprising:
the plurality of viewpoints creating an incremental path in the virtual 3D space (Mabrey49 (Fig. 2 [0047]) illustrates navigational points that create the illusion of free movement within panoramic space.);
generating, at a viewpoint of the plurality of viewpoints, a mapping plane or a mapping point associated with coordinates in the virtual 3D space ();
mapping one or more product models to the mapping plane or the mapping point;
generating an interactive plane for the one or more product models (Mabrey49 (Fig. 3 [0048]) illustrates products represented in the images. When the user selects a point on one of the products (virtual price tag), the system provides additional information about the product.), the interactive plane being associated with product information related to the one or more product models such that an interaction with the interactive plane causes retrieval of the product information. (Mabrey49 (Fig. 3 [0048]) illustrates the user selecting a point on one of the products (virtual price tag). Mabrey49 creates an embedded window that provides additional product views, product information, and product purchase.)
Mabrey49 as modified by Zhang discloses receiving a computer-generated three-dimensional (3D) model representing a virtual 3D space (Zhang [0011] provides a game model of a scene (terrain and ground).);
converting the computer-generated 3D model into a plurality of cube maps corresponding to a plurality of viewpoints in the virtual 3D space (Zhang [0046] takes a game scene (model [0011]) and maps to cube maps.),
presenting, at a display, the virtual interactive environment as a scene from the viewpoint, the scene being defined by a cube map of the plurality of cube maps corresponding to the viewpoint, the one or more product models being visible in the scene, and the interactive plane being at least partly layered over the one or more product models in the scene. (Zhang [0046] takes a game scene (model [0011]) and maps to cube maps.
Mabrey49 (Fig. 1 [0046]) illustrates an interactive environment that is mapped to 6 standard cube faces. Mabrey49 (Fig. 23 [0066]) illustrates recognition software that provides product interaction using the cube faces.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce with Zhang’s system of figure skin scattering based on the environment map because both Zhang and Mabrey49 map an environment to cube maps (Mabrey49 [0046], Zhang [0046]).
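To visualize the cube-map conversion mapped above (one cube map of six face images per viewpoint along an incremental path), the following Python sketch may help. It is only a schematic reading of the claim language; the Viewpoint type, the FACE_DIRECTIONS table, and the render_face() helper are assumptions for illustration and do not come from Mabrey49 or Zhang.

# Illustrative sketch only: one cube map (six face images) per viewpoint
# along an incremental path. render_face() is a hypothetical placeholder.

from dataclasses import dataclass

FACE_DIRECTIONS = {
    "front": (0, 0, -1), "back": (0, 0, 1),
    "left": (-1, 0, 0), "right": (1, 0, 0),
    "top": (0, 1, 0), "bottom": (0, -1, 0),
}

@dataclass
class Viewpoint:
    x: float
    y: float
    z: float

def render_face(model, viewpoint, direction):
    # Stand-in for rendering one 2D cube side image of the model as seen
    # from the viewpoint toward the given direction.
    return {"viewpoint": (viewpoint.x, viewpoint.y, viewpoint.z),
            "direction": direction}

def convert_to_cube_maps(model, path):
    # One cube map (a dict of six face images) per viewpoint on the path.
    return [
        {face: render_face(model, vp, d) for face, d in FACE_DIRECTIONS.items()}
        for vp in path
    ]

path = [Viewpoint(0.0, 1.6, 0.5 * step) for step in range(4)]
cube_maps = convert_to_cube_maps("showroom.obj", path)
print(len(cube_maps), "cube maps generated")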
Regarding claim 2, the claimed invention for claim 1 is shown to be met with explanations from Mabrey49 and Zhang above.
Mabrey49 further teaches the method of claim 1, wherein the scene is a first scene and the viewpoint is a first viewpoint, and further comprising presenting, at the display, one or more navigation arrows which, upon receiving a user input, cause the display to present a second scene by traversing from the first viewpoint to a second viewpoint of the plurality of viewpoints. (Mabrey49 (Fig. 2 [0047]) illustrates a user selecting a navigation point in the image (101). The user then jumps to that location in the store (102), (102) being the right side of the original image.)
Regarding claim 7, the claimed invention for claim 1 is shown to be met with explanations from Mabrey49 and Zhang above.
Mabrey49 further teaches the method of claim 1, wherein the one or more product models include a first product image based on a 3D model file and a second product image based on a 2D image file. (Mabrey49 (Fig. 3 [0048]) illustrates 2D product images. Mabrey49 (Fig. 4 [0049]) illustrates 3D product images.)
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of Zia et al. (“Zia”, US Pre-Grant Publication 20190197599 A1).
Regarding claim 4, the claimed invention for claim 1 is shown to be met with explanations from Mabrey49 and Zhang above.
Mabrey49 and Zhang do not describe the method of claim 1, wherein the virtual interactive environment includes a 2D video with a skew corresponding to a surface in the virtual 3D space.. [sic] (priority to US Provisional 63276389 11/5/21)
However, these features are well known in the art as taught by Zia. For example, Zia discloses the method of claim 1, wherein the virtual interactive environment includes a 2D video with a skew corresponding to a surface in the virtual 3D space. (Zia [0003] discusses AR applications that superimpose images (including video) on a scene. The scene may include planes, and the rendering is manipulated to include skew.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce and Zhang’s system of figure skin scattering based on the environment map with Zia’s system that places objects in an AR scene because Zia intelligently selects and presents objects (e.g., products, such as furniture and home furnishings) for placement in an augmented reality scene [0001].
Claim(s) 5, 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of Ramalingam et al. (“Ramalingam”, US Pre-Grant Publication 20160202947 A1).
Regarding claim 5, the claimed invention for claim 1 is shown to be met with explanations from Mabrey49 and Zhang above.
Mabrey49 and Zhang do not describe the method of claim 1, wherein the viewpoint is a first viewpoint and further comprising:
receiving a first user input as a response to a prompt for preference information;
receiving a second user input as the interaction with the interactive plane; and
determining a second viewpoint to present at the display as a suggested viewpoint based on the response or the product information associated with the interactive plane.
However, these features are well known in the art as taught by Ramalingam. For example, Ramalingam discloses the method of claim 1, wherein the viewpoint is a first viewpoint and further comprising:
receiving a first user input as a response to a prompt for preference information (Ramalingam [0068] discloses two users wearing intelligent glasses, viewing different viewpoints “L1” and “L2”.
The first user transmits a request for at least a portion of the FOV of the second user [0069].);
receiving a second user input as the interaction with the interactive plane (Ramalingam [0070] discloses the second user captures the requested portion of the second user’s FOV. The second user responds to the request made by the first user.
Mabrey49 (Fig. 3 [0048]) illustrates products represented in the images. When the user selects a point on one of the products (virtual price tag), the system provides additional information about the product.); and
determining a second viewpoint to present at the display as a suggested viewpoint based on the response or the product information associated with the interactive plane. (Ramalingam [0071] the first user receives the portion of the view provided by the second user.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with Ramalingam’s remote system of wearable electronic device because Ramalingam provides the capability of a first user viewing a second user’s FOV [0012].
Regarding claim 11, the claimed invention for claim 1 is shown to be met with explanations from Mabrey49 and Zhang above.
Mabrey49 further teaches the method of claim 1, wherein the display is a first display of a first computing device, the scene is a first scene defined by a first cube map of the plurality of cube maps (Mabrey49 (Fig. 1 [0046]) illustrates an interactive environment that is mapped to 6 standard cube faces.), and further comprising:
receiving a navigation input at the first computing device. (Mabrey49 (Fig. 2 [0047]) illustrates a user selecting a navigation point in the image (101). The user then jumps to that location in the store (102), (102) being the right side of the original image.)
Mabrey49 as modified by Zhang further teaches the virtual interactive environment as a second scene defined by a second cube map of the plurality of cube maps. (Interpreted from published specification [0073]: additional friends or a plurality of viewers can enter a “follow” mode in which they can only watch the interface of the primary device as the primary user navigates and interacts with the virtual interactive environment.)
(Zhang [0046] takes a game scene (model [0011]) and maps to cube maps.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce with Zhang’s system of figure skin scattering based on the environment map because both Zhang and Mabrey49 map an environment to cube maps (Mabrey49 [0046], Zhang [0046]).
Mabrey49 and Zhang as modified by Ramalingam further teach presenting the virtual interactive environment at a second display of a second computing device (Ramalingam ([0068]) discloses two users wearing intelligent glasses, viewing different viewpoints “L1” and “L2”.);
presenting, in response to the navigation input at the first computing device, at the first computing device and at the second computing device. (Ramalingam ([0071]) the first user receives the portion of the view provided by the second user.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with Ramalingam’s remote system of wearable electronic device because Ramalingam provides the capability of a first user viewing a second user’s FOV [0012].
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of Ramalingam et al. (“Ramalingam”, US Pre-Grant Publication 20160202947 A1), in view of Powers et al. (“Powers”, US Pre-Grant Publication 20180374276 A1).
Regarding claim 6, the claimed invention for claim 5 is shown to be met with explanations from Mabrey49, Zhang and Ramalingam above.
Mabrey49, Zhang and Ramalingam do not describe the method of claim 5, wherein: (priority 3/19/19)
the computer-generated 3D model includes a virtual building;
the virtual 3D space is an inside of the virtual building,
the plurality of viewpoints include a plurality of rooms inside the virtual building.
However, these features are well known in the art as taught by Powers. For example, Powers discloses the method of claim 5, wherein:
the computer-generated 3D model includes a virtual building;
the virtual 3D space is an inside of the virtual building,
the plurality of viewpoints include a plurality of rooms inside the virtual building. (Powers [0039] generates a 3D model of a building with multiple rooms.
Mabrey49 (Fig. 26 [0069]) illustrates a virtual 3D space inside a building with multiple floors (rooms) selectable by the user.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map, Ramalingam’s remote system of wearable electronic device with Powers’ system providing a three-dimensional representation of a physical environment because Powers supports annotations with textual content may include product information (e.g., make, model, price, availability) [0094].
Mabrey49 further teaches the second viewpoint corresponds to a particular room of the plurality of rooms with an associated trait corresponding to the response or the product information. (Mabrey49 (Fig. 26 [0069]) illustrates a popup map when the user selects a floor. Fig. 27 illustrates the displayed selected floor. The floor has embedded price tag hotspots throughout the floor.)
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of Furukawa et al. (“Furukawa”, US Pre-Grant Publication 20190238819 A1).
Regarding claim 8, the claimed invention for claim 1 is shown to be met with explanations from Mabrey49 and Zhang above.
Mabrey49 and Zhang do not describe the method of claim 1, wherein coordinates of the viewpoint are based at least partly on a minimum distance value from a surface in the virtual 3D space.
However, these features are well known in the art as taught by Furukawa. For example, Furukawa discloses the method of claim 1, wherein coordinates of the viewpoint are based at least partly on a minimum distance value from a surface in the virtual 3D space. (Furukawa (Fig. 5 [0144]) illustrates a depth distance (z) from the viewpoint to an imaging object. Furukawa [0145] identifies the position of (z) which includes a minimum value of (z).)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with Furukawa’s system that generates a texture image of high picture quality at a predetermined viewpoint using an omnidirectional image because Furukawa’s system supports a cube map [0409].
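As a schematic reading of the claim 8 limitation (viewpoint coordinates based at least partly on a minimum distance value from a surface), the sketch below keeps a viewpoint at least a minimum distance from a planar surface. The plane representation and function names are assumptions made for this sketch, not Furukawa's method.

# Illustrative sketch only: enforce a minimum distance between a viewpoint
# and a planar surface given by a point and a unit normal (assumed form).

def distance_to_plane(point, plane_point, plane_normal):
    # Signed distance from a point to the plane.
    return sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))

def enforce_min_distance(viewpoint, plane_point, plane_normal, min_dist):
    d = distance_to_plane(viewpoint, plane_point, plane_normal)
    if d >= min_dist:
        return viewpoint
    # Push the viewpoint along the surface normal until it satisfies min_dist.
    shift = min_dist - d
    return tuple(v + shift * n for v, n in zip(viewpoint, plane_normal))

floor_point, floor_normal = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(enforce_min_distance((2.0, 0.2, 3.0), floor_point, floor_normal, 1.5))
# -> (2.0, 1.5, 3.0): the viewpoint is lifted to the minimum height above the floor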
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of He et al. (“He”, US Pre-Grant Publication 20190215532 A1).
Regarding claim 9, the claimed invention for claim 1 is shown to be met with explanations from Mabrey49 and Zhang above.
Mabrey49 and Zhang do not describe the method of claim 1, wherein generating the mapping plane includes defining a plurality of corner points for the mapping plane at a virtual surface in the virtual 3D space.
However, these features are well known in the art as taught by He. For example, He discloses the method of claim 1, wherein generating the mapping plane includes defining a plurality of corner points for the mapping plane at a virtual surface in the virtual 3D space. (He (Figs. 14a-b) [0133] illustrates a cube map with padding regions (14b). The padded region is filled with samples from neighboring faces. He [0137] fills the padding region from the nearest corner sample position.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with He’s system for processing video data from multiple cameras because He stitches the video data together to obtain a 360-degree video (Abstract). The 360-degree video supports a cube map [0004].
Claim(s) 10, 12, 13, 15, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of Smithers (US Pre-Grant Publication 20200298113 A1).
Regarding claim 10, the claimed invention for claim 1 is shown to be met with explanations from Mabrey49 and Zhang above.
Mabrey49 and Zhang do not describe the method of claim 1, further comprising: mapping an animation layer to the mapping plane or the mapping point, the animation layer including a repeating color event or a moving object. (Priority to Instant Application 4/13/2022)
However, these features are well known in the art as taught by Smithers. For example, Smithers discloses the method of claim 1, further comprising: mapping an animation layer to the mapping plane or the mapping point, the animation layer including a repeating color event or a moving object (Smithers [0021] discloses that animated characters may be loaded from any point of view and streamed to a video surface (mapping plane).)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with Smithers’ system for interactive broadcast streamed video from games and other dynamic content because Smithers’ video surfaces correspond to the six surfaces of a cube map [0021].
Regarding claim 12, Mabrey49 discloses a method to provide a virtual interactive environment, the method comprising:
the plurality of viewpoints creating an incremental path in the virtual 3D space (Mabrey49 (Fig. 2 [0047]) illustrates navigational points that create the illusion of free movement within panoramic space.);
generating, at a viewpoint of the plurality of viewpoints, a plurality of mapping locators (Mabrey49 (Fig. 3 [0048] (112)));
mapping a product model to a first mapping locator of the plurality of mapping locators. (Mabrey49 (Fig. 3 [0048]) illustrates products represented in the images. When the user selects a point on one of the products (virtual price tag), the system provides additional information about the product.)
Mabrey49 as modified by Zhang further teaches receiving a computer-generated three-dimensional (3D) model representing a virtual 3D space (Zhang [0011] provides a game model of a scene (terrain and ground).);
converting the computer-generated 3D model into a plurality of cube maps corresponding to a plurality of viewpoints in the virtual 3D space (Zhang [0046] takes a game scene (model [0011]) and maps to cube maps.),
presenting, at a display, the virtual interactive environment as a scene from the viewpoint, the scene being defined by a cube map of the plurality of cube maps. (Zhang [0046] takes a game scene (model [0011]) and maps to cube maps.
Mabrey49 (Fig. 1 [0046]) illustrates an interactive environment that is mapped to 6 standard cube faces. Mabrey49 (Fig. 23 [0066]) illustrates recognition software that provides product interaction using the cube faces.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce with Zhang’s system of figure skin scattering based on the environment map because both Zhang and Mabrey49 map an environment to cube maps (Mabrey49 [0046], Zhang [0046]).
Mabrey49 and Zhang as modified by Smithers further teach mapping an animation layer to a second mapping locator of the plurality of mapping locators (Smithers [0021] discloses that animated characters may be loaded from any point of view and streamed to a video surface (mapping plane).); and
the product model being visible in the scene with the animation layer. (Smithers [0021] discloses that animated characters may be loaded from any point of view and streamed to a video surface (mapping plane). Mabrey49 (Fig. 3 [0048]) illustrates products represented in the images. When the user selects a point on one of the products (virtual price tag), the system provides additional information about the product.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with Smithers’ system for interactive broadcast streamed video from games and other dynamic content because Mabrey49 supports display of 3D products that may be rotated (animated) by the user (Fig. 4).
Regarding claim 13, the claimed invention for claim 12 is shown to be met with explanations from Mabrey49, Zhang and Smithers above.
Mabrey49 further teaches the method of claim 12, wherein the second mapping locator is a mapping plane or one or more mapping points (Mabrey49 (Fig. 3 [0048]) illustrates products represented in the images. When the user selects a point on one of the products (virtual price tag), the system provides additional information about the product.)
Mabrey49 and Zhang as modified by Smithers further teach the animation layer occurs over a portion of the cube map defined by the mapping plane or the one or more mapping points. (Smithers [0021] discloses scenes, animated characters, and other content that may be streamed out as video surface files. The plurality of video surfaces may correspond to the six surfaces of a cube map captured from the point of view of the video game player.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with Smithers’ system for interactive broadcast streamed video from games and other dynamic content because Mabrey49 supports display of 3D products that may be rotated (animated) by the user (Fig. 4).
Regarding claim 15, the claimed invention for claim 13 is shown to be met with explanations from Mabrey49, Zhang and Smithers above.
Mabrey49 and Zhang as modified by Smithers further teach the method of claim 13, wherein the animation layer includes an object moving over a portion of the cube map. (Smithers [0021] discloses scenes, animated characters, and other content that may be streamed out as video surface files. (An animated character moves, even if it remains at the same position.) The plurality of video surfaces may correspond to the six surfaces of a cube map captured from the point of view of the video game player.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with Smithers’ system for interactive broadcast streamed video from games and other dynamic content because Smithers’ video surfaces correspond to the six surfaces of a cube map [0021].
Regarding claim 16, the claimed invention for claim 12 is shown to be met with explanations from Mabrey49, Zhang and Smithers above.
Mabrey49 further teaches (priority to US Provisional 63276389 11/5/21 [0031])
the method of claim 12, wherein the plurality of mapping locators are placed in the virtual 3D space to provide a drag-and-drop placement for a product model in the virtual 3D space. (Mabrey49 (Fig. 22 [0065]) illustrates a system where products are placed in the virtual room (panorama) by dragging the item to the correct position.)
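As a schematic reading of the drag-and-drop placement recited in claim 16, the sketch below snaps a dropped product model to the nearest mapping locator. The locator coordinates and helper names are assumptions for illustration, not Mabrey49's implementation.

# Illustrative sketch only: drag-and-drop placement by snapping a dropped
# product model to the nearest mapping locator in the virtual 3D space.

import math

def nearest_locator(drop_point, locators):
    # Locator closest to where the product model was dropped.
    return min(locators, key=lambda loc: math.dist(drop_point, loc))

def drag_and_drop(product_id, drop_point, locators, placements):
    placements[product_id] = nearest_locator(drop_point, locators)
    return placements

locators = [(0.0, 0.0, 2.0), (1.5, 0.0, 2.0), (3.0, 0.0, 2.0)]
print(drag_and_drop("sofa-01", (1.4, 0.1, 2.2), locators, {}))
# -> {'sofa-01': (1.5, 0.0, 2.0)}: the product snaps to the closest locator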
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of Smithers (US Pre-Grant Publication 20200298113 A1), in view of Jiang (CN 108132712 A).
Regarding claim 14, the claimed invention for claim 13 is shown to be met with explanations from Mabrey49, Zhang and Smithers above.
Mabrey49, Zhang and Smithers do not describe the method of claim 13, wherein the animation layer includes one or more of: a water animation with a lighting effect across at least a portion of the mapping plane; or
a twinkle animation with a changing brightness at coordinates of the one or more mapping points.
However, these features are well known in the art as taught by Jiang. For example, Jiang discloses the method of claim 13, wherein the animation layer includes one or more of: a water animation with a lighting effect across at least a portion of the mapping plane (Jiang (page 9, paragraph 7) discloses a virtual scene that includes water (including river and ocean) with luminance variation. The water surface is animated.); or
a twinkle animation with a changing brightness at coordinates of the one or more mapping points.
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map, and Smithers’ system for interactive broadcast streamed video from games and other dynamic content with Jiang’s system in which a virtual scene is displayed with a weather condition because Jiang adjusts the brightness of the scene map, and related parameters of the simulated water-surface wave animation also transition linearly from the sunny-day state to the rainy-day state (page 10, last paragraph).
Claim(s) 18, 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of Smithers (US Pre-Grant Publication 20200298113 A1), in view of Schickel et al. (“Schickel”, US Pre-Grant Publication 20190394443 A1).
Regarding claim 18, Mabrey49 discloses a method to provide a virtual interactive environment, the method comprising:
generating, at a viewpoint of the one or more viewpoints, a plurality of mapping locators (Mabrey49 (Fig. 3 [0048] (112)));
mapping a product model to a first mapping locator of the plurality of mapping locators (Mabrey49 (Fig. 3 [0048]) illustrates products represented in the images. When the user selects a point on one of the products (virtual price tag), the system provides additional information about the product.);
receiving a navigation input corresponding to the viewpoint (Mabrey49 (Fig. 2 [0047]) illustrates a user selecting a navigation point in the image (101). The user then jumps to that location in the store (102), (102) being the right side of the original image.); and
presenting, at a display, the virtual interactive environment as a scene from the viewpoint, the scene being defined by a cube map of the one or more cube maps corresponding to the viewpoint;
the product model being visible in the scene (Mabrey49 (Fig. 2 [0047]) illustrates a virtual room with products for purchase in multiple locations. The shopper may select navigation point (101) which changes the shopper’s perspective to the right side of the store (102). Multiple products appear in the field of view and the user may choose one or more of the products.
Mabrey49 (Fig. 1 [0046]) illustrates an interactive environment that is mapped to 6 standard cube faces. Mabrey49 (Fig. 23 [0066]) illustrates recognition software that provides product interaction using the cube faces.)
Mabrey49 as modified by Zhang further teaches receiving a computer-generated three-dimensional (3D) model representing a virtual 3D space (Zhang [0011] provides a game model of a scene (terrain and ground).);
converting the computer-generated 3D model into one or more cube maps corresponding to one or more viewpoints in the virtual 3D space. (Zhang [0046] takes a game scene (model [0011]) and maps to cube maps.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce with Zhang’s system of figure skin scattering based on the environment map because both Zhang and Mabrey49 map an environment to cube maps (Mabrey49 [0046], Zhang [0046]).
Mabrey49 and Zhang as modified by Smithers further teach mapping a virtual character to a second mapping locator of the plurality of mapping locators. (Smithers [0021] may load animated characters from any point of view and streamed to a video surface (mapping plane).)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map with Smithers’ system for interactive broadcast streamed video from games and other dynamic content because Smithers’ video surfaces correspond to the six surfaces of a cube map [0021].
Mabrey49, Zhang and Smithers do not describe the virtual character performing an action in the scene.
However, these features are well known in the art as taught by Schickel. For example, Schickel discloses the virtual character performing an action in the scene. (Schickel [0094]-[0102] discloses a system with realistic avatars as virtual greeters.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map, Smithers’ system for interactive broadcast streamed video from games and other dynamic content with Schickel’s system for representing a spatial image in a virtual environment because Schickel supports virtual mirrors showing different clothes (products) on people (avatars) [0099].
Regarding claim 19, the claimed invention for claim 18 is shown to be met with explanations from Mabrey49, Zhang, Smithers and Schickel above.
Mabrey49 further teaches the method of claim 18, further comprising generating the virtual character by:
receiving a 2D video with a dynamic color area including the virtual character and a static color area omitting the virtual character; and
causing the static color area to be removed or transparent. (Mabrey49 [0078] removes the background of a dressing room to provide images of clothing.)
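As a schematic reading of claim 19 (removing or making transparent the static color area of a 2D video), the sketch below keys out a static background color by zeroing the alpha of matching pixels. The key color, tolerance, and NumPy-array frame format are assumptions for illustration, not Mabrey49's background-removal technique.

# Illustrative sketch only: make the static color area of an RGB frame
# transparent, leaving the dynamic (character) area opaque.

import numpy as np

def key_out_static_color(frame_rgb, key_color=(0, 255, 0), tolerance=30):
    # Alpha is 0 where a pixel is close to the static key color (background)
    # and 255 elsewhere (the dynamic area containing the virtual character).
    diff = np.abs(frame_rgb.astype(int) - np.array(key_color)).sum(axis=-1)
    alpha = np.where(diff <= tolerance, 0, 255).astype(np.uint8)
    return np.dstack([frame_rgb, alpha])

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :] = (0, 255, 0)           # static green background
frame[1:3, 1:3] = (200, 120, 90)    # pixels standing in for the character
print(key_out_static_color(frame)[..., 3])  # alpha mask: 0 background, 255 character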
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mabrey et al. (“Mabrey49”, US Pre-Grant Publication 20140095349 A1), in view of Zhang (CN 104484896 A), in view of Smithers (US Pre-Grant Publication 20200298113 A1), in view of Schickel et al. (“Schickel”, US Pre-Grant Publication 20190394443 A1), in view of Jain et al, (“Jain”, US Patent 10733638 B1), in view of Hwang et al. (“Hwang”, US Pre-Grant Publication 20180300782 A1).
Regarding claim 20, the claimed invention for claim 19 is shown to be met with explanations from Mabrey49, Zhang, Smithers, and Schickel above.
Mabrey49, Zhang, Smithers, and Schickel do not describe the method of claim 19, further comprising: activating a friend mode at a primary device by:
sending a shareable link for viewing the virtual interactive environment at a plurality of devices.
However, these features are well known in the art as taught by Jain. For example, Jain discloses the method of claim 19, further comprising: activating a friend mode at a primary device by:
sending a shareable link for viewing the virtual interactive environment at a plurality of devices (Jain (column 11 lines 33-58 (41)) sends a URL associated with a shopping web page describing products. Jain (column 4 lines 39-67 (16)) uses a URL to render web pages for a client.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map, Smithers’ system for interactive broadcast streamed video from games and other dynamic content, Schickel’s system for representing a spatial image in a virtual environment with Jain’s system that tracks user requests from client devices because the users interact with a web page using a URL (Abstract).
Mabrey49, Zhang, Smithers, Schickel and Jain do not describe receiving a transaction input, at the primary device, associated with the product model; and
causing a plurality of transaction actions to occur at the plurality of devices in response to the transaction input at the primary device. (priority 11/5/21) (Interpreted as: a transaction of a product at the primary device causes the plurality of devices to transact products.)
However, these features are well known in the art as taught by Hwang. For example, Hwang discloses receiving a transaction input, at the primary device, associated with the product model; and
causing a plurality of transaction actions to occur at the plurality of devices in response to the transaction input at the primary device. (Hwang (Fig. 5 [0101]) (S550) discloses a host device and a guest (friend) device, where a list of products selected by the host and guest devices, corresponding to a friend acceptance message, is provided. Hwang [0102] (S560) discloses the host device transmitting payment information about the selected products to a server.)
Therefore it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention, to combine Mabrey’s system for facilitating social e-commerce, Zhang’s system of figure skin scattering based on the environment map, Smithers’ system for interactive broadcast streamed video from games and other dynamic content, Schickel’s system for representing a spatial image in a virtual environment, Jain’s system that tracks user requests from client devices with Hwang’s system for providing a cooperative shopping service because Hwang sends a list of friends (guest clients) to a host client, and the host client may select one or more friends to participate in the cooperative shopping service [0097]-[0099].
Allowable Subject Matter
Claims 3 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and upon overcoming the obviousness-type double patenting and statutory double patenting rejections.
The following is an examiner’s statement of reasons for allowance:
Regarding claim 3, Mabrey49 further teaches the cube map includes a plurality of (2D) cube side images. (Mabrey49 (Fig. 1 [0046]) illustrates an interactive environment that is mapped to 6 standard cube faces.)