DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
Claim 12 has been amended to overcome the rejection under 35 U.S.C. 101, and as such the rejection has been withdrawn.
Claim 12 has been amended to remove the interpretation under 35 U.S.C. 112(f).
Claim 12 has been amended to overcome the rejection made pursuant to 35 U.S.C. 112(b), and as such the rejection has been withdrawn.
Response to Arguments
35 U.S.C. 112(b) Arguments:
Applicant's arguments filed 10/27/2025 with respect to the rejection of claims 7-9 under 35 U.S.C. 112(b) have been fully considered but they are not persuasive.
In particular, applicant argues that the claims have been amended to address the issue of indefiniteness (see page 6 of applicant’s correspondence filed 10/27/2025). Examiner respectfully disagrees. Regarding claim 7, the claim still recites “a 3D digital asset player of the user device” in line 3 of the claim, using the article “a” rather than “the” before “3D digital asset player of the user device.” Claim 7, however, depends from claim 1 (through claim 6), and claim 1 already introduces “a 3D digital asset player” in line 3. Accordingly, claim 7, read together with claim 1, recites the following (emphasis added):
A computer-implemented method for processing digital data including a master 3D digital asset and an auxiliary digital asset, the method comprising:
Providing a 3D digital asset player within a display screen operatively connected to a processing device;
[…]
Playing the master 3D digital asset by a 3D digital asset player of the user device;
[…]
The claim is indefinite as to how the components are laid out. Claim 1 merely recites “providing a 3D digital asset player within a display screen operatively connected to a processing device” in lines 3-4. The phrase “operatively connected” does not connote any definite meaning as to spatial relationship or otherwise, other than that the display screen can in some manner be operatively connected to the processing device, whether local or remote. As such, by reciting “a 3D digital asset player of the user device,” it is unclear what the relationship is with the first provided 3D digital asset player. In other words, is the first instance of the 3D digital asset player on the processing device itself, or is it merely provided on a display screen operatively connected to the processing device, which could also be a display screen of the user device that is in some manner operatively connected to the processing device, such as by the internet so that it receives data for display from the processing device? The claim is therefore rendered indefinite as to how the first introduced 3D digital asset player relates to the second, and as to whether claim 7 introduces a second 3D digital asset player or merely further limits the digital asset player to being of the user device. Claims 8-9 depend from claim 7 and are therefore indefinite for the same reasons.
35 U.S.C. 102 Arguments:
Applicant's arguments filed 10/27/2025 with respect to the rejection of claims 1-12 under 35 U.S.C. 102 have been fully considered but are persuasive only in part.
Applicant argues, first, that “Heinen’s ‘mesh’ is of an entirely different sort than that set forth in the present application and in the claims” and, second, that Heinen fails to teach “providing a first positional mesh overlaid on…[a] 3D digital asset player.” Examiner respectfully disagrees.
First, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., Heinen teaching a different type of mesh) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In other words, the claim merely recites that a positional mesh is used, but does not include any limitations further distinguishing the type of mesh that applicant appears to argue is intended. As currently recited, the claim reads on any mesh of positional or location data, such as spatial coordinates. If applicant intends that some sort of particular mesh structure be used, applicant must recite as much within the claims.
Second, Heinen teaches “providing a first positional mesh overlaid on…[a] 3D digital asset player” in the paragraphs indicated by applicant, but not discussed in the arguments (see page 8 of applicant’s correspondence filed 10/27/2025, citing Heinen ¶¶331-333). In particular, Heinen ¶333 states:
[0333] In certain examples, in the browser 360140, the generated textured mesh 360138 may be used either as a usual 3D model in a virtual scene or as a 3D marker 360144 in the virtual scene when it is used as an augmented reality overlay over the physical scene. From a user point of view, annotating 360142 a textured mesh 360138 as a 3D marker 360144 for a virtual scene may be more efficient and user friendly than using a point cloud 360124 for the annotation process.
As stated by Heinen, the generated textured mesh can be used as a 3D marker in the virtual scene when it is used as an augmented reality overlay over the physical scene. Accordingly, applicant’s argument is not persuasive.
Examiner also points out that Heinen is directed to the same field as applicant’s invention, namely “facilitating the work of a ‘content editor… manipulating various digital assets and locating them virtual to… positional locations’” for augmented reality, as described on page 7 of applicant’s correspondence filed 10/27/2025 (see Heinen ¶137: “One problem with VR and AR is that building or modifying scenes for viewing, may be inaccessible to regular users. These regular users may not be able to create their own VR or AR applications without significant knowhow in the software engineering space. Disclosed here are methods and systems including corresponding web-based platforms for enabling average users without specific knowledge in programming and/or design to create AR and/or VR content.”).
Applicant’s arguments, see page 8 of applicant’s correspondence filed 10/27/2025, with respect to the rejection(s) of claim(s) 1-12 under 35 U.S.C. 102, regarding determining via the first positional mesh a virtual 3D positional location within the master 3D digital asset as claimed have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Berquam et al.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 7, the claim recites “playing the master 3D digital asset by a 3D digital asset player of the user device” in line 3 of the claim. Claim 7, however, depends from claim 1, which already introduces “a 3D digital asset player” in line 3. By reciting “a 3D digital asset player of the user device,” it is unclear what the relationship is with the first provided 3D digital asset player. In other words, is the first instance of the 3D digital asset player on the processing device itself, or is it merely provided on a display screen operatively connected to the processing device, which could also be a display screen of the user device that is in some manner operatively connected to the processing device, such as by the internet so that it receives data for display from the processing device? The claim is therefore rendered indefinite as to how the first introduced 3D digital asset player relates to the second, and as to whether claim 7 introduces a second 3D digital asset player or merely further limits the digital asset player to being of the user device. Claims 8-9 depend from claim 7 and are rejected based on the same rationale as claim 7 for incorporating the same indefinite language.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Heinen et al. (US 2020/0312029 A1) in view of Berquam et al. (US 2020/0249819 A1).
Regarding claim 13, Heinen discloses:
A system, (Heinen, Abstract and ¶348: systems and methods utilize various computing devices) comprising:
one or more computer hardware processors configured to: (Heinen, Fig. 1a and ¶142: computer systems including components such as processor and memory, where processors run software programs; ¶369: processors for executing instructions)
provide a 3D digital asset player within a display screen (Heinen, Fig. 2a and ¶174: system having interfaces to create and output content, providing a player/viewer module or editor, outputting content to user, including using devices in Figs. 30a-30f, disclosed as devices with displays; ¶284 discloses adding 3D objects and annotations to a Holo);
provide a first positional mesh overlaid on at least a portion of the 3D digital asset player (Heinen, ¶164: imported 360 degree spherical image viewed and used with computing device – see Fig. 1q; Fig. 2i and ¶183: user interface for creating and editing VR tours with 360 degree images and videos, where user can add and edit 3D content in the image or video; ¶319 discloses object tracking to transform object space to virtual world space; ¶¶326-327: 3D scanner application generates point cloud 360124, which is then used to generate a 3D mesh 360133 of the scene; ¶331: mesh having corresponding keyframe 360112 as its texture and saving textured mesh into a 3D model and corresponding keyframe, where the keyframe is used to extract key points needed for the tracking process in the virtual scene when it is used as an AR overlay over the physical scene; ¶333: “In certain examples, in the browser 360140, the generated textured mesh 360138 may be used either as a usual 3D model in a virtual scene or as a 3D marker 360144 in the virtual scene when it is used as an augmented reality overlay over the physical scene.”),
wherein the first positional mesh assigns a plurality of positions within the display screen with corresponding virtual 3D positional locations within a master 3D digital asset;
(Heinen, Fig. 6A and ¶218: 3D position and rotation of canvas, including virtual space having correct absolute sizes, 3D positions and rotations, creating a virtual representation of the physical space using mapping of distances and scale to physical world; ¶326: 3D scanner application generates a point cloud using estimated depth map of keyframe; ¶327: point cloud used to generate 3D mesh of the scene; ¶¶328-329 disclose generating the 3D mesh and applying texture to it; ¶331: mesh having corresponding keyframe 360112 as its texture and saving textured mesh into a 3D model and corresponding keyframe, where the keyframe is used to extract key points needed for the tracking process in the virtual scene when it is used as an AR overlay over the physical scene; Heinen further discloses generation of a 3D mesh for real-time viewing of a scene, wherein the mesh is a 3D mesh corresponding to virtual and physical 3D positions within the system – see ¶¶328-329, including: “The 3D mesh 360133 generation process uses a filtered 360125 or unfiltered point cloud 360124 and keyframe poses as an input source. As a first iteration of the process, the system can compute a normal vector for each point in the input point cloud”; the 3D mesh 360133 generation system can check and orient a normal vector of each point toward the camera pose of the keyframe that the point belongs to; ¶371, NP17: “A method of creating a virtual reality scene, comprising, by a computer with a processor and memory, receiving an image data over a network; estimating a depth map of a keyframe of the received image data using estimated depth values of pixels in the keyframe; and generating a point cloud using the estimated depth map of the keyframe; generating a 3D mesh using the generated point cloud”);
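For illustration only, the manner in which a positional mesh assigns display screen positions to corresponding virtual 3D positional locations can be modeled as a simple lookup structure. The following TypeScript sketch is a hypothetical simplification; the identifiers (PositionalMesh, Vec3, cellSize) and the grid-based design are assumptions and are not taken from the claims, Heinen, or Berquam.

```typescript
// Hypothetical sketch of a "positional mesh": a grid of cells over the
// display screen, each assigned a virtual 3D positional location within
// the master 3D digital asset. Not taken from the cited references.

interface Vec3 { x: number; y: number; z: number; }

class PositionalMesh {
  // Maps "col,row" grid-cell keys to virtual 3D positional locations.
  private cells = new Map<string, Vec3>();

  constructor(private cellSize: number) {}

  private key(screenX: number, screenY: number): string {
    const col = Math.floor(screenX / this.cellSize);
    const row = Math.floor(screenY / this.cellSize);
    return `${col},${row}`;
  }

  // Assign a display screen position a corresponding 3D location.
  assign(screenX: number, screenY: number, location: Vec3): void {
    this.cells.set(this.key(screenX, screenY), location);
  }

  // Resolve a display screen position to its 3D location, if assigned.
  lookup(screenX: number, screenY: number): Vec3 | undefined {
    return this.cells.get(this.key(screenX, screenY));
  }
}
```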
identify a position (Heinen, ¶138: AR scenes may be created by dragging and dropping into the platform running in the web browser either 2D or 3D markers for tracking as well as one or more 2D or 3D objects; ¶165: In goggle arrangements, the headset may be synchronized to the image such that the user's movements are detected, and the image changes correspondingly; ¶331: mesh having corresponding keyframe 360112 as its texture and saving textured mesh into a 3D model and corresponding keyframe, where key frame is used to extract key points needed for the tracking process in the virtual scene when it is used as an AR overlay over the physical scene; ¶335: The generated mesh 370108 may be used for different usage scenarios which often require a fast or real time reconstruction of the scene while the user is moving through the scene); and
determine via the first positional mesh a virtual 3D positional location within the master 3D digital asset from the identified position (Heinen, ¶136: HMD that as user moves head, user views different parts of field within AR and VR; ¶165: In goggle arrangements, the headset may be synchronized to the image such that the user's movements are detected, and the image changes correspondingly; ¶333: From a user point of view, annotating 360142 a textured mesh 360138 as a 3D marker 360144 for a virtual scene may be more efficient and user friendly than using a point cloud 360124 for the annotation process; ¶335: The generated mesh 370108 may be used for different usage scenarios which often require a fast or real time reconstruction of the scene while the user is moving through the scene; ¶¶336-337 discuss using the mesh for tracking, physics simulation and correct occlusion of virtual objects with physical objects from user’s point of view – this teaches the determining of the mesh locations from identified screen positions, i.e. user’s point of view)
The only limitation not explicitly taught by Heinen is that the position is identified within the display screen. This is interpreted as identifying a particular position in display screen coordinates, as opposed to merely identifying a position of the mesh (which is taught by Heinen, as the user's head movement is tracked to change the mesh view). Examiner notes that Heinen does teach identifying a particular location of the display screen for other uses (e.g., Heinen Fig. 14a discloses a user clicking and dragging to rotate the scene; ¶287 discloses dragging 3D models into scenes).
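To illustrate the distinction being drawn (a position identified in display screen coordinates, as opposed to a position of the mesh), a browser-side sketch of capturing a screen position from an input device follows. This is a hypothetical example only; the element id "asset-player" is assumed, and the sketch is not asserted to be the implementation of either reference.

```typescript
// Hypothetical sketch: identify a position within the display screen
// from a pointer event over the player element. The element id
// "asset-player" is an assumption for illustration.

const playerEl = document.getElementById('asset-player');
if (playerEl) {
  playerEl.addEventListener('pointerdown', (event: PointerEvent) => {
    // Convert viewport coordinates to coordinates local to the player.
    const rect = playerEl.getBoundingClientRect();
    const screenX = event.clientX - rect.left;
    const screenY = event.clientY - rect.top;
    console.log(`Identified display screen position: (${screenX}, ${screenY})`);
  });
}
```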
Berquam discloses:
identify a position within the display screen, provided via an input device and determine via the first positional mesh a virtual 3D positional location within the master 3D digital asset from the identified position within the display screen (Berquam, ¶45: render AR content in 2D or 3D in conjunction with physical, real world environment;
Berquam discloses the creation of a mesh mapping of an environment:
¶54: associate 3D spatial environments with addressable mesh, e.g. a homogenous or heterogeneous 3D mesh may be defined for a given physical space;
¶69: a logical addressing scheme (e.g., a 2D or 3D addressing scheme) may be utilized which resolves to resources associated with logical maps (e.g., meshes) that correlate to two-dimensional or three-dimensional spatial locations, e.g. point cloud may be utilized to generate a digital 3D model of the space, where surface reconstruction may be performed on the model, and the data points may be converted to an array of adjacent values that can be used to define logical addresses;
¶70: a mesh of logical addresses of mesh cells may be correlated with the point cloud to provide referential relationships between the 3D physical space and the logical mesh addresses.
¶156: FIG. 2 illustrates an example three dimensional spatial, physical environment 202 associated with a 3D mesh 204, where an interface 206 may present the three dimensional spatial environment 202 overlaid by the 3D mesh 204
Berquam further discloses using an input device to determine a virtual 3D positional location using the positional mesh within the mapped 3D environment – i.e., the 3D digital asset:
¶73: A user interface may display all or a portion of the mesh in combination with an image or model of the corresponding physical environment, and the user may then point at or touch a data cube and drag it (and its associated programs, content, triggers, etc.) from one location in the 3D mesh (corresponding to a first physical space) to and drop it on another location in the 3D mesh (corresponding to a second physical space);
¶90: visual design tools and interfaces provided which enable users to define an interactive environment, including enable a user to view a visualization of a physical space (e.g., a model or photograph), lay out active areas within the physical space, indicate which user interactions are to take place and at which geo-spatial locations such actions are to take place, and associate various types of content with geo-spatial locations
¶170: FIG. 11J illustrates an example user interface presented on a user device (a mobile phone in this example) via which the user can select an animated emoji (e.g., a 2D or 3D emoji) from a library of emojis and specify a physical location (via a mesh cell associated with the physical location) at which the emoji is to be “left” for later access by one or more designated recipients
¶215 discloses user input devices enabling location-based content and feedback to the user, including a touch screen and mouse)
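As a rough illustration of the logical addressing Berquam describes in ¶¶54 and 69-70 (mesh cells with logical addresses correlated to point-cloud data), a 3D space can be quantized into addressable cells. The sketch below is a hypothetical simplification and not Berquam's actual addressing scheme; the address format, sample data, and cell size are assumptions.

```typescript
// Illustrative only (cf. Berquam ¶¶54, 69-70): quantize 3D spatial
// locations into logical mesh-cell addresses, and correlate point-cloud
// points with those addresses. Not Berquam's actual scheme.

interface Point3D { x: number; y: number; z: number; }

// Resolve a 3D spatial location to a logical mesh-cell address.
function meshAddress(p: Point3D, cellSize: number): string {
  const i = Math.floor(p.x / cellSize);
  const j = Math.floor(p.y / cellSize);
  const k = Math.floor(p.z / cellSize);
  return `cell:${i}:${j}:${k}`;
}

// Correlate point-cloud points with mesh-cell addresses, providing
// referential relationships between the physical space and the logical
// mesh addresses (cf. Berquam ¶70).
const pointCloud: Point3D[] = [
  { x: 0.2, y: 1.1, z: 3.4 },
  { x: 0.25, y: 1.0, z: 3.5 },
];
const cellToPoints = new Map<string, Point3D[]>();
for (const p of pointCloud) {
  const addr = meshAddress(p, 0.5);
  const bucket = cellToPoints.get(addr) ?? [];
  bucket.push(p);
  cellToPoints.set(addr, bucket);
}
```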
Both Heinen and Berquam are directed to graphical user interfaces for modifying a scene with virtual objects in augmented or virtual reality by coordinating a 3D mesh with the physical environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for annotating images of a 3D-mesh-mapped physical environment with virtual effects as provided by Heinen, by using the technique allowing a user to select a particular location within a mapped 3D environment using mesh locations as provided by Berquam, using known electronic interfacing and programming techniques. The modification results in an improved user interface for editing a physical scene with virtual effects: it allows more direct user control over the scene, namely a more intuitive control by which a user directly specifies location information for modification, improving usability, and it further allows more accurate coordination of modifications within a particular mapped environment for improved coordination of mapped data.
Regarding claim 1, the processors of claim 13 are configured to perform the method of claim 1 and as such claim 1 is rejected based on the same rationale as claim 13 set forth above.
Regarding claim 12, Heinen discloses:
At least one non-transitory computer readable storage medium storing software that, when executed by one or more hardware processors, cause the one or more hardware processors to perform steps (Heinen, Abstract and ¶348: systems and methods utilize various computing devices; ¶¶368-369 disclose software stored in a machine readable medium including tangible storage medium, and processors executing instructions for the operation of the system)
Further regarding claim 12, the steps perform the method of claim 1 and as such claim 12 is further rejected based on the same rationale as claim 1 set forth above.
Regarding claim 2, Heinen further discloses:
wherein identifying a position within the display screen comprises enabling a marker for the auxiliary digital asset to be dragged via the user input device from a first area of the display screen to a second area comprising the 3D digital asset player onto which the first positional mesh is provided (Heinen, ¶138: AR scenes may be created by dragging and dropping into the platform running in the web browser either 2D or 3D markers for tracking as well as one or more 2D or 3D objects, where all objects present in scene are manipulated, animated and associated with additional information; ¶151: in case an AR scene is created, the user may decide whether or not a marker should be used 110606. If a marker should be used, it may be imported in terms of a 2D or 3D object 110608; Also ¶161: selection of 2D/3D object using drag and drop; ¶259: inserted 2D/3D content may overlap/occlude one another; Also NPs 46-48 in ¶371 disclose moving the hotspot on the floorplan by a click-and-drag operation from the user)
Also Berquam discloses:
wherein identifying a position within the display screen comprises enabling a marker for the auxiliary digital asset to be dragged via the user input device from a first area of the display screen to a second area comprising the 3D digital asset player onto which the first positional mesh is provided (Berquam, ¶73: A user interface may display all or a portion of the mesh in combination with an image or model of the corresponding physical environment, and the user may then point at or touch a data cube and drag it (and its associated programs, content, triggers, etc.) from one location in the 3D mesh (corresponding to a first physical space) to and drop it on another location in the 3D mesh (corresponding to a second physical space))
Heinen is modifiable by Berquam for the same reasons as set forth above.
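For illustration of the claimed drag interaction, a marker dragged from a palette area and dropped onto the player area can be handled in a browser as sketched below. The element ids and event wiring are hypothetical assumptions, not the disclosure of either reference.

```typescript
// Hypothetical sketch: a marker for the auxiliary digital asset is
// dragged from a first area of the display screen and dropped onto a
// second area comprising the player, onto which the positional mesh is
// provided. Element ids are assumptions for illustration.

const playerArea = document.getElementById('player-area');
if (playerArea) {
  playerArea.addEventListener('dragover', (e: DragEvent) => {
    e.preventDefault(); // permit dropping onto the player area
  });
  playerArea.addEventListener('drop', (e: DragEvent) => {
    e.preventDefault();
    const rect = playerArea.getBoundingClientRect();
    // The drop position within the display screen is the identified
    // position used to consult the first positional mesh.
    const dropX = e.clientX - rect.left;
    const dropY = e.clientY - rect.top;
    console.log(`Marker dropped at screen position (${dropX}, ${dropY})`);
  });
}
```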
Regarding claim 3, Heinen further discloses:
Wherein determining comprises: determining the virtual 3D positional location in the master 3D digital asset based on a mesh location identified from the position of the user input device, wherein the first positional mesh provides a mapping of a plurality of mesh locations within the digital screen to corresponding 3D positional locations of a current view of the master 3D digital asset being displayed in the 3D digital asset player (Heinen, ¶136: HMD that as user moves head, user views different parts of field within AR and VR; ¶165: In goggle arrangements, the headset may be synchronized to the image such that the user's movements are detected, and the image changes correspondingly; ¶333: the generated textured mesh 360138 may be used either as a usual 3D model in a virtual scene or as a 3D marker 360144 in the virtual scene when it is used as an augmented reality overlay over the physical scene. From a user point of view, annotating 360142 a textured mesh 360138 as a 3D marker 360144 for a virtual scene may be more efficient and user friendly than using a point cloud 360124 for the annotation process; ¶335: The generated mesh 370108 may be used for different usage scenarios which often require a fast or real time reconstruction of the scene while the user is moving through the scene; ¶¶336-337 discuss using the mesh for tracking, physics simulation and correct occlusion of virtual objects with physical objects from user’s point of view – this teaches the determining of the mesh locations from identified screen positions, i.e. user’s point of view)
Berquam further discloses:
determining the virtual 3D positional location in the master 3D digital asset based on a mesh location identified from the position within the display screen of the user input device (Berquam, ¶73: A user interface may display all or a portion of the mesh in combination with an image or model of the corresponding physical environment, and the user may then point at or touch a data cube and drag it (and its associated programs, content, triggers, etc.) from one location in the 3D mesh (corresponding to a first physical space) to and drop it on another location in the 3D mesh (corresponding to a second physical space); also ¶170)
Heinen is modifiable by Berquam for the same reasons as set forth above.
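For illustration of a mapping tied to the current view, the positional mesh can be populated by projecting the 3D locations visible in the current view onto the screen. The sketch below reuses the hypothetical PositionalMesh and Vec3 from above and an idealized pinhole projection; none of this is asserted to be disclosed by Heinen or Berquam.

```typescript
// Illustrative only: populate the positional mesh for the current view
// of the master 3D digital asset by projecting visible 3D locations to
// display screen coordinates. Uses the hypothetical PositionalMesh and
// Vec3 sketched above; the pinhole projection is an idealization.

function projectToScreen(
  p: Vec3,
  focalLength: number,
  screenW: number,
  screenH: number,
): { x: number; y: number } | null {
  if (p.z <= 0) return null; // behind the camera: not in current view
  return {
    x: screenW / 2 + (focalLength * p.x) / p.z,
    y: screenH / 2 + (focalLength * p.y) / p.z,
  };
}

function populateMeshForCurrentView(
  mesh: PositionalMesh,
  visibleLocations: Vec3[],
): void {
  for (const loc of visibleLocations) {
    const s = projectToScreen(loc, 800, 1920, 1080); // assumed camera
    if (s) mesh.assign(s.x, s.y, loc); // screen cell -> 3D location
  }
}
```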
Regarding claim 4, Heinen further discloses:
Wherein the first positional mesh assigns a plurality of positions within the display screen with corresponding virtual 3D positional locations at a first depth position within the master 3D digital asset (Heinen, Fig. 6A and ¶218: 3D position and rotation of canvas, including virtual space having correct absolute sizes, 3D positions and rotations, creating a virtual representation of the physical space using mapping of distances and scale to physical world; Heinen further discloses generation of a 3D mesh for real-time viewing of a scene, wherein the mesh is a 3D mesh corresponding to virtual and physical 3D positions within the system – see ¶¶328-329, including: “The 3D mesh 360133 generation process uses a filtered 360125 or unfiltered point cloud 360124 and keyframe poses as an input source. As a first iteration of the process, the system can compute a normal vector for each point in the input point cloud”; the 3D mesh 360133 generation system can check and orient a normal vector of each point toward the camera pose of the keyframe that the point belongs to;
¶335: The generated mesh 370108 may be used for different usage scenarios which often require a fast or real time reconstruction of the scene while the user is moving through the scene; ¶336: 3D mesh used as an invisible layer during the tracking for a physics simulation, such as letting a virtual object move on top of a surface, or ¶337: used for occlusion;
¶331: mesh having corresponding keyframe 360112 as its texture and saving textured mesh into a 3D model and corresponding keyframe, where the keyframe is used to extract key points needed for the tracking process in the virtual scene when it is used as an AR overlay over the physical scene;
¶371, NP17: “A method of creating a virtual reality scene, comprising, by a computer with a processor and memory, receiving an image data over a network; estimating a depth map of a keyframe of the received image data using estimated depth values of pixels in the keyframe; and generating a point cloud using the estimated depth map of the keyframe; generating a 3D mesh using the generated point cloud.”)
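The normal-orientation step quoted above from Heinen ¶¶328-329 (orienting each point's normal toward the camera pose of its keyframe) can be illustrated as follows. The sketch is a standard simplification with normal estimation assumed done elsewhere; it is not Heinen's exact algorithm, and the data layout is an assumption.

```typescript
// Illustrative simplification of the step quoted from Heinen ¶¶328-329:
// orient the normal vector of each point toward the camera pose of the
// keyframe that the point belongs to. Normal estimation is assumed
// precomputed; the data layout is an assumption for illustration.

interface CloudPoint {
  position: Vec3;   // point in the point cloud
  normal: Vec3;     // estimated normal (assumed precomputed)
  cameraPose: Vec3; // position of the keyframe camera for this point
}

function orientNormalTowardCamera(pt: CloudPoint): void {
  // Vector from the point toward its keyframe camera.
  const toCamera = {
    x: pt.cameraPose.x - pt.position.x,
    y: pt.cameraPose.y - pt.position.y,
    z: pt.cameraPose.z - pt.position.z,
  };
  const dot =
    pt.normal.x * toCamera.x +
    pt.normal.y * toCamera.y +
    pt.normal.z * toCamera.z;
  if (dot < 0) {
    // The normal points away from the camera, so flip it.
    pt.normal = { x: -pt.normal.x, y: -pt.normal.y, z: -pt.normal.z };
  }
}
```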
Regarding claim 5, Heinen further discloses:
Providing a second positional mesh on at least a portion of the 3D digital asset player, wherein the second positional mesh assigns a plurality of positions within the display screen with corresponding virtual 3D positional locations at a second depth position within the master 3D digital asset that is different to the first depth position (Heinen discloses more than one mesh, as a result of generation of a mesh for each keyframe pose – see ¶¶328-329, including: “The 3D mesh 360133 generation process uses a filtered 360125 or unfiltered point cloud 360124 and keyframe poses as an input source. As a first iteration of the process, the system can compute a normal vector for each point in the input point cloud”; the 3D mesh 360133 generation system can check and orient a normal vector of each point toward the camera pose of the keyframe that the point belongs to. As such, the corresponding meshing of multiple keyframes teaches the first positional mesh and the second positional mesh, where the details are taught as provided in claim 4, namely Heinen, Fig. 6A and ¶218, ¶331, ¶335 and ¶371)
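For illustration of first and second positional meshes at different depth positions, meshes can be maintained per depth layer; a minimal sketch reusing the hypothetical PositionalMesh above follows. The depth values and cell size are arbitrary assumptions.

```typescript
// Illustrative only: maintain positional meshes at different depth
// positions within the master 3D digital asset. Reuses the hypothetical
// PositionalMesh sketched above; depth values are arbitrary.

const meshesByDepth = new Map<number, PositionalMesh>();

function meshAtDepth(depth: number): PositionalMesh {
  let mesh = meshesByDepth.get(depth);
  if (!mesh) {
    mesh = new PositionalMesh(16); // 16 px cells, an arbitrary choice
    meshesByDepth.set(depth, mesh);
  }
  return mesh;
}

const firstMesh = meshAtDepth(1.0);  // first depth position
const secondMesh = meshAtDepth(5.0); // second, different depth position
```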
Regarding claim 6, Heinen further discloses:
Associating in a database the determined virtual 3D positional location with the auxiliary digital asset (Heinen ¶144: server and client based system, where server directs communication with data store 110116, i.e., a database, saving and loading Holos and the AR/VR scenes and 2D/3D objects contained therein; Also note ¶363 discloses use of a database)
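For illustration only, associating the determined virtual 3D positional location with the auxiliary digital asset in a database might look like the following; the record shape and the in-memory stand-in for a data store are assumptions.

```typescript
// Illustrative only: associate the determined virtual 3D positional
// location with the auxiliary digital asset. An in-memory array stands
// in for the database; the record shape is an assumption.

interface HotspotRecord {
  auxiliaryAssetId: string;
  location: Vec3; // the determined virtual 3D positional location
}

const hotspotTable: HotspotRecord[] = []; // stands in for a database

function associate(auxiliaryAssetId: string, location: Vec3): void {
  hotspotTable.push({ auxiliaryAssetId, location });
}
```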
Regarding claim 7, Heinen further discloses:
Delivering, to a user device, the master 3D digital asset and the auxiliary digital asset (Heinen, ¶144: server and client based system, where server directs communication with data store 110116, saving and loading Holos and the AR/VR scenes and 2D/3D objects contained therein, and further the client side interacting with an editor to create AR/VR content and a player 110230 that enables users to consume previously created AR/VR content; ¶242 further discloses multi-user experience on multiple devices, such that users join an experience and see the same augmented and virtual content in the scene and may also obtain rights to annotate in the 360 degree image or video)
Playing the master 3D digital asset by a 3D digital asset player of the user device (Heinen, ¶335: The generated mesh 370108 may be used for different usage scenarios which often require a fast or real time reconstruction of the scene while the user is moving through the scene; ¶¶336-337 discuss using the mesh for tracking, physics simulation and correct occlusion of virtual objects with physical objects from user’s point of view; ¶175 discloses the distribution of Holos between user device and external storage; Also ¶242 discloses supporting multi-user experience, so that a user can invite other users 200101 to a multi-user-experience in any created Holo using a plurality of devices); and
Displaying a hotspot marker for the auxiliary digital asset within the master 3D digital asset at the determined virtual 3D positional location identified from the database (Heinen, Fig. 6a and ¶225: shape 160120 corresponds to physical object, defined as a hitbox area or 3D object to allow user interaction with the created shape, defining what should happen when the created shape 160120 is selected, e.g. clicked or tapped; Again, ¶144: server and client based system, where server directs communication with data store 110116, i.e., a database, saving and loading Holos and the AR/VR scenes and 2D/3D objects contained therein)
Regarding claim 8, Heinen further discloses:
Switching from playing the master 3D digital asset to playing of the auxiliary digital asset upon activation of the hotspot marker (Heinen, Fig. 6a and ¶225:
Alternatively or additionally, in the shape creation mode 160140 a shape 160120 which corresponds for example to the physical object (in this example a door) 160118, can be defined on the canvas 160132, which then can be used as, including but not limited to a hitbox area or a 3D object to allow user interactions with this created shape 160120. This way the user can define what should happen when the created shape 160120 is selected, e.g. clicked or tapped. One of multiple possible examples is that as soon as the shape 160120 is clicked the scene switches to a new one using a command system.
Also ¶371: NP50 discloses that the triggerable command associated with the hotspot is navigation to another image scene; ¶154: triggerable actions associated with a 2D or 3D object contained in AR or VR may be a programmed animation, an open website action, etc.; ¶163: triggerable commands including calling a number, opening a web page, transferring to a different scene, showing an info box containing a text, showing a warning box containing a text, sending an e-mail, starting or ending an object animation, displaying or removing an object, and playing a sound. Any kind of triggerable command could be used; these examples are not limiting.)
Regarding claim 9, Heinen further discloses:
Wherein switching to playing of the auxiliary digital asset comprises:
detecting activation of the hotspot marker at the virtual 3D positional location; (Heinen, Fig. 6a and ¶225: when shape is selected, clicked or tapped, performing command; ¶371, NP45: “receiving an indication of a hotspot on the floor plan; causing display of an icon on the hotspot on the floorplan”, NP48: “associating a triggerable command with the hotspot, activated by a user in the image scene by an interaction”; ¶371: NP50 discloses that the triggerable command associated with the hotspot is navigation to another image scene; ¶154: triggerable actions associated with a 2D or 3D object contained in AR or VR may be a programmed animation, an open website action, etc.; ¶163: triggerable commands including calling a number, opening a web page, transferring to a different scene, showing an info box containing a text, showing a warning box containing a text, sending an e-mail, starting or ending an object animation, displaying or removing an object, and playing a sound. Any kind of triggerable command could be used; these examples are not limiting.) and
Playing the auxiliary digital asset in the 3D digital asset player in response to the detected activation of the hotspot marker (Heinen, ¶371: NP50 discloses that the triggerable command associated with the hotspot is navigation to another image scene; ¶154: triggerable actions associated with a 2D or 3D object contained in AR or VR may be a programmed animation, an open website action, etc.; ¶163: triggerable commands including calling a number, opening a web page, transferring to a different scene, showing an info box containing a text, showing a warning box containing a text, sending an e-mail, starting or ending an object animation, displaying or removing an object, and playing a sound. Any kind of triggerable command could be used; these examples are not limiting.)
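For illustration of claims 8-9, detecting activation of a hotspot marker and switching playback to the auxiliary digital asset might be sketched as follows; the player interface and the HotspotRecord type (from the sketch above) are hypothetical.

```typescript
// Illustrative only: on detected activation of the hotspot marker at
// its virtual 3D positional location, switch from playing the master
// 3D digital asset to playing the auxiliary digital asset. The player
// interface is a hypothetical assumption.

interface AssetPlayer {
  play(assetId: string): void;
}

function onHotspotActivated(
  player: AssetPlayer,
  record: HotspotRecord, // from the hypothetical table sketched above
): void {
  player.play(record.auxiliaryAssetId);
}
```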
Regarding claim 10, Heinen further discloses:
Wherein the master 3D digital asset comprises a 3D digital scene, for example a 3D video stream or 360 degree video content (Note example language is non-limiting; Heinen, Fig. 1Q and ¶164: user interface importing 360 degree spherical image viewed with a computing device)
Regarding claim 11, Heinen further discloses:
Wherein the auxiliary digital asset comprises an auxiliary 3D digital scene, for example an auxiliary 3D video stream (Note example language is non-limiting; Fig. 2q and ¶¶50-51 disclose a user inserting and adding new time based scenes to existing scenes – scenes as 2D and 3D content; Also Fig. 7a and ¶227: new overlay visible on top of virtual scene when user opens scene later in player mode, including 3D renderings using WebGL)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL whose telephone number is (571)272-3132. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DANIEL HAJNIK can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM A BEUTEL/ Primary Examiner, Art Unit 2616