DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-17 are all the claims pending in the application.
Claims 1, 9 and 17 are amended.
Claims 1-17 are rejected.
This is a Final Office Action in response to amendments and remarks filed on July 28, 2025.
Response to Arguments
Applicant's arguments filed July 28, 2025 have been fully considered.
Regarding the rejections under 35 U.S.C. §§ 102 and 103, the rejections are maintained for the following reasons:
In view of Applicant's amendments, the Examiner finds Applicant's arguments against the § 102 rejection regarding the amended claim limitation, wherein the augmented reality interaction server and the virtual real estate directory system are implemented on separate servers, to be persuasive. The rejection has accordingly been updated under 35 U.S.C. § 103; see the rejection below.
Applicant asserts that "The claimed term 'virtual real estate object' refers to an object of virtual real estate, and not as a virtual object placed in virtual real-estate. While Spivack mentions a location criteria of the VOB, this does not explicitly disclose a query response comprising one or more virtual real estate objects assigned to a virtual real estate object location within a predefined region defined by the location information of the first user."
MPEP 2111.05(III) recites that "where the claim as a whole is directed to conveying a message or meaning to a human reader independent of the intended computer system, and/or the computer-readable medium merely serves as a support for information or data, no functional relationship exists". In this case, defining a virtual object as a real estate object has no bearing on how the object is stored and retrieved.
The Examiner respectfully does not find Applicant's assertion persuasive because Spivack teaches assigning a virtual object based upon location and within the field of view.
The rejection is maintained; see the rejection below.
For Claims 9 and 17, Applicant respectfully requests withdrawal of the rejections of the independent claims under 35 U.S.C. § 102 and allowance of the claims.
Applicant asserts that, for Claims 9 and 17 respectively, Spivack does not teach or suggest the amended claim limitations. The Examiner respectfully does not find this assertion to be persuasive, as the rejection of Claim 1 was maintained as recited above.
For the dependent claims, the rejections are maintained.
Applicant asserts that the dependent claims, Claims 2-4, 6-8, 10-12 and 14-16, when taken in context of Claims 1 and 9, set forth a number of recitations not taught, disclosed, or suggested, and requests that the rejections be withdrawn. The Examiner respectfully does not find this assertion persuasive because the rejections of Claims 1 and 9 have been maintained.
Applicant asserts that Claims 5 and 13, which depend directly or indirectly from Claims 1 and 9, are patentably distinct. The Examiner respectfully does not find this assertion to be persuasive, as the rejections of Claims 1 and 9 were maintained as recited above.
Please see below for the complete rejection of the claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-12, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Spivack et al., International Publication No. WO 2019/079826 A1, herein referred to as “Spivack”, in view of Amitay et al., US Pub. No. US 2018/0315134 A1, herein referred to as “Amitay”.
Regarding Independent Claims 1, 9 and 17, Spivack teaches the following limitations:
… obtain location information of a first user of the communication device;
(“[00185] … The engine 310 can also determine that the recipient user who is an intended recipient is in a given location in the real world environment which meets a location criteria of the VOB designated for the recipient user.”)(Fig. 3A ¶00185)
… transmit a query request via a communication network to a computerized virtual real estate directory system, the query request comprising at least the location information of the first user, wherein the virtual real estate directory system is implemented using one or more servers;
(“[0047] In a further embodiment, a web portal and/or mobile interface (e.g., user interface 104) enables people to search and browse the content in the registry. The portal can maintain a directory section. The disclosed system (e.g., any of one or more of, client device 102 of FIG. 1, client device 402 of FIG. 4A or server 100 of FIG. 1, server 300 of FIG. 3A) functions as a central or distributed search engine for AR/VR content across publishers, locations and apps. The system also enables people or software agents/modules or apps to easily locate available relevant AR/VR content, from any number of publishers and other agents, for their present context (location, query, activity, intent or goals, etc.).”)(Fig. 1, Fig. 3A, Fig. 4A ¶0047)
Under the broadest reasonable interpretation, the term “virtual real estate directory system” in the claim limitations is interpreted to be equivalent to the term “the registry” in the Spivack teaching above.
receive from the virtual real estate directory system via the communication network a query response, the query response comprising one or more virtual real estate objects assigned to a virtual real estate object location within a predefined region defined by the location information of the first user, each of the virtual real estate objects comprising identification information and a network address of an augmented reality interaction server;
(“[00185] In general, the VOB sharing / publication engine 310 (hereinafter engine 310) can determine that a recipient user is one or more of an intended recipient of a VOB that is shared with the recipient user by the sender entity (of the AR environment). The engine 310 can also determine that the recipient user who is an intended recipient is in a given location in the real world environment which meets a location criteria of the VOB designated for the recipient user.”)(Fig. 3A ¶00185)
Under the broadest reasonable interpretation, the term “virtual real estate directory system” in the claim limitations is interpreted to be equivalent to the term “VOB sharing / publication engine” in the Spivack teaching above. Additionally, under the broadest reasonable interpretation, the term “virtual real estate objects” in the claim limitations is interpreted to be equivalent to the term “VOB” (a VOB is a virtual object) in the Spivack teaching above.
transmit for at least one particular virtual real estate object an interaction request via the communication network to one of the augmented reality interaction servers, respectively using the network address of the augmented reality interaction server, each interaction request comprising the identification information of the particular virtual real estate object,
(“[0090] Embodiments of the present disclosure include systems, methods and apparatuses of platforms (e.g., as hosted by the host server 100 as depicted in the example of FIG. 1) for deployment and targeting of context-aware virtual objects and/or behavior modeling of virtual objects based on physical laws or principle. Further embodiments relate to how interactive virtual objects that correspond to content or physical objects in the physical world are detected and/or generated, and how users can then interact with those virtual objects, and/or the behavioral characteristics of the virtual objects, and how they can be modeled. Embodiments of the present disclosure further include processes that augmented reality data (such as a label or name or other data) with media content, media content segments (digital, analog, or physical) or physical objects. Yet further embodiments of the present disclosure include a platform (e.g., as hosted by the host server 100 as depicted in the example of FIG. 1) to provide an augmented reality (AR) workspace in a physical space, where a virtual object can be rendered as a user interface element of the AR workspace.”)(Fig. 1 ¶0090)
Under the broadest reasonable interpretation, the term “augmented reality interaction servers” in the claim limitations is interpreted to be equivalent to the term “AR workspace” in the Spivack teaching above.
However, Spivack does not teach, but Amitay does teach wherein the augmented reality interaction server and the virtual real estate directory system are implemented on separate servers;
(“[0020] The application server 112 hosts a number of applications and subsystems, including a messaging server application 114, an image processing system 116 and a social network system 122. The messaging server application 114 implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content including images and video clips) included in messages received from multiple instances of the messaging client application 104. …”) (Fig. 1 ¶0020)
Also refer to (“[0023] The application server 112 is communicatively coupled to a database server 118, which facilitates access to a database 120 in which is stored data associated with messages processed by the messaging server application 114.”)(Fig. 1 and ¶0023)
Further, it would have been obvious before the effective filing date of the claimed invention to combine the teachings of Spivack with the known techniques of Amitay, incorporating separate servers for the image processing server and the database server, to improve the system recited in the Spivack teaching. See MPEP 2143(I)(C).
receive from the augmented reality interaction server via the communication network access authorization for the particular virtual real estate object;
(“[0094] The entity that is the rightholder of the virtual real-estate can control the content or objects (e.g., virtual objects) that can be placed in it, by whom, for how long, etc. As such, the disclosed technology includes a marketplace (e.g., as run by server 100 of FIG. 1) to facilitate exchange of virtual real-estate (VRE) such that entities can control object or content placement to a virtual space that is associated with a physical space.”)(Fig. 1 ¶0094)
transmit first user data to a communication device of a second user, associated with the particular virtual real estate object, via the augmented reality interaction server;
(“[00193] The communications manager 340 can be any combination of software agents and/or hardware modules (e.g., including processors and/or memory units) able to facilitate or manage, administer, coordinate, enable, enhance, communications sessions between users of the AR environment. The communications sessions can be 1-1, 1-many, many to many, and/or many-1. The communications manager 340 can determine that a second user of the augmented reality environment, is an intended recipient of a first message object. The communications manager 340 can then, for example, cause to be perceptible, to the second user of the augmented reality environment, the first message object, such that the second user is able to participate in the communications session via the augmented reality environment (e.g., hosted by server 300).”)(Fig 3A ¶00193)
receive second user data from the communication device of the second user via the augmented reality interaction server.
(“[00193] … The communications manager 340 can then, for example, cause to be perceptible, to the second user of the augmented reality environment, the first message object, such that the second user is able to participate in the communications session via the augmented reality environment (e.g., hosted by server 300).”)(Fig 3A ¶00193)
Regarding Claims 2 and 10, Spivack and Amitay teach all the limitations of Claims 1 and 9; Spivack further teaches the following limitations:
transmit a public user identifier of the first user to the augmented reality interaction server comprising one or more of the following: a public name of the first user or an encryption key of the first user; and
(“[00183] The object or virtual object is generally digitally rendered or synthesized by a machine (e.g., a machine can be one or more of, client device 102 of FIG. 1, client device 402 of FIG. 4A or server 100 of FIG. 1, server 300 of FIG. 3A) to be presented in the AR environment and have human perceptible properties to be human discernible or detectable. The sender/recipient identification engine 312 can determine, identify, a sending entity of a VOB and/or a recipient entity of the VOB. The sending entity of the VOB can include one or more of, an individual user, a user group having multiple users, a foundation, an organization, a corporation, an advertiser, any other user of an AR environment hosted by the host server 300. The sending entity may also be the host server 300.”)(Fig. 1, Fig. 3A, Fig. 4A ¶00183)
receive, from the augmented reality interaction server a public user identifier of the second user comprising one or more of the following: a public name of the second user or an encryption key of the second user.
Also refer to (“[00193] … The communications manager 340 can determine that a second user of the augmented reality environment, is an intended recipient of a first message object… The sending entity of the VOB can include one or more of, an individual user, a user group having multiple users, a foundation, an organization, a corporation, an advertiser, any other user of an AR environment hosted by the host server 300. The sending entity may also be the host server 300.”)(Fig. 3A ¶00193)
Under the broadest reasonable interpretation, the term “augmented reality interaction servers” in the claim limitations is interpreted to be equivalent to the term “AR environment” in the Spivack teaching above.
Regarding Claims 3 and 11, Spivack and Amitay teach all the limitations of Claims 1 and 9; Spivack further teaches the following limitations:
one or more cameras and
(“[0032] One embodiment of the present disclosure further includes a system (e.g., any of one or more of, client device 102 of FIG. 1, client device 402 of FIG. 4A or server 100 of FIG. 1, server 300 of FIG. 3A) having AR/VR software agent/hardware module having sensors (e.g., the camera and mic and other sensors) ...”)(Fig. 1, Fig. 4A ¶0032)
a rendering system and wherein the electronic circuit is configured to:
(“[00183] The object or virtual object is generally digitally rendered or synthesized by a machine (e.g., a machine can be one or more of, client device 102 of FIG. 1, client device 402 of FIG. 4A or server 100 of FIG. 1, server 300 of FIG. 3A)…”)(Fig. 1, Fig. 3A, Fig. 4A ¶00183)
determine visual representation data of the first user using one or more images of the first user captured by the one or more cameras;
(“[00148] The message can be for instance, supplemented, formatted, optimized or designed with additional text, graphical content, multimedia content, and or simulated objects or virtual objects. The user can place the enhanced messages (or any of the simulated or virtual objects) in physical locations relative to their device, and/or also relative to other virtual objects or simulated objects in the scene to construct a scene relative to a user view or the user's camera perspective”)(¶00148)
transmit the visual representation data of the first user, included in the first user data, to the communication device of the second user via the augmented reality interaction server;
(“[00155] Embodiments of the present disclosure further include systems, methods and apparatuses of: creating, rendering, depicting, provisioning, and/or generating message objects (e.g., VOBs, virtual objects, AR messages, etc.) with digital enhancements. … For instance, enhanced messages can be shared, transmitted or sent/received via communication channels including legacy SMS, Internet, mobile network via web services, applications (e.g., mobile apps) or dedicated platforms such as VR/AR or mixed VR/AR platforms or environments.”)(¶00155)
Under the broadest reasonable interpretation, the term “augmented reality interaction servers” in the claim limitations is interpreted to be equivalent to the term “VR/AR or mixed VR/AR platforms or environments” in the Spivack teaching above.
receive, via the augmented reality interaction server, from the communication device of the second user visual representation data of the second user, included in the second user data; and
(“[00155] Embodiments of the present disclosure further include systems, methods and apparatuses of: creating, rendering, depicting, provisioning, and/or generating message objects (e.g., VOBs, virtual objects, AR messages, etc.) with digital enhancements. … For instance, enhanced messages can be shared, transmitted or sent/received via communication channels including legacy SMS, Internet, mobile network via web services, applications (e.g., mobile apps) or dedicated platforms such as VR/AR or mixed VR/AR platforms or environments.”)(¶00155)
Under the broadest reasonable interpretation, the term “augmented reality interaction servers” in the claim limitations is interpreted to be equivalent to the term “VR/AR or mixed VR/AR platforms or environments” in the Spivack teaching above.
render, using the rendering system and the visual representation data of the second user, augmented reality content comprising a visual representation of the second user.
(“[00155] Embodiments of the present disclosure further include systems, methods and apparatuses of: creating, rendering, depicting, provisioning, and/or generating message objects (e.g., VOBs, virtual objects, AR messages, etc.) with digital enhancements. The enhanced messages can include virtual and/or augmented reality features. The enhanced messages can further be rendered, accessed, transmitted, manipulated, acted on and/or otherwise interacted with via various networks in digital environments by or amongst users, real or digital entities, other simulated/virtual objects or computing systems including any virtual reality (VR), non-virtual reality, augmented reality (AR) and/or mixed reality (mixed AR, VR and/or reality) environments or platforms...”)(¶00155)
Under the broadest reasonable interpretation, the term “augmented reality content” in the claim limitations is interpreted to be equivalent to the term “virtual object” in the Spivack teaching above.
Regarding Claims 4 and 12, Spivack and Amitay teach all the limitations of Claims 3 and 11; Spivack further teaches the following limitations:
determine a field of view of the first user using the one or more cameras; determine from the field of view coordinates of real-world features in the field of view;
(“[00316] FIG. 8 depicts a flow chart illustrating an example process to map a physical space using locally captured images for augmented reality applications such as precise placement of virtual objects in the physical space, in accordance with embodiments of the present disclosure.”)(Fig. 8 ¶00316)
render the augmented reality content using the rendering system…
(“[00155] Embodiments of the present disclosure further include systems, methods and apparatuses of: creating, rendering, depicting, provisioning, and/or generating message objects (e.g., VOBs, virtual objects, AR messages, etc.) with digital enhancements.”)(¶00155)
such that the augmented reality content has a fixed position relative to the coordinates of the real world features.
(“[00157] In one example, if a user places a virtual object visually in front of their physical position, the virtual or simulated object can be saved to that physical position or near that physical position or within a range of the physical location. The user can also place and save the object for example, at any angle e.g., 10 degrees to the right of their front position. …”)(¶00157)
Regarding Claims 6 and 14, Spivack and Amitay teach all the limitations of Claims 1 and 9; Spivack further teaches the following limitations:
…a motion tracker and wherein the electronic circuit is further configured to: record user motion data related to body motion of the first user, using the motion tracker;
(“[00245] One embodiment of the motion sensor includes a first sensor configured to detect user motion of a first human user in the first physical location of the real world environment.”)(Fig. 4A ¶00245)
transmit the user motion data, included in the first user data, to the augmented reality interaction server.
(“[00223] The AR/VR window manager 380 can include an apparatus or can control an apparatus having a single lens or a multi-directional lens (e.g., a two-way transparent display or monitor). The apparatus can include touch and/or motion sensors, for instance on each side of the display or lens. This would render a view into the AR world at that physical place. The AR/VR window manager 380, can be embodied in a standalone system, device, client or thin client, window, or lens. If it is aimed at a physical location in the real world, or any virtual world, and it will depict the virtual activity there in a manner that you can see from the physical world, without hardware on your body or in your hand etc. …”)(Fig. 3A ¶00223)
Under the broadest reasonable interpretation, the term “user motion data” in the claim limitations is interpreted to be equivalent to data from the “touch and/or motion sensors” in the Spivack teaching above.
Regarding Claims 7 and 15, Spivack and Amitay teach all the limitations of Claims 1 and 9; Spivack further teaches the following limitations:
one or more microphones and one or more speakers and wherein the electronic circuit is configured to:
(“[00335] The I/O components 1050 can include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. … The input components 1054 can include … audio input components (e.g., a microphone)...”)(Fig. 10 ¶00335)
record audio data of the first user, using the one or more microphones;
(“[0035] One embodiment of the present disclosure includes a system (e.g., any of one or more of, client device 102 of FIG. 1, client device 402 of FIG. 4A or server 100 of FIG. 1, server 300 of FIG. 3A) which routes sensors or devices with sensors (e.g., including for example people with camera phones, wearable devices, headmounted devices, smart glasses, etc.) to locations in order to rent time on the sensors (e.g., imaging devices, microphones, cameras, or other input sensors of various devices) to record what is happening at that location.…”)(Fig. 1, Fig. 3A, Fig. 4A ¶0035)
transmit the audio data of the first user, included in the first user data, to the augmented reality interaction server;
(“[00194] Note that in general, at least a portion of content associated with the first message object includes first user generated content provided by a first user who is a sender entity or sender user, to be consumed by (e.g., viewed, read, heard, interact with, reviewed by, etc.) the second user who is the recipient user for the first message object. The first user generated content and / or the first message object can be created or managed by the message object manager 342. …”)(Fig. 3A ¶00194)
receive from the communication device of the second user audio data of the second user, included in the second user data;
(“[00194] … The communications manager 340 can further receive second user generated content provided by the second user (e.g., the recipient user of the first message object) where the second user generated content is provided by the second user in response to the first user generated content provided by the original sender entity (e.g., the first user) of the first message object. The second user generated content is to be consumed by the first user.”)(Fig. 3A ¶00194)
Under the broadest reasonable interpretation, the term “audio data” in the claim limitations is interpreted to be equivalent to the term “user generated content” in the Spivack teaching above in ¶00194.
and play, on the one or more speakers, the audio data of the second user.
(“[00194] Note that in general, at least a portion of content associated with the first message object includes first user generated content provided by a first user who is a sender entity or sender user, to be consumed by (e.g., viewed, read, heard, interact with, reviewed by, etc.) the second user who is the recipient user for the first message object.”)(Fig. 3A ¶00194)
Also refer to (“[00335] The I/O components 1050 can include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. … The output components 1052 can include … acoustic components (e.g., speakers), ...”)(Fig. 10 ¶00335)
Regarding Claims 8 and 16, Spivack and Amitay teach all the limitations of Claims 1 and 9; Spivack further teaches the following limitations:
receive an augmented reality object from the augmented reality interaction server;
(“[00183] The object or virtual object is generally digitally rendered or synthesized by a machine (e.g., a machine can be one or more of, client device 102 of FIG. 1, client device 402 of FIG. 4A or server 100 of FIG. 1, server 300 of FIG. 3A) to be presented in the AR environment and have human perceptible properties to be human discernible or detectable. The sender/recipient identification engine 312 can determine, identify, a sending entity of a VOB and/or a recipient entity of the VOB. ...”)(Fig. 1, Fig. 3A, Fig. 4A ¶00183)
Under the broadest reasonable interpretation, the term “augmented reality object” in the claim limitations is interpreted to be equivalent to the term “VOB” (a VOB is a virtual object) in the Spivack teaching above. Additionally, under the broadest reasonable interpretation, the term “augmented reality interaction server” in the claim limitations is interpreted to be equivalent to the term “AR environment” in the Spivack teaching above.
render augmented reality content comprising the augmented reality object using the rendering system;
(“[00183] The object or virtual object is generally digitally rendered or synthesized by a machine (e.g., a machine can be one or more of, client device 102 of FIG. 1, client device 402 of FIG. 4A or server 100 of FIG. 1, server 300 of FIG. 3A) to be presented in the AR environment”)(Fig. 1, Fig. 3A, Fig. 4A ¶00183)
determine user interaction data of the first user interacting with the augmented reality object;
(“[00190] In one embodiment, the host server 300 detects an interaction trigger (e.g., via the interaction trigger detection engine 316, hereinafter referred to as 'engine 316') with respect to the virtual object. For instance, the interaction trigger can be detected (e.g., by the engine 316) responsive to the initial rendering or presentation of the content through engagement with the augmented reality experience in the augmented reality environment. … Note that the interaction trigger can include stimuli detected of the recipient user. For instance, the stimuli can include voice, touch, eye, gaze, gesture (body, hand, head, arms, legs, limbs, eyes, torso, etc.), text input and/or other command submitted by a [user] with respect to the VOB. …”)(Fig. 3A ¶00190)
Under the broadest reasonable interpretation, the term “user interaction data” in the claim limitations is interpreted to be equivalent to the term “interaction trigger” in the Spivack teaching above.
transmit the user interaction data to the augmented reality interaction server;
(“[00190] … Once the interaction trigger has been detected, the host server can further render or depict the content associated with the virtual object. … For instance, the stimuli can include voice, touch, eye, gaze, gesture (body, hand, head, arms, legs, limbs, eyes, torso, etc.), text input and/or other command submitted by a [user] with respect to the VOB. …”)(Fig. 3A ¶00190)
Under the broadest reasonable interpretation, the term “user interaction data” in the claim limitations is interpreted to be equivalent to the term “interaction trigger” in the Spivack teaching above.
receive an updated augmented reality object from the augmented reality interaction server;
(“[00146] Embodiments of the present disclosure further include systems, methods and apparatuses of: creating, rendering, depicting, provisioning, and/or generating message objects with digital enhancements. The enhanced messages can include virtual and/or augmented reality features. … The enhanced messages can further be rendered, accessed, transmitted, manipulated, acted on and/or otherwise interacted with via various networks in digital environments by or amongst users, real or digital entities, other simulated/virtual objects or computing systems including any virtual reality (VR), non-virtual reality, augmented reality (AR) and/or mixed reality (mixed AR, VR and or reality) environments or platforms. For instance, enhanced messages can be shared, transmitted or sent/received via communication channels including legacy SMS, Internet, mobile network via web services, applications (e.g., mobile apps) or dedicated platforms such as VR/AR or mixed VR/AR platforms or environments.”)(¶00146)
render the augmented reality content comprising the updated augmented reality object using the rendering system.
(“[00146] The enhanced messages can further be rendered, accessed, transmitted, manipulated, acted on and/or otherwise interacted with via various networks in digital environments by or amongst users, real or digital entities, other simulated/virtual objects or computing systems including any virtual reality (VR), non-virtual reality, augmented reality (AR) and/or mixed reality (mixed AR, VR and or reality) environments or platforms.”)(¶00146)
Under the broadest reasonable interpretation, the term “augmented reality content” in the claim limitations is interpreted to be equivalent to the term “virtual object” in the Spivack teaching above.
Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Spivack et al., International Publication No. WO 2019/079826 A1, herein referred to as “Spivack”, in view of Amitay et al., US Pub. No. US 2018/0315134 A1, herein referred to as “Amitay”, and further in view of Devam et al., US Pub. No. US 2019/0206134 A1, herein referred to as “Devam”.
Regarding Claims 5 and 13, Spivack and Amitay teach all the limitations of Claims 3 and 11, and Spivack further teaches the following limitations:
a bounding volume of the virtual real estate object;
Spivack teaches the bounding volume limitations through the mapping of a physical space using images captured by a local device camera. Under the broadest reasonable interpretation, the term “bounding volume” is interpreted to be equivalent to “physical space” in the Spivack teachings. Spivack teaches physical spaces in the reference below:
(“[00131] The physical space geometry metadata repository 122 is able to store metadata of crowdsourced information regarding physical spaces. Perspectives, views, images, of indoors and/or outdoors physical spaces can be stored in the repository 122. Moreover, geometry, shape, size, dimension data of physical spaces around the world can be stored in repository 122.”)(Fig. 1 ¶00131)
Further, Spivack teaches how an object can be placed in the physical space in the reference below:
(“[00291] In further embodiments, the local client can refine or calibrate its location using geolocation metadata of multiple physical tags. Geo-location metadata can be retrieved from multiple physical tags affixed to the physical surface in the real world environment. The local client location in the real world environment can then be calibrated based on the geo-location metadata retrieved from the multiple physical tags. As such the location placement of the virtual object can then be determined in a physical space in the real world environment defined by the multiple physical tags affixed to the physical surface.”)(¶00291)
However, Spivack does not teach “verify whether the augmented reality content is entirely contained within,” but Devam does teach the entire limitation, verify whether the augmented reality content is entirely contained within a bounding volume of the virtual real estate object:
(“[0348] FIG. 30 shows a flow chart for frame offset calculation. The flow begins at Start in the top left corner. If this is the first frame of the sequence (e.g., First image captured from a camera), we simply save the current frame and complete the sequence. If this is any subsequent frame, we store the previous frame and add the current frame. Next, a number of reference points (N) are selected, either at predefined coordinates or by some other means of selection. These reference coordinates are used to retrieve values from the previous frame. The values are stored for later use. The values at the reference coordinates in the current frame are then compared to those taken from the previous frame.”)(Fig. 30 ¶0348)
Further, it would have been obvious before the effective filing date of the claimed invention to combine the teachings of Spivack and Devam to reduce the likelihood that augmented reality objects would be rendered outside of the bounding volume of the field of view, by incorporating an automated method to verify and ensure that the object is rendered within its bounding volume or “frame” as taught by Devam. See MPEP 2143(I)(C), (D), and (F).
render the augmented reality content;
Spivack teaches how to render augmented reality content with the following reference:
(“[0090] Embodiments of the present disclosure include systems, methods and apparatuses of platforms (e.g., as hosted by the host server 100 as depicted in the example of FIG. 1) for deployment and targeting of context-aware virtual objects and/or behavior modeling of virtual objects based on physical laws or principle. Further embodiments relate to how interactive virtual objects that correspond to content or physical objects in the physical world are detected and/or generated, and how users can then interact with those virtual objects, and/or the behavioral characteristics of the virtual objects, and how they can be modeled. … Yet further embodiments of the present disclosure include a platform (e.g., as hosted by the host server 100 as depicted in the example of FIG. 1) to provide an augmented reality (AR) workspace in a physical space, where a virtual object can be rendered as a user interface element of the AR workspace.”)(Fig. 1 ¶0090)
However, Spivack does not teach “responsive to positive verification,” but Devam does teach the entire limitation, responsive to positive verification, render the augmented reality content:
(Fig. 30 ¶0348)
Further, it would have been obvious before the effective filing date of the claimed invention to combine the teachings of Spivack and Devam to reduce the likelihood that augmented reality objects would be rendered outside of the bounding volume of the field of view, by incorporating an automated method to verify and ensure that the object is rendered within its bounding volume or “frame” as taught by Devam. See MPEP 2143(I)(C), (D), and (F).
and render modified augmented reality content fitting within the bounding volume of the virtual real estate object.
(“[00214] In one embodiment, the host server 300 can detect the movement of the user in the real world environment and identify changes in location of the physical space around the user due to the movement of the user in the real world environment. The virtual billboard engine 360 can render the virtual billboard to move in the augmented reality environment in accordance with the changes in location of the physical space around the user such that the virtual billboard moves with or appears to move with the user in the augmented reality environment. Furthermore, the host server 300 can detect interaction with the virtual billboard by a user and further render augmented reality features embodied in the virtual billboard in the augmented reality environment. In one embodiment, the augmented reality features can include the user replies depicted as a 3D thread associated with the virtual billboard. In addition, the augmented reality features embodied in the virtual billboard can further include, for example, digital stickers, GIFs, digital tattoos, emoticons, animations, videos, clips, games, photos, images, objects or scenes rendered in 360 degrees or 3D and/or music, sounds, tones. ….”)(Fig. 3A ¶00214)
Under the broadest reasonable interpretation, the term “bounding volume” is interpreted to be equivalent to “physical space” in the Spivack teaching above. Additionally, under the broadest reasonable interpretation, the term “virtual real estate object” is interpreted to be equivalent to “virtual billboard” in the Spivack teaching above.
However, Spivack does not teach “and responsive to negative verification to either: not render the augmented reality content, or modify the augmented reality content,” but Devam does teach the entire limitation, and responsive to negative verification to either: not render the augmented reality content, or modify the augmented reality content and render modified augmented reality content fitting within the bounding volume of the virtual real estate object:
(Fig. 30 ¶0348)
Further, it would have been obvious before the effective filing date of the claimed invention to combine the teachings of Spivack and Devam to reduce augmented reality object rendering errors by incorporating an automated method to ensure that the object is rendered within its bounding volume or “frame” as taught by Devam. See MPEP 2143(I)(C), (D), and (F).
Conclusion
Applicant's amendments necessitated the new ground(s) of rejection presented in this Office Action. THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAHUL SHARMA whose telephone number is (571) 272-3058. The examiner can normally be reached Monday through Friday, 8-5 CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber can be reached on (571) 270-3923. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAHUL SHARMA/Examiner, Art Unit 3626
/NATHAN C UBER/Supervisory Patent Examiner, Art Unit 3626