Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-15 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Todasco (Pub. No. US 2017/0123750)1, in view of Wells et al. (Pub. No. US 2015/0363966).
Regarding claim 1, Todasco discloses a method, comprising:
identifying a digital element located within a region near a device, wherein the digital element is virtually located at a dynamically updated location with real-world physical longitude and latitude components that dynamically change to different locations over time for the digital element (See Fig. 4 and pars. 57-58. In particular, virtual objects within a threshold distance of a user device are identified. The location of a virtual object can be tied to the location of a second user device and dynamically changes as the second user device moves. Par. 60 further discloses: “when a user device moves, the GPS location of the user device may change to a new GPS location. In response to the change, the user device and/or a server for orchestrating the virtual world may map the new GPS location to new virtual world coordinates and update the virtual location of the user device with the new virtual world coordinates”. A person skilled in the art would recognize that latitude and longitude components are included in GPS coordinates to define a precise location on Earth);
determining that the digital element is to be rendered (See pars. 40 and 70. In particular, a virtual object can be rendered when it is within the field of view of the user device, and may not be rendered when it is outside the field of view);
using a processor to generate a representation of the digital element in a rendered view provided via the augmented reality capable observing platform (Par. 40: “the user may provide user input by changing the orientation of the user device such that one or more 3-D models of virtual objects are within the field of view established at process 203. In response, the user device may detect and/or display the one or more virtual objects in the display”); and
providing content of the digital element in response to receiving an indication associated with the digital element received using the augmented reality capable observing platform (Par. 44: “the user device may process the user input and accordingly adjust the virtual object, such as… playing an animation related to the virtual object, … causing an action (e.g. transferring currency, transferring ownership of goods, providing a ticket for access to a service, displaying an animation, etc.), and/or the like”. In particular, actions associated with a virtual object, such as playing an animation, transferring currency, providing a ticket, could be interpreted as providing content of the virtual object).
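For illustration only, the threshold-distance identification described above (Todasco, pars. 57-58) might be implemented along the following lines. This is a minimal sketch, not anything disclosed by the reference: the function names (`haversine_m`, `nearby_elements`), the dictionary layout of an element, and the use of a haversine great-circle distance are all assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_elements(device_lat, device_lon, elements, threshold_m):
    """Identify digital elements whose current latitude/longitude components
    fall within threshold_m meters of the device's current position."""
    return [e for e in elements
            if haversine_m(device_lat, device_lon, e["lat"], e["lon"]) <= threshold_m]
```

Because an element's latitude/longitude can track a moving reference device, re-running `nearby_elements` as positions update would yield the dynamically changing set of elements near the observing device.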
Todasco, however, does not disclose determining a vertical location of a virtual object relative to the user device, and generating the virtual object at the vertical location relative to the user device.
In the same field of virtual reality/augmented reality (VR/AR), Wells teaches a technique for navigating a virtual scene with a user device wherein translation of a virtual camera is controlled to be at an elevation that is fixed relative to a base plane (e.g., a floor) of the virtual scene even if the user device is pointed up or down relative to horizontal while moving (See pars. 79-82). Wells further teaches rendering virtual objects at positions defined relative to the virtual camera’s viewpoint as the user moves through the scene (See pars. 35-36). Although Wells describes vertical placement of virtual objects with respect to a world-based coordinate system (i.e., elevation relative to a virtual floor), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to express the vertical placement of virtual objects relative to the position of a user device, as claimed.
In particular, AR systems were well known to track the pose of the AR device and to define virtual object placement relative to the device’s viewpoint to maintain consistent spatial alignment with the user’s perspective. Under KSR Int’l Co. v. Teleflex Inc., substituting one known coordinate reference frame (world-centric, floor-based) with another known coordinate reference frame (device-centric, pose-based) constitutes a predictable use of prior art elements according to their established functions, and represents a simple design choice among a finite number of identified, predictable solutions for specifying vertical position. A person of ordinary skill in the art, seeking to implement the rendering and navigation techniques of Wells in an AR environment, would have been motivated to express object positions relative to the user device because the device’s pose defines the user’s viewpoint in AR, thereby more closely aligning with the experience of walking where the user's viewpoint remains at a constant elevation above a terrain (Wells, par. 82).
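For illustration only, the equivalence of the two coordinate reference frames discussed above (world-centric elevation versus device-relative vertical offset) can be sketched as a trivial change of reference. The `Pose` type and function names below are hypothetical and not drawn from either reference:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Hypothetical device pose; z is the device's elevation in world units."""
    x: float
    y: float
    z: float

def world_to_device_vertical(object_z, device_pose):
    """Express an object's world-frame elevation as a vertical offset
    relative to the device (device-centric frame)."""
    return object_z - device_pose.z

def device_to_world_vertical(offset_z, device_pose):
    """Recover the world-frame elevation from a device-relative
    vertical offset (world-centric frame)."""
    return device_pose.z + offset_z
```

The two representations carry the same information given the device pose, which is consistent with treating the choice between them as a design choice among known alternatives.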
Regarding claim 2, Todasco in view of Wells discloses the method of claim 1, wherein the dynamically updated location of the digital element is based on a dynamic current location of a reference location device (Todasco, par. 57).
Regarding claim 3, Todasco in view of Wells discloses the method of claim 2, wherein the dynamic current location of the reference location device is provided to a server by the reference location device (Todasco, par. 60).
Regarding claim 4, Todasco in view of Wells discloses the method of claim 2, wherein the dynamically updated location of the digital element is determined based on an offset relative to the dynamic current location of the reference location device (Todasco, par. 31: “When the virtual object is created, the virtual object may be given positional coordinates in the virtual world based on a predetermined distance from the positional coordinates of the user device”).
Regarding claim 5, Todasco in view of Wells discloses the method of claim 2, wherein the vertical location of the digital element is static relative to the augmented reality capable observing platform (Todasco in view of Wells would position a virtual camera at a predetermined elevation relative to the user device. Since the vertical location of the virtual camera does not change (i.e., is static) relative to the user device, the vertical location of a virtual object “seen” by the virtual camera also does not change relative to the user device).
Regarding claim 8, Todasco in view of Wells discloses the method of claim 1, wherein the indication associated with the digital element received using the augmented reality capable observing platform is associated with a capture of the digital element using the augmented reality capable observing platform (Todasco, par. 40: “At process 204, the user may provide user input by changing the orientation of the user device such that one or more 3-D models of virtual objects are within the field of view established at process 203. In response, the user device may detect and/or display the one or more virtual objects in the display”).
Regarding claim 9, Todasco in view of Wells discloses the method of claim 1, wherein the digital element includes a user profile of a target user (Todasco, par. 40: “In some embodiments, the user device may display virtual objects that are associated with the user device or public for viewing but not virtual objects that are private and unassociated with the user device or when the user device lacks the appropriate permissions”. See also par. 37. The user account associated with a virtual object could be interpreted as a user profile).
Regarding claim 10, Todasco in view of Wells discloses the method of claim 1, wherein providing content of the digital element in response to receiving the indication includes awarding the content of the digital element to a user (Todasco, par. 23: “the virtual object may have a passcode and/or a puzzle that, when solved, causes the virtual object to conduct a certain action, such as an animation, transfer of ownership, transfer of monetary funds, and/or the like”).
Claims 11-15, 18 and 19 recite similar limitations as respective claims 1-5, 8 and 10, but are directed to a system comprising one or more processors and a memory coupled with at least one of the one or more processors, wherein the one or more processors are configured to implement the steps recited in the respective claims. Since Todasco also discloses such a system (See Fig. 7, for example), these claims are rejected under the same rationales set forth in the rejection of their respective claims.
Claim 20 recites similar limitations as claim 1, but is directed to a computer program product comprising instructions programmed to implement the steps recited in claim 1. Since Todasco also discloses such a product (See Claim 15, for example), claim 20 is rejected under the same rationale set forth in the rejection of claim 1.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Todasco in view of Wells as applied to respective claims 1 and 11 above, and further in view of Brown et al. (Pub. No. CA 2690425, published on 10/8/2010 and provided in the IDS dated 8/30/2024).
Regarding claim 6, Todasco in view of Wells discloses the method of claim 1, but does not disclose determining a filtered location of the user device based at least in part on the dynamically updated location in a manner that reduces a rate of change of the dynamically updated location.
In the same field of mobile location tracking, Brown teaches determining a filtered location of an electronic device based at least in part on a new location ([0037]: “In step 1160, the mobile communications device calculates the difference between the status update and the last accepted status update. If it is below the threshold, the status update is discarded. For example, if the current status update represents a location less than 50 meters distance from the last accepted status update, the current status update may be discarded”. In particular, Brown teaches reducing the number of location updates transmitted from a mobile device when it is determined that location changes are merely the result of the mobile device “loitering” (see [0028]). The reduction is performed by applying algorithm-based filters (see Abstract). Since a small change between the device’s previous location and its updated location, e.g. a change of less than 50m, would be discarded by the filters, it could be said that a filtered location of the user device is based on its updated location in a manner that reduces a rate of change of the updated location).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to further incorporate Brown's method of filtering locations into Todasco such that representation of a virtual object in the rendered view would include determining a filtered location of the user device based at least in part on the dynamically updated location in a manner that reduces a rate of change of the dynamically updated location. The motivation for this modification would have been to conserve resources and to prevent unnecessary updating of the dynamically updated locations due to the user device loitering (Brown, [0029]).
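For illustration only, the thresholded filtering taught by Brown ([0037]) could be sketched as follows. The class name, the injected distance function, and the API shape are illustrative assumptions; only the accept/discard-below-threshold behavior and the 50 m example come from the reference:

```python
class LocationFilter:
    """Accept a location update only when it has moved at least threshold_m
    from the last accepted update; smaller moves (loitering) are discarded."""

    def __init__(self, distance_fn, threshold_m=50.0):
        self.distance_fn = distance_fn  # callable(pos_a, pos_b) -> meters
        self.threshold_m = threshold_m
        self.last_accepted = None

    def update(self, position):
        """Return True if the update is accepted (and should be propagated),
        False if it is discarded as being below the threshold."""
        if (self.last_accepted is None
                or self.distance_fn(self.last_accepted, position) >= self.threshold_m):
            self.last_accepted = position
            return True
        return False
```

Applied to a dynamically updated element location, discarding sub-threshold moves reduces the rate of change of the filtered location relative to the raw location stream, as claimed.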
Claim 16 recites similar limitations as claim 6, but is directed to a system comprising one or more processors and a memory coupled with at least one of the one or more processors, wherein the one or more processors are configured to implement the steps recited in claim 6. Since Todasco also discloses such a system (See Fig. 7, for example), claim 16 is rejected under the same rationale set forth in the rejection of claim 6.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Todasco in view of Wells as applied to respective claims 1 and 11 above, and further in view of Holzer et al. (Pub. No. US 2017/0109930).
Regarding claim 7, Todasco in view of Wells discloses the method of claim 1, but does not disclose generating the representation of the virtual object in the rendered view by calculating a directional heading value of the virtual object based at least in part on a determined geographical location of the user device.
In the same field of AR, Holzer teaches generating a representation of a virtual object by calculating a directional heading value of the virtual object based on a determined geographical location of a user device (See Figs. 1A-1B and 2A-2B and 4 and the associated description).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to further modify Todasco by generating the representation of the virtual object in the rendered view by calculating a directional heading value of the virtual object based at least in part on a determined geographical location of the user device, as taught by Holzer. The motivation for this modification would have been to make the virtual object appear more realistic by rendering it with an orientation consistent with the user's vantage point.
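For illustration only, a directional heading value between two geographic locations (e.g., the user device and the virtual object) could be computed with the standard initial great-circle bearing formula. This is a minimal sketch with a hypothetical function name; it is not taken from Holzer:

```python
import math

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees clockwise from true north,
    from point 1 (e.g., the user device) toward point 2 (e.g., the virtual
    object), computed from latitude/longitude in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0
```

A renderer could use such a bearing (or its reverse, from the object toward the device) to orient the virtual object relative to the viewer as either location changes.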
Claim 17 recites similar limitations as claim 7, but is directed to a system comprising one or more processors and a memory coupled with at least one of the one or more processors, wherein the one or more processors are configured to implement the steps recited in claim 7. Since Todasco also discloses such a system (See Fig. 7, for example), claim 17 is rejected under the same rationale set forth in the rejection of claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHONG X NGUYEN whose telephone number is (571)270-1591. The examiner can normally be reached Mon-Fri 8am - 5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHONG X NGUYEN/ Primary Patent Examiner, Art Unit 2617
1 This reference qualifies as prior art because the present application has an effective filing date of 11/17/2017, which is the filing date of application 15/817,027, to which it claims priority and which provides support for the claimed feature “dynamically-changing location”. Earlier applications to which priority is claimed do not provide support for this feature.