Prosecution Insights
Last updated: April 19, 2026
Application No. 17/954,527

Point of Interest System and Method for AR, VR, MR, and XR Connected Spaces

Non-Final OA — §103, §112
Filed: Sep 28, 2022
Examiner: SUN, HAI TAO
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Magnopus LLC
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (347 granted / 476 resolved; +10.9% vs TC avg)
Interview Lift: +26.6% (strong; allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 7m avg prosecution (35 applications currently pending)
Career History: 511 total applications across all art units
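
Both headline examiner metrics reduce to simple ratios over resolved cases, so they are easy to sanity-check against raw docket data. Below is a minimal Python sketch of that arithmetic; the ResolvedCase record and its field names are hypothetical stand-ins, not the dashboard's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # did the application issue as a patent?
    had_interview: bool  # was an examiner interview held?

def allow_rate(cases):
    """Share of resolved cases that issued."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate difference between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Synthetic data shaped like the figures above: 347 grants out of 476 resolved.
cases = [ResolvedCase(granted=i < 347, had_interview=i % 3 == 0) for i in range(476)]
print(f"career allow rate: {allow_rate(cases):.1%}")  # 72.9%, displayed as 73%
```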

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 65.8% (+25.8% vs TC avg)
§102: 2.3% (-37.7% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 476 resolved cases
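
The dashboard does not label the exact metric behind these per-statute figures; a common choice is the share of the examiner's resolved cases in which each rejection type was raised, compared against a Tech Center baseline. A sketch under that assumption (the statute sets and baseline rates below are placeholders, not real data):

```python
from collections import Counter

# Each resolved case contributes the set of statutes cited in its rejections
# (placeholder data for illustration only).
case_statutes = [{"103"}, {"103", "112"}, {"102"}, {"103"}, {"101", "103"}]

# Assumed Tech Center baseline rate per statute (placeholder values).
tc_average = {"101": 0.40, "102": 0.40, "103": 0.40, "112": 0.40}

def statute_rates(case_statutes):
    """Fraction of resolved cases in which each statute appears."""
    counts = Counter(s for statutes in case_statutes for s in statutes)
    return {s: n / len(case_statutes) for s, n in counts.items()}

for statute, rate in sorted(statute_rates(case_statutes).items()):
    print(f"§{statute}: {rate:.1%} ({rate - tc_average[statute]:+.1%} vs TC avg)")
```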

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.

Response to Arguments

Applicant's arguments filed 12/16/2025 have been fully considered but are not persuasive.

Regarding the 35 U.S.C. 112(a) rejection, the applicant argues that the amendments overcome the 35 U.S.C. 112(a) rejection of claims 1 and 18. The arguments have been fully considered. The argument regarding claim 18 is persuasive; therefore, the 35 U.S.C. 112(a) rejection of claim 18 and claims 19-22 is hereby withdrawn. The argument regarding claim 1 is not persuasive. The examiner cannot concur with the applicant for the following reasons: claim 1 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The specification describes "the controller creates a record mapping the coordinate of each of the plurality of objects" in paragraph [0038]. The specification further describes "the coordinate space is a physical coordinate space; the coordinate space is a virtual coordinate space" in paragraph [0042]. The specification furthermore describes "visual positioning database 132 with the geographic coordinates of the items" in paragraph [0103]. However, the specification does not describe "the unified underlying global coordinate space." Therefore, the language "the unified underlying global coordinate space" is new matter.

Regarding claims 1 and 18, the applicant argues that the cited art fails to teach or suggest "each of the plurality of objects comprises a temporal information component, wherein the temporal information component indicates when the object is active." The arguments have been fully considered but are not persuasive. The examiner cannot concur with the applicant for the following reasons: what is claimed is "a temporal information component, wherein the temporal information component indicates when the object is active," and McKinnon discloses this limitation. For example, in paragraph [0042], McKinnon teaches that the audio data is prioritized over the video data based on a time the data was captured; McKinnon further teaches that objects in a video are active based on a time the data was captured; McKinnon furthermore teaches that a sequence of frames of a video is actively displayed based on time. In paragraph [0048], McKinnon teaches providing information related not only to a layout, but also to traffic, popularity, and time. In paragraph [0058], McKinnon teaches content visible to the user at that particular point in time. In paragraph [0074], McKinnon teaches that the user sees a particular view within a larger view of interest 132 at any given time. In Fig. 4 and paragraph [0088], McKinnon teaches obtaining information related to the data capturing device at the time of capture. Claims 2-13 and 18-22 are not allowable for similar reasons as discussed above.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 1 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The specification describes "the controller creates a record mapping the coordinate of each of the plurality of objects" in paragraph [0038]. The specification further describes "the coordinate space is a physical coordinate space; the coordinate space is a virtual coordinate space" in paragraph [0042]. The specification furthermore describes "visual positioning database 132 with the geographic coordinates of the items" in paragraph [0103]. However, the specification does not describe "the unified underlying global coordinate space." Therefore, the language "the unified underlying global coordinate space" is new matter.

Claims 2-13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, due to their dependency on claim 1.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 1 recites the limitation "the unified underlying global coordinate space" in the last line. There is insufficient antecedent basis for this limitation in the claim.

Claims 2-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, due to their dependency on claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-13 and 18-22 are rejected under 35 U.S.C. 103 as being unpatentable over McKinnon (US 20190057113 A1) in view of Velasquez (US 20210264685 A1), and further in view of Weising (US 20110216060 A1).

Regarding claim 1 (currently amended), McKinnon discloses a points-of-interest system for a space ([0014]: the content database stores augmented reality or other digital content objects of various modalities; the content objects are associated with one or more real world objects viewable from an area of interest; [0015]: an AR management engine obtains an initial map of an area of interest from the area data within the area database; [0029]: provide augmented reality content to a user device based on a precise location of the user device; [0038]: the map generation engine 102 of system 100 generates an ad-hoc area of interest based on a number of devices detected in a particular area at a particular time; [0040]: generate an augmented-reality or mixed-reality environment; overlay the content on real-world imagery via the computing device) comprising: a controller ([0028]: engines, controllers, and processor; a non-transitory computer readable medium stores the software instructions that cause a processor to execute the disclosed steps); a plurality of objects, each of the plurality of objects comprising a coordinate in a coordinate space ([0036]: select a landmark as the area of interest; select a coordinate on a rendered digital map; [0067]: the recognized objects are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on the location information, e.g., GPS; the GPS data are a coordinate in a coordinate space; Fig. 5; [0093]: Cluster A comprises a first point of view origin, e.g., a coordinate, having a first field of interest leading to view A; various objects of interest; the point of view origin, i.e., a coordinate, is associated with view A; [0110]: GPS data are a coordinate in a coordinate space; [0115]: encode their warehouse floor with location information; the location information is X, Y coordinates; X, Y coordinates in the warehouse; tiles location; [0116]: the X, Y coordinate of a floor tile; the seed for a location is S and a coordinate is (X, Y)), the coordinate space comprising: an AR camera coordinate space ([0088]: images are captured from a different position, view or angle; Fig. 5; [0093]: Cluster A comprises a first point of view origin, e.g., a coordinate, having a first field of interest leading to view A; [0106]: a field of interest facing 35 degrees above eye level from four feet above the ground); a digital world coordinate space (Fig. 1; [0033]; Fig. 5; [0093-0094]: the point of view origins having fields of interest leading to views B and Z); and a World Geodetic System coordinate space ([0067]: the recognized objects are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on the location information, e.g., GPS; the GPS data are a World Geodetic System coordinate space; [0110]: GPS data); and a temporal information component, wherein the temporal information component indicates when the object is active ([0042]: the audio data is prioritized over the video data based on a time the data was captured, etc.; [0048]: provide information not only related to a layout, but also to traffic, popularity, and time; [0058]: visible to the user at that particular point in time; [0074]: the user sees a particular view within a larger view of interest 132 at any given time; Fig. 4; [0088]: obtain information related to the data capturing device at the time of capture); wherein the controller creates a record mapping the coordinate of each of the plurality of objects to an object reference identifier for the object ([0068]: descriptors are reference identifiers; the AR management engine 130 obtains descriptors, i.e., reference identifiers, for the recognized objects within the area of interest; the AR management engine 130 obtains the descriptors from a descriptor database corresponding to various objects capable of being recognized; [0069]: the AR management engine 130 associates and maps the recognized objects within the area of interest with AR content types; AR content objects are associated within the area of interest; [0090]: associate and map the various descriptors 405A, B and C with one or more content objects; [0091]: object generation engine 404 transmits the image AR content objects 422 to AR content database 420 via network 415; [0092]: the AR content objects 422 are selected based on the descriptor of the object itself; [0093]: a first point of view origin, e.g., a coordinate; various objects of interest are mapped to descriptors; [0102]: the AR content objects are determined based on the user's device, a user identification, and information about the user; an object reference identifier includes the user's device, a user identification, and information about the user; [0129]: a bidirectional mapping from image patch space to descriptor space; [0130]: a mapping function that is bidirectional in the sense; a descriptor generates a corresponding coordinate; find the X and Y coordinates); wherein the controller creates a second record defining a plurality of transformations for converting between a pair of the AR camera coordinate space, the digital world coordinate space, and the World Geodetic System coordinate space ([0067]: the recognized objects are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on the location information, e.g., GPS; the GPS data are a coordinate in a coordinate space; Fig. 5; [0093]: Cluster A comprises a first point of view origin, e.g., a coordinate, having a first field of interest leading to view A; various objects of interest; the point of view origin, i.e., a coordinate, is associated with view A; [0110]: GPS data are a coordinate in a coordinate space; [0115]: encode their warehouse floor with location information; the location information is X, Y coordinates; X, Y coordinates in the warehouse; tiles location; [0116]: the X, Y coordinate of a floor tile; the seed for a location is S and a coordinate is (X, Y)); an object database for storing the object reference identifier for each of the plurality of objects ([0014]: the content database stores augmented reality or other digital content objects of various modalities; [0068]: the AR management engine 130 obtains the descriptors from a descriptor database corresponding to various objects capable of being recognized; [0069]: the AR management engine 130 associates the recognized objects within the area of interest with AR content types; Fig. 1; [0080]: store the AR content objects and the ancillary information in the database; the AR content objects and the ancillary information are associated with various descriptors; store various descriptors in descriptor database; [0081]: AR content objects 134 in database 120 are associated with one or more descriptors that are associated with one or more views of interest 132, etc.; Fig. 4; [0088]: descriptor database 405 stores information related to various objects and descriptors associated with the image data); a cloud storage ([0020]: the device composes a data center and is coupled with a cloud server); wherein the object database provides at least one object reference identifier to the remote storage ([0019]: the tiles are associated with one or more of an identification, an owner, an object of interest, a set of descriptors, an advertiser, a cost, or a time; [0020]: the device could compose a data center and be coupled with a cloud server; [0028]: hard drive, FPGA, solid state drive, RAM, flash, ROM, memory, distributed memory; [0048]: the map generation engine 202 transmits the initial map 218 to area database 210 for storage via network 215; [0068]: the AR management engine 130 obtains descriptors for the recognized objects within the area of interest; Fig. 4; [0087]: object generation engine 404 is coupled with descriptor database 405, AR content database 420, and user interfaces 400A, 400B, and 400C via networks 425, 415 and 405; AR content objects are associated with one or more descriptors related to objects viewable from an area of interest, and stored in an AR content database 420 for use; Fig. 4; [0088]: descriptor database 405 comprises information related to various objects viewable from within the area; descriptor database 405 comprises object image data, descriptors associated with the image data, and information relating to the device capturing the object image data; [0090]: Carina associates content objects 422A and 424A with descriptor 405A, content objects 422B and 426A with descriptor 405B, and content objects 422C and 424B with descriptor 405C; [0102]: the AR content objects are determined based on the user's device, a user identification, and information about the user).

McKinnon fails to explicitly disclose: a space is a connected space; the remote storage is the cloud storage; wherein the unified underlying global coordinate space allows a plurality of devices to simultaneously be in the connected space.

In the same field of endeavor, Velasquez teaches: a space is a connected space ([0111]: enable multiple users to experience virtual content in the same location with respect to the physical world; [0114]: persistent spatial information is represented in a way that may be readily shared among users and among the distributed components; [0131]: enable multiple users to share an AR experience; [0258]: provide spatial persistence across user instances within a shared space, i.e., a connected space; [0262]: one or more coordinate systems in real space across one or more sessions; [0515]: relate locations specified with respect to the shared map to the coordinate frame used by the user device; the shared map is a connected space; [0519]: match features in the collection of features in the selected shared map; [0524]: correspond with features of a shared map); the remote storage is a cloud storage ([0111]: enable multiple users to experience virtual content in the same location with respect to the physical world; [0112]: the persistent map is stored in a remote storage medium, e.g., a cloud storage; the remote storage is a cloud storage; [0114]: persistent spatial information is represented in a way that may be readily shared among users and among the distributed components; [0131]: enable multiple users to share an AR experience; [0157]: the wearable device accesses the cloud based remote data repositories and memory; the remote storage is a cloud storage; [0158]: access a remote persistent map stored on a cloud storage; [0159]: store maps maintained on the cloud service; the localization processing takes place in the cloud matching the device location to existing maps; [0188]: the counterpart in the cloud is described in a coordinate frame shared by all devices in an XR system). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon to include a space is a connected space and the remote storage is the cloud storage, as taught by Velasquez. The motivation for doing so would have been to access cloud based remote data repositories; to improve their experiences with the AR system; and to facilitate persistent and consistent cross reality experiences between individual and/or groups of users, as taught by Velasquez in paragraphs [0157], [0163], and [0440].

McKinnon in view of Velasquez fails to explicitly disclose: wherein the unified underlying global coordinate space allows a plurality of devices to simultaneously be in the connected space.

In the same field of endeavor, Weising teaches: wherein the unified underlying global coordinate space allows a plurality of devices to simultaneously be in the connected space ([0044]: a Global Positioning System (GPS) device; Fig. 2; [0046]: the location is detected by analyzing data obtained from inertial systems and GPS; Fig. 4; [0052]: a multi-player virtual reality game; Fig. 5; [0056]: a multi-player environment; the positional information gained from GPS and compass is transmitted to other linked devices to enhance the collaboratively maintained data; create a common shared space synchronized to a common reference point 502; a first player 504A synchronizes her device into the 3D space with respect to reference point 502; other players in the shared space establish a communication link with the first player; [0060]: shared spaces are created when players are in different locations; Fig. 7; [0061]: an interactive game that is independent of the location of the portable device). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon in view of Velasquez to include wherein the unified underlying global coordinate space allows a plurality of devices to simultaneously be in the connected space, as taught by Weising. The motivation for doing so would have been to detect the location by analyzing data obtained from inertial systems and GPS; to enhance the collaboratively maintained data; and to create a common shared space synchronized to a common reference point 502, as taught by Weising in Fig. 5 and paragraphs [0046] and [0056].

Regarding claim 2 (currently amended), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 1 further comprising: a position determining component for correlating a physical location of a first device of the plurality of devices with a digital location (McKinnon; [0038]: generate an ad-hoc area of interest based on a number of devices detected in a particular area; the map generation engine 102 receives position data corresponding to a plurality of user devices; [0057]: a view of interest 132 is a digital representation of a physical location in real-world space; [0065]: GPS determines the location of devices; [0067]: GPS determines the location information of devices; [0110]: determine a location of a device; [0112]: the device is able to determine its location within the wide area; [0116]: determine the location of a device); a digital location database (McKinnon; [0013]: the area database stores area data and locations related to an area of interest; [0017]: obtain a set of AR content objects, e.g., a virtual object, chroma key content, digital image, digital video, and audio data, from the AR content database; [0045]: the initial map 118 could comprise a digital or virtual construct in memory that is generated by the map generation engine 102 of system 100); wherein the physical location is determined using at least one of the plurality of transformations stored in the second record (McKinnon; Fig. 1; [0033]: area data could comprise image data 112; [0045]: the initial map 118 could comprise a digital or virtual construct in memory that is generated by the map generation engine 102 of system 100); wherein the digital location database looks up digital content for a physical location in the object database (McKinnon; [0013]: the area database stores area data related to an area of interest; [0014]: the content database stores augmented reality or other digital content objects of various modalities; the content objects are associated with one or more real world objects viewable from an area of interest; [0015]: obtain and look up an initial map of an area of interest from the area data within the area database; obtain and look up area data such as image data, signal data, video data, audio data, views data, viewable object data, points of interest data, field of view data, etc.; [0016]: the AR management engine derives a set of views of interest from at least one of the initial map and other area data; [0047]: obtain and look up the images and videos from Abigail, Bryan and Catherine's profiles; [0067]: the recognized objects and characteristics of the environment are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on one or more of the location information; [0068]: the AR management engine 130 obtains and looks up descriptors for the recognized objects within the area of interest; obtain and look up the descriptors from a descriptor database corresponding to various objects capable of being recognized; [0069]: the AR management engine 130 associates the recognized objects within the area of interest with AR content types; [0081]: the set of AR content objects 134 are obtained based on a search query and look-ups of AR content database 120; a search and look-ups for AR content objects 134 in database 120); wherein the object database returns the digital content for the physical location to the first device (McKinnon; [0013]: the area database stores area data related to an area of interest; [0014]: the content database stores augmented reality or other digital content objects of various modalities; the content objects are associated with one or more real world objects viewable from an area of interest; [0015]: obtain an initial map of an area of interest from the area data within the area database; obtain area data such as image data, signal data, video data, audio data, views data, viewable object data, points of interest data, field of view data, etc.; [0016]: the AR management engine derives a set of views of interest from at least one of the initial map and other area data; [0047]: obtain the images and videos from Abigail, Bryan and Catherine's profiles; [0067]: the recognized objects and characteristics of the environment are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on one or more of the location information; [0068]: the AR management engine 130 obtains descriptors for the recognized objects within the area of interest; obtain the descriptors from a descriptor database corresponding to various objects capable of being recognized; [0069]: the AR management engine 130 associates the recognized objects within the area of interest with AR content types); wherein the digital location database looks up digital content for the digital location of a second device of the plurality of devices in the object database (McKinnon; [0013]: the area database stores area data related to an area of interest; [0014]: the content database stores augmented reality or other digital content objects of various modalities; the content objects are associated with one or more real world objects viewable from an area of interest; [0015]: obtain and look up an initial map of an area of interest from the area data within the area database; obtain and look up area data such as image data, signal data, video data, audio data, views data, viewable object data, points of interest data, field of view data, etc.; [0016]: the AR management engine derives a set of views of interest from at least one of the initial map and other area data; [0047]: obtain and look up the images and videos from Abigail, Bryan and Catherine's profiles; [0067]: the recognized objects and characteristics of the environment are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on one or more of the location information; [0068]: the AR management engine 130 obtains and looks up descriptors for the recognized objects within the area of interest; obtain and look up the descriptors from a descriptor database corresponding to various objects capable of being recognized; [0069]: the AR management engine 130 associates the recognized objects within the area of interest with AR content types; [0081]: the set of AR content objects 134 are obtained based on a search query and look-ups of AR content database 120; a search and look-ups for AR content objects 134 in database 120; [0079]: the area data are presented to users from which the human users select a corresponding view of interest 132; [0108]: this modification and addition are viewable to all users of the system); and wherein the object database returns the digital content for the digital location to the second device (McKinnon; [0013]: the area database stores area data related to an area of interest; [0014]: the content database stores augmented reality or other digital content objects of various modalities; the content objects are associated with one or more real world objects viewable from an area of interest; [0015]: obtain an initial map of an area of interest from the area data within the area database; obtain area data such as image data, signal data, video data, audio data, views data, viewable object data, points of interest data, field of view data, etc.; [0016]: the AR management engine derives a set of views of interest from at least one of the initial map and other area data; [0047]: obtain the images and videos from Abigail, Bryan and Catherine's profiles; [0067]: the recognized objects and characteristics of the environment are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on one or more of the location information; [0068]: the AR management engine 130 obtains descriptors for the recognized objects within the area of interest; obtain the descriptors from a descriptor database corresponding to various objects capable of being recognized; [0069]: the AR management engine 130 associates the recognized objects within the area of interest with AR content types; [0079]: the area data are presented to users from which the human users select a corresponding view of interest 132; [0108]: this modification and addition are viewable to all users of the system).

Regarding claim 3 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the positioning determining component is a global positioning system (GPS) (McKinnon; [0065]: location-sensor data, e.g., GPS; [0067]: GPS or other location-sensor information).

Regarding claim 4 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the positioning determining component is a compass (Velasquez; [0156]: compasses). The same motivation as for claim 1 applies here.

Regarding claim 5 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the first device is a smartphone (McKinnon; [0036]: smartphones; [0040]: smartphone).

Regarding claim 6 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the first device is a tablet (McKinnon; [0036]: tablets; [0040]: tablet).

Regarding claim 7 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the second device is a smartphone (McKinnon; [0036]: smartphones include first device and second device; [0040]: smartphone).

Regarding claim 8 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the second device is a tablet (McKinnon; [0036]: tablets include first device and second device; [0040]: tablet).

Regarding claim 9 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the plurality of objects comprises at least one physical object (McKinnon; [0014]: the content objects are associated with one or more real world objects, i.e., physical objects, viewable from an area of interest; [0036]: select a landmark as the area of interest; select a floor of a building as the area of interest; [0047]: various images and videos of the Los Angeles Airport; [0063]: recognize and identify real-world objects within the area of interest).

Regarding claim 10 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the plurality of objects comprises at least one virtual object (McKinnon; [0014]: image content objects, video content objects, or audio content objects; [0017]: a virtual object; [0040]: a virtual object; digital image, digital video; AR content objects can include graphic sprites and animations).

Regarding claim 11 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the plurality of objects comprises: at least one physical object (McKinnon; [0014]: the content objects are associated with one or more real world objects, i.e., physical objects, viewable from an area of interest; [0036]: select a landmark as the area of interest; select a floor of a building as the area of interest; [0047]: various images and videos of the Los Angeles Airport; [0063]: recognize and identify real-world objects within the area of interest); and at least one virtual object (McKinnon; [0014]: image content objects, video content objects, or audio content objects; [0017]: a virtual object; [0040]: a virtual object; digital image, digital video; AR content objects can include graphic sprites and animations).

Regarding claim 12 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the object database stores metadata for each of the plurality of objects (McKinnon; [0013]: the area database stores area data related to an area of interest; [0014]: the content database stores augmented reality or other digital content objects of various modalities; [0016]: the views of interest comprise a view-point origin, a field of interest, an owner, metadata, a direction, an orientation, a cost, a search attribute, a descriptor set, an object of interest, or any combination or multiples thereof; [0059]: the view of interest 132 includes data associated with one or more of an owner, metadata, a direction, an orientation, a cost, a search attribute, or any combination or multiples thereof; [0105]: a view of interest comprises an owner, metadata, a direction, an orientation, a cost, a search attribute, or combinations or multiples thereof).

Regarding claim 13 (original), McKinnon in view of Velasquez and Weising discloses the points-of-interest system of claim 2 wherein the points-of-interest system is cloud hosted (McKinnon; [0020]: the device could compose a data center and be coupled with a cloud server, i.e., a cloud host). McKinnon in view of Velasquez and Weising further discloses wherein the points-of-interest system is cloud hosted (Velasquez; [0112]: the persistent map is stored in a remote storage medium, e.g., a cloud; [0116]: a localization service is provided on remote processors, such as in the cloud; [0159]: cloud-based localization). The same motivation as for claim 1 applies here.

Regarding claim 18 (currently amended), McKinnon discloses a method for creating and accessing points of interest in a space ([0014]: the content database stores augmented reality or other digital content objects of various modalities; the content objects are associated with one or more real world objects viewable from an area of interest; [0015]: an AR management engine obtains an initial map of an area of interest from the area data within the area database; [0016]: derive a set of views of interest from at least one of the initial map and other area data; [0029]: provide augmented reality content to a user device based on a precise location of the user device; [0038]: the map generation engine 102 of system 100 generates an ad-hoc area of interest based on a number of devices detected in a particular area at a particular time; [0040]: generate an augmented-reality or mixed-reality environment; overlay the content on real-world imagery via the computing device; [0108]: this modification and addition are viewable to all users of the system; [0109]: provide incentives for users to navigate a specific portion of the area) comprising: storing an object reference identifier for each of a plurality of objects in an object database ([0014]: the content database stores augmented reality or other digital content objects of various modalities; [0068]: the AR management engine 130 obtains descriptors for the recognized objects within the area of interest; [0069]: AR content objects are associated within the area of interest to varying levels of specificity or granularity; [0087]: AR content objects are associated with one or more descriptors related to objects viewable from an area of interest, and stored in an AR content database 420 for use by an AR management engine; Fig. 4; [0088]: descriptor database 405 stores object image data, descriptors associated with the image data, and information relating to the device); receiving coordinates in a coordinate space for each of the plurality of objects ([0036]: select a landmark as the area of interest; select and receive a coordinate on a rendered digital map; [0067]: the recognized objects are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on the location information, e.g., GPS; the GPS data are a coordinate in a coordinate space; Fig. 5; [0093]: Cluster A comprises a first point of view origin, e.g., a coordinate, having a first field of interest leading to view A; various objects of interest; the point of view origin, i.e., a coordinate, is associated with view A; [0110]: GPS data are a coordinate in a coordinate space; [0115]: encode their warehouse floor with location information; the location information is X, Y coordinates; X, Y coordinates in the warehouse; tiles location; [0116]: the X, Y coordinate of a floor tile; the seed for a location is S and a coordinate is (X, Y)); wherein the coordinate space comprises: an AR camera coordinate space (Fig. 5; [0039]: Cluster A comprises a first point of view origin, e.g., a coordinate, having a first field of interest leading to view A; [0088]: images are captured from a different position, view or angle; Fig. 5; [0093]; [0106]: a field of interest facing 35 degrees above eye level from four feet above the ground); a digital world coordinate space (Fig. 1; [0033]; Fig. 5; [0093-0094]: Cluster A comprises a first point of view origin, e.g., a coordinate, having a first field of interest leading to view A); a World Geodetic System coordinate space ([0067]: the recognized objects are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on the location information, e.g., GPS; the GPS data are a coordinate in a World Geodetic System coordinate space; [0110]: GPS data); and receiving a temporal information component for each of the plurality of objects, wherein the temporal information component indicates when the object is active ([0042]: the audio data is prioritized over the video data based on a time the data was captured, etc.; [0048]: provide information not only related to a layout, but also to traffic, popularity, and time; [0058]: visible to the user at that particular point in time; [0074]: the user sees a particular view within a larger view of interest 132 at any given time; Fig. 4; [0088]: obtain information related to the data capturing device at the time of capture); mapping the coordinates for each of the plurality of objects to an object reference identifier for each of the plurality of objects ([0068]: the AR management engine 130 obtains descriptors, i.e., reference identifiers, for the recognized objects within the area of interest; the AR management engine 130 obtains the descriptors from a descriptor database corresponding to various objects capable of being recognized; [0069]: the AR management engine 130 associates and maps the recognized objects within the area of interest with AR content types; AR content objects are associated within the area of interest; [0090]: associate and map the various descriptors 405A, B and C with one or more content objects; [0091]: object generation engine 404 transmits the image AR content objects 422 to AR content database 420 via network 415; [0092]: the AR content objects 422 are selected based on the descriptor of the object itself; [0093]: a first point of view origin, e.g., a coordinate; various objects of interest are mapped to descriptors; [0129]: a bidirectional mapping from image patch space to descriptor space; [0130]: a mapping function that is bidirectional in the sense; a descriptor generates a corresponding coordinate; find the X and Y coordinates); storing a first record with the mapping of the coordinates for each of the plurality of objects to the object reference identifier for each of the plurality of objects in the object database ([0014]: the content database stores augmented reality or other digital content objects of various modalities; [0068]: the AR management engine 130 obtains the descriptors from a descriptor database corresponding to various objects capable of being recognized; [0069]: the AR management engine 130 associates the recognized objects within the area of interest with AR content types; [0080]: store the AR content objects and the ancillary information in the database; [0081]: AR content objects 134 in database 120 are associated with one or more descriptors that are associated with one or more views of interest 132, etc.; [0088]: descriptor database 405 comprises information related to various objects; [0093]: an AR management engine 530 generating area tile maps 538 and 538T; Cluster A comprises a first point of view origin (e.g., a coordinate) having a first field of interest leading to view A; [0115]: store a large, wide area map database; [0129]: a bidirectional mapping from image patch space to descriptor space; [0130]: a mapping function that is bidirectional in the sense; a descriptor generates a corresponding coordinate; find the X and Y coordinates); storing a second record defining a plurality of transformations for converting between a pair of the AR camera coordinate space, the digital world coordinate space, and the World Geodetic System coordinate space ([0067]: the recognized objects are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on the location information, e.g., GPS; the GPS data are a coordinate in a World Geodetic System coordinate space; Fig. 5; [0093]: Cluster A comprises a first point of view origin, e.g., a coordinate, having a first field of interest leading to view A; various objects of interest; the point of view origin, i.e., a coordinate, is associated with view A; [0110]: GPS data are a coordinate in a unified underlying global coordinate space; [0115]: encode their warehouse floor with location information; the location information is X, Y coordinates; X, Y coordinates in the warehouse; tiles location; [0116]: the X, Y coordinate of a floor tile; the seed for a location is S and a coordinate is (X, Y)); and accessing at least one of the plurality of objects ([0036]: select a landmark as the area of interest; select a coordinate on a rendered digital map; [0067]: the recognized objects are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on the location information, e.g., GPS; the GPS data are a coordinate in a coordinate space; Fig. 5; [0093]: Cluster A comprises a first point of view origin, e.g., a coordinate, having a first field of interest leading to view A; various objects of interest; the point of view origin, i.e., a coordinate, is associated with view A; [0110]: GPS data are a coordinate in a unified underlying global coordinate space; [0115]: encode their warehouse floor with location information; the location information is X, Y coordinates; X, Y coordinates in the warehouse; tiles location; [0116]: the X, Y coordinate of a floor tile; the seed for a location is S and a coordinate is (X, Y)).

McKinnon fails to explicitly disclose: a space is a connected space; providing the at least one of the plurality of objects to a plurality of devices simultaneously in the connected space.

In the same field of endeavor, Velasquez teaches a space is a connected space ([0111]: enable multiple users to experience virtual content in the same location with respect to the physical world; [0114]: persistent spatial information is represented in a way that may be readily shared among users and among the distributed components; [0131]: enable multiple users to share an AR experience; [0258]: provide spatial persistence across user instances within a shared space, i.e., a connected space; [0262]: one or more coordinate systems in real space across one or more sessions; [0515]: relate locations specified with respect to the shared map to the coordinate frame used by the user device; the shared map is a connected space; [0519]: match features in the collection of features in the selected shared map; [0524]: correspond with features of a shared map). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon to include a space is a connected space, as taught by Velasquez. The motivation for doing so would have been to access cloud based remote data repositories; to improve their experiences with the AR system; and to facilitate persistent and consistent cross reality experiences between individual and/or groups of users, as taught by Velasquez in paragraphs [0157], [0163], and [0440].

McKinnon in view of Velasquez fails to explicitly disclose: providing the at least one of the plurality of objects to a plurality of devices simultaneously in the connected space.

In the same field of endeavor, Weising teaches: providing the at least one of the plurality of objects to a plurality of devices simultaneously in the connected space ([0044]: a Global Positioning System (GPS) device; Fig. 2; [0046]: the location is detected by analyzing data obtained from inertial systems and GPS; Fig. 4; [0052]: a multi-player virtual reality game; Fig. 5; [0056]: a multi-player environment; the positional information gained from GPS and compass is transmitted to other linked devices to enhance the collaboratively maintained data; create a common shared space synchronized to a common reference point 502; a first player 504A synchronizes her device into the 3D space with respect to reference point 502; other players in the shared space establish a communication link with the first player; [0060]: shared spaces are created when players are in different locations; Fig. 7; [0061]: an interactive game that is independent of the location of the portable device). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon in view of Velasquez to include providing the at least one of the plurality of objects to a plurality of devices simultaneously in the connected space, as taught by Weising. The motivation for doing so would have been to detect the location by analyzing data obtained from inertial systems and GPS; to enhance the collaboratively maintained data; and to create a common shared space synchronized to a common reference point 502, as taught by Weising in Fig. 5 and paragraphs [0046] and [0056].

Regarding claim 19 (original), McKinnon in view of Velasquez and Weising discloses the method of claim 18 further comprising: providing a cloud storage (McKinnon; [0020]: the device could compose a data center and be coupled with a cloud server); receiving at least one locally referenced object at the cloud storage (McKinnon; [0020]: obtain at least a portion of the subset based on the tile map; the device could compose a data center and be coupled with a cloud server; [0036]: select a floor of a building as the area of interest; generate, via the graphical user interface, and/or adjust the area of interest; input data into the user interface; present output data to the user; [0079]: the area data are presented to users from which the human users select a corresponding view of interest 132; [0108]: this modification and addition are viewable to all users of the system; [0128]: location is determined by initially calibrating the device in the local area; use accelerometry to generate a location window); assigning the at least one locally referenced object an object reference identifier ([0016]: derive a set of views of interest from at least one of the initial map and other area data; the views of interest could comprise a view-point origin, a field of interest, an owner, metadata, a direction, an orientation, a cost, a search attribute, a descriptor set, an object of interest, or any combination or multiples thereof; [0017]: the AR content objects are selected for obtaining based on one or more of the following: a search query, an assignment of content objects to a view of interest or object of interest within the view, one or more characteristics of the initial map, a context of an intended user of a user, or a recommendation, selection or request of a user); providing at least one object reference identifier for the at least one locally referenced object from the cloud storage to the object database ([0020]: obtain at least a portion of the subset based on the tile map; the device could compose a data center and be coupled with a cloud server; [0067]: the recognized objects and characteristics of the environment are associated with particular locations within the area of interest by correlating the area data with the initial map 118A based on one or more of the location information and location information associated with image data; [0068]: the AR management engine 130 obtains descriptors for the recognized objects within the area of interest; [0087]: AR content objects are associated with one or more descriptors related to objects viewable from an area of interest, and stored in an AR content database 420 for use by an AR management engine of the inventive subject matter; [0091]: transmit the image AR content objects 422, video AR content objects 424, audio AR content objects 426, and optionally the associated descriptors to AR content database 420 via network 415). McKinnon in view of Velasquez and Weising further discloses a cloud storage (Velasquez; [0112]: the persistent map is stored in a remote storage medium, e.g., a cloud storage; [0157]: the wearable device accesses the cloud based remote data repositories; [0158]: access a remote persistent map stored on a cloud storage; [0188]: the counterpart in the cloud is described in a coordinate frame shared by all devices in an XR system). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon to include a cloud storage, as taught by Velasquez. The motivation for doing so would have been to access cloud based remote data repositories; to improve their experiences with the AR system; and to facilitate persistent and consistent cross reality experiences between individual and/or groups of users, as taught by Velasquez in paragraphs [0157], [0163], and [0440].

Regarding claim 20 (original), McKinnon in view of Velasquez and Weising discloses the method of claim 19 further comprising: receiving at least one object reference identifier for at least one remotely stored object from a remote cloud storage (McKinnon; [0020]: obtain at least a portion of the subset based on the tile map; the device could compose a data center and be coupled with a cloud server; [0080]: an object generation engine 104 obtains a plurality of content objects from one or more users or devices, and transmits the objects to AR content database 120 via network 115; [0081]: obtain a set of AR content objects 134 related to the derived set of views of interest 132; Fig. 4; [0087]: AR content objects are associated with one or more descriptors related to objects viewable from an area of interest, and stored in an AR content database 420 for use by an AR management engine).

Regarding claim 21 (original), McKinnon in view of Velasquez and Weising discloses the method of claim 19 further comprising: creating a connection between a physical object and a virtual object using a web-hosted, two-dimensional map interface (McKinnon; [0014]: store augmented reality or other digital content objects of various modalities; the content objects are associated with one or more real world objects viewable from an area of interest; [0015]: obtain an initial map of an area of interest from the area data within the area database; [0020]: obtain at least a portion of the subset based on the tile map; the device could compose a data center and be coupled with a cloud server; [0036]: generate, via the graphical user interface, and/or adjust the area of interest; input data into the user interface and output devices such as screens, audio output, sensory feedback devices, etc. to present output data to the user; [0040]: overlay the content on real-world imagery; rendered sprites are made to appear to interact with the physical elements of the space whose geometry has been reconstructed either in advance, or in real-time in the background of the AR experience).

Regarding claim 22 (original), McKinnon in view of Velasquez and Weising discloses the method of claim 19 further comprising: creating the connection between the physical object and the virtual object using a virtual reality interface (McKinnon; [0014]: store augmented reality or other digital content objects of various modalities; the content objects are associated with one or more real world objects viewable from an area of interest; [0015]: obtain an initial map of an area of interest from the area data within the area database; [0020]: obtain at least a portion of the subset based on the tile map; the device could compose a data center and be coupled with a cloud server; [0036]: generate, via the graphical user interface, and/or adjust the area of interest; input data into the user interface and output devices; present output data to the user; [0040]: overlay the content on real-world imagery; rendered sprites are made to appear to interact with the physical elements of the space whose geometry has been reconstructed either in advance, or in real-time in the background of the AR experience). McKinnon in view of Velasquez and Weising further discloses an immersive virtual reality interface (Velasquez; [0111]: provide XR scenes for more computationally efficient and immersive experiences for a single or multiple users; provides a more immersive experience; [0118]: provide a more immersive user experience). The same motivation as for claim 19 applies here.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun, whose telephone number is (571) 272-5630. The examiner can normally be reached between 9:00 AM and 6:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAI TAO SUN/
Primary Examiner, Art Unit 2616
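
Stripped of the legalese, the claim 1 mapping above describes a concrete data model: objects carrying a coordinate in one of three coordinate spaces plus a temporal "active" window, a first record keyed by object reference identifier, and a second record holding transformations between pairs of spaces. The Python sketch below is purely illustrative, derived only from the claim language quoted in the office action; every name is hypothetical and nothing here reflects the applicant's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, Tuple

class Space(Enum):
    AR_CAMERA = "ar_camera"          # AR camera coordinate space
    DIGITAL_WORLD = "digital_world"  # digital world coordinate space
    WGS84 = "wgs84"                  # World Geodetic System coordinate space

Vec3 = Tuple[float, float, float]

@dataclass
class PoiObject:
    ref_id: str          # object reference identifier
    coordinate: Vec3     # position expressed in `space`
    space: Space
    active_from: float   # temporal information component: the window
    active_until: float  # in which the object counts as "active"

    def is_active(self, t: float) -> bool:
        return self.active_from <= t <= self.active_until

# First record: object reference identifier -> object (with its coordinate).
object_record: Dict[str, PoiObject] = {}

# Second record: one transformation per ordered pair of coordinate spaces.
Transform = Callable[[Vec3], Vec3]
transform_record: Dict[Tuple[Space, Space], Transform] = {
    (Space.AR_CAMERA, Space.DIGITAL_WORLD): lambda p: p,  # placeholder: identity
    (Space.DIGITAL_WORLD, Space.WGS84): lambda p: p,      # real math would go here
}

def convert(point: Vec3, src: Space, dst: Space) -> Vec3:
    """Convert a point between two of the claimed coordinate spaces."""
    return point if src == dst else transform_record[(src, dst)](point)
```

Framed this way, the §112 dispute is whether the specification supports a single "unified underlying global coordinate space" tying the three Space members together, as opposed to merely the pairwise transforms sketched here.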

Prosecution Timeline

Sep 28, 2022
Application Filed
Nov 06, 2024
Non-Final Rejection — §103, §112
May 09, 2025
Response after Non-Final Action
May 09, 2025
Response Filed
May 22, 2025
Response Filed
Jun 13, 2025
Final Rejection — §103, §112
Dec 16, 2025
Request for Continued Examination
Jan 07, 2026
Response after Non-Final Action
Jan 14, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602816
SIMULATED CONFIGURATION EVALUATION APPARATUS AND METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12603024
DISPLAY CONTROL DEVICE
2y 5m to grant • Granted Apr 14, 2026
Patent 12586310
APPARATUS AND METHOD WITH IMAGE PROCESSING
2y 5m to grant • Granted Mar 24, 2026
Patent 12578846
GENERATING MASKED REGIONS OF AN IMAGE USING A PREDICTED USER INTENT
2y 5m to grant • Granted Mar 17, 2026
Patent 12579727
APPARATUS AND METHOD FOR ASYNCHRONOUS RAY TRACING
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 99% (+26.6%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 476 resolved cases by this examiner. Grant probability derived from career allow rate.
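
The "With Interview" projection appears to be the base grant probability plus the interview lift, capped just below certainty: 73% + 26.6% = 99.6%, displayed as 99%. That combination rule is an assumption about the tool, not a documented formula:

```python
base_grant_probability = 0.73  # career allow rate
interview_lift = 0.266         # observed lift among interviewed cases

# Assumed additive model with a 99% display cap (not the tool's documented formula).
with_interview = min(base_grant_probability + interview_lift, 0.99)
print(f"{with_interview:.0%}")  # -> 99%
```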
