Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to the amendment filed on December 10, 2025. Claims 1, 9, and 17 have been amended. The amended claim limitations have been fully considered but are not persuasive. The Office action has been updated to reflect the amended claims. Claims 1-20 remain rejected.
Response to Arguments
In response to applicant’s argument that Daniels fails to disclose specifics about the location in which the device is located and that is captured by the device, the structural aspects, and the AR objects: the arguments have been fully considered but are not persuasive. Daniels explicitly discloses these limitations [Daniels: 0036 “At block 210, the MDD acquires real world positioning data using techniques including, but not limited to: GPS, visual imaging, geometric calculations, gyroscopic or motion tracking, point clouds, and other data about a physical location”][Daniels: 0038 “At block 222, the MDD sends on-site environmental information and the associated GPS coordinates to the server”] (Daniels teaches the mobile device sending both position (GPS) and orientation (gyroscopic/motion-tracking) data) [Daniels: 0046 “The off-site devices create an off-site virtual augmented reality (oVAR) version of the location which uses a 3D-map made from data stored in the server's databases, which stores the relevant data generated by the on-site devices.”][Daniels: 0032 “The central server 110 is responsible for storing and transferring the information for creating the augmented reality.”] (teaches the structure and AR aspects). Claims 1-20 remain rejected.
In response to applicant’s argument that the dependent claims are allowable by virtue of the allowability of their independent claims, the argument has been fully considered but is not persuasive. Because the Examiner maintains the rejections of the independent claims, the rejections of the dependent claims are likewise maintained. Claims 1-20 remain rejected.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 9, 14, 16, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Daniels et al. (U.S. Patent Publication No. 2016/0133230).
Regarding claim 1, Daniels discloses a method, comprising: receiving, at a server over a network, a location and orientation of a device within a structure [Daniels: 0036 “At block 210, the MDD acquires real world positioning data using techniques including, but not limited to: GPS, visual imaging, geometric calculations, gyroscopic or motion tracking, point clouds, and other data about a physical location”][Daniels: 0038 “At block 222, the MDD sends on-site environmental information and the associated GPS coordinates to the server”] (Daniels teaches the mobile device sending both position (GPS) and orientation (gyroscopic/motion-tracking) data to the cloud server); retrieving, by the server, a model of the structure associated with the location of the device (interpreted as retrieving from the server a 3D/2D model of the structure) [Daniels: 0046 “The off-site devices create an off-site virtual augmented reality (oVAR) version of the location which uses a 3D-map made from data stored in the server's databases, which stores the relevant data generated by the on-site devices.”][Daniels: 0032 “The central server 110 is responsible for storing and transferring the information for creating the augmented reality.”][Daniels: 0069 “Each environmental data set of an augmented reality event can be associated with a particular real-world location or scene in many ways, which includes but is not be limited to application specific location data, geofencing data and geofencing events.”] (Daniels teaches retrieving a model, based on data from a server, that may be associated with a location); orienting, by the server, the model of the structure to align with the location and orientation of the device within the structure (interpreted as the server re-registering the model so its frame matches the device pose/location) [Daniels: 0070 “Based on the position and geometry data of the real-world location, the AR sharing system can determine the relative locations of AR content in the augmented reality Scene. For example, the system can decide the relative distance between an avatar (an AR content object) and a fiducial marker, (part of the LockAR data). Another example is to have multiple fiducial markers with an ability to cross reference positions, directions and angles to each other, so the system can refine and improve the quality and relative position of location data in relationship to each other”] (Daniels teaches using the server to re-orient the model’s (structure) location/pose in relation to each other); determining, by the server from the location and orientation of the device, a current view of the device (interpreted as the server determining a viewpoint of the device) [Daniels: 0055 “At block 280, the user selects a piece of AR content to view, or a location to view AR content from. At block 282, the OVAR application queries the server for the information needed for display and possibly for interaction with the piece of AR content, or the pieces of AR content visible from the selected location, as well as the background environment”] (the request sent at block 282 contains the device’s chosen location/orientation; the server responds only with data visible from that location, showing the server first determines what the current view is before it can know what information is needed for display); and transmitting, to the device, a portion of the oriented model corresponding to the current view of the device (interpreted as the server sending only the geometry/content that falls inside that view) [Daniels: 0055-0056 “At block 284, the server receives the request from the OVAR application and calculates an intelligent order in which to deliver the data. At block 286, the server streams the information needed to display the piece or pieces of AR content back to the oVAR application in real time (or asynchronously).”] (the phrase “information needed to display” means just the subset that is within the user’s current view), wherein the model of the structure comprises information about structural aspects that are not visible in the current view of the device and that are to be displayed by the device as augmented reality objects [Daniels: 0046 “3D-map made from data stored in the server's databases, which stores the relevant data generated by the on-site devices”] [Daniels: 0032 “The central server 110 is responsible for storing and transferring the information for creating the augmented reality.”] (teaches having stored data (information) about the 3D map (structure) which is intended to be displayed).
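For illustration of the claim 1 flow as mapped above, the following is a minimal sketch (all names, the y-forward axis convention, and the in-memory model database are hypothetical; it is not drawn from Daniels or from applicant’s specification) of a server routine that receives a device pose, retrieves and orients the stored model, determines the current view, and transmits only the portion of the model inside that view:

```python
import numpy as np

def handle_pose_update(model_db, location_id, device_position, device_yaw_deg, fov_deg=60.0):
    """Hypothetical server-side routine: device pose in, visible model portion out."""
    # Retrieve the model of the structure associated with the reported location.
    model_points = np.asarray(model_db[location_id])        # Nx3 model vertices
    # Orient the model to align with the device's location and orientation
    # (translate into the device frame, then rotate by the reported yaw).
    yaw = np.radians(device_yaw_deg)
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                    [np.sin(yaw),  np.cos(yaw), 0.0],
                    [0.0,          0.0,         1.0]])
    aligned = (model_points - np.asarray(device_position)) @ rot.T
    # Determine the current view: keep points in front of the device
    # (y-forward here) and within the horizontal field of view.
    half_fov = np.radians(fov_deg) / 2.0
    in_front = aligned[:, 1] > 0.0
    angles = np.arctan2(aligned[:, 0], aligned[:, 1])
    visible = in_front & (np.abs(angles) < half_fov)
    # Transmit only the portion of the oriented model corresponding to that view.
    return aligned[visible]
```

The culling step is what makes the mapped reading of Daniels’ blocks 284-286 concrete: the server works out what falls within the device’s view before streaming, rather than shipping the whole model.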
Regarding claim 2, Daniels discloses the method of claim 1, wherein receiving the location (interpreted as position) and orientation of the device within the structure comprises receiving, at the server, an image of a location marker captured by the device (interpreted as the server getting a photograph (or bitmap) of a physical marker; the image is taken by the client device and sent upstream) [Daniels: 0087 “As FIG. 4A illustrates, the on-site device sends data, which could include the positions, geometry, and bitmap image data of the background objects of the real-world scene, to the off-site device. The on-site device also sends positions, geometry, and bitmap image data of the other real-world objects it sees, including foreground objects to the off-site device.”][Daniels: 0074 “using computer vision techniques, a 2D fiducial marker can be recognized as an image on a flat plane or defined surface in the real world”] (Daniels teaches that each on-site handset captures and sends bitmap image data to the remote side (server/off-site device). Daniels further clarifies that the system works by recognizing a fiducial location marker in the image), the location marker corresponding to the location [Daniels: 0072 “The system can identify the orientation and distance of the fiducial marker and can determine other positions or object shapes relative to the fiducial marker.”] (Daniels teaches the location marker corresponding to the location).
Regarding claim 9, Daniels discloses a non-transitory computer readable medium (CRM) [Daniels: 0103 “Memory 504”] (Memory 504 is a computer readable medium) comprising instructions that, when executed by at least one processor of an apparatus, cause the apparatus to: detect a location marker within a structure, the location marker associated with a location within the structure [Daniels: 0087 “As FIG. 4A illustrates, the on-site device sends data, which could include the positions, geometry, and bitmap image data of the background objects of the real-world scene, to the off-site device. The on-site device also sends positions, geometry, and bitmap image data of the other real-world objects it sees, including foreground objects to the off-site device.”][Daniels: 0074 “using computer vision techniques, a 2D fiducial marker can be recognized as an image on a flat plane or defined surface in the real world”] (Daniels teaches that each on-site handset captures and sends bitmap image data to the remote side (server/off-site device). Daniels further clarifies that the system works by recognizing a fiducial location marker in the image); transmit, to a remote server, the location marker and an orientation of the apparatus [Daniels: 0038 “At block 222, the MDD sends on-site environmental information and the associated GPS coordinates to the server, which then propagates it to the OSDDS.”][Daniels: 0036 “the MDD acquires real world positioning data using techniques including, but not limited to: GPS, visual imaging, geometric calculations, gyroscopic or motion tracking, point clouds, and other data about a physical location”] (Daniels teaches sending information such as location and orientation to a server); receive, from the remote server, at least a portion of a digital model that is a representation of a portion of the structure that is in view of the apparatus, based on the location marker and the orientation [Daniels: 0055 “At block 284, the server receives the request from the OVAR application and calculates an intelligent order in which to deliver the data.”][Daniels: 0056 “At block 286, the server streams the information needed to display the piece or pieces of AR content back to the oVAR application in real time (or asynchronously).”] (Daniels teaches receiving from a server at least a portion of AR content based on the location marker and orientation); and display, on a display coupled to the apparatus, the portion of the digital model [Daniels: 0058 “At block 296, the server receives information from another device about an interaction that updates AR content that the OVAR application is displaying. At block 298, the server sends the update information to the OVAR application. At block 299, the OVAR application updates the scene based on the received information, and displays the updated Scene.”] (Daniels teaches displaying the AR content), wherein the model of the structure comprises information about structural aspects that are not visible in a current view of the apparatus and that are to be displayed by the apparatus as augmented reality objects [Daniels: 0046 “3D-map made from data stored in the server's databases, which stores the relevant data generated by the on-site devices”] [Daniels: 0032 “The central server 110 is responsible for storing and transferring the information for creating the augmented reality.”] (teaches having stored data (information) about the 3D map (structure) which is intended to be displayed).
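As a counterpart illustration of the claim 9 apparatus-side flow (detect marker, transmit pose, receive the in-view model portion, display), a minimal client sketch follows; the endpoint URL and JSON field names are placeholders and are not taken from any cited reference:

```python
import requests  # widely used HTTP client; the endpoint below is a placeholder

SERVER = "https://example.com/ar"  # hypothetical server URL

def report_marker_and_fetch_view(marker_id, yaw_deg, pitch_deg):
    # Transmit the detected location marker and the apparatus orientation.
    resp = requests.post(f"{SERVER}/pose", json={
        "marker_id": marker_id,
        "orientation": {"yaw": yaw_deg, "pitch": pitch_deg},
    })
    resp.raise_for_status()
    # Receive the portion of the digital model in view of the apparatus,
    # including structural aspects to be displayed as AR objects.
    view = resp.json()
    for obj in view.get("ar_objects", []):
        print("render:", obj["name"], "at", obj["position"])  # stand-in for a display call
    return view
```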
Regarding claim 14, Daniels discloses the CRM of claim 9, wherein to detect and transmit the location marker, the instructions are to further cause the apparatus to [Daniels: 0103 “The computing device 500 can further include a memory 504, a network adapter 510 and a storage adapter 514, all interconnected by an interconnect 508.”][Daniels: 0105 “for storing processor executable code and data structures.”] (teaches the CRM holding the code that drives the following capture and transmit steps): capture, with a camera coupled to the apparatus, an image of one or more objects located within the structure [Daniels: 0060 “The augmented reality view of the on-site devices includes AR content overlaid on top of a live image feed from the device's camera (or other image/video capturing component).”] (teaches the camera capturing images or video); and transmit the image of the one or more objects to the remote server (interpreted as an off-site device) [Daniels: 0087 “As FIG. 4A illustrates, the on-site device sends data, which could include the positions, geometry, and bitmap image data of the background objects of the real-world scene, to the off-site device.”] (teaches sending the image data to an off-site device, which can be a server).
Regarding claim 16, Daniels discloses the CRM of claim 9, wherein the apparatus is a mobile device [Daniels: 0034 “The computer device 130 or 140 can be a desktop computer, a laptop computer, a tablet computer, an automobile computer, a game console, a Smartphone”] (Daniels teaches the device may be a smartphone).
Claim 17 is a non-transitory computer readable medium format claim corresponding to method claim 1 above. Daniels further discloses the additional limitation of claim 17 of a processor (Daniels: 114; Fig. 1). Thus, claim 17 is rejected for the same reasons as claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 5, 7, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al. (U.S. Patent Publication No. 2016/0133230), in view of Carre et al. (U.S. Patent No. 10,127,724 B2).
Regarding claim 3, Daniels discloses the method of claim 2, including retrieving, by the server, the location [Daniels: 0070 “The AR sharing system then loads the LockAR data corresponding to the real-world location where the user is situated.”] (Daniels discloses the server retrieving the location), but fails to explicitly disclose further comprising: extracting, from the image of the location marker, a unique identifier that is associated with the location.
However, Carre discloses further comprising: extracting, from the image of the location marker, a unique identifier (interpreted as a QR code) that is associated with the location (Carre: Col. 5, Lines 27-29 “The user aims the camera of the device 101 at a QR code 110 to capture and scan the QR code 110 within the targeting advice area 130.”) (Carre teaches extracting a QR code from an image captured by a camera).
Daniels and Carre are both considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Carre’s teachings of extracting QR codes. Such a combination would provide the benefit of eliminating the need for additional database matching.
Regarding claim 5, Daniels discloses the method of claim 1, wherein receiving the location and orientation of the device within the structure comprises [Daniels: 0036 “At block 210, the MDD acquires real world positioning data using techniques including, but not limited to: GPS, visual imaging, geometric calculations, gyroscopic or motion tracking, point clouds, and other data about a physical location”][Daniels: 0038 “At block 222, the MDD sends on-site environmental information and the associated GPS coordinates to the server”] (Daniels teaches the mobile device sending both position (GPS) and orientation (gyroscopic/motion-tracking) data to the cloud server), and retrieving, by the server, the location [Daniels: 0070 “The application of the AR sharing system can use GPS and other triangulation technologies to generally identify the location of the user. The AR sharing system then loads the LockAR data corresponding to the real-world location where the user is situated.”] (Daniels teaches the server retrieving the location), but fails to explicitly disclose receiving, at the server, a unique identifier that is associated with the location.
However, Carre discloses receiving, at the server, a unique identifier (interpreted as a QR code) that is associated with the location (Carre: Col. 13, Lines 23-24 “The application on the device performs or receives a scan of a QR code having a URL or a shared URL redirect (401).”) (Carre teaches using a QR code, which is a unique identifier).
Daniels and Carre are both considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Carre’s teachings of receiving scans of QR codes. Such a combination would provide the benefit of speeding up server lookup (reduced latency).
Regarding claim 7, Daniels discloses the method of claim 1, wherein receiving the location and orientation of the device within the structure [Daniels: 0036 “At block 210, the MDD acquires real world positioning data using techniques including, but not limited to: GPS, visual imaging, geometric calculations, gyroscopic or motion tracking, point clouds, and other data about a physical location”][Daniels: 0038 “At block 222, the MDD sends on-site environmental information and the associated GPS coordinates to the server”] (Daniels teaches the mobile device sending both position (GPS) and orientation (gyroscopic/motion-tracking) data to the cloud server), but fails to explicitly disclose that the receiving comprises receiving, at the server, a unique identifier corresponding to the location.
However, Carre discloses receiving, at the server, a unique identifier corresponding to the location (Carre: Col. 13, Lines 23-24 “The application on the device performs or receives a scan of a QR code having a URL or a shared URL redirect (401).”) (Carre teaches using a QR code, which is a unique identifier).
Daniels and Carre are both considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Carre’s teachings of receiving unique QR codes corresponding to the location. Such a combination would provide the benefit of speeding up server location lookup (reduced latency).
Regarding claim 10, Daniels discloses the CRM of claim 9, but fails to explicitly disclose wherein to detect and transmit the location marker, the instructions are to further cause the apparatus to: capture, with a camera coupled to the apparatus, an image of the location marker; extract, from the image of the location marker, a unique identifier associated with the location within the structure; and transmit the unique identifier to the remote server.
However, Carre discloses wherein to detect and transmit the location marker, the instructions are to further cause the apparatus to: capture, with a camera coupled to the apparatus, an image of the location marker (Carre: Col. 5, Lines 27-29 “The user aims the camera of the device 101 at a QR code 110 to capture and scan the QR code 110 within the targeting advice area 130.”) (Carre teaches capturing an image of a QR code using a camera); extract, from the image of the location marker, a unique identifier associated with the location within the structure (Carre: Col. 13, Lines 23-30 “The application on the device performs or receives a scan of a QR code having a URL or a shared URL redirect (401). The application determines whether the URL is from a third party or a proprietary one (402). If the URL is from a third party, the application redirects the URL to a web browser of the device (403). If the URL is a proprietary one, the application extracts the video ID and associated data embedded in the URL (404)”) (Carre teaches extracting from the image a URL, which is a unique identifier associated with the data of the QR code); and transmit the unique identifier to the remote server (Carre: Col. 13, Lines 31-33 “After the video ID and associated data embedded in the URL is extracted, the application makes a request to a server (430).”) (Carre teaches sending the data to a server).
Daniels and Carre are both considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Carre’s teachings of taking a photo, extracting the data from the QR code, and sending the data to a server. Such a combination would provide the benefits of embedding a unique ID and speeding up server lookup (reduced latency).
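To make the claim 10 capture-extract-transmit sequence concrete, the following is a minimal sketch using OpenCV’s QR detector (a real API); the server endpoint and payload key are hypothetical and are not taken from Carre:

```python
import cv2        # OpenCV's QRCodeDetector performs the QR decode
import requests   # used here to send the identifier; the endpoint is a placeholder

def scan_and_report(image_path, server_url):
    # Capture step stand-in: load a camera frame containing the location marker.
    frame = cv2.imread(image_path)
    # Extract the unique identifier (the QR payload) from the image of the marker.
    payload, points, _ = cv2.QRCodeDetector().detectAndDecode(frame)
    if not payload:
        return None  # no QR code found in the frame
    # Transmit the unique identifier to the remote server.
    requests.post(server_url, json={"location_id": payload})
    return payload
```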
Claims 4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al. (U.S. Patent Publication No. 2016/0133230), in view of Carre et al. (U.S. Patent No. 10,127,724 B2), and further in view of Taylor et al. (U.S. Patent Publication No. 2014/0340423).
Regarding claim 4, Daniels and Carre disclose the method of claim 3, but fail to explicitly disclose wherein the image of the location marker is an image of a QR code.
However, Taylor discloses wherein the image (interpreted as printed content) of the location marker is an image of a QR code [Taylor: 0004 “known as a QR code, which was tacked onto posters or other printed content”] (Taylor clearly discloses a QR code on “printed content”; an image may be printed content).
Daniels, Carre, and Taylor are considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels and Carre to incorporate Taylor’s teachings of adding a QR code to printed content. Such a combination would provide the benefit of improved reliability and broader deployment options.
Claim 11 is a non-transitory computer readable medium format claim corresponding to method claim 4 above. Thus, claim 11 is rejected for the same reasons as claim 4.
Claims 6, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al. (U.S. Patent Publication No. 2016/0133230), in view of Kamaraju (U.S. Patent Publication No. 2022/0334789 A1).
Regarding claim 6, Daniels discloses the method of claim 1, wherein receiving the location (interpreted as position) and orientation of the device within the structure comprises receiving [Daniels: 0087 “As FIG. 4A illustrates, the on-site device sends data, which could include the positions, geometry, and bitmap image data of the background objects of the real-world scene, to the off-site device. The on-site device also sends positions, geometry, and bitmap image data of the other real-world objects it sees, including foreground objects to the off-site device.”] (Daniels teaches that each on-site handset captures and sends bitmap image data, including position, to the remote side (server/off-site device)), but fails to explicitly disclose receiving, at the server, camera pose data and camera intrinsics about a camera coupled to the device.
However, Kamaraju discloses receiving, at the server, camera pose data and camera intrinsics about a camera coupled to the device [Kamaraju: 0020 “such as some or all of camera position and orientation (pose), camera transform(s), feature points, camera intrinsics (e.g., focal length, image sensor format, and principal point), sequence number, etc., along with video and audio tracks to a remote user”] (Kamaraju clearly teaches camera pose data and camera intrinsics).
Daniels and Kamaraju are both considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Kamaraju’s teachings of camera pose data and intrinsics. Such a combination would provide the benefit of improved overlay accuracy.
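As an illustration of the kind of per-frame metadata Kamaraju’s paragraph 0020 enumerates, a minimal sketch follows; the field names and the square-pixel assumption (fx = fy) are illustrative and not taken from Kamaraju:

```python
from dataclasses import dataclass

@dataclass
class CameraFrameMeta:
    position: tuple          # camera position (translation part of the pose)
    orientation: tuple       # pose rotation, e.g. as a quaternion (w, x, y, z)
    focal_length_px: float   # intrinsics: focal length in pixels
    principal_point: tuple   # intrinsics: principal point (cx, cy)
    sensor_size_px: tuple    # intrinsics: image sensor format (width, height)
    sequence_number: int     # orders frames within the stream

def intrinsics_matrix(m: CameraFrameMeta):
    # Builds the 3x3 pinhole intrinsics matrix K from the transmitted values,
    # assuming square pixels (fx = fy).
    fx = fy = m.focal_length_px
    cx, cy = m.principal_point
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]
```

With pose and K available server-side, overlays can be projected into the exact camera view, which is the accuracy benefit relied on in the combination above.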
Regarding claim 15, Daniels discloses the CRM of claim 9, wherein the instructions are to further cause the apparatus to: receive, from the remote server, an updated portion of the digital model in view of the apparatus based on the updated location and updated orientation [Daniels: 0056 “At block 286, the server streams the information needed to display the piece or pieces of AR content back to the oVAR application in real time (or asynchronously). At block 288, the OVAR application renders the AR content and background environment based on the information it receives, and updating the rendering as the OVAR application continues to receive information.”] (Daniels teaches the server streaming portions of the content and updating the content based on the information it receives); and display, on the display, the updated portion of the digital model [Daniels: 0056 “At block 286, the server streams the information needed to display the piece or pieces of AR content”], but fails to disclose: transmit, to the remote server, an updated location and an updated orientation.
However, Kamaraju discloses: transmit, to the remote server, an updated location and an updated orientation [Kamaraju: 0065 “The AR presentation may be updated as out-of-band data is received.”] (teaches updating the information as updated data is received).
Daniels and Kamaraju are both considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Kamaraju’s teachings of incorporating updated locations and orientations. The motivation for such a combination would be the benefit of increased accuracy.
Regarding claim 18, Daniels discloses the CRM of claim 17, including transmitting, to the remote device over the network, an updated portion of the oriented model corresponding to an updated view of the remote device, the updated view determined from the updated location and updated orientation [Daniels: 0056 “At block 286, the server streams the information needed to display the piece or pieces of AR content back to the oVAR application in real time (or asynchronously). At block 288, the OVAR application renders the AR content and background environment based on the information it receives, and updating the rendering as the OVAR application continues to receive information.”], but fails to disclose wherein the instructions are to further cause the apparatus to: receive, over the network, an updated location and updated orientation of the remote device within the structure.
However, Kamaraju discloses wherein the instructions are to further cause the apparatus to: receive, over the network, an updated location and updated orientation of the remote device within the structure [Kamaraju: 0022 “Network connection 103 may carry a video stream generated by consumer device 102, along with essential meta-data provided in-band with the video stream, and network connection 105 may, if needed, carry other nonessential data and/or additional spatial data associated with the video stream from consumer device 102.”][Kamaraju: 0065 “The AR presentation may be updated as out-of-band data is received”] (teaches the ability to receive updated data over a network).
Daniels and Kamaraju are both considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Kamaraju’s teachings of receiving, over the network, updated locations and orientations. The motivation for such a combination would be the benefits of increased accuracy and reduced latency.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al. (U.S. Patent Publication No. 2016/0133230), in view of Lerman (U.S. Patent No. 10,949,669 B2).
Regarding claim 8, Daniels discloses the method of claim 1, wherein receiving the location and orientation of the device within the structure comprises [Daniels: 0070 “Based on the position and geometry data of the real-world location, the AR sharing system can determine the relative locations of AR content in the augmented reality Scene. For example, the system can decide the relative distance between an avatar (an AR content object) and a fiducial marker, (part of the LockAR data). Another example is to have multiple fiducial markers with an ability to cross reference positions, directions and angles to each other, so the system can refine and improve the quality and relative position of location data in relationship to each other”] (Daniels teaches using the server to re-orient the model’s (structure) location/pose in relation to each other): receiving, from the device, an image of the structure [Daniels: 0087 “As FIG. 4A illustrates, the on-site device sends data, which could include the positions, geometry, and bitmap image data of the background objects of the real-world scene, to the off-site device. The on-site device also sends positions, geometry, and bitmap image data of the other real-world objects it sees, including foreground objects to the off-site device.”] (Daniels teaches one device receiving a bitmap image from another device), but fails to explicitly disclose detecting, by the server, one or more objects from the image of the structure; and determining, by the server, the location from the one or more detected objects.
However, Lerman discloses detecting, by the server, one or more objects from the image of the structure (Lerman: Col. 9, Lines 48-51 “using the high-precision geolocation, the system processor can determine that the live image content contains a skyscraper as a subject, and can determine the perspective view of the skyscraper as it is received as live image content”) (Lerman clearly teaches detecting objects from an image); and determining, by the server, the location from the one or more detected objects (Lerman: Col. 10, Lines 27-29 “the system processor determines a high-precision geolocation of the user device, based on the matching one or more image”) (Lerman teaches determining, with high precision, the location based on an object/structure from an image).
Daniels and Lerman are both considered to be analogous to the claimed invention because they are in the same field of augmented reality. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Lerman’s teachings of detecting an object from an image. Such a combination would provide the benefit of better pose accuracy, yielding crisper, lower-latency overlays.
Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al. (U.S. Patent Publication No. 2016/0133230), in view of Mendelson (U.S. Patent No. 9,204,257 B1).
Regarding claim 12, Daniels discloses the CRM of claim 9, but fails to disclose wherein to detect and transmit the location marker, the instructions are to further cause the apparatus to: detect a radiofrequency beacon; extract, from the radiofrequency beacon, a unique identifier associated with the location within the structure; and transmit the unique identifier to the remote server. However, Mendelson discloses detecting a radiofrequency beacon (Mendelson: Col. 31, Lines 60-64 “The scanner detection part of the application whenever a mobile phone or mobile device with a Bluetooth switched the Bluetooth on and loaded with the application, it will periodically scan the area for Bluetooth beacons; proximity to a tag/beacon”) (teaches detecting radiofrequency beacons); extracting, from the radiofrequency beacon, a unique identifier associated with the location within the structure (Mendelson: Col. 31, Lines 64-65 “the tags/beacons ID as well with signal strength will determine the ‘user location’”) (teaches the ID determining a location); and transmitting the unique identifier to the remote server (Mendelson: Col. 32, Lines 13-17 “the unique Bluetooth ID/key set of the device (not a telephone number or name...) with association to the beacons ID which the device was near and aggregate resultant data to be server for habit and/or preference for triggering content delivery”) (teaches sending data to a server).
Daniels and Mendelson are both considered to be analogous to the claimed invention because they are in the same field of server-assisted systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Mendelson’s teachings of utilizing radiofrequency beacons. The motivation for such a combination is providing an instantly readable unique ID.
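For illustration of the claim 12 detect-extract-transmit sequence, a minimal sketch using the bleak Bluetooth LE library (a real scanning API) appears below; treating the advertised address as the beacon’s unique identifier, and the server endpoint, are illustrative assumptions rather than Mendelson’s disclosure:

```python
import asyncio
import requests                 # endpoint below is a placeholder
from bleak import BleakScanner  # real BLE scanning library

async def detect_beacons_and_report(server_url):
    # Detect radiofrequency (Bluetooth) beacons by scanning the area.
    devices = await BleakScanner.discover(timeout=5.0)
    for dev in devices:
        # Extract a unique identifier from the beacon advertisement.
        beacon_id = dev.address
        # Transmit the unique identifier to the remote server for location lookup.
        requests.post(server_url, json={"beacon_id": beacon_id})

# asyncio.run(detect_beacons_and_report("https://example.com/ar/beacon"))
```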
Regarding claim 13, Daniels discloses the CRM of claim 12, but fails to disclose wherein the radiofrequency beacon is an RFID tag, Bluetooth beacon, or WiFi hotspot. However, Mendelson discloses the radiofrequency beacon being a Bluetooth beacon (Mendelson: Col. 31, Lines 60-63 “The scanner detection part of the application whenever a mobile phone or mobile device with a Bluetooth switched the Bluetooth on and loaded with the application, it will periodically scan the area for Bluetooth beacons”) (Mendelson teaches the radiofrequency beacon may be a Bluetooth beacon).
Daniels and Mendelson are both considered to be analogous to the claimed invention because they are in the same field of server-assisted systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels to incorporate Mendelson’s teachings of Bluetooth beacons. The motivation for such a combination is reduced latency.
Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al. (U.S. Patent Publication No. 2016/0133230), in view of Kamaraju (U.S. Patent Publication No. 2022/0334789 A1) as applied to claim 18 above, and further in view of Gildfind et al. (U.S. Patent Publication No. 2015/0153181).
Regarding claim 19, Daniels and Kamaraju disclose the CRM of claim 18, but fail to explicitly disclose wherein the location of the remote device comprises a unique identifier extracted from a physical marker, and the instructions are to further cause the apparatus to determine a location of the remote device within the structure based on the unique identifier.
However, Gildfind discloses wherein the location of the remote device comprises a unique identifier extracted from a physical marker [Gildfind: 0003 “indoor navigation services. The method includes identifying a position marker using a client device, wherein the position marker comprises a marker identifier that uniquely identifies the position marker within a set of marker data, and wherein the set of marker data comprises location information for a plurality of position markers, receiving a set of location information associated with the marker identifier, wherein the set of marker information comprises a first location of the position marker, and identifying a second location of the client device using the first location of the position marker received via the set of marker information.”][Gildfind: 0018 “These position markers may be physical objects”] (Gildfind teaches identifiers that uniquely identify position markers, and that the markers may be physical objects), and the instructions are to further cause the apparatus to determine a location of the remote device within the structure based on the unique identifier [Gildfind: 0014 “An identifier describing the position marker may be sent to a remote server, and the remote server may provide a precise location to the client device.”].
Daniels, Kamaraju, and Gildfind are considered to be analogous to the claimed invention because they are in the same field of AR-based positioning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels and Kamaraju to incorporate Gildfind’s teachings of extracting identifiers from physical markers. The motivation for such a combination is to increase location accuracy.
Regarding claim 20, Daniels discloses the CRM of claim 19, but fails to explicitly disclose wherein the instructions are to further cause the apparatus to: receive, from the remote device, an image of the physical marker; and extract, from the image of the physical marker, the unique identifier. However, Gildfind discloses receiving, from the remote device, an image of the physical marker [Gildfind: 0051 “The QR code 216 provides a position marker that is visible to a camera located on the client device 202.”][Gildfind: 0058 “the client device may identify a position marker in a variety of manners, including receiving the identifier via a RFID transmission.”] (teaches receiving, from a camera on the remote device, an image of the marker); and extracting, from the image of the physical marker, the unique identifier [Gildfind: 0051 “A number, string, or other data embedded within the QR code 216 may be used as a marker identifier in the same manner as described with respect to the RFID security system 214.”][Gildfind: 0052 “The QR code 216 may contain an embedded numerical identifier that the client device may report to a remote server to receive the location of the QR code 216.”] (teaches extracting the identifier from the QR image).
Daniels, Kamaraju, and Gildfind are considered to be analogous to the claimed invention because they are in the same field of AR-based positioning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Daniels and Kamaraju to incorporate Gildfind’s teachings of receiving, from a remote device, images of physical markers. The motivation for such a combination is to increase location accuracy.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED TAHA whose telephone number is (571) 272-6805. The examiner can normally be reached 8:30 am - 5 pm, Mon - Fri.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XIAO WU, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHMED TAHA/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613