DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,967,147 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because it is clear that all the elements of application claims 1-20 are to be found in patent claims 1-20 (as application claims 1-20 fully encompass patent claims 1-20). The difference between application claims 1-20 and patent claims 1-20 lies in the fact that the patent claims include many more elements and are thus much more specific. Thus the invention of claims 1-20 of the patent is in effect a “species” of the “generic” invention of application claims 1-20. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1-20 are anticipated by claims 1-20 of the patent, they are not patentably distinct from claims 1-20 of the patent.
11,967,147 B2
1. A method comprising: detecting, by a processing system including at least one processor, a first location and a first orientation of an augmented reality endpoint device of a user at a venue; identifying, by the processing system, an entryway of a first enclosed space of the venue that is within a first field-of-view of the augmented reality endpoint device in accordance with the first location and the first orientation of the augmented reality endpoint device; detecting, by the processing system, a location of at least one object within the first enclosed space, wherein the location of the at least one object is detected via a wireless sensing of at least one wireless tag of the at least one object; modifying, by the processing system, at least one prior captured image of the first enclosed space to generate a first imagery of the first enclosed space that depicts the at least one object at the location that is detected, wherein the modifying comprises: extracting visual data of the at least one object from the at least one prior captured image; infilling, via a machine learning-based process, a portion of the at least one prior captured image from which the visual data of the at least one object is extracted to generate an infilled image; and inserting imagery of the at least one object into the infilled image in accordance with the location that is detected, to generate the first imagery of the first enclosed space, wherein the imagery of the at least one object is based on the visual data of the at least one object that is extracted; and presenting, by the processing system via the augmented reality endpoint device, first visual information of the first enclosed space, wherein the first visual information comprises the first imagery of the first enclosed space and wherein the first visual information is presented within the first field-of-view.
18/642,508
1. A method comprising: detecting, by a processing system including at least one processor, a first location and a first orientation of an augmented reality endpoint device of a user at a venue; identifying, by the processing system, an entryway of a first enclosed space of the venue that is within a first field-of-view of the augmented reality endpoint device in accordance with the first location and the first orientation of the augmented reality endpoint device; detecting, by the processing system, a location of at least one object within the first enclosed space, wherein the location of the at least one object is detected via a wireless sensing of at least one wireless tag of the at least one object; modifying, by the processing system, at least one prior captured image of the first enclosed space to generate a first imagery of the first enclosed space that depicts the at least one object at the location that is detected, wherein the modifying comprises: inserting an imagery of the at least one object into the at least one prior captured image in accordance with the location that is detected, to generate the first imagery of the first enclosed space, wherein the imagery of the at least one object is based on visual data of the at least one object retrieved from a repository; and presenting, by the processing system via the augmented reality endpoint device, first visual information of the first enclosed space, wherein the first visual information comprises the first imagery of the first enclosed space and wherein the first visual information is presented within the first field-of-view.
11,967,147 B2
2. The method of claim 1, wherein the user has a reservation with the venue for access to a first type of enclosed space, wherein the first enclosed space is of the first type of enclosed space.
3. The method of claim 2, wherein the identifying includes identifying that the first enclosed space is of the first type of enclosed space.
4. The method of claim 3, wherein the presenting the first visual information of the first enclosed space is performed in response to the identifying.
18/642,508
2. The method of claim 1, wherein the user has a reservation with the venue for access to a first type of enclosed space, wherein the first enclosed space is of the first type of enclosed space.
3. The method of claim 2, wherein the identifying includes identifying that the first enclosed space is of the first type of enclosed space.
4. The method of claim 3, wherein the presenting the first visual information of the first enclosed space is performed in response to the identifying.
11,967,147 B2
5. The method of claim 1, wherein the processing system is in communication with the augmented reality endpoint device, the method further comprising: receiving an input from the user selecting to access the first enclosed space; and providing, to the augmented reality endpoint device or a mobile computing device associated with the augmented reality endpoint device, an access credential for accessing the first enclosed space.
6. The method of claim 1, wherein the processing system comprises the augmented reality endpoint device, the method further comprising: receiving an input from the user selecting to access the first enclosed space; and providing the input to at least one server associated with the venue.
7. The method of claim 6, further comprising: obtaining, from the at least one server, an access credential for accessing the first enclosed space.
8. The method of claim 1, wherein the first visual information further comprises alphanumeric data regarding the first enclosed space.
9. The method of claim 8, wherein the alphanumeric data comprises at least one of: a cost for utilization of the first enclosed space; data describing at least one feature of the first enclosed space; or data describing additional information regarding the first enclosed space that can be accessed via the augmented reality endpoint device.
10. The method of claim 9, wherein the alphanumeric data further includes data describing how to access the additional information regarding the first enclosed space.
11. The method of claim 1, wherein the first imagery of the first enclosed space is from a perspective of the entryway of the first enclosed space facing into the first enclosed space.
12. The method of claim 1, further comprising: detecting a second location and a second orientation of the augmented reality endpoint device of the user at the venue; identifying that the entryway of the first enclosed space of the venue is beyond a threshold distance from the augmented reality endpoint device and is within a second field-of-view of the augmented reality endpoint device in accordance with the second location and the second orientation of the augmented reality endpoint device; and presenting, via the augmented reality endpoint device, second visual information of the first enclosed space, wherein the second visual information is presented within the second field-of-view as an overlay on or proximate to the entryway.
13. The method of claim 12, further comprising: presenting third visual information regarding at least a second enclosed space of the venue.
14. The method of claim 13, wherein an entryway of the at least the second enclosed space is within the second field-of-view of the augmented reality endpoint device.
15. The method of claim 1, wherein the first location and the first orientation of the augmented reality endpoint device are obtained from the augmented reality endpoint device.
16. The method of claim 1, wherein the entryway of the first enclosed space is identified within the first field-of-view of the augmented reality endpoint device in accordance with at least one augmented reality anchor point of the venue.
18/642,508
5. The method of claim 1, wherein the processing system is in communication with the augmented reality endpoint device, the method further comprising: receiving an input from the user selecting to access the first enclosed space; and providing, to the augmented reality endpoint device or a mobile computing device associated with the augmented reality endpoint device, an access credential for accessing the first enclosed space.
6. The method of claim 1, wherein the processing system comprises the augmented reality endpoint device, the method further comprising: receiving an input from the user selecting to access the first enclosed space; and providing the input to at least one server associated with the venue.
7. The method of claim 6, further comprising: obtaining, from the at least one server, an access credential for accessing the first enclosed space.
8. The method of claim 1, wherein the first visual information further comprises alphanumeric data regarding the first enclosed space.
9. The method of claim 8, wherein the alphanumeric data comprises at least one of: a cost for utilization of the first enclosed space; data describing at least one feature of the first enclosed space; or data describing additional information regarding the first enclosed space that can be accessed via the augmented reality endpoint device.
10. The method of claim 9, wherein the alphanumeric data further includes data describing how to access the additional information regarding the first enclosed space.
11. The method of claim 1, wherein the first imagery of the first enclosed space is from a perspective of the entryway of the first enclosed space facing into the first enclosed space.
12. The method of claim 1, further comprising: detecting a second location and a second orientation of the augmented reality endpoint device of the user at the venue; identifying that the entryway of the first enclosed space of the venue is beyond a threshold distance from the augmented reality endpoint device and is within a second field-of-view of the augmented reality endpoint device in accordance with the second location and the second orientation of the augmented reality endpoint device; and presenting, via the augmented reality endpoint device, second visual information of the first enclosed space, wherein the second visual information is presented within the second field-of-view as an overlay on or proximate to the entryway.
13. The method of claim 12, further comprising: presenting third visual information regarding at least a second enclosed space of the venue.
14. The method of claim 13, wherein an entryway of the at least the second enclosed space is within the second field-of-view of the augmented reality endpoint device.
15. The method of claim 1, wherein the first location and the first orientation of the augmented reality endpoint device are obtained from the augmented reality endpoint device.
16. The method of claim 1, wherein the entryway of the first enclosed space is identified within the first field-of-view of the augmented reality endpoint device in accordance with at least one augmented reality anchor point of the venue.
11,967,147 B2
17. A non-transitory computer-readable medium storing instructions which, when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising: detecting a first location and a first orientation of an augmented reality endpoint device of a user at a venue; identifying an entryway of a first enclosed space of the venue that is within a first field-of-view of the augmented reality endpoint device in accordance with the first location and the first orientation of the augmented reality endpoint device; detecting a location of at least one object within the first enclosed space, wherein the location of the at least one object is detected via a wireless sensing of at least one wireless tag of the at least one object; modifying at least one prior captured image of the first enclosed space to generate a first imagery of the first enclosed space that depicts the at least one object at the location that is detected, wherein the modifying comprises: extracting visual data of the at least one object from the at least one prior captured image; infilling, via a machine learning-based process, a portion of the at least one prior captured image from which the visual data of the at least one object is extracted to generate an infilled image; and inserting imagery of the at least one object into the infilled image in accordance with the location that is detected, to generate the first imagery of the first enclosed space, wherein the imagery of the at least one object is based on the visual data of the at least one object that is extracted; and presenting, via the augmented reality endpoint device, first visual information of the first enclosed space, wherein the first visual information comprises the first imagery of the first enclosed space and wherein the first visual information is presented within the first field-of-view.
18/642,508
17. A non-transitory computer-readable medium storing instructions which, when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising: detecting a first location and a first orientation of an augmented reality endpoint device of a user at a venue; identifying an entryway of a first enclosed space of the venue that is within a first field-of-view of the augmented reality endpoint device in accordance with the first location and the first orientation of the augmented reality endpoint device; detecting a location of at least one object within the first enclosed space, wherein the location of the at least one object is detected via a wireless sensing of at least one wireless tag of the at least one object; modifying at least one prior captured image of the first enclosed space to generate a first imagery of the first enclosed space that depicts the at least one object at the location that is detected, wherein the modifying comprises: inserting an imagery of the at least one object into the at least one prior captured image in accordance with the location that is detected, to generate the first imagery of the first enclosed space, wherein the imagery of the at least one object is based on visual data of the at least one object retrieved from a repository; and presenting, via the augmented reality endpoint device, first visual information of the first enclosed space, wherein the first visual information comprises the first imagery of the first enclosed space and wherein the first visual information is presented within the first field-of-view.
11,967,147 B2
18. An apparatus comprising: a processing system including at least one processor; and a computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations, the operations comprising: detecting a first location and a first orientation of an augmented reality endpoint device of a user at a venue; identifying an entryway of a first enclosed space of the venue that is within a first field-of-view of the augmented reality endpoint device in accordance with the first location and the first orientation of the augmented reality endpoint device; detecting a location of at least one object within the first enclosed space, wherein the location of the at least one object is detected via a wireless sensing of at least one wireless tag of the at least one object; modifying at least one prior captured image of the first enclosed space to generate a first imagery of the first enclosed space that depicts the at least one object at the location that is detected, wherein the modifying comprises: extracting visual data of the at least one object from the at least one prior captured image; infilling, via a machine learning-based process, a portion of the at least one prior captured image from which the visual data of the at least one object is extracted to generate an infilled image; and inserting imagery of the at least one object into the infilled image in accordance with the location that is detected, to generate the first imagery of the first enclosed space, wherein the imagery of the at least one object is based on the visual data of the at least one object that is extracted; and presenting, via the augmented reality endpoint device, first visual information of the first enclosed space, wherein the first visual information comprises the first imagery of the first enclosed space and wherein the first visual information is presented within the first field-of-view.
19. The apparatus of claim 18, wherein the user has a reservation with the venue for access to a first type of enclosed space, wherein the first enclosed space is of the first type of enclosed space.
20. The apparatus of claim 19, wherein the identifying includes identifying that the first enclosed space is of the first type of enclosed space.
18/642,508
18. An apparatus comprising: a processing system including at least one processor; and a computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations, the operations comprising: detecting a first location and a first orientation of an augmented reality endpoint device of a user at a venue; identifying an entryway of a first enclosed space of the venue that is within a first field-of-view of the augmented reality endpoint device in accordance with the first location and the first orientation of the augmented reality endpoint device; detecting a location of at least one object within the first enclosed space, wherein the location of the at least one object is detected via a wireless sensing of at least one wireless tag of the at least one object; modifying at least one prior captured image of the first enclosed space to generate a first imagery of the first enclosed space that depicts the at least one object at the location that is detected, wherein the modifying comprises: inserting an imagery of the at least one object into the at least one prior captured image in accordance with the location that is detected, to generate the first imagery of the first enclosed space, wherein the imagery of the at least one object is based on visual data of the at least one object retrieved from a repository; and presenting, via the augmented reality endpoint device, first visual information of the first enclosed space, wherein the first visual information comprises the first imagery of the first enclosed space and wherein the first visual information is presented within the first field-of-view.
19. The apparatus of claim 18, wherein the user has a reservation with the venue for access to a first type of enclosed space, wherein the first enclosed space is of the first type of enclosed space.
20. The apparatus of claim 19, wherein the identifying includes identifying that the first enclosed space is of the first type of enclosed space.
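The only substantive difference between the independent claims in the chart above appears in the "modifying" limitation: the patent recites an extract/infill/insert pipeline, while the application recites only insertion of repository-sourced imagery. The following sketch is purely illustrative of that contrast; no function or variable name below comes from either specification, and images are modeled as simple 2-D lists of pixel values.

```python
def insert_object(image, obj, loc):
    """Composite object imagery into a copy of a prior captured image at the detected location."""
    out = [row[:] for row in image]
    r0, c0 = loc
    for dr, row in enumerate(obj):
        for dc, px in enumerate(row):
            out[r0 + dr][c0 + dc] = px
    return out

def modify_per_application(prior_image, repository, tag_id, detected_loc):
    """Application claim 1: insert imagery whose visual data is retrieved from a repository."""
    obj_visual = repository[tag_id]  # visual data retrieved from a repository
    return insert_object(prior_image, obj_visual, detected_loc)

def modify_per_patent(prior_image, prior_loc, obj_size, detected_loc, infill):
    """Patent claim 1: extract the object's visual data from the prior image,
    infill the vacated region (the claim recites a machine learning-based
    process; `infill` stands in for that model), then insert the extracted
    imagery at the newly detected location."""
    r0, c0 = prior_loc
    h, w = obj_size
    obj_visual = [row[c0:c0 + w] for row in prior_image[r0:r0 + h]]  # extract
    infilled = infill(prior_image, prior_loc, obj_size)              # infill
    return insert_object(infilled, obj_visual, detected_loc)         # insert
```

As the sketch makes plain, every step of `modify_per_application` is present within `modify_per_patent`; the extraction and infilling limitations are what make the patent claim the narrower "species."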
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cronin et al., U.S. Patent Publication Number 2018/0240274 A1, in view of Bostick et al., U.S. Patent Publication Number 2018/0089869 A1.
Regarding claim 1, Cronin discloses a method comprising: an augmented reality endpoint device of a user at a venue (paragraph 0015, computer system 101 can be implemented as or incorporated into various devices; a global positioning satellite (GPS) device; augmented reality system; paragraph 0028, guest physically present at the venue; figure 3, 308, headset shown as being a pair of glasses); identifying, by the processing system, an entryway of a first enclosed space of the venue that is within a first field-of-view of the augmented reality endpoint device (paragraph 0063, detecting an opening of a door of the room); detecting, by the processing system, a location of at least one object within the first enclosed space, wherein the location of the at least one object is detected via a wireless sensing of at least one wireless tag of the at least one object (paragraph 0061, sensors 310, 312, 314 and 316 may be disposed in the hotel room individually or in combination to detect, determine, or measure at least one of the position and dimension of the object within the hotel room in real-time; figure 3); modifying, by the processing system, at least one prior captured image of the first enclosed space to generate a first imagery of the first enclosed space that depicts the at least one object at the location that is detected (paragraph 0095, modifying the virtual reality projected on the at least one object), wherein the imagery of the at least one object is based on visual data of the at least one object retrieved from a database (paragraph 0043, a room extraction module extracts data related to a position and dimensions of the object in the hotel room; the room extraction module may extract data from a second database), and presenting, by the processing system via the augmented reality endpoint device, first visual information of the first enclosed space, wherein the first visual information comprises the first imagery of the first enclosed space
and wherein the first visual information is presented within the first field-of-view (paragraph 0048, theme project information may include image information; paragraph 0102, the virtual reality projected onto the at least one object is configured to be viewed by a guest using a headset; paragraph 0105, the projector projects, with the projector, the virtual reality when detecting that a door of a room within which the projector is disposed is unlocked).
However, it is noted that Cronin discloses a user at a venue, where the device may be a global positioning satellite (GPS) device or an augmented reality system, but fails to specifically disclose detecting, by a processing system including at least one processor, a first location and a first orientation of an augmented reality endpoint device. It is further noted that Cronin discloses modifying the image, but fails to specifically disclose wherein the modifying comprises: inserting an imagery of the at least one object into the at least one prior captured image in accordance with the location that is detected, to generate the first imagery of the first enclosed space, wherein the imagery of the at least one object is based on visual data of the at least one object retrieved from a repository.
Bostick discloses detecting, by a processing system including at least one processor, a first location and a first orientation of an augmented reality endpoint device of a user (paragraph 0036, determining the user’s location and surrounding structures); modifying, by the processing system, at least one prior captured image of the first enclosed space to generate a first imagery of the first enclosed space that depicts the at least one object at the location that is detected, wherein the modifying comprises: inserting an imagery of the at least one object into the at least one prior captured image in accordance with the location that is detected, to generate the first imagery of the first enclosed space (paragraph 0035, receiving and analyzing image data captured by the camera and retrieving indoor view data for displaying in the user’s field of view), wherein the imagery of the at least one object is based on visual data of the at least one object retrieved from a repository (paragraph 0006, identify a relevant repository of indoor views and to retrieve from the repository at least one indoor view of the building identified; paragraph 0037); and presenting, by the processing system via the augmented reality endpoint device, first visual information of the first enclosed space, wherein the first visual information comprises the first imagery of the first enclosed space and wherein the first visual information is presented within the first field-of-view (paragraph 0046, the thumbnail view may be positioned over the identifying marker, adjacent to the identifying marker, or over a point of the building corresponding to the location of the retrieved indoor view).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine, in the user at a venue as disclosed by Cronin, the augmented reality device and global positioning device as disclosed by Cronin with the detection of the location and orientation of a user device as disclosed by Bostick, in order to properly determine where to display the overlaid augmented reality information in the guest's field of view; Cronin discloses, in paragraph 0023, that the virtual reality projector projects virtual reality data, which the guest may observe, in alignment with the physical dimensions and objects in the room. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the modifying of the real-time image as disclosed by Cronin, the first imagery of the first enclosed space and the first visual information presented within the first field-of-view, which Examiner interprets as the objects located within the indoor view as disclosed by Bostick, in order to provide the user with an indoor view of the hotel room in addition to an augmented or themed view of the room.
Regarding claim 2, Cronin discloses wherein the user has a reservation with the venue for access to a first type of enclosed space, wherein the first enclosed space is of the first type of enclosed space (paragraph 0028, the venue may include one or more guests; any type of person or group of persons at any type of venue; paragraph 0075, acquires a reservation for the hotel room; acquired reservation includes information relating to a guest, for example, a name of the guest, a room type designated by the guest).
Regarding claim 3, Cronin discloses wherein the identifying includes identifying that the first enclosed space is of the first type of enclosed space (paragraph 0028, type of the venue; paragraph 0075, acquired reservation includes information relating to a room type designated by the guest).
Regarding claim 4, Cronin discloses wherein the presenting the first visual information of the first enclosed space is performed in response to the identifying (paragraph 0075, hotel room and a designated room theme are identified from the reservation; paragraph 0076, Virtual reality content, image content, or video content is projected in the hotel room based on the position and the dimensions of the object in the hotel room and the theme projection information).
Regarding claim 5, Cronin discloses wherein the processing system is in communication with the augmented reality endpoint device, the method further comprising: receiving an input from the user selecting to access the first enclosed space (paragraph 0063, detecting an opening of a door of the room; the door may be locked by an electronic lock which is connected to a network and can be opened by electronic key (e.g., a card key)); and providing, to the augmented reality endpoint device or a mobile computing device associated with the augmented reality endpoint device, an access credential for accessing the first enclosed space (paragraph 0063, processor may monitor an open/closed state of the electronic lock installed on a door for sending the instruction to the projector; paragraph 0064-0065 may detect the interaction of the guest 202 with the virtual reality content; detects the predetermined relationship between the guest 202 and the virtual reality content 306 projected on the object).
Regarding claim 6, Cronin discloses wherein the processing system comprises the augmented reality endpoint device, the method further comprising: receiving an input from the user selecting to access the first enclosed space; and providing the input to at least one server associated with the venue (paragraph 0063, detecting an opening of a door of the room; the door may be locked by an electronic lock which is connected to a network and can be opened by electronic key (e.g., a card key); processor may monitor an open/closed state of the electronic lock installed on a door for sending the instruction to the projector; paragraph 0015, the computer system may operate in the capacity of a server or as a client user computer in a server-client user network environment).
Regarding claim 7, Cronin discloses further comprising: obtaining, from the at least one server, an access credential for accessing the first enclosed space (paragraph 0063, electronic lock which is connected to a network; processor may monitor an open/closed state of the electronic lock installed on a door for sending the instruction to the projector; paragraph 0015, the computer system may operate in the capacity of a server or as a client user computer in a server-client user network environment).
Regarding claim 8, Cronin discloses wherein the first visual information further comprises alphanumeric data regarding the first enclosed space (paragraph 0055, process may acquire a position and dimensions of the object in the hotel room; may identify the object and acquire the position and the dimensions of the object via the room object database; paragraph 0063, room object database may store dimensions (e.g., room layouts) of a plurality of hotel rooms including the hotel room 300 and positions of each object placed in the hotel rooms; dimensions may be textual or graphic-based).
Regarding claim 9, Cronin discloses wherein the alphanumeric data comprises at least one of: a cost for utilization of the first enclosed space; data describing at least one feature of the first enclosed space; or data describing additional information regarding the first enclosed space that can be accessed via the augmented reality endpoint device (paragraph 0040, hotel rooms based on a cost of the hotel rooms; paragraph 0056, may acquire at least one of the position and dimensions of the object in real-time).
Regarding claim 10, Cronin discloses wherein the alphanumeric data further includes data describing how to access the additional information regarding the first enclosed space (paragraph 0054, processor may identify an area of service to be enhanced; paragraph 0067, the service to be enhanced using the virtual reality may relate to the entertainment system; paragraph 0070, the service to be enhanced using the virtual reality may relate to the room or meal delivery service).
Regarding claim 11, Cronin discloses wherein the first imagery of the first enclosed space is from a perspective of the entryway of the first enclosed space facing into the first enclosed space (paragraph 0038, may preview any available virtual reality options for the hotel room before selection; paragraph 0073, system includes the proximity sensor, the venue may include an additional area different from the hotel room; second proximity sensor may be configured to detect the presence of the guest in the additional area; may be applied to each of the room and additional area).
Regarding claim 12, Cronin discloses further comprising: detecting a second location and a second orientation of the augmented reality endpoint device of the user at the venue (paragraph 0073, system includes the proximity sensor, the venue may include an additional area different from the hotel room; second proximity sensor may be configured to detect the presence of the guest in the additional area; Examiner interprets proximity sensors as detecting a second location and orientation of the user at the venue); identifying that the entryway of the first enclosed space of the venue is beyond a threshold distance from the augmented reality endpoint device and is within a second field-of-view of the augmented reality endpoint device in accordance with the second location and the second orientation of the augmented reality endpoint device (paragraph 0073, second proximity sensor may be configured to detect the presence of the guest in the additional area; may be applied to each of the room and additional area); and presenting, via the augmented reality endpoint device, second visual information of the first enclosed space, wherein the second visual information is presented within the second field-of-view as an overlay on or proximate to the entryway (paragraph 0073, may control a theme of the additional area, e.g., one of the attractions, in accordance with the room theme based on the detection of the presence of the guest in the additional area).
Regarding claim 13, Cronin discloses further comprising: presenting third visual information regarding at least a second enclosed space of the venue (paragraph 0034, object 302 may include, for example, a table, a chair, a bed, a nightstand, a television, a lamp, a picture, a wall, a ceiling, a floor, a door, or any additional tangible object, item, or thing located in the hotel room 300, or other defined physical space. That is, in the event that the defined physical space is not the hotel room 300, the object 302 may comprise additional or alternative objects which are within the defined physical space. For example, if the defined physical space is a booth or room in a restaurant, the object 302 may include a bench, a sconce, a chandelier, a piece of dinnerware, a table, etc.; therefore Examiner interprets the defined physical space of room in a restaurant, as a second enclosed space within the venue of a restaurant).
Regarding claim 14, Cronin discloses wherein an entryway of the at least the second enclosed space is within the second field-of-view of the augmented reality endpoint device (paragraph 0034, object 302 may include a door; room in a restaurant; virtual reality described herein may be used to enhance any additional or alternative physical objects within a defined physical space; therefore Examiner interprets Cronin as capable of locating doors and defined physical spaces, including a second enclosed space).
Regarding claim 15, Bostick discloses wherein the first location and the first orientation of the augmented reality endpoint device are obtained from the augmented reality endpoint device (paragraph 0036, determining the user’s location and surrounding structures; paragraph 0035, receiving and analyzing image data captured by the camera and retrieving indoor view data for displaying in the user’s field of view).
Regarding claim 16, Cronin discloses wherein the entryway of the first enclosed space is identified within the first field-of-view of the augmented reality endpoint device in accordance with at least one augmented reality anchor point of the venue (paragraph 0044, position of the object relative to the dimensional of the hotel room, or a position of the objects relative to a fixed reference point, which Examiner interprets as an anchor point of the venue, i.e. hotel).
Regarding claim 17, it is rejected based upon a similar rationale as claim 1 above. Cronin further discloses a non-transitory computer-readable medium storing instructions which, when executed by a processing system including at least one processor, cause the processing system to perform operations (paragraph 0106, a non-transitory computer-readable medium having an executable computer program; executable computer program, when executed by a processor, causes the processor to perform operations).
Regarding claims 18-20, they are rejected based upon a similar rationale as claims 1-3 above. Cronin further discloses an apparatus (figure 1; 100, computer system) comprising: a processing system including at least one processor (110, processor); and a computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations (182, computer readable medium; 184, instructions).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Duron et al., U.S. Patent Publication Number 2018/0173912 A1
Duron discloses paragraph 0024, identifies a venue having a doorway or portal that is openable and closeable by a door; paragraph 0024, one or more RFID-tagged object are to be identified and/or located and/or monitored and/or tracked.
Waltermann et al., U.S. Patent Publication Number 2021/0225042 A1
Waltermann discloses paragraph 0032, uses the user's current location data (e.g., using the user's device GPS capabilities, etc.) and the current image data in order to identify the meeting room that the user is selecting with the device camera; paragraph 0026, might see the participants in any given meeting room; paragraph 0033, performs the Retrieve Specific Room Data for AR Display routine; paragraph 0039, room monitor data from sensors in the meeting room are retrieved from memory area; paragraph 0032, process overlays the image displayed on the user's device screen with the level-based AR data pertaining to the identified meeting room.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson whose telephone number is (571)272-7658. The examiner can normally be reached Monday - Friday 6am-2:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MOTILEWA GOOD-JOHNSON
Primary Examiner
Art Unit 2616
/MOTILEWA GOOD-JOHNSON/Primary Examiner, Art Unit 2619