DETAILED ACTION
Response to Amendment
Applicant’s amendments filed on 29 December 2025 have been entered. Claims 1, 7 and 13 have been amended. Claims 1-13 are still pending in this application, with claims 1, 7 and 13 being independent.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on August 20, 2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-10, 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wilson et al. (US 20160098862 A1), referred to herein as Wilson, in view of German et al. (US 20230162396 A1), referred to herein as German, and further in view of ADKINSON et al. (US 20210295599 A1), referred to herein as ADKINSON.
Regarding Claim 1, Wilson teaches a method, performed by a first electronic device, of providing an augmented reality (AR) view (Wilson [0005] 1) compute a first view based upon the first location of the first user, the first view to be presented to the first user, the first view comprises a virtual object; [0022] With reference now to FIG. 1, a room 100 is illustrated, where the room is equipped with projectors and sensors. As will be described herein, the projectors are configured to project imagery onto projection surfaces of the room 100 based upon data output by the sensors, where the projected imagery effectuates SSAR for at least two users in the room 100; Abst: Various technologies pertaining to shared spatial augmented reality (SSAR) are described. Sensor units in a room output sensor signals that are indicative of positions of two or more users in the room and gaze directions of the two or more users), the method comprising:
obtaining, from a second electronic device, an image of a space captured by the second electronic device and AR experience information including information about at least one AR object placed in the space (Wilson [0027] To compute imagery that is projected by the projectors 102-106, a three-dimensional geometric model of the room 100 is to be generated and updated in real-time, and views of the users 110 and 112 are to be determined and updated in real-time. The view of each user is a function of position of each user in the room 100 (e.g., position of the head of each user) and gaze direction of each user),
Wilson does not explicitly teach, but German teaches,
wherein the AR information about the at least one AR object placed in the space includes identification information and arrangement information of the at least one AR object (German [0026] The digital 3D model 106 for the indoor area 102 also includes building feature information 114 that identifies structures (such as walls, ceilings, and floors) and passageways (such as doors, windows, stairways, and elevators). The digital 3D model 106 for the indoor area 104 also includes path information 116 that identifies paths between various locations within the indoor area 102);
Wilson in view of German further teaches
obtaining a first path for providing a space view based on the information about the at least one AR object placed in the space and the image of the space (German [0026] The digital 3D model 106 for the indoor area 104 also includes path information 116 that identifies paths between various locations within the indoor area 102; [0070] In the example described here in connection with FIG. 1, the server software 118 includes path-finding software 148 that is configured to determine a suitable route using the path information 116 included in the digital 3D model 106. Once a suitable route is determined, information about the route can be communicated from the server software 118 to the responder mobile device 130. The responder mobile software 122 can then receive the information and display it on the touch screen 138 for viewing by the emergency responder 132; FIG. 8);
obtaining a second path for providing an object view based on the information about the at least one AR object placed in the space and the first path (German [0026] The digital 3D model 106 for the indoor area 104 also includes path information 116 that identifies paths between various locations within the indoor area 102; [0070] In the example described here in connection with FIG. 1, the server software 118 includes path-finding software 148 that is configured to determine a suitable route using the path information 116 included in the digital 3D model 106. Once a suitable route is determined, information about the route can be communicated from the server software 118 to the responder mobile device 130. The responder mobile software 122 can then receive the information and display it on the touch screen 138 for viewing by the emergency responder 132; FIG. 8), and
displaying, on a display of the first electronic device, an AR view based on the space view and the object view (German FIG. 8; [0073] The AR view 800 includes a series of arrows 804 that depicts where the emergency responder 132 should follow to travel to the occupant 128 along the route; Wilson [0050] rather than the projectors and sensor units being affixed to walls in the room, the projectors and/or sensor units can be implemented in headgear that is to be worn by the users 110-112. The projectors can have inertial sensors associated therewith, such that fields of view of users 110-112 in the room 100 can be determined based upon detected positions of the projectors in the headgear). In Wilson, the projectors 102-106 or 502 correspond to the second electronic device, and the headgear of the users corresponds to the first electronic device.
Wilson does not explicitly teach, but ADKINSON teaches,
wherein the first electronic device that displays the AR view is located in a different place from where the second electronic device that captures the space is located (ADKINSON [0002] remote augmented reality (AR), and specifically to reconstruction of a 3D model (or “digital twin”) and associated depth and camera data, and scale estimation from the reconstructed model and data, from a remote video feed; [0003] the devices can be capable of supporting a remote video session with which users can interact via AR objects in real-time; [0035] Following generation of the 3D model/digital twin, in embodiments, it is made available to users remote devices in real-time, such as a user of remote device 110).
German discloses a method of determining a location of a mobile device associated with an occupant within an indoor area, and is therefore analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wilson to incorporate the teachings of German and to apply the augmented reality (AR) view of an indoor area, which superimposes route information on a live image of the area currently being captured by one or more cameras of a responder mobile device, to the shared spatial augmented reality (SSAR) of Wilson.
Doing so would enable communicating information about the location of the mobile device to the responder mobile device in real time within the AR views that cause the two or more users to simultaneously perceive the virtual object in space.
ADKINSON discloses systems and methods for creating a 3D mesh from a video stream or a sequence of frames, and is therefore analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wilson to incorporate the teachings of ADKINSON and to apply the 3D meshing of a 3D model within a 3D coordinate space of a remote augmented reality session to the shared spatial augmented reality (SSAR) of Wilson.
Doing so would make the system capable of supporting a remote video session with which users can interact via AR objects in real time.
Regarding Claim 2, Wilson in view of German and ADKINSON teaches the method of claim 1, and further teaches wherein the obtaining of the first path comprises:
modeling the space based on the information about the at least one AR object placed in the space and the image of the space (Wilson [0037] the views based upon the positional data 210 output by the tracker component 208 and the 3D geometric model 207 of the room 100 (updated in real-time to reflect positions of the users 110 and 112 in the room 100). These computed views can be abstracted over the projectors 114-118. The first view for the first user 110 indicates where imagery is to be projected in the room 100 to cause the first user 110 to perceive that the virtual object 120 is at a particular position in space in the room 100, while the second view for the second user 112 indicates where imagery is to be projected in the room to cause the second user 112 to perceive that the virtual object 120 is at the particular position in space in the room 100);
obtaining the first path based on the modeled space and the image of the space (German [0073] the information about the route can include information for displaying an augmented reality (AR) view of the indoor area 102 that superimposes route information on a live image of the area currently being captured by one or more cameras 140 of the responder mobile device 130; [0069] Method 600 further comprises determining a route from the location of the emergency responder 132 to the location of the occupant mobile device 126 using the digital 3D model 106 of the indoor area 102 (block 604) and communicating information about the route to the emergency responder 132 (block 606). Method 600 can be performed repeatedly in order to track the movements of the emergency responder 132 and/or the occupant 128 and update the information about the route provided to the emergency responder 132).
Regarding Claim 3, Wilson in view of German and ADKINSON teaches the method of claim 2, and further teaches wherein the providing of the AR view comprises:
obtaining the space view based on the image of the space and the first path (German [0073] The AR view 800 includes a series of arrows 804 that depicts where the emergency responder 132 should follow to travel to the occupant 128 along the route. In this example, the AR view 800 also includes a “stairs” annotation 806 indicating that the emergency responder 132 should walk down the stairs in order to follow the route to the occupant 128);
obtaining the object view based on an object model for the at least one AR object and the second path (Wilson [0005] 2) compute a second view based upon the second location of the second user, the second view to be presented to the second user, the second view comprises the virtual object; German [0076] Methods 300 and 600 can be repeated in order to update the displayed route information in real-time (or near real-time) (for example, by updating the current location of the occupant 128 and the emergency responder 132). By displaying such up-to-date route information on a touchscreen 138); and
synthesizing the space view and the object view (Wilson [0059] At 806, at least one signal is transmitted to at least one projector that causes the at least one projector to project imagery onto projection surfaces, wherein the imagery, when projected onto the projection surfaces, causes the two users to perceive a virtual object in space between the two users; [0060] the imagery causes two users to perceive a virtual object moving in space between the two users. For example, the detected gesture may be a throwing motion and the imagery may cause the two users to perceive a ball moving from one user towards the other. In another example, the gesture may be a swipe, and the imagery projected by the projector may cause the two users to perceive a globe spinning in space between the two users).
Regarding Claim 4, Wilson in view of German and ADKINSON teaches the method of claim 3, and further teaches wherein the obtaining of the space view comprises:
warping at least one frame extracted from the image of the space (ADKINSON [0030] FIG. 2 depicts an example method 200 for placement of an AR object within a 3D model or mesh, where the AR object is reflected into a video stream from an end user device, such as device 102; [0048] In operation 302, a video stream or other sequence of frames of a scene or environment is captured by a capturing device, such as by a device 102);
and fusing the warped at least one frame (ADKINSON [0047] Turning to FIG. 3, an example method 300 for recreating an environment in a textured 3D mesh from a video or similar series of frames capturing motion; [0053] Following creation of a densified model, in operation 308 a 3D mesh of triangles is generated from the densified depth maps (or combined, the densified point cloud), using a suitable algorithm such as Volumetric TSDF (Truncated Signed Distance Function) Fusion, Poisson Reconstruction or Delaunay Reconstruction).
Regarding Claim 6, Wilson in view of German and ADKINSON teaches the method of claim 1. ADKINSON further teaches the method further comprising recommending a placeable AR object based on at least one of the AR experience information and a result of analyzing the space (ADKINSON [0017] A device that supports AR typically provides an AR session on a device-local basis (e.g., not requiring communication with a remote system), such as allowing a user of the device to capture a video feed or stream using a camera built into the device, and superimpose AR objects upon the video as it is captured. Support for superimposing AR objects is typically provided by the device's operating system, with the operating system providing an AR application programming interface (API)).
The same motivation to combine as applied to claim 1 applies here.
Regarding Claims 7-9, Wilson in view of German and ADKINSON teaches a first electronic device for providing an augmented reality (AR) view (Wilson [0005] 1) compute a first view based upon the first location of the first user, the first view to be presented to the first user, the first view comprises a virtual object; [0022] With reference now to FIG. 1, a room 100 is illustrated, where the room is equipped with projectors and sensors. As will be described herein, the projectors are configured to project imagery onto projection surfaces of the room 100 based upon data output by the sensors, where the projected imagery effectuates SSAR for at least two users in the room 100; Abst: Various technologies pertaining to shared spatial augmented reality (SSAR) are described. Sensor units in a room output sensor signals that are indicative of positions of two or more users in the room and gaze directions of the two or more users), the first electronic device comprising:
a display; a communication module; a storage configured to store a program including at least one instruction; and at least one processor configured to execute the at least one instruction stored in the storage (Wilson [0058] results of acts of the methodologies can be stored in a computer-readable medium, displayed on a display device, and/or the like; FIG. 2).
The metes and bounds of the limitations of these claims substantially correspond to the limitations set forth in claims 1-3; thus, they are rejected on similar grounds and rationale as their corresponding limitations.
Regarding Claims 10 and 12, Wilson in view of German and ADKINSON teaches the first electronic device of claim 7.
The metes and bounds of the limitations of these claims substantially correspond to the limitations set forth in claims 4 and 6; thus, they are rejected on similar grounds and rationale as their corresponding limitations.
Regarding Claim 13, Wilson in view of German and ADKINSON teaches the method of claim 1, and further teaches a non-transitory computer-readable recording medium having recorded thereon a program for performing the method of claim 1 (Wilson [0058] computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media).
Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wilson et al. (US 20160098862 A1), referred to herein as Wilson, in view of German et al. (US 20230162396 A1), referred to herein as German, and further in view of Yun et al. (US 20140315570 A1), referred to herein as Yun.
Regarding Claim 5, Wilson in view of German and ADKINSON teaches the method of claim 3. Wilson in view of German and ADKINSON does not explicitly teach, but Yun teaches, wherein the obtaining of the object view comprises:
extracting a style feature of the space (Yun [0044] The mobile electronic device 12 may also extract feature points 21 from the 2D picture and may send the feature points 21 to the localization system 18, instead of sending the 2D picture itself); and
transforming a style of the at least one AR object based on the style feature of the space (Yun [0033] Each 3D feature point 21 may include, for example, 3D coordinates in the 3D indoor map 20 and a Scale Invariant Feature Transform (SIFT) descriptor (e.g. a 128-dimensional vector) that characterizes an appearance of a small area around the 3D feature point 21 to allow for feature point matching).
Regarding Claim 11, Wilson in view of German and ADKINSON teaches the first electronic device of claim 7.
The metes and bounds of the limitations of the claim substantially correspond to the limitations set forth in claim 5; thus, they are rejected on similar grounds and rationale as their corresponding limitations.
Response to Arguments
Applicant’s arguments filed on 29 December 2025, with respect to the 35 U.S.C. 103 rejections of claims 1, 7 and 13, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The Examiner notes that independent claims 1, 7 and 13 have been amended to include new limitations. The Examiner finds these limitations to be unpatentable, as set forth in the detailed action above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang whose telephone number is (571)270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached on (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Samantha (YUEHAN) WANG/
Primary Examiner
Art Unit 2617