DETAILED ACTION
Response to Amendment
Applicant’s amendments filed on 24 November 2025 have been entered. Claims 1 and 9 have been amended. Claims 17 and 18 have been canceled. Claims 1-16 are pending in this application, with claims 1 and 9 being independent.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Tomlin et al. (US 20180108179 A1), hereinafter Tomlin, in view of Zahid (US 20220270230 A1), hereinafter Zahid, and Davidson et al. (US 20210311205 A1), hereinafter Davidson.
Regarding Claim 1, Tomlin teaches a system for taking sensor readings of a region (Tomlin [0001] head-mounted display (HMD) devices may include various sensors that allow the HMD device to display a blend of reality and virtual objects on the HMD device as augmented reality), comprising:
a sensor device configured to take the sensor readings and to provide pose information of the sensor device (Tomlin [0027] the position sensor system 28 may include one or more location sensor 30 from which the HMD device 10 determines a location 62 (see FIG. 2) of the location sensor 30 in space. As used herein, a “location” may be a “pose” and may include position and orientation for a total of six values per location; [0042] At 604, the method 600 may include determining a location of a location sensor of the HMD device in space. As mentioned above, the location sensor may include an accelerometer, a gyroscope, a global positioning system, a multilateration tracker, or one or more optical sensors such as a camera, among others. Depending on the type of sensor, the location sensor itself may be configured to determine the location, or the controller may be configured to calculate the location of the location sensor based on data received therefrom);
Tomlin does not teach, but Zahid teaches,
computer software on a non-transient medium configured to, when run on a computer, determine a location to take a next sensor reading based on previous sensor readings and the pose information (Zahid [0072] After obtaining the first image, the system obtains a second image of the ceiling of the attic of the property (306). The second image of the ceiling can capture the status of the ceiling after the precipitation has lasted for a period of time; [0076] the system can detect a dark spot 122 in the first image 120 and detect a corresponding dark spot 132 in the second image 130. The system can compare the size of a dark spot 122 in the first image 120 and the size of a corresponding dark spot 132 in the second image 130); and
a display device configured to display a virtual object at the location overlayed on a view of the region (Tomlin [0029] the display 18 may be further configured to overlay a hologram 33 that corresponds to the pose of the handheld input device 32 in space over time).
Tomlin does not teach, but Davidson teaches,
wherein the sensor readings characterize a three-dimensional vector field (Davidson [0113] In FIG. 19, the self-describing fiducial (1900) is augmented with a three-dimensional ultrasonic anemometer (1902), which measures a three-dimensional wind vector representing the flow field (1910));
and to display virtual vector indicators representing flow direction and strength of the three-dimensional vector field within the view of the region (Davidson [0099] the wind/flow field (1504) is blowing from the right to left. The buildings (1502) and other obstructions influence the flow field (1504) and produce spatial and temporal variations in the speed and direction of the flow; [0103] Returning to FIG. 15, the location of the self-describing fiducials (1510, 1512, 1514) may be selected to make the relevant measurements of the flow field (1504) and to be visible to vehicles or navigating objects that will encounter the flows). The wind speed indicates the strength of the flow field.
Zahid discloses methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting roof leaks. Zahid is analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tomlin to incorporate the teachings of Zahid by applying Zahid's techniques for roof leak detection based on the surrounding environment and commands from a property monitoring system to Tomlin's methods and systems for displaying a computer-generated image corresponding to the pose of a real-world object in a mixed reality system.
Doing so, the system can detect damage by analyzing the sensor data, e.g., an image or a video of a ceiling captured during a precipitation event, in support of systems, methods, and graphical user interfaces for augmented reality sensor guidance.
Davidson discloses that an inertial measurement unit (IMU) can provide measurements of specific force (i.e., acceleration) and angular rate. Davidson is analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tomlin to incorporate the teachings of Davidson by applying Davidson's wind field direction and strength calculation and representation to Tomlin's methods and systems for displaying a computer-generated image corresponding to the pose of a real-world object in a mixed reality system.
Doing so, the sensor would be able to determine real-time/predicted flow characteristics in its vicinity in advance and appropriately adjust its trajectory to compensate, in support of systems, methods, and graphical user interfaces for augmented reality sensor guidance.
Regarding Claim 2, Tomlin in view of Zahid and Davidson teaches the system of claim 1, and further teaches wherein the sensor device is a wearable device (Tomlin [0001] head-mounted display (HMD) devices may include various sensors that allow the HMD device to display a blend of reality and virtual objects on the HMD device as augmented reality).
Regarding Claim 3, Tomlin in view of Zahid and Davidson teaches the system of claim 1, and further teaches wherein the display device is a head-mounted device (Tomlin Abst: The system may include of a head-mounted display (HMD) device, a magnetic track system and an optical system).
Regarding Claim 4, Tomlin in view of Zahid and Davidson teaches the system of claim 1, and further teaches wherein the sensor device is configured to provide pose information by pose cameras mounted on the sensor device and markers on the display device (Tomlin [0003] The system may also comprise a magnetic tracking system configured to detect the pose of the object where the magnetic tracking system includes a base station configured to emit an electromagnetic field (EMF) and an EMF sensor configured to sense the EMF. The system may further comprise a second tracking system configured to also detect the pose of the object; [0052] The mixed reality system 700 may include some or all components of the mixed reality system 100 of FIG. 5, and may additionally comprise an optical tracking system 74 comprising at least one marker 76 and at least one optical sensor 78 configured to capture optical data 8).
Regarding Claim 5, Tomlin in view of Zahid and Davidson teaches the system of claim 1, and further teaches wherein the sensor device comprises a flow field sensor to take the sensor readings (Tomlin [0033] FIG. 5 shows an example software-hardware diagram of the mixed reality system 100 including the HMD device 10. In addition to the HMD device 10, the mixed reality system 100 may also include an electromagnetic field sensor 40 affixed to an object 42 and configured to sense a strength 44 of the electromagnetic field 38). An electromagnetic flow sensor is a type of flow field sensor that specifically measures the flow rate of electrically conductive liquids.
Regarding Claim 6, Tomlin in view of Zahid and Davidson teaches the system of claim 1, and further teaches further comprising a wireless network system and the sensor device and the display device are configured to communicate on the wireless network system (Tomlin [0035] a transceiver 54C that allows the electromagnetic field sensor 40 to wirelessly communicate with the base station 36 and/or controller 20; [0042] With reference to FIG. 6, at 602, the method 600 may include positioning a base station in a front portion of a housing of a head-mounted display (HMD) device).
Regarding Claim 7, Tomlin in view of Zahid and Davidson teaches the system of claim 1, and further teaches wherein the virtual object comprises a virtual indicator object that is centered at the location and side indicators configured to indicate a distance to the location from the display device (Tomlin [0036] If the location sensor is a GPS receiver paired with an accelerometer, as another example, then the location 62 of the location sensor 30 may be determined by receiving the position from the GPS receiver and the orientation may be determined by the accelerometer; Zahid [0056] the image analysis engine 200 can label the location of the leak spot on the image B, and can generate an image 210 with the detected leak spot. After determining the ring shaped object 204 that corresponds to the change in the dark spot 122 and 132, the image analysis engine can generate a bounding box 212 that tightly surrounds the ring shaped object 204. The location of the bounding box 212 can include the coordinate of the top-left corner and the width and length of the bounding box 212. The image analysis engine can overlay the bounding box 212 on the image 130 to generate the image 210 with the detected leak spot). Distance can be calculated from GPS locations using geographic coordinates and mathematical formulas.
Regarding Claim 8, Tomlin in view of Zahid and Davidson teaches the system of claim 1, and further teaches wherein the computer software comprises a machine learning model (Zahid [0033] the image analysis algorithm can include a machine learning algorithm and/or a computer vision algorithm to more accurately discriminate a changing dark spot from other features in the image).
Regarding Claims 9-16, Tomlin in view of Zahid and Davidson teaches a method for taking sensor readings of a region (Tomlin [0001] head-mounted display (HMD) devices may include various sensors that allow the HMD device to display a blend of reality and virtual objects on the HMD device as augmented reality). The metes and bounds of the limitations of the claims substantially correspond to the elements set forth in claims 1-8; thus they are rejected on similar grounds and rationale as their corresponding limitations.
Regarding Claim 18, Tomlin in view of Zahid and Davidson teaches the system of claim 7, and further teaches wherein the side indicators are configured to change size or distance from the central indicator to indicate the distance of the sensor device from the location (Davidson [0101] the uncertainty envelope (1508) impinges on both a building (1502) and the uncertainty envelope of a neighboring navigating object (1518). This means that there is a risk that the navigating object (1506) may collide with either (or both) the building and the navigating object ahead of it. This situation could have been avoided if the navigating object (1506) had positioned itself better in the space before entering the flow (i.e. a greater following distance behind the navigating object (1518) ahead of it and/or moved farther to the right, slowed down, etc.) or had taken an alternative route). The navigating object is a UMV equipped with sensors.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Tomlin et al. (US 20180108179 A1), hereinafter Tomlin, in view of Zahid (US 20220270230 A1), hereinafter Zahid, Davidson et al. (US 20210311205 A1), hereinafter Davidson, and COLEMAN et al. (US 20190005397 A1), hereinafter COLEMAN.
Regarding Claim 17, Tomlin in view of Zahid and Davidson teaches the system of claim 1. Tomlin, Zahid, and Davidson do not teach, but COLEMAN teaches, wherein the computer software utilizes an uncertainty quantification algorithm to determine the location to take the next sensor reading to optimize a reconstruction of an environmental field (COLEMAN [0006] one or more sensors capable of collecting biometric data, a processing unit electrically coupled to the one or more sensors and capable of executing an uncertainty quantification algorithm on the biometric data collected by the one or more sensors; [0007] the uncertainty quantification algorithm is capable of finding a posterior distribution).
COLEMAN discloses a processor that runs an uncertainty quantification (e.g., Bayesian inference) algorithm on the data collected by the sensor and characterizes the uncertainty (e.g., the full posterior distribution) around latent variables of interest. COLEMAN is analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tomlin to incorporate the teachings of COLEMAN by applying COLEMAN's uncertainty quantification algorithm to data collected by the flow field sensors of Tomlin's methods and systems for displaying a computer-generated image corresponding to the pose of a real-world object in a mixed reality system.
Doing so, a statistically complete representation of the data can then be sent to a human, to an actuator, or to a cloud server for subsequent decision making, in support of systems, methods, and graphical user interfaces for augmented reality sensor guidance.
Response to Arguments
Applicant's arguments filed on 24 November 2025 with respect to the § 103 rejections have been fully considered but are moot in view of the new grounds of rejection.
The Examiner notes that independent claims 1 and 9 have been amended to include new limitations. The Examiner finds these limitations to be unpatentable, as set forth in the detailed action above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang whose telephone number is (571)270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached on (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Samantha (YUEHAN) WANG/
Primary Examiner
Art Unit 2617