Prosecution Insights
Last updated: April 19, 2026
Application No. 17/541,610

Collaborative Augmented Reality Measurement Systems and Methods

Final Rejection (§103, §112)
Filed: Dec 03, 2021
Examiner: COBB, MICHAEL J
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Xactware Solutions Inc.
OA Round: 4 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 5-6
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average) — 329 granted / 432 resolved, +14.2% vs TC avg
Interview Lift: +37.9% across resolved cases with an interview (strong)
Typical Timeline: 2y 7m average prosecution; 19 applications currently pending
Career History: 451 total applications across all art units
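The career figures above can be re-derived directly; a quick sketch (treating the "+14.2% vs TC avg" delta as percentage points, which is an assumption about how the statistic is reported):

```python
# Figures shown above: 329 granted of 432 resolved, "+14.2% vs TC avg".
granted, resolved = 329, 432
allow_rate = 100 * granted / resolved   # career allow rate, in percent
implied_tc_avg = allow_rate - 14.2      # TC average implied by the stated delta

print(round(allow_rate, 1))             # ~76.2, displayed as 76%
print(round(implied_tc_avg, 1))         # ~62.0
```

The 76% headline number is consistent with the raw grant counts, and the delta implies a Tech Center average near 62%.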

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 34.7% (-5.3% vs TC avg)
Tech Center averages are estimates; based on career data from 432 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1, 4, 6, 7, 23, 26, 28, 29, and 43 have been amended; claims 3, 5, 17-22, 25, 27, and 36-38 have been canceled; and claims 46-82 have been added. As a result, claims 1, 4, 6-13, 16, 23, 26, 28-35, and 39-82 are pending in the present application, with claims 1, 23, 46, 63, 76, and 78 being independent.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 14 July 2022 and 12 December 2024 have been considered by the examiner.

Response to Arguments

Applicant’s arguments, see page 29, filed 23 October 2025, with respect to the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejections of claims 36-38, along with accompanying amendments received on the same date, have been fully considered and are persuasive. Claims 36-38 have been cancelled. Accordingly, the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph rejection of claims 36-38 has been withdrawn.

Applicant’s arguments, see pages 29 and 30, filed 23 October 2025, with respect to the prior art rejection, along with accompanying amendments received on the same date, have been fully considered and are partially persuasive. With the exception of new independent claim 46, Applicant has amended the independent claims to capture the previously indicated allowable subject matter. As such, the prior art rejection as previously applied has been withdrawn. With respect to new independent claim 46, while applicant’s remarks state that corresponding subject matter of allowable claim 18 and intervening claim 17 were incorporated (see page 30 of applicant’s remarks), the independent claim does not contain said subject matter and is substantially similar to previously rejected claim 1.
It appears that new independent claim 58 corresponds to the subject matter of claim 18 and claim 57 corresponds to subject matter of claim 17.

Claim Objections

Claims 13, 35, and 60 are objected to because of the following informalities: Claim 13 should recite “the [[a]] center of the display”, since a center of the display was defined in claim 1. Claim 35 should recite “the [[a]] center of the display”, since a center of the display was defined in claim 23. Claim 60 is objected to under 37 CFR 1.75 as being a substantial duplicate of claim 63. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m). Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4, 26, 47-49, 64, and 68-73 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

With respect to claim 4, given the plain and ordinary meaning of the words themselves and/or when interpreted in light of the corresponding disclosure, the scope of the claimed limitations is unclear.
For instance, it is not immediately clear how claim 4 aligns with claim 1. Claim 4 requires determining that one or more horizontal planes are detected and subsequently selecting a nearest detected vertical or horizontal plane relative to the center of the display. Claim 1, from which claim 4 depends, recites executing a first raycast originating from a center of the display to detect a vertical or horizontal plane; and determining whether a vertical or horizontal plane is detected...determining that no vertical or horizontal planes are detected. The examiner respectfully requests the applicant clarify the scope of claim 4 in light of claim 1.

With respect to claim 26, given the plain and ordinary meaning of the words themselves and/or when interpreted in light of the corresponding disclosure, the scope of the claimed limitations is unclear. For instance, it is not immediately clear how claim 26 aligns with claim 23. Claim 26 requires determining that one or more horizontal planes are detected and subsequently selecting a nearest detected vertical or horizontal plane relative to the center of the display. Claim 23, from which claim 26 depends, recites executing a first raycast originating from a center of the display to detect a vertical or horizontal plane; and determining whether a vertical or horizontal plane is detected...determining that no vertical or horizontal planes are detected. The examiner respectfully requests the applicant clarify the scope of claim 26 in light of claim 23.

With respect to claims 47-49, given the plain and ordinary meaning of the words themselves and/or when interpreted in light of the corresponding disclosure, the scope of the claimed limitations is unclear. For instance, it is not immediately clear how the one or more horizontal/vertical planes/infinite planes are detected/not detected as currently claimed.
The originally filed disclosure appears to describe the detection as being based on a raycast originating from the center of the display. The examiner respectfully requests the applicant clarify the scope of the claimed limitation. For the purposes of further examination the examiner is interpreting claim 47 to recite something similar to “executing a first raycast originating from a center of the display to detect a vertical or horizontal plane” and claims 48 and 49 to recite something similar to “executing a second raycast originating from a center of the display to detect an infinite horizontal plane”.

With respect to claim 64, given the plain and ordinary meaning of the words themselves and/or when interpreted in light of the corresponding disclosure, the scope of the claimed limitations is unclear. For instance, it is not immediately clear whether claim 64 is claiming that one or more horizontal planes are detected in addition to the horizontal plane that was detected in claim 63 (...at a second corner diagonally across a horizontal plane...determining whether there are additional horizontal planes) or whether the horizontal planes detected are inclusive of those previously determined. The examiner respectfully requests the applicant clarify the scope of the claimed limitation.

With respect to claim 68, given the plain and ordinary meaning of the words themselves and/or when interpreted in light of the corresponding disclosure, the scope of the claimed limitations is unclear. For instance, it is not immediately clear how the two points captured in claim 68 align with the first and second points captured in claim 63. Is the reticle overlay of claim 68 different from or the same as that in claim 63? The examiner respectfully requests the applicant clarify the scope of the claimed limitation.
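Under the examiner's interpretation, claims 47-49 describe a two-stage detection: a first raycast from the display center against finite vertical/horizontal planes, with a second raycast against an infinite horizontal plane as a fallback. The geometry of that flow can be sketched as follows; this is an illustrative sketch only, with hypothetical names, not the applicant's implementation or any AR framework's API:

```python
from dataclasses import dataclass

@dataclass
class Plane:
    normal: tuple    # unit normal: (0, 1, 0) horizontal, e.g. (1, 0, 0) vertical
    offset: float    # plane equation: normal . p == offset
    infinite: bool = False

def raycast(origin, direction, planes):
    """Return the nearest plane hit by a ray cast from the display center,
    or None. Stand-in for an AR framework's hit test."""
    best, best_t = None, float("inf")
    for plane in planes:
        denom = sum(n * d for n, d in zip(plane.normal, direction))
        if abs(denom) < 1e-9:
            continue  # ray parallel to this plane: no hit
        t = (plane.offset - sum(n * o for n, o in zip(plane.normal, origin))) / denom
        if 0.0 < t < best_t:  # keep the nearest hit in front of the camera
            best, best_t = plane, t
    return best

def detect_plane(origin, direction, detected_planes, infinite_horizontal):
    # First raycast: finite vertical/horizontal planes (claim 47 as interpreted).
    hit = raycast(origin, direction, detected_planes)
    if hit is not None:
        return hit
    # Second raycast: infinite-horizontal-plane fallback (claims 48-49 as interpreted).
    return raycast(origin, direction, [infinite_horizontal])
```

For example, a camera at (0, 1.5, 0) looking straight down hits a detected floor plane at distance 1.5; with no finite planes detected, the same ray falls through to the infinite horizontal plane.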
Claims depending thereon do not cure the noted deficiencies and are accordingly also rejected using substantially similar rationale to that set forth in the claims from which they depend.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 46, 47, and 50-57 are rejected under 35 U.S.C. 103 as being unpatentable over Jovanovic et al. (US PG Publication 2020/0357132).

Regarding claim 46, Jovanovic teaches a collaborative augmented reality system for measuring objects (see for instance, paragraphs 25, 26, 36, 38, 44, 113, and 117), comprising: a memory; and a processor in communication with the memory (see for instance, paragraphs 71-81 and fig. 47), the processor: establishing an audio and video connection between a mobile device of a first user and a remote device of a second user (The AR session comprises a collaboration with one or more users, see for instance, paragraph 36. In various embodiments, the collaboration is conducted via audio conference, video conference, telepresence, and the like, see for instance, paragraph 36. Photos are optionally taken remotely by one or more of the collaborators, see for instance, paragraph 36.
Computer system may have any suitable physical form, including, but not limited to…mobile handheld devices, laptop or notebook computers, etc, see for instance, paragraph 72), whereby the first user can view an augmented reality scene displayed on a display of the mobile device of the first user (see for instance, paragraphs 31, 36, 72, 81, and 108. The AR session comprises a collaboration with one or more other users, see for instance, paragraph 36. The application provides an AR environment, see for instance, paragraph 108. Computer system may have any suitable physical form, including, but not limited to…mobile handheld devices, laptop or notebook computers, etc, see for instance, paragraph 72) and the augmented reality scene is transmitted to and displayed on the remote device of the second user, the first user and the second user being able to collaboratively measure an object or a feature of a structure depicted in the augmented reality scene (see for instance, paragraphs 25, 26, 31, 36, 38, 44, 72, 81, 108, 113, and 117 and figs. 99 and 100. The augmented reality scene can be viewed on the mobile device of the first user, see for instance, paragraphs 31, 36, 72, 81, and 108. The AR session comprises a collaboration with one or more other users…the collaboration is conducted via video conference, telepresence, and the like, see paragraph 36. Different measurements can be performed, such as measuring a wall or detecting the distance between corners, see for instance, paragraphs 25, 26, 38, 44, 113, and 117 and figs. 99 and 100. The application provides a suite of model tools, which include measuring tools, see for instance, paragraph 117. In addition, a smart picture/interactive photo can be used to make 3D measurements by the same or a different user, see for instance, paragraph 35. Camera data and backing model data can be stored/packaged with the photo, see for instance, paragraph 37.
The viewer may also provide the user with the option to make measurements, see for instance, paragraph 37); receiving a measurement tool selection to measure an object or feature present in the scene displayed on the display (see for instance, paragraphs 25, 26, 38, 44, 113, and 117 and figs. 99 and 100. In this embodiment, the menu includes options to select measure a wall, see for instance, paragraph 38. Automatic corner detection allows the user to measure the distance between corners that are automatically detected…the automatic corner detection facilitates making measurements, by enabling the measuring tools to “snap” to the detected corners, see for instance, paragraph 44. Computer vision algorithms are utilized to measure objects in the space, see for instance, paragraph 113. The application provides a suite of model tools, which include measuring tools, see for instance, paragraph 117); detecting a plane for the scene displayed on the display (see for instance, paragraphs 25, 26, 35, 38, 43, 44, 48-51, 68, 69, 113, and 117 and figs. 86, 99, and 100. The AR session is calibrated by establishing a fixed coordinate system and establishing the position/orientation of the camera and the position/orientation of a horizontal or vertical plane in reference to the fixed coordinate system, see for instance, paragraph 35 and fig. 1. Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D places defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc, see for instance, paragraph 68 and fig. 99. A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig. 
100. Computer vision algorithms detected one or more geometries in space, which include detected floor corners, floor perimeters, floors, walls, ceilings, doors, etc, see for instance, paragraph 43); determining a measurement of the object or feature based on the received measurement tool selection (see for instance, paragraphs 25, 26, 38, 44, 113, and 117 and figs. 99 and 100. In this embodiment, the menu includes options to select measure a wall, see for instance, paragraph 38. Automatic corner detection allows the user to measure the distance between corners that are automatically detected…the automatic corner detection facilitates making measurements, by enabling the measuring tools to “snap” to the detected corners, see for instance, paragraph 44. Computer vision algorithms are utilized to measure objects in the space, see for instance, paragraph 113. The application provides a suite of model tools, which include measuring tools, see for instance, paragraph 117. Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D places defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc, see for instance, paragraph 68 and fig. 99. A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig. 100); and transmitting the measurement of the object or feature to a server (see for instance, paragraphs 92-94, figs. 48 and 49. Referring to Fig. 49, an application provision system alternatively has a distributed, cloud-based architecture and comprises elastically load balanced, auto-scaling web server resources and application server resources and synchronously replicated databases, see for instance, paragraph 94. First and second devices communicate using a real-time video link, whereby a second processing device controls capture in the first processing device, see for instance, paragraph 3.
A second processing device comprising at least one processor configured to provide an application allowing a user to edit the position or orientation of the one or more horizontal or vertical planes in the space in reference to the fixed coordinate system….the second processing device comprises a server, a server cluster, a cloud computing platform, or a combination thereof, see for instance, paragraph 3. While not explicitly stated in Jovanovic, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention, given the teachings of Jovanovic, to transfer the measurement of the object or feature to a server). Jovanovic further teaches that the AR session comprises a collaboration with one or more other users…the collaboration is conducted via video conference, telepresence, and the like, see paragraph 36. Different measurements can be performed, such as measuring a wall or detecting the distance between corners, see for instance, paragraphs 25, 26, 38, 44, 113, and 117 and figs. 99 and 100. The application provides a suite of model tools, which include measuring tools, see for instance, paragraph 117. In addition, a smart picture/interactive photo can be used to make 3D measurements by the same or a different user, see for instance, paragraph 35. Camera data and backing model data can be stored/packaged with the photo, see for instance, paragraph 37. The viewer may also provide the user with the option to make measurements, see for instance, paragraph 37. Regarding claim 47, Jovanovic teaches the system of claim 46 and further teaches wherein the processor further performs the steps of: determining that one or more vertical or horizontal planes are detected (Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D places defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc, see for instance, paragraph 68 and fig. 99.
A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig. 100. Computer vision algorithms detected one or more geometries in space, which include detected floor corners, floor perimeters, floors, walls, ceilings, doors, etc, see for instance, paragraph 43. Referring to Fig. 6, the user places a camera reticle on a first wall corner and taps a capture button, see for instance, paragraph 49. Referring to Fig. 31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56. Ray casting or hit testing, as used herein, refers to the use of a ray extending perpendicular to the screen of an electronic device that is useful for solving a variety of computational geometry problems, see for instance, paragraph 33.
An AR interface allows the user to indicate the positions of corners of a floor of the space in reference to the fixed coordinate system, wherein the application is configured to project a reference point on the screen into a ray in world coordinates and determine an intersection point with the one or more horizontal or vertical planes via hit-testing thus detecting the corners of the floor of the space; assemble the detected corners into a floorplan of the space; generate virtual quasi-infinite vertical planes extending from each corner of the detected corners representing virtual walls of the space; provide an AR interface allowing the user to indicate the positions of intersection points between the ceiling and the virtual wall, see paragraph 4. The application is configured to project a reference point on the screen into a ray in world coordinates and determine an intersection point with the one or more horizontal or vertical planes via hit-testing thus detecting the corners of the floor of the space, see for instance, paragraph 47); and selecting a nearest detected vertical or horizontal plane relative to the center of the display (see for instance, figs. 41-46. Figs. 42-46 show non-limiting examples of an interactive model of a space for making measurements in real world coordinates, by selecting points on the screen, as well as for making annotations…the photo is interactive, which allows the user to tap on their screen to identify coordinates on the photo and rays, which allow for real world measurements in the 3D space, see for instance, paragraph 67. Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D places defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc, see for instance, paragraph 68 and fig. 99. A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig.
100. Computer vision algorithms detected one or more geometries in space, which include detected floor corners, floor perimeters, floors, walls, ceilings, doors, etc, see for instance, paragraph 43). Regarding claim 50, Jovanovic teaches the system of claim 46 and further teaches wherein the processor detects the plane for the scene based on an operating system (Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D places defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc, see for instance, paragraph 68 and fig. 99. A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig. 100. Computer vision algorithms detected one or more geometries in space, which include detected floor corners, floor perimeters, floors, walls, ceilings, doors, etc, see for instance, paragraph 43. The computing device includes an operating system configured to perform executable instructions…example operating systems include iOS and Android, see for instance, paragraph 88). The motivation to combine Jovanovic and Piya is the same as that set forth with respect to claim 1. Unless otherwise stated, reference citations are to Jovanovic. Regarding claim 51, Jovanovic teaches the system of claim 46 and further teaches wherein the processor determines the measurement of the object or feature by: capturing at least two points indicated by a reticle overlay, wherein the at least two points are associated with the object or feature (Referring to Fig. 6, the user places a camera reticle on a first wall corner and taps a capture button, see for instance, paragraph 49. Referring to Fig. 31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig.
37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56); determining a distance between the captured points (see for instance, paragraphs 55-57 and figs. 36-39); and labeling and displaying the determined distance between the captured points (see for instance, paragraphs 55-57 and figs. 36-39. One of the advantages of editing points, corners, and/or segments includes an improvement in accuracy of the backing model…the user is able to measure small adjacent areas, see for instance, paragraph 55). Regarding claim 52, Jovanovic teaches the system of claim 51, and further teaches wherein the processor captures the at least two points by: positioning a first point onto the augmented reality scene based on points of the detected plane (Referring to Fig. 6, the user places a camera reticle on a first wall corner and taps a capture button, see for instance, paragraph 49. Referring to Fig. 31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56. Automatic corner detection allows the user to measure the distance between corners that are automatically detected…the automatic corner detection facilitates making measurements, by enabling the measuring tools to “snap” to the detected corners, see for instance, paragraph 44.
Computer vision algorithms are utilized to measure objects in the space, see for instance, paragraph 113); generating an orthogonal guideline to measure a second point in a direction normal to a surface having the first point (see for instance, paragraphs 51-53, 58, and 118 and figs. 33-39. To afford users a potentially faster method for acquiring floorplan geometries, a new capture flow using a rectangular starting geometry, and subsequent segment definition and movement is provided herein, see for instance, paragraph 51. Fig. 66 shows the application providing instruction to the user to move to the next corner, see for instance, paragraph 108. The corners are optionally manually selected or automatically rectified to a square (e.g., 90 degrees) or other angles where appropriate, see for instance, paragraph 118 and figs. 87-89); and positioning a second point based on the orthogonal guideline (Referring to Fig. 31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56. To afford users a potentially faster method for acquiring floorplan geometries, a new capture flow using a rectangular starting geometry, and subsequent segment definition and movement is provided herein, see for instance, paragraph 51. Fig. 66 shows the application providing instruction to the user to move to the next corner, see for instance, paragraph 108. The corners are optionally manually selected or automatically rectified to a square (e.g., 90 degrees) or other angles where appropriate, see for instance, paragraph 118 and figs. 87-89). 
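The claim 51/52 measurement steps just discussed (capture two reticle points, generate an orthogonal guideline in the direction normal to the surface at the first point, position the second point on it, then determine and label the distance) reduce to simple vector arithmetic. A minimal sketch, with hypothetical function names, assuming unit-length plane normals and meters as the display unit:

```python
import math

def project_to_guideline(p1, normal, candidate):
    """Snap a candidate point onto the guideline through p1 along the
    surface normal (the 'direction normal to a surface having the first point')."""
    d = [c - a for c, a in zip(candidate, p1)]
    t = sum(di * ni for di, ni in zip(d, normal))  # signed distance along the normal
    return tuple(a + t * ni for a, ni in zip(p1, normal))

def measure(p1, p2):
    """Distance between two captured points, plus a label for display."""
    dist = math.dist(p1, p2)
    return dist, f"{dist:.2f} m"
```

For example, a first point at the origin of a horizontal surface (normal (0, 0, 1)) and a candidate at (0.3, -0.2, 2.0) snap to (0.0, 0.0, 2.0), giving a labeled height of "2.00 m".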
Regarding claim 53, Jovanovic teaches the system of claim 51 and further teaches wherein the processor further performs the steps of: generating an additional orthogonal guideline based on the second point, wherein the additional orthogonal guideline is tilted relative to the orthogonal guideline (see for instance, paragraphs 51-53 and figs. 29-41. Referring to Fig. 31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56. To afford users a potentially faster method for acquiring floorplan geometries, a new capture flow using a rectangular starting geometry, and subsequent segment definition and movement is provided herein, see for instance, paragraph 51. Referring to fig. 40, a user selects a point previously created by moving a corner of a rectangle established to aid generation of a floorplan, see for instance, paragraph 57); positioning a third point along the additional orthogonal guideline (Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56.); determining a distance between the second and third points (see for instance, paragraphs 55-57 and figs. 
36-39); and labeling and displaying the determined distance between the second and third points (see for instance, paragraphs 54-59 and figs. 36-41. One of the advantages of editing points, corners, and/or segments includes an improvement in accuracy of the backing model…the user is able to measure small adjacent areas, see for instance, paragraph 55). Regarding claim 54, Jovanovic teaches the system of claim 50 and further teaches wherein the processor captures the at least two points by: snapping to a first point (see for instance, paragraphs 44 and 54-59 and figs. 36-41. The automatic corner detection facilitates making measurements, by enabling the measuring tools to “snap” to the detected corners, see for instance, paragraph 44. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56.); snapping to an orthogonal guideline to capture a second point (see for instance, paragraphs 44 and 51-59 and figs. 36-41. The automatic corner detection facilitates making measurements, by enabling the measuring tools to “snap” to the detected corners, see for instance, paragraph 44. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56. 
To afford users a potentially faster method for acquiring floorplan geometries, a new capture flow using a rectangular starting geometry, and subsequent segment definition and movement is provided herein, see for instance, paragraph 51); snapping to a plane on the orthogonal guideline (see for instance, paragraphs 44 and 51-59 and figs. 36-41. The automatic corner detection facilitates making measurements, by enabling the measuring tools to “snap” to the detected corners, see for instance, paragraph 44. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56. To afford users a potentially faster method for acquiring floorplan geometries, a new capture flow using a rectangular starting geometry, and subsequent segment definition and movement is provided herein, see for instance, paragraph 51. Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D places defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc, see for instance, paragraph 68 and fig. 99. A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig. 100. Computer vision algorithms detected one or more geometries in space, which include detected floor corners, floor perimeters, floors, walls, ceilings, doors, etc, see for instance, paragraph 43); and extending a first measurement along the orthogonal guideline to capture a second measurement starting from the second point, wherein the first measurement includes the first point and the second point (Referring to Fig.
6, the user places a camera reticle on a first wall corner and taps a capture button, see for instance, paragraph 49. Referring to Fig. 31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56. Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D planes defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc., see for instance, paragraph 68 and fig. 99. A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig. 100). Regarding claim 55, Jovanovic teaches the system of claim 54 and further teaches wherein the processor snaps to the first point by: executing a raycast hit test originating from a center of the display; updating a world position of the reticle overlay to be a world position of an existing point on the detected plane based on determining that the raycast hit test hits the existing point; or updating a world position of the reticle overlay to a position where the raycast hit test hits a plane based on determining that no existing point on the detected plane is hit, wherein the updated world position of the reticle overlay is indicative of a position of the first point (Ray casting or hit testing, as used herein, refers to the use of a ray extending perpendicular to the screen of an electronic device that is useful for solving a variety of computational geometry problems, see for instance, paragraph 33. Referring to fig.
30, the ground plane is detected and the user is further prompted to aim the camera of the device at a first floor corner of the space and tap a user interface element to capture the position of the first corner, see for instance paragraph 53. An AR interface allows the user to indicate the positions of corners of a floor of the space in reference to the fixed coordinate system, wherein the application is configured to project a reference point on the screen into a ray in world coordinates and determine an intersection point with the one or more horizontal or vertical planes via hit-testing, thus detecting the corners of the floor of the space; assemble the detected corners into a floorplan of the space; generate virtual quasi-infinite vertical planes extending from each corner of the detected corners representing virtual walls of the space; provide an AR interface allowing the user to indicate the positions of intersection points between the ceiling and the virtual wall, see paragraph 4). Regarding claim 56, Jovanovic teaches the system of claim 54 and further teaches wherein the processor extends the first measurement along the orthogonal guideline to capture the second measurement starting from the second point by capturing a third point along the orthogonal guideline, wherein the first measurement and the second measurement are collinear (Referring to Fig. 6, the user places a camera reticle on a first wall corner and taps a capture button, see for instance, paragraph 49. Referring to Fig. 31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs.
38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56. Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D planes defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc., see for instance, paragraph 68 and fig. 99. A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig. 100. While not explicitly stated in Jovanovic, it would have been obvious to a person of ordinary skill at the effective filing date of the invention, that given the teachings of Jovanovic, for the first and second measurements to fall along the same line, i.e., be collinear, depending on what features were being measured in the floor plan). Regarding claim 57, Jovanovic teaches the system of claim 46, and further teaches wherein the processor determines the measurement of the object or feature by: capturing a first point using a reticle overlay (Referring to Fig. 6, the user places a camera reticle on a first wall corner and taps a capture button, see for instance, paragraph 49. Referring to Fig. 31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56); capturing a second point using the reticle overlay (Referring to Fig. 6, the user places a camera reticle on a first wall corner and taps a capture button, see for instance, paragraph 49. Referring to Fig.
31, the user places a camera reticle on a first floor corner and taps a capture button, see for instance, paragraph 53. Referring to Fig. 36, a user aims a reticle of a camera at a line segment of an AR floorplan, see paragraph 56. In Fig. 37, the user taps to add a point to the line segment of the floor perimeter, see paragraph 56. As shown in figs. 38 and 39, the user can tap and drag to move the new point and adjust the line of the floorplan to match a jog in the floor perimeter, see for instance, paragraph 56); capturing one or more points and linking the one or more points to the first point to close a polygon formed by the first point, the second point, and the one or more points, wherein the polygon is associated with the object or feature (see for instance, paragraphs 51-52, 54-59, 68-70, 112, 113, 116 and figs. 99 and 100); capturing a third point indicative of a vertical distance of a height of a polygon or a polygon prism formed at least by the polygon (Referring to Fig. 99, a user optionally makes measurements of objects on multiple 3D planes defined within the smart picture, e.g., floors, walls, virtual walls, ceilings, etc., see for instance, paragraph 68 and fig. 99. A virtual wall defines a 3D plane within the smart picture allowing the user to make real world measurements of objects on that plane, see for instance, paragraph 69 and fig. 100. Computer vision algorithms detect one or more geometries in space, which include detected floor corners, floor perimeters, floors, walls, ceilings, etc., see for instance, paragraph 43); and determining geometrical parameters of the polygon or the polygon prism (see for instance, paragraph 68). Allowable Subject Matter Claims 58, 59, 61, and 62 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
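The reticle-snapping behavior mapped against claim 55 above — a raycast hit test from the center of the display that prefers an existing captured point over a raw plane intersection — reduces to a simple branch. The following is an illustrative Python sketch only; the `Hit` structure and function names are assumptions for exposition, not the application's or Jovanovic's actual implementation:

```python
# Hypothetical sketch of the claim-55 snapping flow (names are illustrative).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hit:
    position: tuple                    # world-space (x, y, z) where the ray meets the plane
    existing_point: Optional[tuple]    # previously captured point hit by the ray, if any

def snap_reticle(hit: Hit) -> tuple:
    """Update the reticle's world position from a center-screen raycast hit test.

    If the raycast hits an existing point on the detected plane, snap to that
    point; otherwise place the reticle where the ray intersects the plane.
    """
    if hit.existing_point is not None:
        return hit.existing_point      # snap onto the previously captured point
    return hit.position                # fall back to the raw plane intersection

# Usage: a hit near a captured corner snaps exactly onto that corner.
corner = (2.0, 0.0, 3.0)
print(snap_reticle(Hit(position=(2.01, 0.0, 2.98), existing_point=corner)))  # → (2.0, 0.0, 3.0)
print(snap_reticle(Hit(position=(1.0, 0.0, 1.0), existing_point=None)))      # → (1.0, 0.0, 1.0)
```

The point of the branch, as recited, is that the updated reticle position always indicates the first point, whether snapped or free.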
Claims 48 and 49 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. REASONS FOR ALLOWANCE Claims 1, 6-13, 16, 23, 28-35, 39-45, 63, 65-67, and 74-82 are allowed. Claims 4, 26, 64, and 68-72 are dependent on claims indicated as allowable and would be indicated as being allowed if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), 2nd paragraph, set forth in this Office action. The following is an examiner’s statement of reasons for allowance: The prior art of record does not teach or reasonably suggest each and every claimed limitation and the specific interplay between the claimed limitations. For instance, with respect to claim 1, when considered as a whole with each and every claimed element, the prior art of record does not teach or reasonably suggest at least “establishing an audio and video connection between a mobile device of a first user and a remote device of a second user, whereby the first user can view an augmented reality scene displayed on a display of the mobile device of the first user and the augmented reality scene is transmitted to and displayed on the remote device of the second user, the first user and the second user being able to collaboratively measure an object or a feature of a structure depicted in the augmented reality scene; receiving a measurement tool selection to measure an object or feature present in the scene displayed on the display; detecting a plane for the scene displayed on the display; determining a measurement of the object or feature based on the received measurement tool selection; and transmitting the measurement of the object or feature to a server, wherein the processor detects the plane by: executing a first raycast originating from a center of the display to detect a vertical or 
horizontal plane; and determining whether a vertical or horizontal plane is detected, and wherein the processor further performs the steps of: determining that no vertical or horizontal planes are detected; executing a second ray cast originating from the center of the display to detect an infinite horizontal plane; and determining whether an infinite horizontal plane is detected”. Accordingly, the subject matter of claim 1 and claims depending thereon are found to be allowable. With respect to claim 23, when considered as a whole with each and every claimed element, the prior art of record does not teach or reasonably suggest at least “establishing an audio and visual connection between a mobile device of a first user and a remote device of a second user, whereby the first user can view an augmented reality scene displayed on a display of the mobile device of the first user and the augmented reality scene is transmitted to and displayed on the remote device of the second user, the first user and the second user being able to collaboratively measure an object or a feature of a structure depicted in the augmented reality scene; receiving a measurement tool selection to measure an object or feature present in the scene displayed on the display; detecting a plane for the scene displayed on the display; determining a measurement of the object or feature based on the received measurement tool selection; and transmitting the measurement of the object or feature to a server, wherein the step of detecting the plane for the scene comprises: executing a first raycast originating from a center of the display to detect a vertical or horizontal plane; determining whether a vertical or horizontal plane is detected; determining that no vertical or horizontal planes are detected; executing a second ray cast originating from the center of the display to detect an infinite horizontal plane; and determining whether an infinite horizontal plane is detected.” Accordingly, the subject 
matter of claim 23 and claims depending thereon are found to be allowable. With respect to claim 63, when considered as a whole with each and every claimed element, the prior art of record does not teach or reasonably suggest at least “establishing an audio and video connection between a mobile device of a first user and a remote device of a second user, whereby the first user can view an augmented reality scene displayed on a display of the mobile device of the first user and the augmented reality scene is transmitted to and displayed on the remote device of the second user, the first user and the second user being able to collaboratively measure an object or a feature of a structure depicted in the augmented reality scene; receiving a measurement tool selection to measure an object or feature present in the scene displayed on the display; detecting a plane for the scene displayed on the display; determining a measurement of the object or feature based on the received measurement tool selection; and transmitting the measurement of the object or feature to a server, wherein the processor determines the measurement of the object or feature by: capturing a first point using reticle overlay at a first corner; capturing a second point using the reticle overlay at a second corner diagonally across a horizontal plane of a face of a polygon prism, wherein the first corner and the second corner are associated with the object or feature; and determining whether there are additional horizontal planes to capture”. Accordingly, the subject matter of claim 63 and claims depending thereon are found to be allowable. 
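The plane-detection limitation the examiner quotes from claims 1 and 23 — a first raycast targeting a detected vertical or horizontal plane, with a second raycast targeting an infinite horizontal plane only when the first finds nothing — is a two-stage fallback. A minimal Python sketch, with stubbed raycast callables standing in for the AR framework (all names here are assumptions, not the claimed implementation):

```python
# Hypothetical sketch of the claims-1/23 two-raycast fallback.
# Each raycast is modeled as a callable returning a plane or None.

def detect_plane(raycast_finite_planes, raycast_infinite_horizontal):
    """First raycast from the display center targets detected vertical or
    horizontal planes; if none is hit, a second raycast targets an
    infinite horizontal plane."""
    plane = raycast_finite_planes()
    if plane is not None:
        return plane                        # vertical or horizontal plane detected
    return raycast_infinite_horizontal()    # may still be None if nothing is hit

# Usage with stubbed raycasts:
print(detect_plane(lambda: "wall_plane", lambda: None))        # → wall_plane
print(detect_plane(lambda: None, lambda: "infinite_floor"))    # → infinite_floor
```

The ordering matters: the finite-plane raycast is always attempted first, and the infinite-plane raycast runs only on a miss, which is the interplay the examiner found absent from the prior art.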
With respect to claim 76, when considered as a whole with each and every claimed element, the prior art of record does not teach or reasonably suggest at least “establishing an audio and visual connection between a mobile device of a first user and a remote device of a second user, whereby the first user can view an augmented reality scene displayed on a display of the mobile device of the first user and the augmented reality scene is transmitted to and displayed on the remote device of the second user, the first user and the second user being able to collaboratively measure an object or a feature of a structure depicted in the augmented reality scene; receiving a measurement tool selection to measure an object or feature present in the scene displayed on the display; detecting a plane for the scene displayed on the display; determining a measurement of the object or feature based on the received measurement tool selection; and transmitting the measurement of the object or feature to a server, wherein the step of determining the measurement of the object or feature comprises: capturing a first point using a reticle overlay; capturing a second point using the reticle overlay; capturing one or more points and linking the one or more points to the first point to close a polygon formed by the first point, the second point, and the one or more points, wherein the polygon is associated with the object or feature; capturing a third point indicative of a vertical distance of a height of a polygon or a polygon prism formed at least by the polygon; determining geometrical parameters of the polygon or the polygon prism; determining to exclude an area from the polygon or from a face of the polygon prism; capturing a fourth point using the reticle overlay at a first corner; capturing a fifth point using the reticle overlay at a second corner diagonally across the same plane of the fourth point, wherein the first corner and the second corner are associated with the area to be
excluded; determining the area bounded by the fourth and fifth points; and excluding the determined area from the polygon or from the face of the polygon prism.” Accordingly, the subject matter of claim 76 and claims depending thereon are found to be allowable. With respect to claim 78, when considered as a whole with each and every claimed element, the prior art of record does not teach or reasonably suggest at least “establishing an audio and visual connection between a mobile device of a first user and a remote device of a second user, whereby the first user can view an augmented reality scene displayed on a display of the mobile device of the first user and the augmented reality scene is transmitted to and displayed on the remote device of the second user, the first user and the second user being able to collaboratively measure an object or a feature of a structure depicted in the augmented reality scene; receiving a measurement tool selection to measure an object or feature present in the scene displayed on the display; detecting a plane for the scene displayed on the display; determining a measurement of the object or feature based on the received measurement tool selection; and transmitting the measurement of the object or feature to a server, wherein the step of determining the measurement of the object or feature comprises: capturing a first point using a reticle overlay at a first corner; capturing a second point using the reticle overlay at a second corner diagonally across a horizontal plane of a face of a polygon prism, wherein the first corner and the second corner are associated with the object or feature; and determining whether there are additional horizontal planes to capture”. Accordingly, the subject matter of claim 78 and claims depending thereon are found to be allowable.
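The geometry underlying the claim-76 flow — closing a polygon from captured points, deriving its parameters, and excluding a rectangular region bounded by two diagonally opposite corners (e.g., a window in a wall face) — is ordinary computational geometry. The sketch below uses the standard shoelace formula and an axis-aligned rectangle; it is illustrative only and not drawn from the application's actual implementation:

```python
# Hypothetical sketch of the claim-76 area computation with exclusion.

def polygon_area(points):
    """Shoelace formula over 2D points captured on the detected plane."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def excluded_rect_area(c1, c2):
    """Area bounded by two diagonally opposite corners on the same plane."""
    return abs(c2[0] - c1[0]) * abs(c2[1] - c1[1])

# A 10 ft x 8 ft wall face with a 3 ft x 2 ft opening excluded:
face = [(0, 0), (10, 0), (10, 8), (0, 8)]
net = polygon_area(face) - excluded_rect_area((2, 3), (5, 5))
print(net)  # → 74.0
```

Extruding the closed polygon by the vertically captured third point would then yield the prism's remaining parameters (volume, face areas) from the same primitives.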
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.” Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US PG Publication 2019/0146599 to Gunnarsson et al. teaches Virtual Augmented Reality Modeling Application for Architecture. Bergquist et al. teaches “Using Augmented Reality to Measure Vertical Surfaces”. Bergquist further teaches that when ARKit’s plane detection finds a plane on the surface the user intends to measure, the user can select that plane, which triggers the creation of a new, semi-infinite plane with the same normal and position as the chosen plane, see for instance, page 7, “Plane Sele
Prosecution Timeline

Dec 03, 2021
Application Filed
Mar 11, 2022
Response after Non-Final Action
Feb 10, 2024
Non-Final Rejection — §103, §112
Aug 13, 2024
Response Filed
Nov 06, 2024
Final Rejection — §103, §112
Apr 07, 2025
Request for Continued Examination
Apr 08, 2025
Response after Non-Final Action
Apr 18, 2025
Non-Final Rejection — §103, §112
Oct 23, 2025
Response Filed
Nov 23, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597182
DATA INTERPOLATION PLATFORM FOR GENERATING PREDICTIVE AND INTERPOLATED PRICING DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12586321
AUTOMATED MEASUREMENT OF INTERIOR SPACES THROUGH GUIDED MODELING OF DIMENSIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12579736
METHOD AND DEVICE FOR GENERATING THREE-DIMENSIONAL IMAGE BY USING PLURALITY OF CAMERAS
2y 5m to grant Granted Mar 17, 2026
Patent 12561105
ONLINE ELECTRONIC WHITEBOARD CONTENT SYNCHRONIZATION AND SHARING SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12561859
Method and System for Visualizing a Graph
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+37.9%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
