Prosecution Insights
Last updated: April 19, 2026
Application No. 17/152,456

IMAGE BASED MEASUREMENT ESTIMATION

Status: Non-Final OA (§103)
Filed: Jan 19, 2021
Examiner: BURLESON, MICHAEL L
Art Unit: 2681
Tech Center: 2600 (Communications)
Assignee: Home Depot Product Authority LLC
OA Round: 7 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 2y 10m
With Interview: 68%

Examiner Intelligence

Career Allow Rate: 75% (365 granted / 489 resolved; +12.6% vs TC avg; above average)
Interview Lift: -6.1% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 10m typical timeline (36 currently pending)
Career History: 525 total applications across all art units
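The headline figures above follow from simple arithmetic on the career data. The sketch below assumes the dashboard rounds the career allow rate and applies the interview lift additively; that is an inference about the tool's method, not something it documents.

```python
# Reproduce the dashboard's headline numbers from the career data above.
# Assumption (undocumented): grant probability is the rounded career allow
# rate, and the interview figure applies the -6.1% lift additively.
granted, resolved = 365, 489
allow_rate = 100 * granted / resolved        # 74.64...%
interview_lift = -6.1                        # percentage points

print(round(allow_rate))                     # 75 -> "75% Grant Probability"
print(round(allow_rate + interview_lift, 1)) # 68.5 -> displayed as "68% With Interview"
```

The with-interview figure lands between 68 and 69, so the displayed 68% suggests the tool truncates rather than rounds; either way the inputs reconcile with the outputs.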

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 55.2% (+15.2% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 489 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments (see Applicant's Remarks, pages 5-7, filed 10/27/25) with respect to the rejection of claims 1-14 have been fully considered and are persuasive. The rejection has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Chen et al. (US 2016/0314370).

Regarding claim 1, Applicant states that the prior art of record fails to teach identifying, by the server, the reference object in the image as captured by the user device to retrieve a known measurement (Applicant's Remarks, pages 5-6). Examiner agrees. Chen et al. teaches: "The image server 104 may receive one or more images from the camera 108 or from the image database 106. The image may include at least a first object, e.g. a common object, and a second object, e.g. the object of interest, such as a building or other structure. A common object may be an object with identifiable characteristics which is likely to appear in images captured by a user and which has been associated with a predetermined measurement assumption or predetermined measurement assumption range" (paragraph 0032).

Applicant states that Thornberry et al. fails to teach estimating a measurement of the building feature based on the building feature segment and in accordance with a relationship between the known measurement of the reference object and the reference object segment (Applicant's Remarks, pages 6-7). Examiner agrees. Thornberry et al. teaches Measurement Value Calculation (estimation): a data structure containing data that can be used to determine (estimate) the appropriate measurement record to be used (paragraphs 0081-0100). Note: the measurement record is a record of measurements of a building feature (the building feature would be, for example, the roof), as taught in paragraphs 0082-0100 or 0031-0050, thereby teaching estimating a measurement of the building feature based on the building feature segment and in accordance with a relationship between the known measurement of the reference object and the reference object segment.

Applicant also states that Thornberry et al. does not involve segmenting images, does not involve reference objects in images, and does not teach estimating measurements through proportional relationships between image segments (Applicant's Remarks, page 7). Examiner agrees. Chen teaches: "Measurements of objects of interest in an image may be determined based on measurement assumptions of common objects which appear in the same image. The estimates or assumptions of the measurements of common objects in the image may be used to determine the measurements of objects of interest. A UE may utilize the measurement assumptions of the common objects in the image to simultaneously calibrate camera parameters and measure object dimensions" (paragraphs 0028, 0040, and 0041).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 8,705,893) in view of Chen et al. (US 2016/0314370), further in view of Segev et al. (US 2021/0073449).

Regarding claim 1, Zhang et al. teaches a method comprising: receiving, by a server, the image including a building feature and a reference object (a camera (user device) captures images of building features (walls, doors, etc.) as the user moves through a building; column 2, lines 49-67; note: the images are received by a processor/computer for analysis). Zhang et al. fails to teach identifying, by the server, the reference object in the image as captured by the user device to retrieve a known measurement of the reference object; estimating, by the server, a measurement of the building feature based on the building feature segment and in accordance with a relationship between the known measurement of the reference object and the reference object segment; or outputting, from the server to the user device, the estimated measurement of the building feature.

Chen et al. teaches identifying, by the server, the reference object in the image as captured by the user device to retrieve a known measurement of the reference object (paragraph 0032, quoted above); estimating, by the server, a measurement of the building feature based on the building feature segment and in accordance with a relationship between the known measurement of the reference object and the reference object segment (paragraphs 0028, 0040, and 0041, quoted above); and outputting, from the server to the user device, the estimated measurement of the building feature ("the UE 102 or image server 104 may cause a display of the image on a user interface"; paragraph 0042). Therefore, it would have been obvious to a person of ordinary skill in the art to have modified Zhang et al.'s server to identify the reference object in the image as captured by the user device to retrieve a known measurement of the reference object; estimate a measurement of the building feature based on the building feature segment and in accordance with a relationship between the known measurement of the reference object and the reference object segment; and output, from the server to the user device, the estimated measurement of the building feature.
The reason for doing so would have been to create an efficient and accurate way of identifying and measuring the building features/reference objects of Zhang et al.

Zhang et al. in view of Chen et al. fails to teach segmenting, by the server, the image to form a segmented image using an image segmentation machine learning model, the segmented image comprising a set of image segments overlaid on the image as captured, the set of image segments comprising a reference object segment and a building feature segment. Segev et al. (US 2021/0073449) teaches segmenting, by the server, the image to form a segmented image using an image segmentation machine learning model ("using artificial intelligence to segment walls and rooms from images of floor plans" (paragraph 0669); "a Unet or Mask RCNN model may be used to segment walls from the images of 2D floorplan" (paragraph 0670); identifying wall boundaries may include performing one of a variety of types of analysis on the floor plan to extract room features such as doors, windows, and walls, as well as wall length, area, and many other possible features, the extraction of these features being based on lines within the floor plan (paragraph 0671)); and the segmented image comprising a set of image segments overlaid on the image as captured, the set of image segments comprising a reference object segment and a building feature segment (architectural features within region 2802, such as chair 2812, window 2814, door 2816, and area 2818; based on these and other architectural features, the disclosed method may determine semantic designation 2810, which may specify the chairman's office as "Office 1"; semantic designation 2810, as well as other semantic designations for rooms in region 2802, may be associated with floor plan 2800; "These might be overlaid on an image of the floor plan itself, be constructed as a separate image layer" (paragraph 0786); note: the architectural features, which read on building features, are images overlaid onto an image of the floor plan 2800).

Therefore, it would have been obvious to a person of ordinary skill in the art to have modified Zhang et al. in view of Chen et al.'s server to include estimating a measurement of the building feature based on the building feature segment and in accordance with a relationship between the known measurement of the reference object and the reference object segment. The reason for doing so would have been to create an efficient and cheaper way of measuring building features/reference objects.

Regarding claim 5, Zhang et al. teaches estimating a dimension of the reference object from one or more survey responses (the camera 14 and/or system 10 is moved about an interior floor of a building or other like structure being studied; for example, a user wearing the backpack 12 walks or strolls at a normal pace (e.g., approximately 0.5 meters per second) down the hallways and/or corridors and/or through the various rooms of a given floor of the building; column 6, lines 14-23).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 8,705,893) in view of Chen et al. (US 2016/0314370), further in view of Segev et al. (US 2021/0073449), and further in view of Carter (US 5,992,113).

Regarding claim 2, Zhang et al. in view of Chen et al. further in view of Segev et al. does not teach obtaining, from a database of objects, a set of objects that fit within a width of the building feature based on the estimated measurement of the building feature.
Carter teaches obtaining, from a database of objects, a set of objects that fit within a width of the building feature based on the estimated measurement of the building feature (column 1, lines 15-21 teaches that in modern construction a slight gap of up to 1 inch (less than 2 inches) between the window edge and the jamb is allowed to account for measuring error; note: the measuring error must also be less than an inch in order for the window to fit into the wall). Therefore, it would have been obvious to a person of ordinary skill in the art to have modified Zhang et al. in view of Chen et al. further in view of Segev et al. such that the error of the estimated measurement is less than 2 inches. The reason for doing so would have been to allow the adjusted floor plan/drawing to more accurately reflect the actual construction.

Claims 3, 4, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 8,705,893) in view of Chen et al. (US 2016/0314370), further in view of Segev et al. (US 2021/0073449), and further in view of Cornelison et al. (US 11,257,132).

Regarding claim 3, Zhang et al. in view of Chen et al. further in view of Segev et al. does not teach retrieving a standard size of the reference object and estimating the measurement of the building feature based on the standard size of the reference object. Cornelison et al. teaches retrieving a standard size of the reference object ("the standardized reference objects may be stored in a database along with their standard sizes. For example, the database may indicate that a standard size for an outlet or outlet plate or cover is 2.75″×4.5″" (column 12, lines 63-67)); and estimating the measurement of the building feature based on the standard size of the reference object (column 2, lines 30-35: a standardized reference object may be used to identify a size of another object). Therefore, it would have been obvious to a person of ordinary skill in the art to have modified Zhang et al. in view of Chen et al. further in view of Segev et al. to include retrieving a standard size of the reference object and estimating the measurement of the building feature based on the standard size of the reference object. The reason for doing so would have been to create an efficient and cheaper way of measuring building features/reference objects.

Regarding claim 4, Zhang et al. in view of Chen et al. further in view of Segev et al. further in view of Cornelison et al. teaches wherein the building feature and the reference object are on a same wall (Cornelison: if the image analysis and device control system 230 determines that the room is a front hallway, the plurality of standardized reference objects may be, for example, a key hole, a door handle, a door frame, a deadbolt, a door hinge, a stair, a railing, and the like; based on the room indication output determined at step 315, a plurality of standardized reference objects associated with the room (column 12, lines 38-60); note: the reference objects can be found on the same wall of a room since they are all objects associated with a door, which is on one wall of a room).

Regarding claim 7, Zhang et al. in view of Chen et al. further in view of Segev et al. does not teach determining, based on the estimated measurement, a standard-sized product and outputting an indication of the standard-sized product with the estimated measurement. Cornelison et al. teaches determining, based on the estimated measurement, a standard-sized product and outputting an indication of the standard-sized product with the estimated measurement (column 2, lines 30-35: a known dimension associated with the standardized reference object may be used to identify a size of another object; note: if the size of the identified object is known, the identified object can itself be used as a reference object). Therefore, it would have been obvious to a person of ordinary skill in the art to have modified Zhang et al. in view of Chen et al. further in view of Segev et al. to include determining, based on the estimated measurement, a standard-sized product and outputting an indication of the standard-sized product with the estimated measurement. The reason for doing so would have been to create an efficient and cheaper way of measuring building features/reference objects.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 8,705,893) in view of Chen et al. (US 2016/0314370), further in view of Segev et al. (US 2021/0073449), and further in view of Chen (US 2003/0147553).

Regarding claim 6, Zhang et al. in view of Chen et al. further in view of Segev et al. does not teach wherein the building feature in the image is partially occluded. Chen ('553) teaches wherein the building feature in the image is partially occluded (paragraphs 0019, 0026). Therefore, it would have been obvious to a person of ordinary skill in the art to have modified Zhang et al. in view of Chen et al. further in view of Segev et al. to include wherein the building feature in the image is partially occluded. The reason for doing so would have been to allow all features of the building to be processed and more valuable data to be determined for users.

Allowable Subject Matter

Claims 8-14 are allowed.
Regarding claim 8, Thomas et al. (US 2006/0282235) in view of Segev et al. (US 2021/0073449), further in view of Thornberry et al. (US 2019/0066049), and further in view of Zhang et al. (US 8,705,893) teach the following.

Thomas et al. teaches a server device (KBS of paragraphs 0013, 0067; note: a server is a computer that shares information with other computers), comprising: a processor (knowledge-based system (KBS) processor 484; paragraph 0103) and a non-transitory, computer-readable memory storing instructions (paragraph 0158); wherein the processor executes the instructions to receive (KBS of paragraphs 0013, 0067; uploaded or stored in a central computer server, paragraph 0014; all processing and printing can be performed at the central location) an image (raw data is received by the KBS, paragraph 0106; note: per paragraph 0013 the KBS can generate an image from raw data, so the raw data is another form of the image to the KBS system; see also re-usable measured drawings, paragraph 0015, and interpreting the raw data and converting it to a meaningful set of structure descriptions, paragraph 0104), the image including a building feature (for example, whether it defines a door, a fixture, or a piece of furniture (building feature); Fig. 17 and paragraphs 0105 and 0107).

Segev et al. teaches locating and segmenting a reference object in the image as captured by the user device to form a first segmented image using an image segmentation machine learning model (paragraphs 0669-0671, quoted above), the first segmented image comprising a first set of segments overlaid on the image as captured (paragraph 0786, quoted above).

Thornberry et al. teaches estimating a measurement of the building feature based on the building feature segment and in accordance with a relationship between the known measurement of the reference object and the reference object segment (the Product Pricing to Measurement Data Association Data Repository ("PPMDADA") 210 houses records that allow the system to automatically determine the appropriate measurement record in the measurement data repository 204 taken by the estimator (paragraph 0078); the system can then access the PPMDADA 210 to determine the appropriate value from the measurement data repository 204, and would then consult the measurement data repository 204 to determine the appropriate measurement value and unit of measure (paragraphs 0111-0113); the user adds multiple items from the product pricing data repository 208 to the estimate (paragraph 0115); note: the known measurements are those stored in the PPMDADA 210 and are compared to the measurements input by the user into the measurement data repository 204 to get an estimate).

Zhang teaches receiving an image as captured by a user device, the image including a building feature and a reference object (the acquired RGB-D video and/or image data from the camera 14; a reference frame (reference object) with the floor (building feature); computing frame-to-frame offsets from the continuous or near-continuous RGB-D input obtained by the camera 14 to generate floor plan drawings to be further modified and annotated by a user; column 6, lines 15-46).

Thomas et al. in view of Segev et al., further in view of Thornberry et al., and further in view of Zhang et al. fails to teach: locating and segmenting the building feature in the image as captured by the user device to form a second segmented image using the image segmentation machine learning model, the second segmented image comprising a second set of segments overlaid on the image as captured; estimating a measurement of the building feature based on the second segmented image and in accordance with a relationship between a known measurement of the reference object and the reference object segment in the first segmented image; and outputting the estimated measurement of the building feature to the user device.

Claims 9-14 are allowed by virtue of their dependence on allowable independent claim 8.

Conclusion

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL BURLESON/
Patent Examiner, Art Unit 2681
February 18, 2026

/AKWASI M SARPONG/
SPE, Art Unit 2681
2/23/2026
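Stripped of the legal framing, the technique this rejection turns on is a simple proportion: segment the image, find a reference object whose real-world size is known (Cornelison's example is a 2.75″-wide outlet plate), and scale the building feature's segment by the reference's units-per-pixel ratio. The sketch below is a hypothetical illustration of that idea only; it is not the applicant's claimed method or any cited reference's implementation, and the helper names and toy mask are invented.

```python
# Hypothetical sketch: estimate a building feature's real-world width from the
# proportional relationship between its image segment and a reference-object
# segment of known size, given a per-pixel segmentation mask.

def segment_width(mask, label):
    """Width in pixels of the bounding box of `label` in a 2D segment mask."""
    cols = [x for row in mask for x, v in enumerate(row) if v == label]
    return max(cols) - min(cols) + 1

def estimate_measurement(mask, ref_label, feat_label, ref_known_size):
    """Scale the feature's pixel width by the reference's units-per-pixel ratio."""
    units_per_pixel = ref_known_size / segment_width(mask, ref_label)
    return segment_width(mask, feat_label) * units_per_pixel

# Toy segmented image: 0 = background, 1 = reference object, 2 = building feature.
mask = [
    [0, 1, 1, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 2, 2, 2, 2],
    [0, 0, 0, 0, 2, 2, 2, 2],
]

# Reference is 2 px wide and known to be 2.75 inches wide (outlet plate),
# so the 4-px-wide feature is estimated at 5.5 inches.
print(estimate_measurement(mask, ref_label=1, feat_label=2, ref_known_size=2.75))  # 5.5
```

In practice the mask would come from a segmentation model (the Segev reference mentions U-Net and Mask R-CNN) and the scaling would use calibrated geometry rather than raw bounding-box widths, but the proportional core is the same.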

Prosecution Timeline

Jan 19, 2021
Application Filed
Apr 15, 2023
Non-Final Rejection — §103
Aug 10, 2023
Applicant Interview (Telephonic)
Aug 10, 2023
Examiner Interview Summary
Aug 24, 2023
Response Filed
Sep 07, 2023
Final Rejection — §103
Nov 14, 2023
Response after Non-Final Action
Nov 27, 2023
Response after Non-Final Action
Dec 13, 2023
Request for Continued Examination
Dec 18, 2023
Response after Non-Final Action
Jan 27, 2024
Non-Final Rejection — §103
May 02, 2024
Response Filed
Jul 27, 2024
Final Rejection — §103
Nov 27, 2024
Response after Non-Final Action
Dec 10, 2024
Response after Non-Final Action
Jan 03, 2025
Request for Continued Examination
Jan 10, 2025
Response after Non-Final Action
Jan 11, 2025
Non-Final Rejection — §103
Apr 03, 2025
Interview Requested
Apr 11, 2025
Applicant Interview (Telephonic)
Apr 16, 2025
Response Filed
Apr 19, 2025
Examiner Interview Summary
Jul 10, 2025
Non-Final Rejection — §103
Oct 27, 2025
Response Filed
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603965
PRINTING DEVICE SETTING EXPANDED REGION AND GENERATING PATCH CHART PRINT DATA BASED ON PIXELS IN EXPANDED REGION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585826
DOCUMENT AUTHENTICATION USING ELECTROMAGNETIC SOURCES AND SENSORS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12566125
SEQUENCER FOCUS QUALITY METRICS AND FOCUS TRACKING FOR PERIODICALLY PATTERNED SURFACES
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561548
SYSTEM SIMULATING A DECISIONAL PROCESS IN A MAMMAL BRAIN ABOUT MOTIONS OF A VISUALLY OBSERVED BODY
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12562549
LIGHT EMITTING ELEMENT, LIGHT SOURCE DEVICE, DISPLAY DEVICE, HEAD-MOUNTED DISPLAY, AND BIOLOGICAL INFORMATION ACQUISITION APPARATUS
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 75% (68% with interview; -6.1% lift)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 489 resolved cases by this examiner. Grant probability is derived from the career allow rate.
