Prosecution Insights
Last updated: April 18, 2026
Application No. 18/227,701

Image-Assisted Segmentation of Object Surface for Mobile Dimensioning

Non-Final OA (§102/§103)
Filed: Jul 28, 2023
Examiner: VARNDELL, ROSS E
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Zebra Technologies Corporation
OA Round: 2 (Non-Final)
Grant Probability: 85% (Favorable)
OA Rounds: 2-3
To Grant: 2y 4m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 85% (above average); 520 granted / 615 resolved; +22.6% vs TC avg
Interview Lift: +13.0% (moderate) for resolved cases with an interview
Avg Prosecution: 2y 4m (typical timeline); 28 applications currently pending
Total Applications: 643 across all art units
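The headline figures above appear to relate by simple arithmetic. A minimal sketch of that reading (an assumption about the tool's model, which is not documented here; variable names are illustrative):

```python
# Hedged reconstruction of how the card figures above may relate.
granted, resolved = 520, 615
career_allow_rate = granted / resolved               # ~0.845, shown as 85%
interview_lift = 0.130                               # "+13.0% Interview Lift"
with_interview = career_allow_rate + interview_lift  # ~0.975, shown as 98%
print(f"{career_allow_rate:.0%} baseline; {with_interview:.0%} with interview")
```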

Statute-Specific Performance

§101:  6.3%  (-33.7% vs TC avg)
§103: 66.9%  (+26.9% vs TC avg)
§102:  6.4%  (-33.6% vs TC avg)
§112: 10.7%  (-29.3% vs TC avg)
Baseline is the estimated Tech Center average. Based on career data from 615 resolved cases.
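To read the deltas, a short sketch that back-computes the Tech Center baseline implied by each row (the implied baselines are my arithmetic, not figures published by the tool or the USPTO):

```python
# Back-compute the TC-average baseline implied by each "vs TC avg" delta.
rates = {"§101": (0.063, -0.337), "§103": (0.669, +0.269),
         "§102": (0.064, -0.336), "§112": (0.107, -0.293)}
for statute, (examiner_rate, delta) in rates.items():
    implied_tc_avg = examiner_rate - delta  # e.g., 6.3% - (-33.7%) = 40.0%
    print(f"{statute}: examiner {examiner_rate:.1%} "
          f"vs implied TC avg {implied_tc_avg:.1%}")
```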

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) is acknowledged. Applicant claims priority from U.S. Provisional Application No. 63/397,975, filed on August 15, 2022.

Claim Rejections - 35 USC § 102/103

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-6 and 11-16 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pugh et al. (US20210142497A1 – hereinafter "Pugh").

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6 and 11-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pugh et al. (US20210142497A1 – hereinafter "Pugh") in view of Boardman et al. (US 9495764 B1 – hereinafter "Boardman").

Claim 1. Pugh discloses a method in a computing device, the method comprising:

capturing, via a depth sensor ([0031]: "The user device 210 can include ... one or more sensors (e.g., cameras 213, IMUs 214, depth sensors 215) ..."), (i) a point cloud depicting ([0042]: "The metric scale information is preferably a point cloud (e.g. a set of points such as 50 points, 100 points, etc.)") an object resting on a support surface ([0103]: "S450 preferably identifies horizontal planes (e.g., floors), but can additionally or alternatively identify vertical planes (e.g., walls)"), and (ii) a two-dimensional image depicting the object and the support surface ([0036]: "S100 preferably includes receiving and/or capturing images and associated camera and sensor data");

detecting, from the point cloud, the support surface and a portion of an upper surface of the object ([0103], cited above; [0104]: "determining horizontal planes based on fitting planes to point clouds with a surface normal parallel to the gravity vector … determining floor planes by filtering point clouds for points labeled as semantic floor classes"; [0122]: "information identifying pixels that correspond to key geometric surfaces (e.g., walls, floors, horizontal surfaces, etc.)");

labelling a first region of the image corresponding to the portion of the upper surface as a foreground region ([0078]: "Segmenting the scene S420 preferably functions to determine semantic probabilities for each of a set of pixels"; Abstract: "generating at least one segmentation mask that identifies real objects included in the photorealistic image"; [0080]: "determining a class (and/or a class probability) for each of pixel forming the segments"; [0128]: "The occlusion masks are determined based on ... the semantic segmentation map"; [0121]: "S500 preferably functions to determine foreground occlusion masks");

based on the first region, performing a foreground segmentation operation on the image to segment the upper surface of the object from the image ([0078]: "performing at least one semantic segmentation process (e.g., using a classifier, a neural network, a convolutional neural network)"; object segmentation: [0124]: "identifying edge pixels included in edges of real objects included in the photorealistic image, by using at least one generated object mask");

determining, based on the point cloud, a three-dimensional position of the upper surface segmented from the image (3D positioning: [0084]: "estimating dense pixelwise geometry S430 functions to determine a dense depth map for the image"; Abstract: "generating a dense depthmap that includes depth estimates for each pixel of the photorealistic image"; point cloud fusion: [0108]: "fusing the photogrammetry point cloud with the neural depth map"; [0106]: "combining data to determine a fused depth map for the image"); and

determining dimensions of the object based on the three-dimensional position of the upper surface ([0092]: "using metric scale depth estimates from depth sensors"; height calculations: [0101]: "computing the camera height can be computed using the following equation: [mathematical formula]"; object height: [0099]: "global scale can be determined by detecting heights of objects"; Pugh provides 3D positions with metric scale, which inherently enables dimensional determination).

Pugh teaches all of the limitations with minimal differences. The examiner believes that the "dimensions of the object based on the three-dimensional position of the upper surface" limitation is inherently taught, since Pugh provides the technical implementation framework.
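The claim 1 steps amount to a plane-based dimensioning pipeline: find the support plane and the box's upper surface in a metric point cloud, take height from the plane separation, and take length/width from the top footprint. A minimal sketch under those assumptions (an illustration of the claimed flow, not Pugh's or the applicant's implementation; the percentile-band plane detection and all names are assumed):

```python
# Illustrative dimensioning sketch; NOT Pugh's method or the applicant's code.
import numpy as np

def dimension_box(points: np.ndarray, tol: float = 0.01):
    """points: (N, 3) metric-scale point cloud with z aligned to gravity."""
    z = points[:, 2]
    support_z = np.percentile(z, 5)            # assume floor = lowest dense band
    on_support = np.abs(z - support_z) < tol
    top_z = np.percentile(z[~on_support], 95)  # assume box top = highest band
    on_top = np.abs(z - top_z) < tol
    height = top_z - support_z                 # separation of parallel planes
    # Length/width from the axis-aligned footprint of the segmented top
    # (a real system would fit an oriented rectangle to the 2D mask instead).
    span = np.ptp(points[on_top][:, :2], axis=0)
    length, width = np.sort(span)[::-1]
    return float(length), float(width), float(height)
```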
Nevertheless, Boardman adds the explicit dimensional-measurement focus (dimension calculation: C30:L50-55: "In block 630, the routine then uses the generated model to estimate the object volume, and to optionally calculate or otherwise estimate measurement values for other attributes of the object"; volume calculation: C31:L3-7, L25-30, and L49-51: "the calculation of volume of an object may be performed based on measuring the amount of space between the surface of the pile and the ground it sits on, referred to generally in this specific example as the top and base surfaces, respectively … object volume is obtained by computing the volume of the object component … the volume may be obtained by computing the integral of the difference between the top field and a field derived from the bare earth model."). Therefore, in the event Pugh does not teach the dimensional measurement, it would have been obvious to one of ordinary skill in the art to combine Pugh and Boardman before the effective filing date to arrive at the claimed invention. The motivation for this combination of references would have been that both references address mobile device-based object analysis and offer complementary solutions (3D scene processing enhances measurement accuracy), that there is an industry trend toward mobile AR applications combining multiple sensor modalities, and that the combination offers an expected benefit of improved dimensional accuracy through sophisticated 3D processing. Finally, there would have been a reasonable expectation of success in combining Pugh and Boardman: the references target compatible mobile hardware platforms, real-time 3D processing is feasible on mobile devices, and the ARKit/ARCore platforms demonstrate the combination of depth sensing with measurement.

Claims 2 and 12. Pugh discloses the method of claim 1, further comprising: presenting the dimensions on a display of the computing device (Pugh [0031]: "The user device 210 can include ... one or more displays ... displays rendered scenes").

Claims 3 and 13. Pugh discloses the method of claim 1, further comprising: labelling a second region of the image corresponding to the support surface as a background region (Pugh [0078]: "generate object masks" and [0128]: "background region" classifications).

Claims 4 and 14. Pugh discloses the method of claim 3, further comprising: detecting, in the point cloud, a further surface distinct from the upper surface and the support surface (Pugh [0103]: "identify vertical planes (e.g., walls)"); and labelling a third region of the image corresponding to the further surface as a probable background region (Pugh [0080]: multiple object classifications with probability scores; [0139]: background mask).

Claims 5 and 15. Pugh discloses the method of claim 4, wherein detecting the further surface includes detecting a portion of the point cloud with a normal vector different from a normal vector of the upper surface by at least a threshold (Pugh [0104]: "surface normal parallel to the gravity vector" and [0113]: "surface normal accuracy"; [0115]: "A threshold over Euclidean distance can be used.").

Claims 6 and 16. Pugh discloses the method of claim 4, further comprising: labelling a remainder of the image as a probable foreground region (Pugh [0080]: multi-class pixel classification with probability assignments; [0121]: "S500 preferably functions to determine foreground occlusion masks").
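The normal-vector test of claims 5 and 15 is straightforward to picture: estimate a local normal for a candidate patch and flag the patch when its angle to the upper surface's normal meets a threshold. A hedged sketch (the PCA-style normal estimate and the 15-degree default are illustrative assumptions, not values from Pugh or the claims):

```python
# Illustrative normal-deviation test for claims 5/15; thresholds are assumed.
import numpy as np

def patch_normal(patch: np.ndarray) -> np.ndarray:
    """Unit normal of a (k, 3) point patch via the least-variance direction."""
    centered = patch - patch.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def deviates_from_upper(patch: np.ndarray, upper_normal: np.ndarray,
                        threshold_deg: float = 15.0) -> bool:
    cos = abs(float(patch_normal(patch) @ upper_normal))  # sign-agnostic
    angle = np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))
    return angle >= threshold_deg
```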
Claim 11. A computing device, comprising: a depth sensor; and a processor configured to … The teachings of Pugh, or of Pugh and Boardman, render claim 11 anticipated or obvious for the reasons discussed above for claim 1, mutatis mutandis.

Claims 7-8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Pugh, or Pugh and Boardman, as applied to claims 1 and 11 above, and further in view of Azam et al. (US 20240054621 A1 – hereinafter "Azam").

Claims 7 and 17. Pugh discloses the method of claim 1, further comprising steps performed prior to determining dimensions of the object. Pugh discloses all of the subject matter described above except for specifically teaching "determining whether the point cloud exhibits multipath artifacts by: selecting a candidate point on the upper surface; determining a reflection score for the candidate point; and comparing the reflection score to a threshold." However, Azam, in the same field of endeavor, teaches determining whether the point cloud exhibits multipath artifacts ([0007]: "When generating the point cloud, artifacts (i.e., aberrations) can be unintendedly captured by the TOF scanner."; [0048]: "removing reflection artifacts from point clouds"; [0091]) by: selecting a candidate point on the upper surface ([0012]: "selecting candidate 3D points encompassed by the bounding coordinates in the 3D space"; [0107]: "The software application 604 is configured to pick the 3D data points inside the bounding box"); determining a reflection score for the candidate point; and comparing the reflection score to a threshold ([0012]: "clustering the candidate 3D points by intensity values or reflectance values; and selecting at least one of the 3D points as the reflection artifact based at least in part on a threshold associated with the intensity values or the reflectance values").

Therefore, it would have been obvious to one of ordinary skill in the art to combine Pugh and Azam before the effective filing date of the claimed invention. The motivation for this combination of references would have been to solve the common problem that both references address (depth-sensor accuracy issues) using compatible technologies (ToF sensors and point cloud processing) to provide complementary solutions (Pugh provides the mobile dimensioning framework; Azam adds artifact detection), addressing the industry need for artifact-free point clouds in accurate mobile dimensioning. The combination of Pugh and Azam has a reasonable expectation of success since both references use similar ToF sensor technology with compatible point cloud processing pipelines, and Azam's artifact detection could enhance Pugh's dimensioning accuracy.

Claims 8 and 18. The method of claim 7, wherein selecting the candidate point includes identifying a non-planar region of the upper surface (Boardman C16:L35-40: "the surface of the object 200 may have various irregularities or other features that may be identified in the image"), and selecting the candidate point from the non-planar region (Azam [0012]: "selecting candidate 3D points encompassed by the bounding coordinates in the 3D space"; [0107]: "The software application 604 is configured to pick the 3D data points inside the bounding box"). Therefore, it would have been obvious to one of ordinary skill in the art to combine Pugh, Boardman, and Azam before the effective filing date of the claimed invention. The motivation for this combination of references would have been to use Boardman's surface irregularity detection to identify areas with "curvature," "lack of continuity," and/or "cavities, indentation, protrusions" (Boardman C16), to apply Pugh's surface normal analysis to quantify areas where the surface normals deviate from an expected planar projection, and to use Azam's candidate point selection methodology to select specific points within the identified non-planar regions for artifact analysis, yielding the expected result of improved detection of artifacts in non-planar areas captured by the depth sensor.
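Claims 7 and 17 describe a per-point artifact test: pick a candidate point on the upper surface, compute a reflection score, and compare it to a threshold. A hedged sketch of one way to realize it (treating the score as a reflectance z-score is my assumption; Azam's own approach clusters candidate points by intensity or reflectance values against a threshold):

```python
# Illustrative multipath-artifact test for claims 7/17; the z-score form and
# the 3.0 threshold are assumptions, not Azam's or the applicant's method.
import numpy as np

def is_multipath_suspect(reflectance: np.ndarray, candidate_idx: int,
                         z_threshold: float = 3.0) -> bool:
    """reflectance: per-point return intensities for upper-surface points."""
    mu, sigma = reflectance.mean(), reflectance.std()
    if sigma == 0.0:
        return False                           # uniform returns: no outliers
    score = abs(reflectance[candidate_idx] - mu) / sigma  # reflection score
    return score >= z_threshold                # compare score to threshold
```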
Allowable Subject Matter

Claims 9-10 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon, but considered pertinent to applicant's disclosure, is listed on the PTO-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ross Varndell, whose telephone number is (571) 270-1922. The examiner can normally be reached M-F, 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, O'Neal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Ross Varndell/
Primary Examiner, Art Unit 2674

Prosecution Timeline

Jul 28, 2023
Application Filed
Jul 29, 2025
Non-Final Rejection — §102, §103
Feb 02, 2026
Response Filed
Apr 09, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603810
System and Method for Communications Beam Recovery
2y 5m to grant • Granted Apr 14, 2026
Patent 12597238
AUTOMATIC IMAGE VARIETY SIMULATION FOR IMPROVED DEEP LEARNING PERFORMANCE
2y 5m to grant • Granted Apr 07, 2026
Patent 12582348
DEVICE AND METHOD FOR INSPECTING A HAIR SAMPLE
2y 5m to grant • Granted Mar 24, 2026
Patent 12579441
SYSTEMS AND METHODS FOR IMAGE RECONSTRUCTION
2y 5m to grant • Granted Mar 17, 2026
Patent 12579786
SYSTEM AND METHOD FOR PROPERTY TYPICALITY DETERMINATION
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 85%
With Interview: 98% (+13.0%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 615 resolved cases by this examiner. Grant probability derived from career allow rate.
