Prosecution Insights
Last updated: April 19, 2026
Application No. 18/617,722

APPARATUS LOCALISATION

Non-Final OA: §101, §102, §103

Filed: Mar 27, 2024
Examiner: PEDAPATI, CHANDHANA
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Oxford University Innovation Limited
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 64% (14 granted / 22 resolved; +1.6% vs TC avg)
Interview Lift: +32.5% (strong lift among resolved cases with interview)
Typical Timeline: 2y 10m avg prosecution; 26 currently pending
Career History: 48 total applications, across all art units
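The interview-lift arithmetic can be reproduced from the figures the report does state (14 granted of 22 resolved, and a 96% grant probability with interview). A minimal sketch, assuming the lift is measured against the career rate; the small gap to the displayed +32.5% suggests the tool may instead compare with-interview against without-interview cases, a split not given here:

```python
# Sketch of the interview-lift arithmetic, using only figures stated in
# the report: 14 granted / 22 resolved (career) and a 96% grant
# probability when an interview is held.

career_allow = 14 / 22            # 0.636..., displayed as 64%
with_interview = 0.96             # reported grant probability with interview

lift = with_interview - career_allow
print(f"career allow rate: {career_allow:.1%}")   # 63.6%
print(f"interview lift:    {lift:+.1%}")          # +32.4%, shown as +32.5%
```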

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 22 resolved cases.
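Each statute card pairs an examiner rate with a delta versus the Tech Center average, so the implied TC average is simply rate minus delta. A quick consistency check across the four cards:

```python
# Per-statute cards from the report: examiner rate (%) and delta vs the
# Tech Center average (%). The implied TC average is rate - delta.

cards = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (11.7, -28.3),
    "103": (47.0, +7.0),
    "102": (18.1, -21.9),
    "112": (20.9, -19.1),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in cards.items()}
print(implied_tc_avg)  # every card implies the same 40.0% TC average estimate
```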

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

Limitations appearing inside of {} are intended to indicate the limitations not taught by said prior art(s)/combinations. Claims 1-20 are pending in this application.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

- input module in claim 17
- segmentation module in claim 17
- descriptor module in claim 17
- matching module in claim 17
- localisation module

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter (i.e., process, machine, manufacture, or composition of matter) because the claim, reciting "A machine-readable medium," is directed to a program/signal per se, mere information in the form of data, without a tangible medium. Note, it is not necessary for a claim to fall into a single category, as long as it is clear that it falls into at least one category (see MPEP § 2106.03).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by "Zhang" (Zhang, Lintong, et al., "InstaLoc: One-shot Global Lidar Localisation in Indoor Environments through Instance Learning", pages 1-11, URL: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=wWODV5wAAAAJ&citation_for_view=wWODV5wAAAAJ:ufrVoPGSRksC, May 16, 2023), as cited in the IDS (03/27/2024).

1. Zhang teaches a computer-implemented method of localisation of an imaging apparatus in an environment, the method comprising: receiving a point cloud map indicative of the environment, the point cloud map captured by the imaging apparatus located at a position within the environment (Zhang, [p 2, Col 2, §III.A., ¶1]; "The map M is a collection of registered lidar scans, Pt = {mi,t ∈ R3}, accumulated over time."); segmenting the point cloud map into a plurality of object segments each comprising an object feature (Zhang, [p 3, Col 2, §III.B., ¶1]; "the network predicts for each point pk a semantic label sk corresponding to the object class that the point belongs to (e.g. chair, table, wall, ceiling) and an instance label ik representing the unique object that the point corresponds to (e.g. chair1, chair14 or chair42)"); assigning a unique descriptor to each of the object features (Zhang, [p 3, Col 1, §III.A., ¶2]; "we use object level descriptors that capture the distinguishing features of each object"); matching at least one of the object features to a corresponding feature in an existing map of the environment using the unique descriptor (Zhang, [p 3, Col 1, §III.A., ¶3]; "descriptors of the objects segmented from the query scan need to be matched against the database of objects with descriptors in the map to determine correspondences between the query scan and the map. We use the approach from [1] to group descriptors based on their similarity and to find correspondences."); and localising the position of the imaging apparatus in the environment based on the at least one matched object feature (Zhang, [p 3, Col 1, §III.A., ¶3]; "use RANSAC on a subset of correspondences to estimate the 6-DOF pose of the lidar sensor by aligning the matched objects between the two scans").

2. Zhang teaches the method of claim 1. Zhang further teaches matching a plurality of the object features to respective corresponding features in the existing map of the environment using the unique descriptors of the plurality of the object features; and localising the position of the imaging apparatus in the environment based on the plurality of matched object features (Zhang, [p 3, Col 1, §III.A., ¶3]; "descriptors of the objects segmented from the query scan need to be matched against the database of objects with descriptors in the map to determine correspondences between the query scan and the map. We use the approach from [1] to group descriptors based on their similarity and to find correspondences. Finally, we use RANSAC on a subset of correspondences to estimate the 6-DOF pose of the lidar sensor by aligning the matched objects between the two scans").

3. Zhang teaches the method of claim 2. Zhang further teaches wherein each object feature belongs to an object category indicating the type of object, and at least two of the plurality of the object features belong to different object categories (see Zhang, Table 1, which exhibits features belonging to different object categories). [Image: Zhang, Table 1]

4. Zhang teaches the method of claim 1. Zhang further teaches wherein the point cloud map represents an image of the environment captured by the imaging apparatus in a single position and a single orientation within the environment (see the excerpt from Zhang, Fig. 2, which exhibits a single scan depicting an environment from a single position and orientation). [Image: excerpt from Zhang, Fig. 2]

5. Zhang teaches the method of claim 1. Zhang further teaches wherein segmenting the point cloud map comprises representing the point cloud map using sparse tensors (Zhang, [p 5, Col 1, §IV.B., ¶1]; "sparse tensor framework").

6. Zhang teaches the method of claim 1. Zhang further teaches wherein segmenting the point cloud map comprises, for each point in the point cloud map: predicting a semantic label indicating an object class of the object feature comprising the point (Zhang, Fig. 2, "Semantic classes", and [p 3, Col 2, §III.B., ¶1]; "the network predicts for each point pk a semantic label sk corresponding to the object class that the point belongs to (e.g. chair, table, wall, ceiling)"); and predicting an instance label indicating a unique instance of the object feature comprising the point (Zhang, Fig. 2, "Instance classes", and [p 3, Col 2, §III.B., ¶1]; "an instance label ik representing the unique object that the point corresponds to (e.g. chair1, chair14 or chair42)"). [Image: Zhang, Fig. 2]

7. Zhang teaches the method of claim 1. Zhang further teaches wherein segmenting the point cloud map comprises grouping a plurality of points in the point cloud map, the plurality of points associated with an object feature (Zhang, [p 3, Col 2, §III.B., ¶2]; to obtain correct segmentation, group points using a "radius threshold proportionate to the vertical distance between two beams").

8. Zhang teaches the method of claim 7. Zhang further teaches wherein the point cloud map is obtained using depth imaging (Liu, [p 3, Col 1, §3, ¶1]; "high-resolution LiDAR depth camera"), and grouping the plurality of points is performed using an adaptive radius threshold proportionate to a vertical distance between two depth imaging beams used to capture the point cloud map (Zhang, [p 3, Col 2, §III.B., ¶2]; group points using a "radius threshold proportionate to the vertical distance between two beams").

9. Zhang teaches the method of claim 1. Zhang further teaches wherein segmenting the point cloud map is performed using a neural network (Zhang, [p 3, Col 1, §III.A., ¶4]; "both the instance segmentation module and the instance description module are modeled using deep neural networks which work directly on 3D point cloud data").

10. Zhang teaches the method of claim 1. Zhang further teaches wherein assigning the unique descriptor comprises: for each of the object features, each of the object features comprising a plurality of points of the point cloud map, using a network comprising a series of convolutional layers to generate a plurality of point descriptors each corresponding to a particular point in the plurality of points of the object feature; and pooling the plurality of point descriptors to generate the unique descriptor of the object feature (Zhang, [p 3, Col 2 – p 4, Col 1, §III.C., ¶1]; "The descriptor network output for each object instance Ij is an NjxD tensor where every row is a descriptor of length D for one point in the object instance. Finally, an average pooling layer computes the average of the Nj descriptors to create a single descriptor of length D for each object instance").

11. Zhang teaches the method of claim 1. Zhang further teaches wherein assigning the unique descriptor is performed using a neural network (Zhang, [p 3, Col 1, §III.A., ¶4]; "both the instance segmentation module and the instance description module are modeled using deep neural networks which work directly on 3D point cloud data").

12. Zhang teaches the method of claim 1. Zhang further teaches wherein matching at least one of the object features to a corresponding feature in an existing map of the environment using the unique descriptor comprises: matching one or more object features in the point cloud map to a corresponding feature in the existing map; and identifying a closest match to the corresponding feature from the one or more matched object features using a correspondence grouping method (Zhang, [p 3, Col 1, §III.A., ¶3]; "descriptors of the objects segmented from the query scan need to be matched against the database of objects with descriptors in the map to determine correspondences between the query scan and the map. We use the approach from [1] to group descriptors based on their similarity and to find correspondences. Finally, we use RANSAC on a subset of correspondences to estimate the 6-DOF pose of the lidar sensor by aligning the matched objects between the two scans").

13. Zhang teaches the method of claim 1. Zhang further teaches wherein the plurality of object segments comprises a first object feature in a first object segment and a second object feature in a second object segment, and wherein the first object feature comprises a different number of points of the point cloud map than the second object feature (see Zhang, Fig. 1, which exhibits different object features, such as desk, floor, sofa, etc., each comprising a different number of points of the point cloud). [Image: Zhang, Fig. 1]

14. Zhang teaches the method of claim 1. Zhang further teaches wherein the environment is an indoor environment (Zhang, [p 2, Col 1, §1, ¶1]; "lidar localization approach for indoor environments").

15. Zhang teaches the method of claim 1. Zhang further teaches wherein the imaging apparatus is a lidar and the point cloud map captured by the imaging apparatus is a lidar map (Zhang, [p 2, Col 2, §III.A., ¶1]; "The map M is a collection of registered lidar scans, Pt = {mi,t ∈ R3}, accumulated over time.").

16. Zhang teaches the method of claim 1. Zhang further teaches wherein localising the position of the imaging apparatus comprises identifying the spatial position and directional orientation of the imaging apparatus (Zhang, [p 2, Col 2, §III.A., ¶1]; "We seek to determine the pose of the lidar at time ti defined as follows, xi ≜ [ti, Ri] ∈ SO(3) × R3 (1), where ti ∈ R3 is the translation, Ri ∈ SO(3) is the orientation of Q in M.").

17. Claim 17 is similarly analyzed as analogous claim 1.

18. Zhang teaches the apparatus of claim 17. Zhang further teaches wherein one or more of: the segmentation module comprises a neural network configured to segment the point cloud map into the plurality of object segments; or the descriptor module comprises a neural network configured to assign the unique descriptor to each of the object features (Zhang, Fig. 2, "Semantic classes", and [p 3, Col 2, §III.B., ¶1]; "the network predicts for each point pk a semantic label sk corresponding to the object class that the point belongs to (e.g. chair, table, wall, ceiling) and an instance label ik representing the unique object that the point corresponds to (e.g. chair1, chair14 or chair42)").

19. Zhang teaches the apparatus of claim 17. Zhang further teaches wherein: the apparatus comprises the imaging apparatus located with the apparatus (Zhang, Abstract; "Localization for autonomous robots"); the imaging apparatus is configured to capture the point cloud map (Zhang, [p 2, Col 2, §III.A., ¶1]; "The map M is a collection of registered lidar scans, Pt = {mi,t ∈ R3}, accumulated over time.") and provide the point cloud map to the input module (Zhang, [p 3, Col 1-2, §III.B., ¶1]; "Given a lidar scan, i.e. a set of N 3D as input, the network (i.e., input module) predicts for each point pk a semantic label sk … We use the state-of-the-art Softgroup [22] network architecture to construct this module"); and the localisation module is configured to localise the position of the apparatus (Zhang, [p 4, Col 2, §III.D., ¶1]; "for the 6 DoF pose estimation, we apply a RANSAC step on the subset of correspondences to align the query scan with the prior map, with τ and ϵ.").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang, in view of "Liu" (Liu et al., "HIDA: Towards Holistic Indoor Understanding for the Visually Impaired via Semantic Instance Segmentation With a Wearable Solid-State LiDAR Sensor", Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, October 2021, pp. 1780-1790).

20. Zhang teaches a machine-readable medium having program code stored thereon which (Zhang, [p 5, Col 1, §IV.B., ¶1]; "4GB mobile GPU, NVIDIA Quadro T2000"), when executed by a computer, causes the computer to perform the method of claim 1. While Zhang implies that a computer is used in order for the GPU to operate, "a computer" is not explicitly disclosed. However, Liu, a similar field of endeavor, teaches a machine-readable medium having program code stored thereon which, when executed by a computer, causes the computer to perform the method of claim 1 (Liu, [p 3, Col 1, §3, ¶1]; "A laptop placed in a backpack is the second component of our system. The laptop with a GPU processor"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a computer as taught by Liu in the invention of Zhang. The motivation to do so would be to ensure that the instance segmentation and localization can be performed in an online manner.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the PTO-892 Notice of References Cited.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANDHANA PEDAPATI, whose telephone number is 571-272-5325. The examiner can normally be reached M-F, 8:30am-6pm (ET). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHANDHANA PEDAPATI/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669
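The claim 10 mapping above characterizes Zhang's descriptor step as a per-object Nj x D tensor of point descriptors collapsed by average pooling into a single length-D instance descriptor. A toy sketch of that pooling step in plain Python, as an illustration of the quoted passage rather than Zhang's actual network code:

```python
# Toy sketch of the average-pooling step quoted in the claim 10 mapping:
# an object instance with Nj points yields an Nj x D descriptor tensor,
# and average pooling produces one length-D descriptor per instance.
# Illustration only; not Zhang's implementation.

def pool_instance_descriptor(point_descriptors):
    """Average an Nj x D list of per-point descriptors into a single
    length-D descriptor for the object instance."""
    n = len(point_descriptors)
    d = len(point_descriptors[0])
    return [sum(row[j] for row in point_descriptors) / n for j in range(d)]

# Hypothetical instance: Nj = 3 points, D = 2
instance = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
print(pool_instance_descriptor(instance))  # [1.0, 1.0]
```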

Prosecution Timeline

Mar 27, 2024
Application Filed
Mar 10, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

- Patent 12602896: IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12597095: INTELLIGENT SYSTEM AND METHOD OF ENHANCING IMAGES (granted Apr 07, 2026; 2y 5m to grant)
- Patent 12571683: ELEVATED TEMPERATURE SCREENING SYSTEMS AND METHODS (granted Mar 10, 2026; 2y 5m to grant)
- Patent 12548180: HOLE DIAMETER MEASURING DEVICE (granted Feb 10, 2026; 2y 5m to grant)
- Patent 12541829: MOTION-BASED PIXEL PROPAGATION FOR VIDEO INPAINTING (granted Feb 03, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 96% (+32.5%)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
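The stated derivation (grant probability from career allow rate) round-trips from the report's own figures:

```python
# The 64% headline derives from the examiner's career allow rate:
# 14 granted of 22 resolved cases, per the report.

granted, resolved = 14, 22
grant_probability = granted / resolved
print(f"{grant_probability:.1%}")  # 63.6%, displayed as 64%
```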
