Prosecution Insights
Last updated: April 19, 2026
Application No. 18/115,855

Systems and Methods for Image-Based Location Determination and Parking Monitoring

Final Rejection — §103, §112
Filed: Mar 01, 2023
Examiner: RODRIGUEZ, ANTHONY JASON
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Sensen Networks Group Pty Ltd.
OA Round: 2 (Final)
Grant Probability: 17% (At Risk)
OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: -5%

Examiner Intelligence

Grants only 17% of cases.

Career Allow Rate: 17% (3 granted / 18 resolved; -45.3% vs TC avg)
Interview Lift: -21.4% (minimal lift, measured across resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline)
Total Applications: 65 (47 currently pending, across all art units)

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 18 resolved cases

Office Action

Grounds: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see Remarks page 8, filed 01/02/2026, with respect to the Objection of claim 6 have been fully considered and are persuasive. The Objection of claim 6 has been withdrawn.

Applicant’s arguments, see Remarks pages 8-10, filed 01/02/2026, with respect to the Rejections of claims 1-4, 12-17, 19-24 and 26 under 35 U.S.C. 101 have been fully considered and are persuasive. The Rejections of claims 1-4, 12-17, 19-24 and 26 have been withdrawn.

Applicant’s arguments, see Remarks pages 10-11, filed 01/02/2026, with respect to the Rejections of amended claim(s) 12, 15, and 26 under 35 U.S.C. 102(a)(1) have been fully considered and are moot in view of the new grounds of rejection (detailed in the rejections below) necessitated by Applicant’s amendment to the claim(s).

Applicant’s arguments, see Remarks pages 11-13, filed 01/02/2026, with respect to the Rejections of amended claim(s) 1 under 35 U.S.C. 103 have been fully considered and are moot in view of the new grounds of rejection (detailed in the rejections below) necessitated by Applicant’s amendment to the claim(s).

Claim Objections

Claim 26 is objected to because of the following informality: The limitation "computer-readable storage medium" should read "computer-readable non-transitory storage medium" in order to be consistent with Paragraph 0179 of the Specification. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 3-4, and 6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "the remote computing device." There is insufficient antecedent basis for this limitation in the claim. For the purposes of examination, the limitation is interpreted as “the computing device.”

Regarding claims 3-4 and 6, they are rejected under 112(b) for inheriting and failing to cure the deficiencies of the parent claim 1. As per claim(s) 6, arguments made in rejecting claim(s) 1 are analogous.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3-4, 12-15, 19-21, and 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (Localization using GPS and VISION aided INS with an Image Database and a Network of a Ground-based Reference Station in Outdoor Environments), hereinafter referenced as Choi, in view of Stumpe et al. (US 11132416 B1), hereinafter referenced as Stumpe, and Panboonyuen et al. (Semantic Segmentation on Remotely Sensed Images Using an Enhanced Global Convolutional Network with Channel Attention and Domain Specific Transfer Learning), hereinafter referenced as Panboonyuen, and Li et al. (Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data), hereinafter referenced as Li.

Regarding claim 1, Choi discloses: A vehicle mounted system for location determination in an urban area (Choi: 5.2. Localization based on VISION aided INS and the transition between mode 1 and mode 2: “Fig. 15 shows the trajectories of the UGV in path 1. Since this path has numerous features as trees and buildings on each side, visual localization estimates the position accurately except for the initial section.”), the system comprising:

at least one camera, wherein the at least one camera is positioned to capture images of the urban area; and a computing device in communication with the at least one camera to receive the captured images of the urban area, the computing device comprising at least one processor and a memory accessible to the at least one processor (Choi: 2. LOCALIZATION SYSTEM ARCHITECTURE: “Subsystem 2 is UGV rover… Subsystem 2 includes a GPS receiver (NovAtel, OEMV3-L1), IMU (Honeywell, HG1700AG56), stereo camera (Point Grey, Flea2), autonomous navigation computers (SBC: 1.66GHz Core Duo, 1GB DDR2), WiBro MS (mobile station), and Gigabit LAN.”);

wherein the memory comprises a library of reference background images and metadata for each reference background image, wherein the metadata comprises location information (Choi: 3.3. Visual localization design: “The structure of visual localization is shown in Fig. 2. It is classified into off-line and on-line processes. In the off-line process, stereo camera and DGPS aided INS are used to collect images and the position data. After the feature extraction, a tree-based DB is created.”); and

wherein the memory stores program code executable by the at least one processor to configure the at least one processor to: receive an input image data from the computing device, wherein the input image data includes image data of at least one image captured by the computing device at a location to be determined (Choi: 3.3. Visual localization design: “The online process is for recognition, which includes searching through the tree and matching between images. When a query image is provided, a ranked list of matched images is computed. Relative pose estimation between the query image and DB images including the best matched image is then performed for localization.”);

process the received input image data using a background matching module to identify matching reference background image; determine location information corresponding to the input image data based on the metadata of the matching reference background image in the library; and transmit the determined location information to the remote computing device (Claim limitation is interpreted according to the interpretation disclosed in the Rejection of claim 1 under 35 U.S.C. 112(b) disclosed above) (Choi: Figure 2; 3.3. Visual localization design: “The online process is for recognition, which includes searching through the tree and matching between images. When a query image is provided, a ranked list of matched images is computed. Relative pose estimation between the query image and DB images including the best matched image is then performed for localization... After the scores are calculated at each leaf, the image with the highest score is assumed to be the best match and the absolute position of the query image is coarsely estimated.”; Wherein the absolute position of the selected database image is used to calculate the captured query image’s position.), and

the at least one processor is further configured to identify the matching reference background image by: extracting background descriptors from the at least one captured image; and selecting one or more candidate matching images from the library of background images based on the extracted background descriptors (Choi: 3.3. Visual localization design: “The online process is for recognition, which includes searching through the tree and matching between images. When a query image is provided, a ranked list of matched images is computed. Relative pose estimation between the query image and DB images including the best matched image is then performed for localization. In the feature extraction stage, SURF is used.”).

Choi does not disclose expressly: wherein the background matching module comprises a background feature extractor neural network trained to extract background descriptors corresponding to the permanent structures from captured images, the at least one processor is further configured to identify the matching reference background image by: extracting background descriptors from the at least one captured image using the background feature extractor neural network.
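As an editorial aside, the retrieval scheme the rejection maps onto Choi (extract a descriptor from the query image, score it against a geo-tagged reference library, and take the best match's stored position) can be sketched in a few lines. This is a hypothetical illustration with made-up descriptors and coordinates, not code from any cited reference, and it uses a flat scan where Choi uses a tree-based database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def locate(query_descriptor, library):
    """Return the geotag of the best-matching reference image.

    `library` maps image ids to (descriptor, (lat, lon)) pairs, a flat
    stand-in for Choi's tree-based image database."""
    best_id = max(library, key=lambda k: cosine(query_descriptor, library[k][0]))
    return library[best_id][1]

# Hypothetical two-image library with toy 3-dimensional descriptors.
library = {
    "img_a": ([1.0, 0.0, 0.2], (-37.8136, 144.9631)),
    "img_b": ([0.1, 1.0, 0.0], (-37.8150, 144.9660)),
}
print(locate([0.9, 0.1, 0.2], library))  # query descriptor closest to img_a
```

In the real pipeline the descriptor would come from a feature extractor (SURF in Choi, a neural network in the combination), and the coarse position would then be refined by relative pose estimation.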
Stumpe discloses: a background feature extractor neural network trained to extract background descriptors corresponding to the permanent structures from captured images, wherein the at least one processor is further configured to identify the matching reference background image by: extracting background descriptors from the at least one captured image using the background feature extractor neural network (Stumpe: Col 10: Lines 20-36: “the new images may be identified as being of the business location based on visual features. Visual features may include colors, shapes, and locations of other businesses, building structures, or other persistent features...The number of matching visual features may be determined using a computer vision approach such as a deep neural network. The visual features surrounding an image region depicting the business in the reference image, or image context, may first be identified and then may be matched with visual features in the new images. In this way, visual features may be used to define the image context and the image regions associated with the business location in the reference image and comparison image.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to substitute the SURF based feature extraction algorithm disclosed by Choi with the deep neural network for identifying persistent features taught by Stumpe. The suggestion/motivation for doing so would have been “If an image has the same or similar image context as the reference image, the image is probably depicting the same location as the reference image. For example, if the business is depicted in the reference image as surrounded by two other businesses and a traffic light, then an image region in the comparison image may be identified by identifying where the two businesses and the traffic light are in the comparison image.” (Stumpe: Col 10: Lines 37-44). Further, one skilled in the art could have substituted the elements as described above by known methods with no change in their respective functions, and the substitution would have yielded nothing more than predictable results.

Choi in view of Stumpe does not disclose expressly: wherein the background feature neural network comprises an attention determination layer trained to determine attention weights for the background descriptors in the at least one captured image, wherein the background descriptors corresponding to a persistent background are given a high attention weight and the background descriptors corresponding to a non-persistent background are given a low attention weight, wherein the permanent structures are identified using the high attention weight.

Panboonyuen discloses: a neural network comprising an attention determination layer trained to determine attention weights for descriptors in the at least one captured image (Panboonyuen: Abstract: “In the remote sensing domain, it is crucial to complete semantic segmentation on the raster images, e.g., river, building, forest, etc., on raster images…In this paper, we aim to propose a novel CNN for semantic segmentation particularly for remote sensing corpora with three main contributions. First, we propose applying a recent CNN called a global convolutional network (GCN), since it can capture different resolutions by extracting multi-scale features from different stages of the network.” 5. Experimental Results and Discussion: “The implementation is based on a deep learning framework, called “Tensorflow-Slim” [36], which is extended from Tensorflow...All models are trained for 50 epochs with a mini-batch size of 4, and each batch contains the cropped images that are randomly selected from training patches. These patches are resized to 521 x 521 pixels. The statistics of BN is updated on the whole mini-batch.”; Wherein the deep-learning implementation is trained.), wherein the descriptors corresponding to important features are given high attention weights and the descriptors corresponding to non-important features are given low attention weights, wherein the structures are identified using the high attention weight (Panboonyuen: 3.3. The Channel Attention Block: “To apply this atttentional layer to our network, the channel attention block is shown in Block A in Figure 2 and its detailed architecture is shown in Figure 4. It is designed to change the weights of the remote sensing features on each stage (level), so that the weights are assigned more values on important features adaptively.”; Wherein the features with higher attention weights are used, and play a larger role, for the identification and classification of structures/objects within the images.).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to substitute the persistent feature identifying deep neural network disclosed by Choi in view of Stumpe with the convolutional neural network containing attention blocks as taught by Panboonyuen. The suggestion/motivation for doing so would have been “Attention mechanisms [16,17] in neural networks are very loosely based on the visual attention mechanism found in humans and equips a neural network with the ability to focus on a subset of its inputs (or features): it selects specific inputs...It is designed to change the weights of the remote sensing features on each stage (level), so that the weights are assigned more values on important features adaptively.” (Panboonyuen: 3.3. The Channel Attention Block). Further, one skilled in the art could have substituted the elements as described above by known methods with no change in their respective functions, and the substitution would have yielded nothing more than predictable results.
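At its core, the channel-attention idea attributed to Panboonyuen (learned weights that emphasize important feature channels) reduces to reweighting descriptors by a normalized relevance score. The toy sketch below uses hypothetical hand-picked scores standing in for learned attention, and softmax normalization as one common choice; it is an editorial illustration, not the network described in the reference:

```python
import math

def attention_weights(scores):
    """Softmax over per-channel relevance scores: higher score -> higher weight."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def reweight(descriptors, scores):
    """Scale each channel descriptor by its attention weight, so channels
    judged persistent (e.g. building edges) dominate transient ones (e.g. a
    parked car) in downstream matching."""
    w = attention_weights(scores)
    return [d * wi for d, wi in zip(descriptors, w)]

# Hypothetical scores: channel 0 = persistent structure, channel 1 = transient object.
weighted = reweight([1.0, 1.0], [2.0, -2.0])
print(weighted[0] > weighted[1])  # persistent channel carries more weight
```

The claimed system's high/low attention weights for persistent versus non-persistent background correspond to the high/low entries such a normalized weight vector would assign.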
Choi in view of Stumpe and Panboonyuen fails to disclose: the at least one processor is further configured to identify the matching reference background image by: performing geometric matching between the at least one captured image and the candidate matching images to select the matching reference background image.

Li discloses: performing geometric matching between the captured image and the candidate matching images to select the matching reference background image (Li: 3.2. Image Retrieval Using SIFT-Based Voting Strategy: “To improve the robustness of the system, a further step is to check to the correctness of the top voted images based on pair-wise geometric consistency. This process can detect any falsely ranked/selected reference images as well as remove mismatches. First RANSAC is used to estimate the homography (projective transformation) between the two images, and remove mismatches (Figure 11).”).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the image similarity measure determined by the RANSAC process disclosed by Li for the selection of best matched database image disclosed by Choi in view of Stumpe and Panboonyuen. The suggestion/motivation for doing so would have been “This process can detect any falsely ranked/selected reference images as well as remove mismatches…Therefore in this way the candidate image space is further filtered, so that it contains only the reference images with corresponding views in the query image.” (Li: 3.2. Image Retrieval Using SIFT-Based Voting Strategy). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Choi in view of Stumpe and Panboonyuen with Li to obtain the invention as specified in claim 1.
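Li's geometric-verification step fits a transformation with RANSAC and keeps the model with the most inliers. The sketch below uses a simple 2D translation model instead of the homography Li estimates, purely to keep the example short; the sample, score, keep-best loop is the same, and all point data is made up:

```python
import random

def ransac_translation(matches, tol=2.0, iters=50, seed=0):
    """Fit a 2D translation to point matches [((px, py), (qx, qy)), ...] by
    random sampling; return ((dx, dy), inlier_count). A single match fully
    determines a translation, so each iteration samples one match."""
    rng = random.Random(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        (px, py), (qx, qy) = rng.choice(matches)
        dx, dy = qx - px, qy - py
        inliers = sum(
            1 for (ax, ay), (bx, by) in matches
            if abs(bx - (ax + dx)) <= tol and abs(by - (ay + dy)) <= tol
        )
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers

# Three matches consistent with a (5, 0) shift, plus one gross mismatch.
matches = [((0, 0), (5, 0)), ((1, 2), (6, 2)), ((3, 1), (8, 1)), ((2, 2), (40, 9))]
shift, inliers = ransac_translation(matches)
print(shift, inliers)
```

A candidate reference image whose inlier count stays low after this step would be discarded, which is the filtering role Li describes for RANSAC in the retrieval pipeline.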
Regarding claim 3, Choi in view of Stumpe, Panboonyuen, and Li discloses: The vehicle mounted system of claim 1, wherein the geometric matching comprises identifying common visual features in the at least one captured image and each of the candidate matching images (Li: 3.2. Image Retrieval Using SIFT-Based Voting Strategy: “To improve the robustness of the system, a further step is to check to the correctness of the top voted images based on pair-wise geometric consistency. This process can detect any falsely ranked/selected reference images as well as remove mismatches. First RANSAC is used to estimate the homography (projective transformation) between the two images, and remove mismatches (Figure 11).”).

Regarding claim 4, Choi in view of Stumpe, Panboonyuen, and Li discloses: The vehicle mounted system of claim 1, wherein the geometric matching is performed using a random sample consensus process (Li: 3.2. Image Retrieval Using SIFT-Based Voting Strategy: “To improve the robustness of the system, a further step is to check to the correctness of the top voted images based on pair-wise geometric consistency. This process can detect any falsely ranked/selected reference images as well as remove mismatches. First RANSAC is used to estimate the homography (projective transformation) between the two images, and remove mismatches (Figure 11).”).

As per claim(s) 12, arguments made in rejecting claim(s) 1 are analogous.

Regarding claim 13, Choi in view of Stumpe, Panboonyuen, and Li discloses: The method of claim 12, wherein the at least one camera is mounted on the vehicle (Choi: Figure 1; 2. LOCALIZATION SYSTEM ARCHITECTURE: “Subsystem 2 is UGV rover…Subsystem 2 includes a GPS receiver (NovAtel, OEMV3-L1), IMU (Honeywell, HG1700AG56), stereo camera (Point Grey, Flea2),”; Wherein the stereo camera is mounted on the UGV.).

Regarding claim 14, Choi in view of Stumpe, Panboonyuen, and Li discloses: The method of claim 12, wherein the determination of the location of the vehicle is performed in real time by the vehicle mounted computing device (Choi: Abstract: “In the data fusion of visual localization and INS, an asynchronous and time delayed data fusion algorithm is presented because visual localization is always time-delayed compared with INS. By using DGPS to obtain the reference position under the dynamic conditions of the reference station, the restrictions of the conventional DGPS are overcome and all UGVs within WiBro communication range of the reference station can accurately estimate the position with a common GPS.”) (Li: 2. METHODOLOGY: “The main improvements in this work are to geo-reference image feature points and use these features as 3D natural landmarks for positioning and navigation. By matching the real time query image with pre-stored geo-referenced images, the 3D landmarks represented by feature points are recognized and geo-information can be transferred from reference image to query image through these common feature points.”).

As per claim(s) 15, arguments made in rejecting claim(s) 1 are analogous.

Regarding claim 19, Choi in view of Stumpe, Panboonyuen, and Li discloses: The vehicle mounted system of claim 15, wherein the computing device is configured to determine the location in real-time (Choi: Abstract: “In the data fusion of visual localization and INS, an asynchronous and time delayed data fusion algorithm is presented because visual localization is always time-delayed compared with INS. By using DGPS to obtain the reference position under the dynamic conditions of the reference station, the restrictions of the conventional DGPS are overcome and all UGVs within WiBro communication range of the reference station can accurately estimate the position with a common GPS.”) (Li: 2. METHODOLOGY: “The main improvements in this work are to geo-reference image feature points and use these features as 3D natural landmarks for positioning and navigation. By matching the real time query image with pre-stored geo-referenced images, the 3D landmarks represented by feature points are recognized and geo-information can be transferred from reference image to query image through these common feature points.”).

Regarding claim 20, Choi in view of Stumpe, Panboonyuen, and Li discloses: The vehicle mounted system of claim 15, wherein the at least one camera is mounted on the vehicle to capture images of a vicinity of the vehicle (Choi: Figure 1; 2. LOCALIZATION SYSTEM ARCHITECTURE: “Subsystem 2 is UGV rover…Subsystem 2 includes a GPS receiver (NovAtel, OEMV3-L1), IMU (Honeywell, HG1700AG56), stereo camera (Point Grey, Flea2),”).

Regarding claim 21, Choi in view of Stumpe, Panboonyuen, and Li discloses: The vehicle mounted system of claim 20, wherein the vehicle comprises an on-board GPS receiver (Choi: 2. LOCALIZATION SYSTEM ARCHITECTURE: “Subsystem 2 includes a GPS receiver (NovAtel, OEMV3-L1), IMU (Honeywell, HG1700AG56), stereo camera (Point Grey, Flea2),”) and the vehicle is configured to trigger location determination using the system for location determination in response to an image based location determination trigger event (Choi: 4. INTEGRATED LOCALIZATION SYSTEM: “The integrated localization system uses the hierarchical federation of three measurement layers, i.e., GPS, INS, and visual localization. Therefore, the time synchronization between the localization systems should be considered very carefully in the filter design… The CCD processing computer captures images at 30fps (frame per second) for the world modeling of stereo vision and transfers images into the Vision navigation computer by a Giga-Bit LAN switch, as shown in Fig. 1(b).”; Wherein the image-based location determination is triggered based on the image capturing fps.).
As per claim(s) 26, arguments made in rejecting claim(s) 12 are analogous. In addition, Section 2. LOCALIZATION SYSTEM ARCHITECTURE of Choi discloses a UGV rover comprising a “vision navigation computer, sensor navigation computer, and an integrated processing computer,” thus disclosing a computer-readable storage medium.

Claim(s) 6, 22, and 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Choi in view of Stumpe, Panboonyuen, and Li, and further in view of Vishal et al. (Accurate Localization by Fusing Images and GPS Signals), hereinafter referenced as Vishal.

Regarding claim 6, Choi in view of Stumpe, Panboonyuen, and Li discloses: The vehicle mounted system of claim 1, wherein the memory stores program code executable by the at least one processor to further configure the at least one processor to: receive input image from the remote computing device; and transmit the location to the remote computing device (Claim limitations are interpreted according to the interpretation disclosed in the Rejection of claim 6 under 35 U.S.C. 112(b) disclosed above) (Choi: 3.3. Visual localization design: “The on line process is for recognition, which includes searching through the tree and matching between images. When a query image is provided, a ranked list of matched images is computed. Relative pose estimation between the query image and DB images including the best matched image is then performed for localization.”)

Choi in view of Stumpe, Panboonyuen, and Li does not disclose expressly: receiving GPS data corresponding to the input image, wherein the GPS data comprises a low data quality indicator; generating a GPS correction signal based on the determined location information; and transmitting the GPS correction signal.

Vishal discloses: receiving GPS data corresponding to the input image, wherein the GPS data comprises a low data quality indicator (Vishal: Abstract; 4.1. Details of Algorithm: “We have a set S = {(V1, G1),(V2, G2), ...(Vn, Gn)} where Vi and Gi is the i th video and it’s corresponding GPS signal. Each GPS signal Gi has a noise attached with them”; Wherein the noise indicates low GPS quality); generating a GPS correction signal based on the determined location information; and transmitting the GPS correction signal (Vishal: 4.1.1 Robust Estimation through Random Walks: The estimated GPS locations of I yielded by the triplets is accurate only if the GPS tag of reference images mi and mj is accurate. We use Random Walks on estimated triplets to discover the reliable subset of estimations…We include the original GPS tag of I, for the estimation of its correct GPS-location”; 7. Discussion and Conclusion: “We propose methods that use noisy GPS to improve the vision-based localization. We also propose methods that use vision based localization to improve the GPS signals. Finally we use these two steps in an iterative manner to get further improvement.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the image geo-localization calculation process disclosed by Choi in view of Stumpe, Panboonyuen, and Li by implementing the image’s noisy GPS location as taught by Vishal. The suggestion/motivation for doing so would have been “Although both vision and GPS based localization algorithms have many limitations and inaccuracies, there are some interesting complementarities in their success/failure scenarios that justify an investigation into their joint utilization.” (Vishal: Abstract). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Choi in view of Stumpe, Panboonyuen, and Li with Vishal to obtain the invention as specified in claim 6.

Regarding claim 22, Choi in view of Stumpe, Panboonyuen, and Li discloses: The vehicle mounted system of claim 15. Choi in view of Stumpe, Panboonyuen, and Li does not disclose expressly: wherein the image based location determination trigger event may comprise at least one of: low precision GPS data being generated by the on-board GPS receiver; or crossing of a predefined geo-fence by the vehicle.

Vishal discloses: wherein the image based location determination trigger event comprises: low precision GPS data being generated by the on-board GPS receiver (Vishal: 6. Experiments and Results: “In this section, we demonstrate the utility of our approach with quantitative experiments. We capture the videos using a Contour action camera with resolution 1920 × 1080 at 30fps. The device also has an inbuilt GPS sensor which recorded the corresponding GPS signal at 1Hz.”; 7. Discussion and Conclusion: “The objective of our work has been to fuse GPS signals and images together in order to improve the localization. We propose methods that use noisy GPS to improve the vision based localization.”; Wherein the capturing of images and their corresponding noisy GPS data for location determination constitutes an image-based location determination trigger event.).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique of filtering database images based on a query image distance threshold disclosed by Vishal for the retrieval of the queried database images disclosed by Choi in view of Stumpe, Panboonyuen, and Li. The suggestion/motivation for doing so would have been “In order to avoid the perceptual aliasing and camera occlusion we consider only those images which lie within a radius of distance d from the query image.” (Vishal: 3. Use of GPS for Better Visual Localization and Extracting Useful Features). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Choi in view of Stumpe, Panboonyuen, and Li with Vishal to obtain the invention as specified in claim 22.

As per claim(s) 25, arguments made in rejecting claim(s) 6 are analogous.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J RODRIGUEZ whose telephone number is (703)756-5821. The examiner can normally be reached Monday-Friday 10am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANTHONY J RODRIGUEZ/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Mar 01, 2023
Application Filed
Sep 22, 2025
Response after Non-Final Action
Oct 20, 2025
Non-Final Rejection — §103, §112
Jan 02, 2026
Response Filed
Feb 27, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499701: DOCUMENT CLASSIFICATION METHOD AND DOCUMENT CLASSIFICATION DEVICE
Granted Dec 16, 2025 · 2y 5m to grant

Patent 12488563: Hub Image Retrieval Method and Device
Granted Dec 02, 2025 · 2y 5m to grant

Patent 12444019: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND MEDIUM
Granted Oct 14, 2025 · 2y 5m to grant
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 17%
With Interview: -5% (-21.4% lift)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
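The headline probability appears to be the raw career ratio cited above (3 granted of 18 resolved). Assuming simple division with no recency weighting, the rounding checks out:

```python
granted, resolved = 3, 18          # career counts cited on this page
allow_rate = granted / resolved    # simple ratio; no weighting assumed
print(f"{allow_rate:.0%}")         # formats as the 17% headline figure
```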
