Prosecution Insights
Last updated: April 19, 2026
Application No. 18/683,086

SYSTEMS AND METHODS FOR IMPROVED ACOUSTIC DATA AND SAMPLE ANALYSIS

Non-Final OA — §101, §102

Filed: Feb 12, 2024
Examiner: HELCO, NICHOLAS JOHN
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: VERACIO LTD.
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (26 granted / 36 resolved), +10.2% vs TC avg (above average)
Interview Lift: +44.4% (resolved cases with interview vs without)
Avg Prosecution: 3y 1m; 24 currently pending
Total Applications: 60 across all art units
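The headline numbers in this block reduce to simple arithmetic. A minimal sketch in Python (the implied Tech Center average rests on the assumption that the reported delta equals the examiner's allow rate minus the TC average):

```python
# Career allow rate from the resolved cases reported above.
granted = 26
resolved = 36
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # prints 72.2%, shown as 72%

# The page reports +10.2% vs the Tech Center average. Assuming
# delta = allow_rate - tc_average, the implied TC average is:
tc_average = allow_rate - 10.2
print(f"Implied TC average: {tc_average:.1f}%")  # prints 62.0%
```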

Statute-Specific Performance

§101: 19.6% (-20.4% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 36 resolved cases.
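A quick consistency check on this table: assuming each delta is computed as the examiner's rate minus the Tech Center average (an assumption about the tool's methodology), the implied TC baseline can be recovered by subtraction:

```python
# Per-statute examiner rates and reported deltas vs the TC average,
# copied from the table above. Assumes delta = rate - tc_average.
stats = {
    "§101": (19.6, -20.4),
    "§103": (47.1, 7.1),
    "§102": (16.8, -23.2),
    "§112": (11.0, -29.0),
}
for statute, (rate, delta) in stats.items():
    tc_average = round(rate - delta, 1)
    print(f"{statute}: examiner {rate}% vs implied TC average {tc_average}%")
```

Under that assumption, all four statutes imply the same 40.0% baseline, which suggests the comparison uses a single Tech-Center-wide average rather than per-statute averages.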

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

This action is in response to the Application filed on 02/12/2024. Claims 1-20 are pending.

Priority

The present Application claims priority to Provisional Application 63/233,545 with filing date 08/16/2021, as well as PCT/US22/38035 with filing date 07/22/2022, both of which are acknowledged.

Information Disclosure Statement

The Information Disclosure Statement (IDS) filed on 02/12/2024 has been fully considered by the examiner.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more. Analysis for claim 1 is provided in the following.

Claim 1 is reproduced in the following (annotation added):

A method comprising: receiving, by a computing device, a sample image and an acoustic image associated with a sample; determining, by a machine learning model, an alignment of the sample image with the acoustic image; determining, based on the alignment of the sample image with the acoustic image, and based on orientation data associated with the sample, an orientation line associated with the sample; and causing, at a user interface, display of an output image, wherein the output image is indicative of the sample image and the orientation line.

Step 1: Does the claim belong to one of the statutory categories? Claim 1 is directed to a process, which is a statutory category of invention (YES).

Step 2A Prong One: Does the claim recite a judicial exception?
Steps c-d recite mental processes, including observations, evaluations, or judgments, at a high level of generality such that they can be practically performed in the human mind. Part c recites determining any kind of alignment between two images, as long as they are a sample image and an acoustic image, respectively. Part d recites the determination of an orientation line associated with the sample, with the only limitation being that the orientation line is based on the alignment and orientation data. Thus, a human can easily determine an orientation line associated with the sample by considering the alignment and orientation data in any way. Note that the courts do not distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer (see MPEP 2106.04(a)(2).III) (YES).

Step 2A Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? Part a is a preamble reciting a method with no further limitations. Part b recites mere data gathering of the two images. Parts e-f recite the causing of display of an output image that indicates the sample image and orientation line, which is regarded as merely using a computer as a tool to perform mental processes (NO).

Step 2B: Does the claim as a whole amount to significantly more than the recited exception? The claim as a whole recites mere data gathering to perform mental processes, and the output of using a computer to perform said mental processes (NO).

Claim 1 is not eligible. Similar analysis is applicable to independent claims 8 and 15. Claim 8 further recites an apparatus comprising one or more processors and computer-executable instructions. Claim 15 further recites a non-transitory computer-readable storage medium comprising processor-executable instructions. All of these elements are considered to be reciting computerized systems at a high level of generality.
Thus, claims 8 and 15 are not eligible as applied to claim 1 above.

Claims 2, 9, and 16 limit the machine learning model to specific types of models, which does not integrate the judicial exceptions into a practical application. Claims 2, 9, and 16 are not eligible.

Claims 3, 6, 10, 13, 17, and 20 recite mere data gathering. Claims 3, 6, 10, 13, 17, and 20 are not eligible.

Claims 4, 11, and 18 recite classifying, by the machine learning model, a plurality of pixels of each of the sample image and the acoustic image, which can be practically performed in the human mind. Claims 4, 11, and 18 are not eligible.

Claims 5, 12, and 19 recite determining an alignment of the sample image with the acoustic image, based on the classification of the plurality of pixels of the two images, which can be practically performed in the human mind. Claims 5, 12, and 19 are not eligible.

Claims 7 and 14 limit the orientation data to include more specific types of data, which does not integrate the judicial exceptions into a practical application. Claims 7 and 14 are not eligible.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Maeso et al. (U.S. Publ. US-2021/0192712-A1).
Regarding claim 1, Maeso discloses a method (see figure 5 and paragraph 0046) comprising:

receiving, by a computing device (see figure 1, control system 44 and paragraph 0033), a sample image (see figure 5, steps 202-204 and paragraphs 0047-0049, where a first/sample image of samples within a borehole can be reconstructed from resistivity, electromagnetic, or turn-indexed sensor data, and can depict resistivity or position data) and an acoustic image associated with a sample (see figure 5, steps 206-212 and paragraphs 0050-0051, where a second dataset including ultrasonic/acoustic data depicting samples in the borehole can be processed to extract characterization/orientation data, such as sample positions, dimensions, or orientations, via a machine learning model);

determining, by a machine learning model, an alignment of the sample image with the acoustic image (see figure 5, step 214 and paragraphs 0059 and 0065, where the characterization data from the ultrasonic/acoustic images is overlaid on the reconstructed first/sample image, which the examiner interprets as the result of alignment between the two images);

determining, based on the alignment of the sample image with the acoustic image, and based on orientation data associated with the sample, an orientation line associated with the sample (see figure 5, step 214 and paragraph 0059, where the combined/aligned image can include contours or other shapes that apply the above characterization/orientation data to the samples in the first image); and

causing, at a user interface, display of an output image, wherein the output image is indicative of the sample image and the orientation line (see figure 5, step 214 and paragraph 0059, where the combined/aligned image can be displayed).
Regarding claim 2, Maeso discloses wherein the machine learning model comprises at least one of: a segmentation model, an image classification model, an ensemble classifier, or a prediction model (see paragraphs 0073-0075, where the machine learning model can be an image classification model or ensemble classifier).

Regarding claim 3, Maeso discloses receiving, from an imaging device, the acoustic image (see figure 5, step 206 and paragraph 0050, where an ultrasonic sensor can provide the ultrasonic/acoustic images).

Regarding claim 4, Maeso discloses classifying, by the machine learning model, a plurality of pixels of the sample image (see paragraph 0074) and a plurality of pixels of the acoustic image (see paragraph 0050).

Regarding claim 5, Maeso discloses determining, based on the classification of the plurality of pixels of the sample image and the plurality of pixels of the acoustic image, the alignment of the sample image with the acoustic image (see paragraphs 0065-0066, which specify that the pixel classification methods used in conjunction with the method of figure 5 can be applied to both the ultrasonic images and/or any other images, such as the first/sample images).

Regarding claim 6, Maeso discloses receiving, from an imaging device, the orientation data (see figure 5, step 206 and paragraph 0050, where an ultrasonic sensor can provide the ultrasonic/acoustic images, from which the characterization/orientation data is extracted).

Regarding claim 7, Maeso discloses wherein the orientation data is indicative of an orientation and a depth of the sample within a borehole (see paragraph 0051, where the characterization/orientation data includes the orientation of samples and their positions/depths within the borehole).
Regarding claim 8, Maeso discloses an apparatus (see figure 1, control system 44 and paragraph 0031) comprising: one or more processors (see figure 1, downhole processor 46A, surface processor 46B and paragraph 0033); and computer-executable instructions (see figure 1, downhole memory 48A, surface memory 48B and paragraph 0033). The remainder of claim 8 recites steps identical to those of claim 1. Therefore, Maeso anticipates claim 8 as applied to claim 1 above.

Regarding claim 9, Maeso discloses claim 9 as applied to claim 2 above. Regarding claim 10, Maeso discloses claim 10 as applied to claim 3 above. Regarding claim 11, Maeso discloses claim 11 as applied to claim 4 above. Regarding claim 12, Maeso discloses claim 12 as applied to claim 5 above. Regarding claim 13, Maeso discloses claim 13 as applied to claim 6 above. Regarding claim 14, Maeso discloses claim 14 as applied to claim 7 above.

Regarding claim 15, Maeso discloses a non-transitory computer-readable storage medium comprising processor-executable instructions (see paragraph 0033). The remainder of claim 15 recites steps identical to those of claim 1. Therefore, Maeso anticipates claim 15 as applied to claim 1 above.

Regarding claim 16, Maeso discloses claim 16 as applied to claim 2 above. Regarding claim 17, Maeso discloses claim 17 as applied to claim 3 above. Regarding claim 18, Maeso discloses claim 18 as applied to claim 4 above. Regarding claim 19, Maeso discloses claim 19 as applied to claim 5 above. Regarding claim 20, Maeso discloses claim 20 as applied to claim 6 above.

Prior Art Cited but not Applied

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Stuart (U.S. Publ. US-2019/0129027-A1) discloses determining, by a machine learning model, an alignment of the sample image with the acoustic image (see paragraphs 0066-0068, where visible light images and acoustic images can be aligned based on the features present in the images).
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS JOHN HELCO whose telephone number is (703)756-5539. The examiner can normally be reached on Monday-Friday from 9:00 AM to 5:00 PM.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at telephone number 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/NICHOLAS JOHN HELCO/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667

Prosecution Timeline

Feb 12, 2024: Application Filed
Jan 07, 2026: Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602867: METHOD FOR AUTONOMOUSLY SCANNING AND CONSTRUCTING A REPRESENTATION OF A STAND OF TREES. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12597092: Systems and Methods for Altering Images. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12586370: VEHICLE IMAGE ANALYSIS SYSTEM FOR A PERIPHERAL CAMERA. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12573018: DEFECT ANALYSIS DEVICE, DEFECT ANALYSIS METHOD, NON-TRANSITORY COMPUTER-READABLE MEDIUM, AND LEARNING DEVICE. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12561754: METHOD AND SYSTEM FOR PROCESSING IMAGE BASED ON WEIGHTED MULTIPLE KERNELS. Granted Feb 24, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 99% (+44.4%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
