Prosecution Insights
Last updated: April 19, 2026
Application No. 18/898,388

ADAPTIVE SYSTEM FOR AUTONOMOUS MACHINE LEARNING AND CONTROL IN WEARABLE AUGMENTED REALITY AND VIRTUAL REALITY VISUAL AIDS

Non-Final OA: §102, §112, §DP (nonstatutory double patenting)

Filed: Sep 26, 2024
Examiner: BERHAN, AHMED A
Art Unit: 2639
Tech Center: 2600 (Communications)
Assignee: Eyedaptic Inc.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (936 granted / 1071 resolved), above average (+25.4% vs Tech Center average)
Interview Lift: +11.5% among resolved cases with an interview (moderate)
Average Prosecution Time: 2y 5m
Career History: 1101 total applications across all art units; 30 currently pending
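For reference, the headline figures in this panel follow directly from the raw career counts shown above; a minimal sketch (the integer-rounding convention is an assumption):

```python
# Reproduce the examiner's headline stats from the raw counts above.
granted = 936      # career grants
resolved = 1071    # career resolved cases
pending = 30       # currently pending applications

allow_rate = granted / resolved          # ~0.874
total_applications = resolved + pending  # career total

print(round(allow_rate * 100))  # 87
print(total_applications)       # 1101
```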

Statute-Specific Performance

§101: 6.5% (-33.5% vs TC average)
§103: 41.2% (+1.2% vs TC average)
§102: 28.2% (-11.8% vs TC average)
§112: 14.6% (-25.4% vs TC average)

Tech Center averages are estimates. Based on career data from 1071 resolved cases.

Office Action

Rejections: §102, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Preliminary Amendment

The preliminary amendment to the claims filed on 03/07/2025 has been acknowledged.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim [5] is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim [5] recites the limitation "the one or more structures of interest comprises 'text'." However, the current disclosure does not explicitly disclose the recited "text," and this therefore constitutes new matter. Appropriate correction consistent with applicant's disclosure is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.
A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim [2] is rejected on the ground of nonstatutory double patenting as being unpatentable over claim [1] of U.S. Patent No. [11,563,885]. Although the claims at issue are not identical, they are not patentably distinct from each other because claim [2] of the current application is an obvious variant of, and encompassed by, claim [1] of U.S. Patent No. [11,563,885].

Claims [2, 8-10 and 12] are rejected on the ground of nonstatutory double patenting as being unpatentable over claims [1, 9, 12-13 and 15] of U.S. Patent No. [12,132,984].
Although the claims at issue are not identical, they are not patentably distinct from each other because claims [2, 8-10 and 12] of the current application are obvious variants of, and encompassed by, claims [1, 9, 12-13 and 15] of U.S. Patent No. [12,132,984]. Below are tables showing the conflicting claims.

US 18/898,388 vs. U.S. Patent No. 11,563,885

Instant claim 2: A method of presenting images to a user of a visual aid device, comprising the steps of: capturing real-time video images of a scene with a camera of the visual aid device; inputting at least one of the real-time video images into a machine learning system of the visual aid device; in the machine learning system, analyzing the at least one of the real-time video images to select image processing parameters that are appropriate for the scene; in a processor of the visual aid device, applying the image processing parameters that accompany to the real-time video images to produce modified images; and presenting the modified images to the user on a display of the visual aid device.

Patent claim 1: A method of presenting images to a user of a visual aid device, comprising the steps of: capturing real-time video images of a scene with a camera of the visual aid device; inputting at least one of the real-time video images into a first machine learning system of the visual aid device; in the first machine learning system, analyzing the at least one of the real-time video images to select a wide display state template that is appropriate for the scene; in a processor of the visual aid device, applying parameters that accompany the wide display state template to the real-time video images to produce modified images that apply no magnification to the real-time video images; and presenting the modified images to the user on a display of the visual aid device.

US 18/898,388 vs. U.S. Patent No. 12,132,984

Instant claim 2: A method of presenting images to a user of a visual aid device, comprising the steps of: capturing real-time video images of a scene with a camera of the visual aid device; inputting at least one of the real-time video images into a machine learning system of the visual aid device; in the machine learning system, analyzing the at least one of the real-time video images to select image processing parameters that are appropriate for the scene; in a processor of the visual aid device, applying the image processing parameters that accompany to the real-time video images to produce modified images; and presenting the modified images to the user on a display of the visual aid device.

Patent claim 1: A method of presenting images to a user of a visual aid device, comprising the steps of: capturing real-time video images of a scene with a camera of the visual aid device; inputting at least one of the real-time video images into a classifier of the visual aid device; in the classifier, analyzing the at least one of the real-time video images to select a display state template that is appropriate for the scene; inputting the display state template and at least one of the real-time video images into a machine learning system of the visual aid device; in the machine learning system, determining if a user-preferred device configuration exists for the scene and, if the user-preferred device configuration exists, modifying the display state template based on the user-preferred device configuration; applying parameters that accompany the display state template to the real-time video images to produce modified images; presenting the modified images to the user on a display of the visual aid device.

Instant claim 8: The method of claim 4, wherein the moderate magnification is tapered from a central portion of the real-time images to neutral at edges of the real-time video images.

Patent claim 9: The method of claim 8, further comprising tapering the central magnification to neutral at edges of the real-time video images.
Instant claim 9: The method of claim 2, further comprising receiving an input from the user to provide an immediate but temporary learning in a second machine learning system of an association between the scene and the image processing parameters.

Patent claim 12: The method of claim 1, further comprising receiving an input from the user to provide an immediate but temporary learning in a second machine learning system of an association between the scene and the display state template.

Instant claim 10: The method of claim 9, wherein the input is received by a button on the visual aid device.

Patent claim 13: The method of claim 12, wherein the input is received by a button on the visual aid device.

Instant claim 12: The method of claim 2, further comprising: evaluating real-time sensor data of the visual aid device with the processor; and presenting the modified images to the user on a display of the visual aid device only if the visual aid device is not being subjected to significant rotation or acceleration.

Patent claim 15: The method of claim 1, further comprising, prior to the presenting step: evaluating real-time sensor data of the visual aid device with the processor; and wherein presenting the modified images to the user further comprises presenting the modified images to the user on a display of the visual aid device only if the visual aid device is not being subjected to significant rotation or acceleration.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims [2-3, 6-7 and 11] are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang (US 2017/0347110).

Re claim [2], Wang discloses a method of presenting images to a user of a visual aid device (see figs. 5 and 27), comprising the steps of: capturing real-time video images of a scene with a camera of the visual aid device (see fig. 27 and ¶0440, "Original video data 2740 can be provided into the method or system of the embodiment using a camera 2735"); inputting at least one of the real-time video images into a machine learning system of the visual aid device (see fig. 5, ¶¶0254 and 0453, "input data to be processed by a trained machine learning process can have a tailored model" [by virtue of processing the input video data in order to apply a reconstruction model to the low-resolution portion of the image]); in the machine learning system, analyzing the at least one of the real-time video images to select image processing parameters that are appropriate for the scene (see ¶¶0254 and 0453, "the selection of the one or more most similar pre-trained model(s) can be done based on one or more metrics associated with the pre-trained models compared to the input data" [by virtue of selecting the low-quality or low-resolution image for reconstruction]); in a processor of the visual aid device, applying the image processing parameters that accompany to the real-time video images to produce modified images (see ¶¶0436 and 0454, "At step 170, each of the segments of video are output from the reconstruction process as higher-resolution frames at the same resolution as the original video 70. The quality of the output video 180 is substantially similar to that of the original video 70"); and presenting the modified images to the user on a display of the visual aid device (see ¶¶0437 and 0458, "At step 180, the segments of video are combined such that the video can be displayed").

Re claim [3], Wang further discloses wherein analyzing the at least one of the real-time video images further comprises identifying one or more structures of interest (see ¶0453, "This step comprises separating the low-resolution video").

Re claim [6], Wang further discloses wherein the image processing parameters are applied to a portion of the real-time video images (see ¶0453, "comprises separating the low-resolution video" [to the low-resolution video]).
Re claim [7], Wang further discloses wherein the image processing parameters are applied to most of the real-time video images (see ¶0453, "low-resolution video" [a majority portion of the live video from the camera can be low-resolution or low-quality data]).

Re claim [11], Wang further discloses wherein the machine learning system identifies individual image features useful for selecting the image processing parameters (see ¶0453, "separating the low-resolution video").

Allowable Subject Matter

Claim [4] is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Examiner note: claim [5] will be objected to as allowable if applicant overcomes the above 112(a) rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED A BERHAN whose telephone number is (571) 270-5094. The examiner can normally be reached 9:00 AM-5:00 PM (MAX-Flex). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Twyler Haskins, can be reached at 571-272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AHMED A BERHAN/
Primary Examiner, Art Unit 2639

Prosecution Timeline

Sep 26, 2024: Application Filed
Jan 21, 2026: Non-Final Rejection (§102, §112, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604099: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604097: EXPOSURE CONVERGENCE METHOD AND RELATED IMAGE PROCESSING DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604083: Recommendation Method of Video Recording Mode, Electronic Device, and Readable Storage Medium (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598857: IMAGING DEVICE INCLUDING AN ELECTRODE HAVING A TANTALUM NITRIDE LAYER AND ANOTHER LAYER (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598392: EFFICIENT PROCESSING OF IMAGE DATA FOR GENERATING COMPOSITE IMAGES (granted Apr 07, 2026; 2y 5m to grant)

Based on the 5 most recent grants; study what changed to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+11.5%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 1071 resolved cases by this examiner; grant probability is derived from the career allow rate.
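The 99% with-interview figure is consistent with adding the +11.5% interview lift to the 87% base rate and rounding up; a small sketch under that assumption (how the dashboard actually combines these numbers is not documented here):

```python
import math

base_pct = 87.0  # career allow rate, in percent
lift_pct = 11.5  # interview lift, in percent

# 87 + 11.5 = 98.5, which the dashboard appears to round up to 99
# (assumed rounding convention, not stated anywhere in the report).
with_interview = math.ceil(base_pct + lift_pct)
print(with_interview)  # 99
```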
