Prosecution Insights
Last updated: April 19, 2026
Application No. 18/170,621

DEEP LEARNING BASED OBJECT IDENTIFICATION AND/OR CLASSIFICATION

Current Status: Non-Final OA (§103)
Filed: Feb 17, 2023
Examiner: YANG, JIANXUN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: City University of Hong Kong
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (472 granted / 635 resolved), +12.3% vs Tech Center average (above average)
Interview Lift: +18.6% among resolved cases with interview (a strong lift)
Typical Timeline: 2y 9m average prosecution; 45 applications currently pending
Career History: 680 total applications across all art units

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 635 resolved cases

Office Action (§103)
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-26 are pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 9, 14, 16-18, 22 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (Ensemble CNN for Classifying Holograms, 2019) in view of Rivenson et al. (Phase recovery, 2018).

Regarding claims 1, 14 and 26, Lam teaches a computer-implemented method for object identification and/or classification, comprising: receiving digital hologram data of a digital hologram of an object, the digital hologram data comprising phase information and magnitude information; and processing the digital hologram data based on a neural-network-based ensemble model to identify and/or classify the object (Lam, Fig. 2, digital hologram generation followed by magnitude CNN and phase CNN; Fig. 4, a CNN-based ensemble model for hologram object classification; the inputs of the CNN ensemble model are the magnitude component and the phase component of the hologram; "Through an ensemble decision maker, the CNN that outputs a class identity with a higher matching score will be selected as the identity of the input hologram", p5; "obtain the identity of an object in a hologram", p2);

wherein the digital hologram data is associated with a wavefront from the object, the wavefront being complex-valued (Lam, Fig. 2, the "digital hologram" data is represented by magnitude and phase, meaning that the digital hologram data is complex; the complex hologram data is the wavefront data, as opposed to conventional light signals based only on intensity);

wherein the neural-network-based ensemble model comprises a first neural network arranged to process the magnitude information, and a second neural network arranged to process the phase information (Lam, Figs. 2 and 4, magnitude CNN and phase CNN);

wherein the computer-implemented method further comprises training, testing and/or validating the neural-network-based ensemble model based on data for training, testing and/or validating, the data comprising digital wavefront data of multiple digital wavefronts of each object (Lam, Fig. 2, "a large set of augmented holograms is generated, and applied to train a deep-learning network that is implemented with a pair of CNNs. One of the CNNs receive the magnitude component of the holograms as the input data, while the other accepts the phase component"; "holograms of handwritten characters are employed to train, and to test the CNN", p3).

Lam does not expressly disclose, but Rivenson teaches, wherein the digital wavefront data includes defective, flawed or incomplete data (Rivenson, "These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems", [abstract]; "the twin-image artifact of the in-line holography, which is a result of the lost phase information, is strong and severely obstructs the spatial features of the sample in both the amplitude and phase channels, as illustrated in Figures 1 and 2", p2:c2; Fig. 2, "These images are contaminated with twin-image and self-interference-related spatial artifacts due to the missing phase information in the hologram detection process", "The yellow arrows point to artifacts in f, g, n, o (due to out-of-focus dust particles or other unwanted objects) ..."; i.e., digital hologram data that is "contaminated" with artifacts and "dust particles" (defective/flawed) is used for training a neural network).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Rivenson into the system or method of Lam in order to train the classifier of Lam on the "flawed" biological data of Rivenson and thereby achieve robust classification of real-world samples.

The combination of Lam and Rivenson further teaches: wherein the data for training, testing and/or validating is obtained by capturing from biological samples using a laser-based optical system (Rivenson, "We validated this method by reconstructing the phase and amplitude images of various samples, including blood and Pap smears and tissue sections", [abstract]; "In this work, we chose to demonstrate the proposed framework using lens-free digital in-line holography of transmissive samples, including human tissue sections and blood and Pap smears", p2:c2; "Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography", [abstract]; "Under plane wave illumination, we can assume that A has zero phase at the detection plane, without loss of generality, that is, A = |A|", p5:c2; capturing data from biological samples (blood, Pap smears, tissue) using a coherent optical system (plane-wave illumination/holography) implies a laser or similar coherent source, since practically all conventional holography uses lasers as light sources to ensure coherent light, i.e., waves in phase with a consistent frequency, to produce clear interference patterns).

Regarding claims 3 and 16, the combination of Lam and Rivenson teaches the respective base claims. The combination further teaches the computer-implemented method of claim 1, wherein the neural-network-based ensemble model comprises a convolutional-neural-network-based ensemble model, and the first neural network and the second neural network are a first convolutional neural network and a second convolutional neural network respectively (Lam, Fig. 4).

Regarding claims 4 and 17, the combination of Lam and Rivenson teaches the respective base claims.
The combination further teaches the computer-implemented method of claim 1, wherein the neural-network-based ensemble model further comprises a concatenate unit arranged to combine magnitude features extracted by the first neural network and phase features extracted by the second neural network for identification and/or classification of the object (Lam, Fig. 4; "Through an ensemble decision maker, the CNN that outputs a class identity with a higher matching score will be selected as the identity of the input hologram", p5; the ensemble decision block acts like a concatenate unit in that it combines/groups the outputs of the magnitude CNN and the phase CNN together, and selects one of these two outputs as the ensemble classification output based on which output has the higher matching score).

Regarding claims 5 and 18, the combination of Lam and Rivenson teaches the respective base claims. The combination further teaches the computer-implemented method of claim 1, further comprising: obtaining the digital hologram data of the digital hologram of the object, the obtaining comprising: receiving a hologram of the object obtained using an imaging device; and processing the hologram by performing a digital signal processing operation to obtain the digital hologram data (Lam, Fig. 4, preprocessing stage; "A hologram acquisition system, such as one based on optical scanning holography [2] or phase shifting holography [3], is used to capture digital holograms of physical objects", p2).

Regarding claims 9 and 22, the combination of Lam and Rivenson teaches the respective base claims. The combination further teaches the computer-implemented method of claim 1, further comprising outputting or displaying the identification and/or classification result (Lam, Fig. 4; the output class identity of the hologram object may obviously be displayed on a screen for human monitoring).

Claims 2 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (Ensemble CNN for Classifying Holograms, 2019) in view of Rivenson et al. (Phase recovery, 2018), and further in view of Cella et al. (US 2022/0187847).

Regarding claims 2 and 15, the combination of Lam and Rivenson teaches the respective base claims. The combination does not expressly disclose, but Cella teaches, the computer-implemented method of claim 1, wherein the neural-network-based ensemble model comprises an attention-based transformer model (Cella, "transformer-based, encoder-decoder architectures using attention mechanisms may be used in conjunction with or in place of convolutional neural networks", [1801]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Cella into the modified system or method of Lam and Rivenson in order to use an attention-based transformer model in place of a CNN, utilizing self-attention mechanisms to capture global dependencies and relations between image patches directly.

Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (Ensemble CNN for Classifying Holograms, 2019) in view of Rivenson et al. (Phase recovery, 2018), and further in view of Johnson (US 2010/0172001).

Regarding claims 6 and 19, the combination of Lam and Rivenson teaches the respective base claims.
The combination does not expressly disclose, but Johnson teaches, the computer-implemented method of claim 5, wherein the digital signal processing operation comprises: performing a Fourier transform operation on the hologram (Johnson, "the image is the Fourier transform of the hologram", [0007]); after the Fourier transform operation, extracting hologram data associated with the object; and after the extraction, performing an inverse Fourier transform operation on the extracted hologram data to obtain the digital hologram data (Johnson, "it is possible to define a desired image, for example symbols for a display or a frame for a video sequence, and to compute the inverse Fourier transform of the desired image to determine a hologram", [0007]; the desired image containing symbols may be extracted from the Fourier transform of the input hologram, and the desired image is then converted, via inverse Fourier transform, into a feature-specific hologram). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Johnson into the modified system or method of Lam and Rivenson in order to use the Fourier transform and inverse Fourier transform to generate a feature-specific hologram.

Claims 7-8, 10-13, 20-21 and 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Lam et al. (Ensemble CNN for Classifying Holograms, 2019) in view of Rivenson et al. (Phase recovery, 2018), and further in view of Cuche et al. (US 6,262,818).

Regarding claims 7 and 20, the combination of Lam and Rivenson teaches the respective base claims. The combination does not expressly disclose, but Cuche teaches, the computer-implemented method of claim 5, wherein the imaging device comprises a camera associated with an interferometer (Cuche, "FIG. 2A is a view showing diagrammatically one of the possible configuration for the recording of an off-axis hologram", c5:25-35; "The advantage of a configuration based on a standard interferometer configuration is that it allows the recording of off-axis holograms with very small angles between the directions of propagation of the object and reference waves. This feature is important when low-resolution media such as a CCD camera are used as image acquisition systems", c11:1-10). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Cuche into the modified system or method of Lam and Rivenson in order to use an off-axis interferometer for holography, which has the advantage of reconstructing images from a single hologram without the twin image that arises in in-line holography, and allows both amplitude and phase information of an object wave to be retrieved.

Regarding claims 8 and 21, the combination of Lam, Rivenson and Cuche teaches the respective base claims. The combination further teaches the computer-implemented method of claim 7, wherein the interferometer is an off-axis interferometer and the hologram is an off-axis hologram (Cuche, see comments on claim 7).

Regarding claims 10 and 23, the combination of Lam, Rivenson and Cuche teaches the respective base claims. The combination further teaches the computer-implemented method of claim 1, wherein the object comprises a biological tissue sample (Cuche, "a semi-transparent specimen (e.g. biological cells or tissues)", c3:65-end).

Regarding claim 11, the combination of Lam, Rivenson and Cuche teaches the respective base claims.
The combination further teaches the computer-implemented method of claim 10, wherein the biological tissue sample is sized for microscopy (Cuche, "a semi-transparent specimen (e.g. biological cells or tissues)", c3:65-end; the biological cells or tissues may be studied under a microscope using hologram techniques).

Regarding claims 12 and 24, the combination of Lam, Rivenson and Cuche teaches the respective base claims. The combination further teaches the computer-implemented method of claim 1, wherein the digital hologram data is associated with an electromagnetic wavefront from the object (Cuche, "The present invention is not restricted to the optical domain and can be applied for the numerical reconstruction of holograms recorded with any kind of electromagnetic (e.g. X-ray) or non-electromagnetic (e.g. acoustics or heats) waves", c6:45-55).

Regarding claims 13 and 25, the combination of Lam, Rivenson and Cuche teaches the respective base claims. The combination further teaches the computer-implemented method of claim 1, wherein the digital hologram data is associated with an acoustic wavefront from the object (Cuche, c6:45-55, cited above).

Response to Arguments

Applicant's arguments filed on 12/15/2025 with respect to one or more of the pending claims have been fully considered but are moot in view of the new grounds of rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG, whose telephone number is (571) 272-9874. The examiner can normally be reached MON-FRI, 8AM-5PM Pacific Time.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JIANXUN YANG/
Primary Examiner, Art Unit 2662
1/25/2026
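As technical background for the rejection of claims 1, 14 and 26: the ensemble decision maker quoted from Lam ("the CNN that outputs a class identity with a higher matching score will be selected as the identity of the input hologram") reduces to a small selection rule over the two branches' score vectors. A minimal sketch; the function name, score values, and class count are illustrative assumptions, not taken from Lam:

```python
import numpy as np

def ensemble_decision(mag_scores, phase_scores):
    """Select the class identity from whichever CNN branch reports the
    higher matching score: mag_scores and phase_scores are per-class
    score vectors (e.g. softmax outputs) from the magnitude CNN and the
    phase CNN respectively."""
    mag_scores = np.asarray(mag_scores, dtype=float)
    phase_scores = np.asarray(phase_scores, dtype=float)
    if mag_scores.max() >= phase_scores.max():
        return int(mag_scores.argmax())   # magnitude branch is more confident
    return int(phase_scores.argmax())     # phase branch is more confident

# The phase branch's top score (0.9) beats the magnitude branch's (0.6),
# so the phase branch's predicted class (index 2) is selected.
print(ensemble_decision([0.6, 0.3, 0.1], [0.05, 0.05, 0.9]))  # → 2
```

Note this selects one branch's whole output rather than averaging the two score vectors, which matches the "higher matching score" language the Office action relies on for the concatenate-unit mapping in claims 4 and 17.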
Read full office action
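The Fourier-domain preprocessing cited against claims 6 and 19 (Fourier transform, extraction of the object-related data, inverse Fourier transform) amounts to band selection in the spatial-frequency domain. A minimal numpy sketch under assumed inputs; in practice the mask would be chosen around the object's carrier frequency rather than supplied directly:

```python
import numpy as np

def extract_object_term(hologram, mask):
    """Fourier-transform the hologram, keep only the spatial-frequency
    region selected by `mask` (an assumed boolean array of the same
    shape), and inverse-transform back to the spatial domain."""
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))   # forward FT, DC centred
    filtered = np.where(mask, spectrum, 0.0)            # extract the object band
    return np.fft.ifft2(np.fft.ifftshift(filtered))     # inverse FT
```

A quick sanity check on the round trip: with an all-pass mask the output reproduces the input hologram (to floating-point tolerance), and with an all-zero mask it is identically zero.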

Prosecution Timeline

Feb 17, 2023: Application Filed
Apr 01, 2025: Non-Final Rejection (§103)
Aug 14, 2025: Response Filed
Sep 10, 2025: Final Rejection (§103)
Dec 15, 2025: Request for Continued Examination
Jan 13, 2026: Response after Non-Final Action
Jan 26, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602917: OBJECT DETECTION DEVICE AND METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602853: METHODS AND APPARATUS FOR PET IMAGE RECONSTRUCTION USING MULTI-VIEW HISTO-IMAGES OF ATTENUATION CORRECTION FACTORS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12590906: X-RAY INSPECTION APPARATUS, X-RAY INSPECTION SYSTEM, AND X-RAY INSPECTION METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586223: METHOD FOR RECONSTRUCTING THREE-DIMENSIONAL OBJECT COMBINING STRUCTURED LIGHT AND PHOTOMETRY AND TERMINAL DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586152: METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR TRAINING IMAGE PROCESSING MODEL (granted Mar 24, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 93% (+18.6%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 635 resolved cases by this examiner. Grant probability is derived from the career allow rate.
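The headline figures are internally consistent with the examiner's career record. A quick arithmetic check, assuming (as the note above suggests) that grant probability is simply the career allow rate plus the reported interview lift:

```python
# Figures from the examiner card: 472 granted out of 635 resolved cases,
# and a reported +18.6 percentage-point interview lift.
granted, resolved = 472, 635
allow_rate = 100 * granted / resolved       # ≈ 74.3% career allow rate
print(round(allow_rate))                    # → 74 (reported grant probability)
print(round(allow_rate + 18.6))             # → 93 (reported with-interview figure)
```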
