Prosecution Insights
Last updated: April 19, 2026
Application No. 18/418,386

NEURAL IMPLICIT FUNCTION FOR END-TO-END RECONSTRUCTION OF DYNAMIC CRYO-EM STRUCTURES

Non-Final OA (§103, §112)
Filed: Jan 22, 2024
Examiner: ISLAM, PROMOTTO TAJRIAN
Art Unit: 2669
Tech Center: 2600 (Communications)
Assignee: ShanghaiTech University
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 11m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 78% (28 granted / 36 resolved; +15.8% vs TC avg, above average)
Interview Lift: +17.5% in resolved cases with interview (strong)
Avg Prosecution: 2y 11m (24 applications currently pending)
Total Applications: 60 across all art units
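As a quick sanity check of the headline allow rate (my own arithmetic, not part of the report, and assuming the figure is simply granted divided by resolved, rounded to the nearest percent):

```python
# The 78% career allow rate appears to be 28 granted / 36 resolved,
# rounded to the nearest percent (assumption about the report's methodology).
granted, resolved = 28, 36
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # 78%
```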

Statute-Specific Performance

§101: 17.4% (-22.6% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§103: 45.2% (+5.2% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 36 resolved cases.
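The statute-specific figures above can be cross-checked in a couple of lines (my own arithmetic; the Tech Center average methodology is assumed, not stated in the report):

```python
# Cross-check of the statute-specific table: each allowance rate minus its
# "vs TC avg" delta should recover one consistent Tech Center average
# (assumption about how the deltas were computed).
rates  = {"101": 17.4, "103": 45.2, "102": 14.6, "112": 17.7}
deltas = {"101": -22.6, "103": 5.2, "102": -25.4, "112": -22.3}
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average
```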

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities: Section 3, Neural Implicit Representation, discloses a method for reconstruction of 3D structure and a pipeline for the reconstruction process, shown in Fig. 1. However, several components of the pipeline (i.e., the pose network, flow network, and density network) are not clearly defined in the specification, nor are they clearly labeled or notated within Fig. 1. Furthermore, the specification does not provide sufficient detail such that one skilled in the art can understand how each subnetwork interacts, and how the inputs and outputs of each network are associated with each other. Independent claim 1 recites an "encoding" step, and sections 3.1-3.2 disclose an encoding layer and encoder, respectively; however, there is no notation of an encoding layer/encoder in the Fig. 1 submitted. The Examiner recommends that the Applicant thoroughly review the submitted specification and drawings to ensure that the claimed matter is clearly disclosed and understandable based on the disclosure present in the specification, and that the drawings correctly correspond to what is disclosed in the specification. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites "the pose network is configured to map an image to a rotation and a translation"; however, prior to that limitation there is only discussion of a "plurality of images". It is unclear what "an image" refers to and how the "plurality of images" is associated with the model (as there is no language regarding each image from the plurality of images being input into the model). Additional clarification is needed to connect how the "plurality of images" is used by the claimed machine learning model.

Claim 1 recites a plurality of networks (pose, flow, etc.), but it is unclear what the inputs and outputs of each of these models are, and furthermore how the networks interact with each other (i.e., how do the inputs and outputs of the pose network interact with the inputs/outputs of the flow network and density network?).

Claim 1 recites that the density network generates a "projection image" and that the CTF network generates a "rendered image". However, both of these are written as a singular "image", and it is unclear how this is associated with the "plurality of images".

Claim 1 recites "training the machine learning model using the plurality of images", where it is unclear which plurality of images the machine learning model is trained on (i.e., the initial plurality of images, the plurality of images after the various vectors have been assigned, or the projection and/or rendered images through the encoding process). If the limitation refers to the original plurality of images, then it is unclear how/why the "encoding" step and the generation of projection and/or rendered images are relevant.

The Examiner recommends that the Applicant review the language of the entire claim set to ensure that the claims clearly recite the claimed invention.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, and 9 are rejected as being unpatentable over Zhong et al. ("CryoDRGN: Reconstruction of Heterogeneous Cryo-EM Structures Using Neural Networks", DOI: 10.1038/s41592-020-01049-4, 2021; hereinafter "Zhong") in view of Punjani and Fleet (US 2023/0335216; hereinafter "Punjani").

Regarding Claim 1, Zhong discloses a computer-implemented method comprising (Supplementary Table 1: Zhong discloses software used to reconstruct three-dimensional models, and specifically notes that the software includes a model trained on a computer containing a CPU and GPU card):

obtaining a plurality of images representing projections of an object placed in a plurality of poses and a plurality of translations (Methods, Datasets: Zhong discloses utilizing datasets of simulated particle images to which a random rotation and in-plane translation were applied);

encoding, by a computer device, a machine learning model comprising a pose network, a flow network, a density network, and a CTF network, wherein the pose network is configured to map an image to a rotation and a translation via the pose embedding vector, the flow network is configured to concatenate a spatial coordinate with the flow embedding vector, the density network is configured to derive a density value in accordance with the spatial coordinate and to generate a projection image, and the CTF network is configured to modulate the projection image appended with the CTF embedding vector to generate a rendered image (Fig. 1, Methods: Zhong discloses a process of taking an input image X from a random unknown orientation (i.e., pose), inputting it through a positionally encoded MLP to output a reconstructed image (i.e., projection image), to which a CTF function and translation are applied to generate a final image (i.e., rendered image));

training the machine learning model using the plurality of images (Methods, Training system: Zhong discloses training the cryoDRGN model using different datasets of images); and

reconstructing a 3D structure of the object based on a trained machine learning model (Figs. 1-2: Zhong discloses a trained cryoDRGN model which can generate a 3D reconstruction of an object).

Zhong does not disclose assigning a pose embedding vector, a flow embedding vector, and a Contrast Transfer Function (CTF) embedding vector to each of the plurality of images. Punjani discloses this limitation (Fig. 3, [0061]: Punjani discloses obtaining experimental images along with CTF parameters, pose parameters, and flow generator parameters which are fit to the experimental image). Zhong and Punjani are considered analogous to the claimed invention as they are in the same field of 3D image reconstruction utilizing machine learning techniques. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhong by incorporating the various parameters disclosed by Punjani in order to aid the machine learning model in the reconstruction process. The motivation for this combination is the ability to incorporate tunable/adjustable parameters which can be used to aid in reconstruction.

Regarding Claim 6, Zhong in view of Punjani teaches the computer-implemented method of claim 1, further comprising prepending a positional encoding layer to map the spatial coordinate to a high-frequency representation (Methods: Zhong discloses modifying positional encoding by increasing all wavelengths by a factor of 2π).

Regarding Claim 9, Zhong in view of Punjani teaches the computer-implemented method of claim 1, wherein each of the plurality of images is a cryogenic electron microscopy (cryo-EM) image (Datasets: Zhong discloses utilizing cryo-EM images).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PROMOTTO TAJRIAN ISLAM, whose telephone number is (703) 756-5584. The examiner can normally be reached Monday - Friday, 8:30 am - 5:00 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at (571) 272-7409. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PROMOTTO TAJRIAN ISLAM/ Examiner, Art Unit 2669
/CHAN S PARK/ Supervisory Patent Examiner, Art Unit 2669
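To make the claim-1 data flow discussed in the §112 and §103 rejections concrete, here is a minimal toy sketch of how the four recited subnetworks could connect. Every function body is a placeholder stub; this illustrates the recited inputs and outputs only, and is not the applicant's implementation or cryoDRGN's.

```python
# Toy sketch of the four-subnetwork pipeline recited in claim 1, with each
# "network" stubbed as a plain function. Illustration only, NOT the
# applicant's implementation.
import numpy as np

def pose_network(image, pose_embedding):
    """Map an image (with its pose embedding vector) to a rotation and translation."""
    return np.eye(3), np.zeros(2)          # placeholder rotation / in-plane shift

def flow_network(coords, flow_embedding):
    """Concatenate each spatial coordinate with the flow embedding; predict a displacement."""
    emb = np.broadcast_to(flow_embedding, (coords.shape[0], flow_embedding.size))
    features = np.concatenate([coords, emb], axis=1)   # (N, 3 + d)
    return coords + 0.0 * features[:, :3]              # zero-displacement stub

def density_network(coords, side):
    """Derive a density value at each (deformed) coordinate; integrate along z to project."""
    density = np.exp(-np.sum(coords ** 2, axis=1))     # toy Gaussian blob
    return density.reshape(side, side, side).sum(axis=2)

def ctf_network(projection, ctf_embedding):
    """Modulate the projection image in Fourier space; identity CTF stub."""
    return np.real(np.fft.ifft2(np.fft.fft2(projection) * 1.0))

# One forward pass over a toy 8x8x8 grid of spatial coordinates.
side = 8
axis = np.linspace(-1.0, 1.0, side)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)

rotation, translation = pose_network(image=None, pose_embedding=np.zeros(4))
deformed = flow_network(grid @ rotation.T, flow_embedding=np.zeros(4))
projection = density_network(deformed, side)                   # the "projection image"
rendered = ctf_network(projection, ctf_embedding=np.zeros(4))  # the "rendered image"
print(rendered.shape)  # (8, 8)
```

In this sketch the pose network's output transforms the coordinate grid, the flow network deforms it, the density network projects it, and the CTF network modulates the projection, which is one plausible reading of the interactions the §112 rejection asks the applicant to spell out.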

Prosecution Timeline

Jan 22, 2024
Application Filed
Mar 27, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601732: CELL IMAGE ANALYSIS DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12586174: Method for Processing Yarn Spindle Data, Electronic Device and Storage Medium (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579674: IMAGE SELECTION APPARATUS, IMAGE SELECTION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567257: Method and Apparatus for Obstacle Recognition, Device, Medium, and Robot Lawn Mower (granted Mar 03, 2026; 2y 5m to grant)
Patent 12555401: Auto-Document Detection & Capture (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 95% (+17.5%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 36 resolved cases by this examiner. Grant probability is derived from the career allow rate.
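The "With Interview" figure appears to follow from a simple additive lift on the base grant probability; a one-line check under that assumption (the report's actual model is not stated):

```python
# "With Interview" projection as a simple additive lift on the base grant
# probability (assumed methodology, capped at 100%).
base_grant_probability = 0.78   # career allow rate
interview_lift = 0.175          # lift observed in resolved cases with interview
with_interview = min(1.0, base_grant_probability + interview_lift)
print(f"{with_interview:.1%}")  # 95.5%, shown in the report as 95%
```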
