Prosecution Insights
Last updated: April 19, 2026
Application No. 18/479,832

METHOD FOR ACQUIRING EVALUATION VALUE

Non-Final OA: §101, §102
Filed: Oct 03, 2023
Examiner: AZARIAN, SEYED H
Art Unit: 2675
Tech Center: 2600 (Communications)
Assignee: Fujifilm Corporation
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (807 granted / 901 resolved; +27.6% vs TC avg), above average
Interview Lift: +11.7% among resolved cases with interview (a moderate lift)
Avg Prosecution: 2y 3m (typical timeline)
Total Applications: 910 across all art units (9 currently pending)

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 21.5% (-18.5% vs TC avg)
§102: 31.4% (-8.6% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 901 resolved cases.
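Every delta in the table above is consistent with a single Tech Center baseline of 40.0%. A minimal sketch of that arithmetic; note the uniform 40.0% baseline is inferred from the displayed deltas, not a figure stated anywhere in this report:

```python
# Per-statute rates from the table above, compared against an assumed
# uniform Tech Center baseline of 40.0% (inferred: every displayed
# delta equals rate - 40.0; the dashboard does not state the baseline).
TC_AVG = 40.0

rates = {"101": 17.0, "103": 21.5, "102": 31.4, "112": 13.9}

# Recompute the "vs TC avg" column shown in the table.
deltas = {statute: round(rate - TC_AVG, 1) for statute, rate in rates.items()}
print(deltas)  # {'101': -23.0, '103': -18.5, '102': -8.6, '112': -26.1}
```

The `round(..., 1)` guards against float artifacts (e.g. `31.4 - 40.0` is not exactly `-8.6` in binary floating point).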

Office Action

Rejections: §101, §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

35 U.S.C. 101 requires that a claimed invention must fall within one of the four eligible categories of invention (i.e., process, machine, manufacture, or composition of matter) and must not be directed to subject matter encompassing a judicially recognized exception as interpreted by the courts. MPEP 2106. The four eligible categories of invention include: (1) a process, which is an act, or a series of acts or steps; (2) a machine, which is a concrete thing, consisting of parts, or of certain devices and combinations of devices; (3) a manufacture, which is an article produced from raw or prepared materials by giving to these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery; and (4) a composition of matter, which is all compositions of two or more substances and all composite articles, whether they be the results of chemical union, or of mechanical mixture, or whether they be gases, fluids, powders or solids. MPEP 2106(I).

Claim 1 is rejected under 35 U.S.C. 101 as directed to an abstract idea: the claims recite a series of steps or acts to be performed, such as, for example, "identifying an image of the first subject and the second subject in a frame within which the first subject and the second subject appear; identifying multi-dimensional information relating to the first subject and the second subject based on the identified image".

Prong 1 analysis: The steps do not amount to significantly more than the abstract idea. The recited steps could be implemented by the user or a human operator observing an image, e.g., the recited steps "showing a phase distribution of light transmitted through an object; deriving an evaluation value of the object based on the phase image". Accordingly, the analysis under prong one of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart, MPEP 2106).

Prong 2 analysis: The additional elements of claim 1, "and correcting the evaluation value by using a correction coefficient determined in accordance with an orientation of the object in the phase image", could be implemented by the user further gathering, analyzing, and organizing data by "mathematical manipulation or algorithm"; the claim does not preempt all possible ways of determining a correction coefficient based on images of the object, or any specific way to implement the calculation. The steps do not amount to significantly more than the abstract idea; they are recited at a high level of generality and are conventional, well known, and routine. The claim as a whole is an abstract idea. Accordingly, the analysis under step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart, MPEP 2106).

DETAILED ACTION

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 6, and 7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Okamoto et al. (U.S. Pub. No. 2017/0370780 A1).
Regarding claim 1, Okamoto discloses a method for acquiring an evaluation value, comprising: generating a phase image showing a phase distribution of light transmitted through an object (see abstract: when the optical system is illuminated with an illumination light flux emitted from one extant input image point, an interference image generated by superimposing an extant output light flux output from the optical system and a reference light flux coherent with the extant output light flux is imaged to acquire interference image data, and thus to acquire a measured phase distribution, and this acquisition operation is applied to each extant input image point. Thus, each measured "phase distribution" is expanded by expanding functions μn(u, v) having coordinates (u, v) on a phase defining plane as a variable, to be represented as a sum with coefficients Σn{Ajn·μn(u, v)}. When the optical system is illuminated with a virtual illumination light flux, a phase Ψ(u, v) of a virtual output light flux is determined by performing interpolation calculation based on coordinates of a virtual light emitting point. Also, page 12, paragraph [0187]: corresponding to the fact that +1st, 0th, and −1st order diffraction light are generated from a sinusoidal concentration diffraction grating, in holography (including digital holographic imaging), also in a reconstructed image, three kinds of normal images including a +1st-order image, a 0th-order image (transmitted light), and a −1st-order image (conjugate image) are generated);

deriving an evaluation value of the object based on the phase image (see above; also page 1, paragraph [0025]: an object to be achieved by the present invention is to provide, in a realistic image forming optical system, a technique of acquiring an optical phase and the distribution of the optical phase that are useful when the optical system is evaluated); and

correcting the evaluation value by using a correction coefficient determined in accordance with an orientation of the object in the phase image (see pages 2-3, paragraphs [0025] and [0041]: an object to be achieved by the present invention is to provide, in a realistic image forming optical system, a technique of acquiring an optical phase and the distribution of the optical phase that are useful when the "optical system is evaluated". In this technique, extant input image points located at prescribed extant input image spatial coordinates are actually input to obtain interference image data with respect to each extant input image point; however, while the number of the extant input image points is suppressed within a range where the interference image data can be economically obtained, in addition to aberration as designed, error in profile of a refractive surface or a reflective surface of each image forming optical element and internal presence of defects such as eccentricity, surface distance error, and assembly error such as tilt are included. Further, in this method, diffractive optical image forming simulation with respect to an arbitrary input image pattern is performed, for example, or OTF and Zernike expanding "coefficients are calculated", whereby the optical system is evaluated. Giving, as a "correction" thereto, a value of specific coordinates (u, v) on the phase defining plane (T) to calculate a broad sense phase difference δΨ(u, v), which is a difference between the measured broad sense phase and the traced calculated broad sense phase, by the sum with coefficients Σn{Δjn·μn(u, v)} of the expanding functions μn(u, v), and thus to determine the broad sense phase Ψ(u, v) as a sum Ψ(u, v)+δΨ(u, v) of the ideal broad sense phase Ψ(u, v) and the broad sense phase difference δΨ(u, v). Finally, page 8, paragraph [0137]: since the group of interpolation expanding coefficients An belonging to a virtual light emitting point located at arbitrary coordinates (x, y, z) in the virtual input image spatial coordinate system (x, y, z) can be thus calculated, the value of the phase Ψ(u, v) of the light electric field at arbitrary coordinates (u, v) on the phase defining plane (T) can be determined by applying a value of specific coordinates (u, v) to the following formula (7), corresponding to the formula (1), and using a sum with coefficients Σn{An·μn(u, v)} of the expanding functions μn(u, v)).

Regarding claim 2, Okamoto discloses the method for acquiring an evaluation value according to claim 1, wherein the phase image is generated based on an interference image formed by interference between object light transmitted through the object and reference light coherent to the object light (see claim 1; also pages 2-3, paragraphs [0033] and [0037]: this method is characterized by comprising: when the optical system is illuminated with an illumination light flux emitted from one extant input image point located at prescribed extant input image spatial coordinates (xs1, ys1, zs1) with respect to an input side of the optical system, imaging, with an imaging element (Uf), an "interference image generated" by superimposing an extant output light flux output from the optical system (Ot) and a reference light flux "coherent" with the extant output light flux to acquire interference image data (Df), and thus to acquire a measured broad sense phase distribution Ψs1(u, v) belonging to the extant input image point (Ps1) on a phase defining plane (T) located at a prescribed relative position with respect to an output side of the optical system (Ot), based on the interference image data (Df), such that this acquisition operation is applied to each of the extant input image points. Through calculation by a difference, acquiring expanding coefficients configured to represent, as a sum with coefficients Σn{Δjn·μn(u, v)}, each of broad sense phase difference distributions . . . , Ψsj(u, v)−Ψtj(u, v), and . . . , which are each a difference between the measured broad sense phase distribution and the traced calculated broad sense phase distribution. Also page 3, paragraphs [0043-0046]: an optical system phase acquisition method according to the fourth invention of the present invention is characterized in that, in the process for acquiring the measured broad sense phase distributions Ψs1(u, v), Ψs2(u, v), and . . . belonging to each of the extant input image points (Ps1, Ps2, and . . . ), after acquisition of the interference image data collectively including information of "interference images" about all the extant input image points (Ps1, Ps2, and . . . ), the interference image data is separated into the measured broad sense phase distributions. An optical system phase acquisition method according to the fifth invention of the present invention is characterized in that a relay optical system is inserted "between the optical system" and the imaging element so that an extant output light flux output from the optical system is input to the relay optical system, and the relay optical system outputs a relay output light flux which is an output light flux from the relay optical system, and a target in which an interference image generated by superimposing the reference light flux (Fr) is imaged by the imaging element is replaced with the extant output light flux output from the optical system to provide the relay output light flux. An optical system evaluation method according to the seventh invention of the present invention is a method of performing evaluation through image forming simulation of the optical system).
Regarding claim 3, Okamoto discloses the method for acquiring an evaluation value according to claim 1, wherein the evaluation value is a total phase amount obtained by integrating and accumulating phase amounts for pixels of the phase image (see claim 1; also page 5, paragraphs [0078-0079]: phase distribution on the phase defining plane is information having a value at each point on a two-dimensional plane. When the phase distribution is handled as it is, as a two-dimensional arrangement in a similar form to that of data consisting of many "pixels" and acquired by imaging by the above-described imaging element, the amount of held data and the "amount of calculation" to be processed become very large. Therefore, the phase distribution is expanded by coefficients having coordinates on the phase defining plane as a variable. Accordingly, in information on the phase distribution corresponding to each of the actually input typical point images, only expanding coefficients in the expanding by coefficients, that is, only a set of the expanding coefficients, is held, and when a phase corresponding to each point image constituting a "virtual input" pattern is acquired during simulation, a suitable set is selected from sets of groups of the expanding coefficients belonging to the actually input typical point images. When a group of new expanding coefficients is generated by interpolation calculation depending on a position of the point image constituting the virtual input pattern, a value of a phase at arbitrary coordinates on the phase defining plane can be calculated using the group of the expanding coefficients and the expanding functions. Also, page 14, paragraphs [0221-0223]: when the pairs of orders n, m and n′, m′ are the same as each other, the product is a square "integration value" Snm = π/(n+1) when the auxiliary order m is 0, and Snm = π/(2(n+1)) when the auxiliary order m is not 0. The Zernike expanding coefficient Anm can be determined by using the orthogonality. Namely, when any Zernike polynomial of the formula (13), for example the k-th Zernike polynomial in the serial number, is selected, a product of a value of the measured phase distribution Ψsj(u, v) or the traced phase distribution Ψtj(u, v) desired to be expanded in the coordinates (α, β) and a value of the k-th Zernike polynomial Znm(ρ, θ) in the same coordinates (α, β) (using coordinate conversion (α, β) → (ρ, θ) based on the formula (12)) is subjected to "numerical integration" in a unit circle, that is, the range α² + β² ≤ 1. The Zernike expanding coefficient Anm corresponding to the k-th Zernike polynomial Znm(ρ, θ) can be obtained by dividing the calculated integrated value by the square integration value Snm corresponding to the k-th Zernike polynomial Znm(ρ, θ)).

Regarding claim 4, Okamoto discloses the method for acquiring an evaluation value according to claim 1, wherein a virtual object simulating a shape, a dimension, and a refractive index of the object is created based on the phase image, and the correction coefficient is derived by using the virtual object (see claim 1; also paragraphs [0001-0002]: the present invention uses a technology used in so-called digital holographic imaging, and in a realistic image forming optical system, this invention relates to a method of acquiring an optical phase and the distribution of the optical phase on, for example, an exit pupil plane of the optical system, that is useful when the optical system is evaluated. The realistic image forming optical system is configured by, for example, coaxially arranging one or more image forming optical elements such as a lens having a refractive surface of a concave surface or a convex surface, a lens achieved by the fact that a refractive medium has a specific "refractive index" distribution, and a mirror having a reflective surface of a concave surface or a convex surface. By using the present method, when an image of an unrealistic arbitrary "virtual" input pattern is formed by a realistic optical system, an output pattern, a resolution, and so on can be confirmed through "simulation". Therefore, this method is applicable to inspection of an image forming performance of a realistic optical system. Also page 2, paragraphs [0020] and [0029-0030]: [these techniques], as described above, merely evaluate physical characteristics inside an optical system, light quantity distribution of an output light flux, and so on with the use of the digital holographic imaging technology, or "correct" defects of a taken image due to aberration of the optical system and defects of photographing conditions, and thus cannot confirm and evaluate an output pattern, a resolution, and so on through "simulation" when an image of an arbitrary virtual input pattern is formed. After the acquisition of the group of the expanding coefficients Ajn, with respect to a virtual light emitting point located at coordinates (x, y, z) in a virtual input image spatial coordinate system (x, y, z), in order to acquire a broad sense phase on the phase defining plane (T) of a virtual output light flux from the optical system (Ot) when the optical system (Ot) is illuminated with a virtual illumination light flux emitted from the virtual light emitting point, interpolation calculation is applied to the group of the expanding coefficients Ajn based on the position occupied by the coordinates (x, y, z) at the virtual light emitting point among a set of the extant input image spatial coordinates. Also page 3, paragraphs [0038-0040]: after the acquisition of the expanding coefficients, with respect to a virtual light emitting point located at coordinates (x, y, z) in a virtual input image spatial coordinate system (x, y, z), in order to acquire a broad sense phase Ψ(u, v) on the phase defining plane of a virtual output light flux from the optical system when the optical system is illuminated with a virtual illumination light flux emitted from the virtual light emitting point, first applying interpolation calculation to the group of the expanding coefficients based on the position occupied by the coordinates (x, y, z) at the virtual light emitting point among a set of the extant input image spatial coordinates and . . . to previously calculate the group of interpolation expanding coefficients; then calculating an optical path length Γ(u, v) from a virtual light emitting point located at coordinates (x, y, z) to the phase defining plane (T) through ray tracing simulation of rays emitted from the virtual light emitting point located at the coordinates (x, y, z), based on design data of the optical system (Ot), to acquire an ideal broad sense phase Ψ(u, v) on the phase defining plane (T) based on the optical path length).

Regarding claim 7, Okamoto discloses the method for acquiring an evaluation value according to claim 1, wherein the object is a cell (see claim 1; also page 9, paragraph [0142]: first, a plurality of rays emitted from the input image point (Pt) corresponding to the extant input image point (Ps1) are set. As a preferable method of setting the rays, for example, an entrance pupil plane of the optical system (Ot) is divided into "cells" with suitable sizes, and the rays are set to be a skew ray group emitted from the input image point (Pt1) and passing through the center of each cell. Also page 10, paragraph [0155]: in the above-described ray tracing simulation based on the design data of the optical system (Ot) applied to the rays emitted from the input image points (Pt1, Pt2, . . . , Ptj, . . . ), as a method of setting rays to be traced, there has been described the case where the entrance pupil plane of the optical system (Ot) is divided into cells with suitable sizes, and the rays are set to be a skew ray group emitted from the input image points (Pt1, Pt2, . . . , Ptj, . . . ) and passing through the center of each cell. Hereinafter, in view of the purpose of determining the value of the phase Ψ(u, v) of the light electric field at the coordinates (u, v) on the phase defining plane (T), supposing that the rays (ray group) to be used are already determined, the ray tracing simulation applied to one of the rays will be described).

With regard to claim 6, the arguments analogous to those presented above for claims 1, 2, 3, 4, and 7 are respectively applicable to claim 6.

Allowable Subject Matter

Claims 5 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Seyed Azarian, whose telephone number is (571) 272-7443. The examiner can normally be reached Monday through Thursday from 6:00 a.m. to 7:30 p.m. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/SEYED H AZARIAN/
Primary Examiner, Art Unit 2667
September 17, 2025
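For readability, the expansion-and-correction scheme the examiner quotes from Okamoto (abstract, [0025], [0041], [0137], [0221]-[0223]) can be restated in standard notation. This is an editorial paraphrase of the cited passages, not language from the reference itself:

```latex
% Measured broad-sense phase distribution expanded over basis
% functions \mu_n(u,v) on the phase defining plane (T):
\Psi_s(u,v) = \sum_n A_{jn}\,\mu_n(u,v)

% Correction: the phase difference between measured and ray-traced
% phases, and the corrected phase as their sum:
\delta\Psi(u,v) = \sum_n \Delta_{jn}\,\mu_n(u,v),
\qquad
\Psi_{\mathrm{corr}}(u,v) = \Psi(u,v) + \delta\Psi(u,v)

% Zernike expanding coefficient by orthogonality over the unit circle:
A_{nm} = \frac{1}{S_{nm}}
  \iint_{\alpha^2+\beta^2 \le 1}
  \Psi(\alpha,\beta)\, Z_n^m(\rho,\theta)\, d\alpha\, d\beta,
\qquad
S_{nm} =
\begin{cases}
  \pi/(n+1), & m = 0,\\
  \pi/\bigl(2(n+1)\bigr), & m \neq 0.
\end{cases}
```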

Prosecution Timeline

Oct 03, 2023
Application Filed
Sep 15, 2025
Examiner Interview (Telephonic)
Sep 18, 2025
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602783: SYSTEM AND METHODS FOR AUTOMATIC IMAGE ALIGNMENT OF THREE-DIMENSIONAL IMAGE VOLUMES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597134: IMAGE PROCESSING DEVICE, METHOD, AND PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598264: Color Correction for Electronic Device with Immersive Viewing (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586206: METHOD FOR IDENTIFYING A MATERIAL BOUNDARY IN VOLUMETRIC IMAGE DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573039: IMAGING SYSTEMS AND METHODS USEFUL FOR PATTERNED STRUCTURES (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 99% (+11.7%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 901 resolved cases by this examiner. Grant probability derived from career allow rate.
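The headline figures follow directly from the examiner's career record. A minimal sketch of the arithmetic; the cap applied to the with-interview figure is an assumption, since 89.6% + 11.7 points would exceed 100%:

```python
# Grant probability from the examiner's career record shown above:
# 807 granted out of 901 resolved cases.
granted, resolved = 807, 901
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 89.6%, displayed in the report as 90%

# Adding the +11.7-point interview lift would exceed 100%, so the
# report's 99% "with interview" figure is presumably capped
# (the cap is an assumption, not stated in the report).
with_interview = min(allow_rate + 0.117, 0.99)
print(f"{with_interview:.0%}")  # 99%
```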
