Prosecution Insights
Last updated: April 19, 2026
Application No. 18/198,898

MULTI-VIEW SINGLE-FRAME PHASE DEMODULATION METHOD BASED ON STRUCTURED LIGHT FIELD AND RELATED COMPONENTS

Non-Final OA (§101, §103)
Filed
May 18, 2023
Examiner
SPRATT, BEAU D
Art Unit
2143
Tech Center
2100 — Computer Architecture & Software
Assignee
Shenzhen University
OA Round
1 (Non-Final)
79%
Grant Probability
Favorable
1-2
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
342 granted / 432 resolved
+24.2% vs TC avg
Strong +27% interview lift
+26.6%
Interview Lift
resolved cases with interview vs. without
Typical timeline
3y 1m
Avg Prosecution
37 currently pending
Career history
469
Total Applications
across all art units

Statute-Specific Performance

§101
12.2%
-27.8% vs TC avg
§103
63.7%
+23.7% vs TC avg
§102
11.9%
-28.1% vs TC avg
§112
5.4%
-34.6% vs TC avg
Tech Center average comparisons are estimates • Based on career data from 432 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-9 are presented in the case.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on application CN202210562804.5 filed in China on 05/23/2022. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 ("2019 PEG").

Claims 1, 8, and 9 have the following abstract-idea analysis.

Step 1: The claims are directed to a method, a computer device, and a computer-readable medium (CRM), respectively, and accordingly fall within the statutory categories. For clarity, "non-transitory" should be added to claim 9; the specification (¶¶ 80-81) recites "non-transient".

Step 2A, Prong 1: The claims recite the abstract-idea limitations of "calculating the numerators and denominators of the multiple views via an arc tangent function and obtaining wrapped phases of multiple views". These limitations include mathematical concepts (such as a mathematical formula or equation). See MPEP § 2106, where the Court "held that simply implementing a mathematical principle on a physical machine, namely a computer, was not a patentable application of that principle". Thus, this step is an abstract idea in the "mathematical concepts" grouping. The specification also provides example operations performed (USPGPUB ¶ 56).
Other sections of the claims, such as "constructing an LFDNet neural network; collecting a data set by a structured light field system and training the LFDNet neural network to optimize the LFDNet neural network; and inputting a multi-view fringe image to be predicted into the optimized LFDNet neural network, outputting numerators and denominators of multiple views", are advanced processes but are too generic and/or high-level to be listed as a judicial exception given the available descriptions and MPEP comparisons.

Step 2A, Prong 2: The judicial exceptions recited in these claims are not integrated into a practical application. Merely invoking "a neural network", "a processor", or "memory" does not yield eligibility. The claims remain in line with mental concepts; claims 1, 8, and 9 are not specific to a practical application. The additional elements, such as processors and instructions, do not include specialized hardware. See MPEP § 2106.05(f). Claims 1, 8, and 9 do not recite a particular field of use, and even doing so may not be sufficient to overcome the abstract-idea rejection; merely applying a model to a field or data, without an advancement in the field or new hardware, is ineligible. MPEP § 2106.05(h).

Step 2B: The claims do not contain significantly more than their judicial exceptions. The processors, memory, and other hardware are in their standard forms in the field. These additional elements are well-understood, routine, and conventional activity; see MPEP § 2106.05(d)(II). The claims lack any particular "how" or algorithm for a solution in a field in a novel way. The claims would require more specificity: processes incapable of being performed with simple mathematics or mental processes, or structure more substantial than conventional devices, such as non-textbook implementations.

Claims 2-7 merely narrow the previously recited abstract-idea limitations with more abstract concepts and/or routine fundamental processes.
For the reasons described above with respect to claims 1, 8, and 9, this judicial exception is not meaningfully integrated into a practical application and does not amount to significantly more than the abstract idea. The Step 1 and Step 2A (Prongs 1 and 2) analyses remain the same as the independent-claim analysis above. See the specification for more practical-application concepts, as none are seen in claims 2-7. With respect to Step 2B, these claims disclose limitations similar to those described for the independent claims above and do not provide anything significantly more than mathematical or mental concepts.

Claims 2-7 recite the additional elements of: acquiring a tensor feature H×W×V of a multi-view fringe image and using same as an input tensor of the LFDNet neural network, wherein H represents a tensor height, W represents a tensor width, and V represents the number of tensor channels; and performing a plurality of times of convolution processing, downsampling processing, and upsampling processing on the input tensor to obtain an output tensor H×W×2V of the LFDNet neural network.
performing convolution processing on the input tensor by a first dense convolution block to obtain a feature tensor H×W×4V; performing downsampling processing on the feature tensor H×W×4V by a first downsampling block to obtain a feature tensor H/2×W/2×2V; performing convolution processing on the feature tensor H/2×W/2×2V by a second dense convolution block to obtain a feature tensor H/2×W/2×5V; performing downsampling processing on the feature tensor H/2×W/2×5V by a second downsampling block to obtain a feature tensor H/4×W/4×3V; performing convolution processing on the feature tensor H/4×W/4×3V by a third dense convolution block to obtain a feature tensor H/4×W/4×6V; performing upsampling processing on the feature tensor H/4×W/4×6V by a first upsampling block to obtain a feature tensor H/2×W/2×5V; splicing the feature tensor H/2×W/2×5V outputted by the first upsampling block with the feature tensor H/2×W/2×5V outputted by the second dense convolution block in channel dimension through a first switching connection to obtain a first spliced tensor; performing convolution processing on the first spliced tensor by a fourth dense convolution block to obtain a feature tensor H/2×W/2×13V; performing upsampling processing on the feature tensor H/2×W/2×13V by a second upsampling block to obtain a feature tensor H×W×4V; splicing the feature tensor H×W×4V outputted by the second upsampling block with the feature tensor H×W×4V outputted by the first dense convolution block in channel dimension through a second switching connection to obtain a second spliced tensor; performing convolution processing on the second spliced tensor by a fifth dense convolution block to obtain a feature tensor H×W×11V; and performing convolution processing on the feature tensor H×W×11V by an output convolution block to obtain an output tensor H×W×2V of the LFDNet neural network, wherein the output tensor corresponds to respective numerators and denominators of V multi-view fringe images.
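The encoder-decoder shape flow recited above can be checked with a short shape walk. This is an illustrative sketch only, not part of the prosecution record: the block names (`dense`, `down`, `up`, `splice`) and the sample values `V = 3`, `H = W = 128` are hypothetical, and each claimed block is reduced to its effect on tensor shape.

```python
# Hypothetical shape walk through the claimed LFDNet pipeline.
# Only tensor shapes (height, width, channels) are tracked.
V = 3              # number of views (tensor channels), example value
H, W = 128, 128    # example spatial size

def dense(shape, growth):
    """Dense convolution block: spatial size preserved, channels grow."""
    h, w, c = shape
    return (h, w, c + growth)

def down(shape, out_c):
    """Downsampling block: spatial size halved, channels set per the claim."""
    h, w, c = shape
    return (h // 2, w // 2, out_c)

def up(shape, out_c):
    """Upsampling block: spatial size doubled, channels set per the claim."""
    h, w, c = shape
    return (h * 2, w * 2, out_c)

def splice(a, b):
    """Switching connection: channel-dimension concatenation."""
    assert a[:2] == b[:2], "spatial sizes must match before splicing"
    return (a[0], a[1], a[2] + b[2])

x   = (H, W, V)              # input tensor H×W×V
d1  = dense(x, 3 * V)        # first dense block   -> H×W×4V
p1  = down(d1, 2 * V)        # first downsampling  -> H/2×W/2×2V
d2  = dense(p1, 3 * V)       # second dense block  -> H/2×W/2×5V
p2  = down(d2, 3 * V)        # second downsampling -> H/4×W/4×3V
d3  = dense(p2, 3 * V)       # third dense block   -> H/4×W/4×6V
u1  = up(d3, 5 * V)          # first upsampling    -> H/2×W/2×5V
s1  = splice(u1, d2)         # first splice        -> H/2×W/2×10V
d4  = dense(s1, 3 * V)       # fourth dense block  -> H/2×W/2×13V
u2  = up(d4, 4 * V)          # second upsampling   -> H×W×4V
s2  = splice(u2, d1)         # second splice       -> H×W×8V
d5  = dense(s2, 3 * V)       # fifth dense block   -> H×W×11V
out = (d5[0], d5[1], 2 * V)  # output conv block   -> H×W×2V

assert out == (H, W, 2 * V)
```

One observation from the walk: every dense convolution block in the recitation grows the channel count by exactly 3V, while the channel counts after the down- and upsampling blocks follow no single rule and are fixed individually by the claim. The final H×W×2V output stacks one numerator and one denominator channel per view.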
measuring S different scenes by the structured light field system, and performing 12-step phase shift fringe projection on each scene and collecting to obtain a 12-step phase shift fringe image; and calculating the 12-step phase shift fringe image of each view in each scene according to the following formulas:

Nu = Σ_{n=1..N} (I_n sin δ_n); De = Σ_{n=1..N} (I_n cos δ_n),

where Nu represents the numerator, De represents the denominator, N is the number of steps of phase shift, I_n is a phase shift fringe image, and δ_n represents a phase shift amount; splicing the numerator Nu with the denominator De in channel dimension to obtain an output tensor H×W×2V, so as to obtain data of each scene, which can be represented as {I → Nu, De}; and obtaining a data set {I_s → Nu_s, De_s | s = 1, 2, …, S} based on the measured S different scenes.

dividing the data set into a training set, a validation set, and a test set; training the LFDNet neural network by utilizing the training set, and predicting in a training process by utilizing the validation set and calculating a predicted result error to optimize the LFDNet neural network; and predicting with the optimized LFDNet neural network by utilizing the test set and calculating the predicted result error to validate a network effect and the accuracy of the phase demodulation method.

calculating wrapped phases of multiple views according to the following formula:

φ_v = -atan(Nu_v / De_v), v = 1, 2, …, V,

where atan() represents an arc tangent function and v represents the number of views.

projecting a single fringe image to a scene to be detected by a projector engine in the structured light field system; and collecting deformed fringes in the scene to be detected by a light field camera in the structured light field system to obtain the 12-step phase shift fringe images of multiple views.

a computer device, comprising a memory, a processor, and a computer program stored in the memory and capable of running on the processor.
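The recited N-step phase-shifting relations can be exercised numerically. The following is a minimal sketch, not part of the record, assuming synthetic cosinusoidal fringes I_n = A + B·cos(φ + δ_n) with made-up constants; `numpy.arctan2` is used in place of the recited plain arctangent so the full wrapped range is recovered without quadrant ambiguity.

```python
import numpy as np

# Illustrative N-step phase-shifting demodulation; all values are synthetic.
N = 12                                   # number of phase-shift steps
H, W = 4, 4                              # small example image
delta = 2 * np.pi * np.arange(N) / N     # phase shift amounts δ_n

# Ground-truth phase and synthetic fringe images I_n = A + B·cos(φ + δ_n)
phi = np.linspace(-3.0, 3.0, H * W).reshape(H, W)
fringes = [128.0 + 100.0 * np.cos(phi + d) for d in delta]

# Numerator and denominator per the recited formulas
Nu = sum(I_n * np.sin(d) for I_n, d in zip(fringes, delta))
De = sum(I_n * np.cos(d) for I_n, d in zip(fringes, delta))

# Wrapped phase; arctan2 resolves the quadrant that a plain atan cannot
phi_wrapped = -np.arctan2(Nu, De)

assert np.allclose(phi_wrapped, phi)     # the synthetic phase is recovered
```

With evenly spaced shifts over a full cycle, the background term A cancels in both sums, leaving Nu ∝ -sin φ and De ∝ cos φ, which is why -arctan2(Nu, De) returns the wrapped phase exactly on this synthetic data.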
These elements are more abstract concepts, generic applications to a field of use, or well-understood, routine, and conventional activity (see MPEP § 2106.05(d)), and cannot simply be appended to qualify as significantly more or as a practical application. What type of application, or what structure of components beyond generic machine learning, is intended remains unknown from these claims. Therefore, claims 2-7, 9-14, and 16-20 also recite abstract ideas that are not integrated into a practical application and do not amount to significantly more than the judicial exception, and are rejected under 35 U.S.C. § 101.

Regarding claim 9, the claim recites "A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program". However, the phrase "computer-readable storage medium" is broad enough to include both non-transitory and transitory media. The specification does not explicitly limit the claim to a non-transitory computer-readable medium (see specification, USPGPUB ¶¶ [0080]-[0081], where transitory and non-transitory storage mediums are discussed but "readable medium" is not defined: "The computer-readable storage medium may be..."). When the specification is silent, the broadest reasonable interpretation (BRI) of a CRM and a computer-readable storage medium (CRSM), in view of the state of the art, covers a signal per se. See Ex parte Mewherter, No. 2012-007962 (PTAB 2013). Therefore, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Feng et al. (US 20230122985 A1, hereinafter Feng) in view of Chen et al. (US 20220203700 A1, hereinafter Chen).

As to independent claim 1, Feng teaches a multi-view single-frame phase demodulation method based on a structured light field, comprising: [phase analysis in fringe images, ¶3]

constructing an LFDNet neural network; [constructs a GAN network for image patterns, ¶¶6-9: "A multi-scale generative adversarial neural network model is constructed"]

collecting a data set by a structured light field system and training the LFDNet neural network to optimize the LFDNet neural network; and [using a camera and a projector to collect 1050 training fringe images from 150 different scenarios with a 7-step phase-shifting method.
These images are then used to train the multi-scale generative adversarial network, ¶¶56-57: "the training data collected in step 3 are used to train the multi-scale generative adversarial network"]

inputting a [[multi-view]] fringe image to be predicted into the optimized LFDNet neural network, outputting numerators and denominators of multiple views, and [fringe images are input and sine/cosine terms are output (numerators/denominators), ¶10: "a fringe pattern is fed into the trained multi-scale network where the generator outputs the sine term, cosine term, and the modulation image of the input pattern"]

calculating the numerators and denominators of the multiple views via an arc tangent function and obtaining wrapped phases of multiple views. [arctangent function to compute phases, ¶10, ¶54: "The sine term M(x, y) and cosine term D(x, y) are substituted into the arctangent function to calculate the phase ϕ(x, y):"]

Feng does not specifically teach a multi-view fringe image. However, Chen teaches a multi-view fringe image. [collects a series of fringe images from multiple cameras (multi-view), ¶46: "collect a series of interference fringe images at each height position and record the corresponding vertical position value"], [inputs images and obtains coordinates, ¶123]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image input of Feng by incorporating the multi-view fringe image disclosed by Chen, because both techniques address the same field of image analysis, and incorporating Chen into Feng improves inspection of images with real-time processing and identification of defects.
[Chen ¶99]

As to dependent claim 6, the rejection of claim 1 is incorporated. Feng and Chen further teach wherein the inputting a multi-view fringe image to be predicted into the optimized LFDNet neural network, outputting numerators and denominators of multiple views, and calculating the numerators and denominators of the multiple views via an arc tangent function and obtaining wrapped phases of multiple views comprises: [Feng, arctangent function to compute phases with sine/cosine terms, ¶10, ¶54: "The sine term M(x, y) and cosine term D(x, y) are substituted into the arctangent function to calculate the phase ϕ(x, y):"] calculating wrapped phases of multiple views according to the following formula: φ_v = -atan(Nu_v / De_v), v = 1, 2, …, V, where atan() represents an arc tangent function and v represents the number of views. [Feng, arctangent formula, ¶¶54-57]

As to dependent claim 8, the rejection of claim 1 is incorporated. Feng and Chen further teach a computer device, comprising a memory, a processor, and a computer program stored in the memory and capable of running on the processor, wherein when executing the computer program, the processor implements the multi-view single-frame phase demodulation method based on a structured light field according to claim 1. [Feng, computer, ¶56]

As to dependent claim 9, the rejection of claim 1 is incorporated. Feng and Chen further teach a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program; and when executed by a processor, the computer program causes the processor to perform the multi-view single-frame phase demodulation method based on a structured light field according to claim 1. [Feng, computer, ¶56]

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Feng in view of Chen, as applied in the rejection of claim 1 above, and further in view of Mazo (US 10580131 B2).

As to dependent claim 2, the rejection of claim 1 over Feng and Chen is incorporated.
Feng and Chen do not specifically teach wherein the constructing an LFDNet neural network comprises: acquiring a tensor feature H×W×V of a multi-view fringe image and using same as an input tensor of the LFDNet neural network, wherein H represents a tensor height, W represents a tensor width, and V represents the number of tensor channels; and performing a plurality of times of convolution processing, downsampling processing, and upsampling processing on the input tensor to obtain an output tensor H×W×2V of the LFDNet neural network.

However, Mazo teaches wherein the constructing an LFDNet neural network comprises: acquiring a tensor feature H×W×V of a multi-view fringe image and using same as an input tensor of the LFDNet neural network, wherein H represents a tensor height, W represents a tensor width, and V represents the number of tensor channels; and [slices (images) and 3×3 features, Col. 21 ln. 41-64: "processing the target 2D slice 502 and single expanding component 512. No skip connections are necessarily implemented between the other contracting components (506A, 506C) that process the nearest neighbor slices (504A, 504C)"], [TensorFlow, Col. 25 ln. 11-22] performing a plurality of times of convolution processing, downsampling processing, and upsampling processing on the input tensor to obtain an output tensor H×W×2V of the LFDNet neural network. [convolutional layers and down-/upsampling for multiple image slices, Col. 21 ln.
41-64: "3×3 convolutional layers"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the fringe images disclosed by Feng and Chen by incorporating the construction of the LFDNet neural network disclosed by Mazo (acquiring a tensor feature H×W×V of a multi-view fringe image and using same as an input tensor, wherein H represents a tensor height, W represents a tensor width, and V represents the number of tensor channels; and performing a plurality of times of convolution processing, downsampling processing, and upsampling processing on the input tensor to obtain an output tensor H×W×2V), because all techniques address the same field of image analysis, and incorporating Mazo into Feng and Chen reduces the time to assess images and identify abnormal features. [Mazo, Col. 1 ln. 19-40]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Huang et al. (US 20230306675 A1) teaches a neural light field network (see ¶17).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Beau Spratt, whose telephone number is (571) 272-9919.
The examiner can normally be reached 8:30 am to 5:00 pm (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is (571) 483-7388.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BEAU D SPRATT/
Primary Examiner, Art Unit 2143

Prosecution Timeline

May 18, 2023
Application Filed
Feb 06, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715
Cementing Lab Data Validation based On Machine Learning
2y 5m to grant Granted Apr 07, 2026
Patent 12596955
REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12596956
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12561464
CATALYST 4 CONNECTIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12561606
TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.6%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
