Prosecution Insights
Last updated: April 19, 2026
Application No. 18/822,496

THREE-DIMENSIONAL IMAGE GENERATION METHOD FOR GENERATING AN IMAGE ACCORDING TO TWO IMAGES OF DIFFERENT TIMES, AND GENERATING A THREE-DIMENSIONAL IMAGE ACCORDINGLY

Non-Final OA: §103, §112
Filed: Sep 03, 2024
Examiner: PUNTIER, CHRIS ALEJANDRO
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Qisda Corporation
OA Round: 1 (Non-Final)
Grant Probability: 94% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 94% (above average); 29 granted / 31 resolved; +31.5% vs TC avg
Interview Lift: +10.0% (moderate), measured over resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 12 applications currently pending
Career History: 43 total applications across all art units

Statute-Specific Performance

§101: 6.6% (-33.4% vs TC avg)
§103: 70.9% (+30.9% vs TC avg)
§102: 15.4% (-24.6% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 31 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4, 5, 6, 12, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 4 and 5 each recite the limitation "the stripe" in line 1. There is insufficient antecedent basis for this limitation because no “stripe” is introduced in the claim or in the claim from which it depends.

Claims 6, 12, and 18 each recite the limitation “a difference” in line 2. It is unclear what is meant in each claim by taking “a difference” of two images.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Blayvas (US-20080285056-A1) and Thirion (US-20020012478-A1).

Regarding claim 1, Blayvas discloses a three-dimensional image generation method comprising: projecting a first light pattern to an object to generate a first image at a first time; capturing the first image (para. [0020]: “FIG. 1 shows a 3D scanner consisting of a pattern projector 11 and a camera 14. The pattern projector 11 consists of several infrared light emitting diodes 12 and a pattern mask 13. The camera 14 has a lens 15 and an image sensor 16. The 3D scanner operates as follows: the first infrared LED turns on, and projects a first pattern through the pattern mask 13 on the object 17. The image of the object and the first infrared pattern projected on it is focused by the lens 15 on the image sensor 16.” Blayvas expressly teaches projecting a first light pattern onto an object and capturing the resulting first image. A pattern projector, first projected pattern, object, and image acquisition are all disclosed.
Although the exact phrase “first image at a first time” is not taught, the sequential operation of the scanner implies a first capture event.); projecting a second light pattern to the object to generate a second image at a second time; capturing the second image (para. [0020]: “Then the first LED is switched off and the second LED is switched on, projecting the second pattern. The second image is acquired with the pixels in the visible band containing conventional image of the object and the infrared pixels containing the image of the second infrared pattern. This process is repeated n times (where n is one or more), for each of the n LEDs.” Blayvas teaches a second projected pattern and capturing a second image after the first. The sequential order is built into the LED switching.).

However, Blayvas alone does not fully disclose generating a third image corresponding to a third time according to the first image and the second image; and generating a three-dimensional image of the object according to the first image, the second image and the third image; wherein the first time precedes the second time, and the second time precedes the third time.

The combination of Blayvas and Thirion does disclose generating a third image corresponding to a third time according to the first image and the second image (Thirion discloses at para. [0043-0045]: “The invention also proposes a method for processing comparable digital images, which comprises the following steps: determining a registration transformation between one of the images and the other, starting from the two sets of image data, re-sampling a first of the two sets of image data, representing the registration image, into a third set of image data relating to the same image and able to be superposed directly, sample by sample, on the second set of image data…” Para. [0068]: “A registration module 10 is charged with receiving the first and second sets of image data in order to determine a registration transformation termed “rigid” TR between the first and second images.” Para. [0070]: “The registration transformation applied to the image 1 then feeds a sampling module 20 intended to re-sample the image 1 processed by the registration transformation TR so as to form a third set of image data representing a third image able to be precisely superposed, sample by sample, on the second image (reference image).” Thirion teaches taking first and second image data and generating a third image from them via registration and re-sampling, aligning with the claim element.); and generating a three-dimensional image of the object according to the first image, the second image and the third image (Blayvas at para. [0020]: “At the end of the process there are n images of the object in the visible band under natural illumination, and n images in the infrared band, with the object illuminated by n different (or shifted) infrared patterns. Processing of the infrared images, allows reconstruction of the 3D shape of the object.” Further in para. [0026]: “Therefore, the projected sine patterns are mutually phase-shifted by 2π/3. The infrared images acquired with the first, second and third projected patterns are denoted as I1, I2 and I3. The phase φ of the projected pattern is obtained from the three images as φ = arctan[√3(I1 − I3)/(2I2 − I1 − I3)], which can be verified via the trigonometric equalities. Knowing the phase φ of the projected pattern for each pixel allows the 3D reconstruction of the shape by triangulation [2].” Blayvas teaches support for generating a 3D shape from a first, second and third image. The three images “denoted as I1, I2 and I3” are taught as being used to obtain phase information and reconstruct the shape of the object, aligning with the claim element. Since the third image taught by Blayvas is a pattern image and not a generated third image, the teachings of Thirion can be incorporated for the generated third image.); wherein the first time precedes the second time, and the second time precedes the third time (Blayvas teaches at para. [0020]: “The 3D scanner operates as follows: the first infrared LED turns on, and projects a first pattern through the pattern mask 13 on the object 17. The image of the object and the first infrared pattern projected on it is focused by the lens 15 on the image sensor 16. The image sensor has pixels sensitive to the visible band and to the infrared band. The pixels sensitive to the visible band acquire the conventional image of the object under the natural illumination. The pixels sensitive to the infrared acquire the infrared image of object with the infrared pattern projected on it. Then the first LED is switched off and the second LED is switched on, projecting the second pattern. The second image is acquired…” While the phrase “third time” is not explicitly stated, the sequential order of the image capturing implies a first and second time. A third time can also be inferred when generating the third image as taught by Thirion, as mentioned above.)
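The three-step phase recovery quoted from Blayvas above, φ = arctan[√3(I1 − I3)/(2I2 − I1 − I3)], can be sketched per pixel as follows. This is a minimal illustration only; the function name, the use of atan2 (to preserve the quadrant and tolerate a zero denominator), and the synthetic test intensities are assumptions, not details from the cited references.

```python
import math

def phase_from_three(i1, i2, i3):
    # Three-step phase-shift recovery for one pixel:
    #   phi = arctan(sqrt(3) * (I1 - I3) / (2*I2 - I1 - I3))
    # atan2 is used here instead of a bare arctan (an implementation
    # choice, not from the references) so the quadrant is preserved
    # and a zero denominator does not raise an error.
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic pixel: background A, modulation B, patterns shifted by 2*pi/3.
A, B, true_phi = 100.0, 50.0, 0.7
i1 = A + B * math.cos(true_phi - 2.0 * math.pi / 3.0)
i2 = A + B * math.cos(true_phi)
i3 = A + B * math.cos(true_phi + 2.0 * math.pi / 3.0)
recovered = phase_from_three(i1, i2, i3)  # matches true_phi to rounding error
```

For three sine patterns shifted by 2π/3, I1 − I3 = √3·B·sin φ and 2I2 − I1 − I3 = 3B·cos φ, so the arctangent recovers φ exactly; this is the verification "via the trigonometric equalities" the quoted passage mentions.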
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Thirion into the teachings of Blayvas in order to be able to generate a third image from previous inputs, improving reconstruction continuity between captures and reducing errors.

Regarding claim 7, claim 7 is similar to claim 1, but recites generating a second image corresponding to a second time according to the first image and the third image rather than generating a third image corresponding to a third time according to the first image and the second image. This difference does not patentably distinguish claim 7 because Thirion teaches generating an additional image from two images, and it would have been obvious to apply that teaching in Blayvas to generate an intermediate image between two captured images for improved reconstruction and 3D image generation. Every other element of claim 7 is rejected under the same rationale as claim 1.

Regarding claim 13, claim 13 is similar to claim 1, but recites generating a first image corresponding to a first time according to the second image and the third image rather than generating a third image corresponding to a third time according to the first image and the second image. This difference does not patentably distinguish claim 13 because Thirion teaches generating an additional image from two images, and it would have been obvious to apply that teaching in Blayvas to generate an initial image prior to the two captured images for improved reconstruction and 3D image generation. Every other element of claim 13 is rejected under the same rationale as claim 1.

Claims 2, 8, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Blayvas as modified by Thirion as applied to claim 1 above, and further in view of Trail (US-20170123526-A1).
Regarding claim 2, the combination of Blayvas and Thirion discloses all the elements of claim 1 as discussed above. However, the combination does not disclose wherein the first light pattern is the same as the second light pattern.

Trail does disclose wherein the first light pattern is the same as the second light pattern (para. [0040]: “In some embodiments, the structured light emitter 310 outputs a single frequency or a narrowband spectrum of light. In alternate embodiments, the structured light emitter 310 outputs N single frequencies or N narrow bands with distinct center-frequencies.” Trail explicitly teaches maintaining the emitted light pattern, meaning that sequential emissions will have the same pattern, aligning with the claim element.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Trail into the combined teachings of Blayvas and Thirion in order to use the same light pattern, allowing for reduced complexity and easier calibration.

Claims 8 and 14, which are similar in scope to claim 2, are thus rejected under the same rationale.

Claims 3-6, 9-12, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Blayvas as modified by Thirion as applied to claim 1 above, and further in view of Takabayashi (US-20120133954-A1).

Regarding claim 3, the combination of Blayvas and Thirion discloses all the elements of claim 1 as discussed above. However, the combination does not fully disclose wherein generating the third image corresponding to the third time according to the first image and the second image comprises: generating a coordinate position of a stripe of the third image according to a coordinate position of a stripe of the first image and a coordinate position of a stripe of the second image.
Takabayashi does disclose wherein generating the third image corresponding to the third time according to the first image and the second image comprises: generating a coordinate position of a stripe of the third image according to a coordinate position of a stripe of the first image and a coordinate position of a stripe of the second image (para. [0093-0095]: “The boundary position between the light and dark portions in a position “a” of the horizontal coordinate of the binary coded light pattern 21 and the reversed binary coded light pattern 22 corresponds to the boundary position between the light and dark portions in the 1st bit stripe light pattern 20…. Paying attention to the correspondence to the boundary position similar to the above, the boundary position between the light and dark portions in positions “b” and “c” of the binary coded light pattern 21 and the reversed binary coded light pattern 22 is common to the boundary position between the light and dark portions in the 2nd bit stripe light pattern 20… Paying attention to the correspondence to the boundary position similarly to the above, the boundary position between the light and dark portions in positions “d,” “e,” “f,” and “g” of the binary coded light pattern 21 and the reversed binary coded light pattern 22 is common to the boundary position between the light and dark portions in the 3rd bit stripe light pattern 20.” Treating a stripe as a light pattern, as specified in the specification, Takabayashi works with first, second and third light patterns and calculates the boundary position of the first image based on the second and third captured images. Takabayashi distinctly discloses the principle of deriving a stripe coordinate in one image from corresponding stripe coordinates in two other images. This can be used in conjunction with the sequential image generation discussed in claim 1.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Takabayashi into the combined teachings of Blayvas and Thirion because Takabayashi already teaches that the coordinate position of a projected stripe in one captured image can be calculated from corresponding light pattern information in two other captured images to improve the estimation of the stripe position.

Regarding claim 4, the combination of Blayvas and Thirion discloses all the elements of claim 1 as discussed above. However, it does not disclose wherein the coordinate position of the stripe of the first image is S12, the coordinate position of the stripe of the second image is S21, the coordinate position of the stripe of the third image is S31, and S31 = (S12 + S21)/2.

Takabayashi does disclose this limitation (para. [0019]: “The luminance lines 174 and 175 are averaged to obtain an average as a luminance line 176. The luminance line 173 of the stripe light pattern 20 and the luminance line 176 of the average intersect with each other at a position “a”. The position “a” determined by the above process is taken as a boundary position. What is described above is the average image comparison method.” Takabayashi teaches a positional calculation using the captured light patterns. The average is analogous to the “S31 = (S12 + S21)/2” recited in the claim element.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Takabayashi into the combined teachings of Blayvas and Thirion in order to have simple and stable interpolation of stripe position.
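The stripe-coordinate formulas recited in claims 4 and 5 are simple midpoint interpolation and linear extrapolation, respectively. A minimal sketch (function names are illustrative, not taken from the application):

```python
def interpolate_stripe(s12, s21):
    # Claim 4 style: the third image's stripe coordinate is the midpoint
    # of the corresponding coordinates in the first and second images,
    # S31 = (S12 + S21) / 2.
    return (s12 + s21) / 2.0

def extrapolate_stripe(s11, s21):
    # Claim 5 style: the third image's stripe coordinate continues the
    # first-to-second displacement linearly, S31 = (S21 - S11) + S21.
    return (s21 - s11) + s21
```

For stripes observed at coordinates 10.0 and 14.0, the midpoint rule places the derived stripe at 12.0, while the constant-velocity extrapolation predicts 18.0.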
Regarding claim 5, the combination of Blayvas and Thirion discloses all the elements of claim 1 as discussed above. However, it does not disclose wherein the coordinate position of the stripe of the first image is S11, the coordinate position of the stripe of the second image is S21, the coordinate position of the stripe of the third image is S31, and S31 = (S21 - S11) + S21.

The combination of Blayvas, Thirion and Takabayashi does disclose this limitation (Takabayashi para. [0014]: “It is necessary to accurately determine a horizontal x-coordinate position (hereinafter referred to as a boundary position) between the light and dark portions from the captured image data to improve accuracy in the three dimensional measurement of the spatial coding method.” Here Takabayashi teaches the coordinate positions of a stripe in a structured-light 3D context. Further, Takabayashi in para. [0096]: “More specifically, the boundary position of a first captured image of a first light pattern (the light pattern 20) can be calculated based on a second captured image of a second light pattern (the binary coded light pattern 21) and a third captured image of a third light pattern (the reversed binary coded light pattern 22).” Here Takabayashi teaches calculating each of the first, second and third image positions based on the other images. Further, as mentioned in the rejection of claim 4, Takabayashi teaches using averages or simple mathematical operations to derive a positional result. Further, Thirion discloses in para. [0101]: “In the case of a discretized representation D−1,2(G) of the deformation field, it is also possible to use a stochastic distribution of points Pl, the co-ordinates of which are floating in the space E. It is then sufficient, in order to evaluate D1,2(Pl), to use the discretized field D−1,2 and an n-linear type interpolation of the discrete field, within the mesh G in which the point Pl falls. In this mesh i, the point Pl has co-ordinates αl, βl and γl, all between 0 and 1 (inclusive values). The interpolation will be 2-linear in the case of a 2D image and 3-linear in the case of a 3D image.” Thirion discloses interpolation in the context of registration and re-sampling of image data. Since the formula used in the claim element is a form of interpolation, this is an obvious adjustment.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Takabayashi and Thirion into the teachings of Blayvas in order to efficiently estimate later stripe positioning and have more predictable results.

Regarding claim 6, the combination of Blayvas and Thirion discloses all the elements of claim 1 as discussed above. However, it does not fully disclose further comprising: determining whether a difference between the first image and the second image is greater than a predetermined value; wherein the third image is generated if the difference between the first image and the second image is less than the predetermined value.

Takabayashi does disclose this limitation (para. [0110-0111]: “In step S502, it is determined whether the distance between the calculated intersections in the maximum and the minimum luminance value 41 and 42 is shorter than a predetermined pixel.
In general, even if the boundary between the light and dark portions is expanded owing to blur due to a focal length or reflectance of the object 17, the expansion is within several pixels. If the distance between the intersections exceeds several pixels, this seems to be a measurement error. If the distance between the intersections is shorter than several pixels (YES in step S502), the processing proceeds to step S503. If the distance between the intersections is not shorter than several pixels (NO in step S502), the processing proceeds to error.” Here Takabayashi describes a process in which, if the calculated distance is within a certain threshold, the process continues. This logic is analogous to the claim element.)

Claims 9 and 15 are similar in scope to claim 3 and are thus rejected under the same rationale.

Regarding claim 10, the combination of Blayvas, Thirion and Takabayashi discloses all the elements of claim 9 as discussed above. The combination also discloses wherein the coordinate position of the stripe of the first image is S11, the coordinate position of the stripe of the third image is S31, the coordinate position of the stripe of the second image is S21, and S21 = (S11 + S31)/2 (para. [0019]: “The luminance lines 174 and 175 are averaged to obtain an average as a luminance line 176. The luminance line 173 of the stripe light pattern 20 and the luminance line 176 of the average intersect with each other at a position “a”. The position “a” determined by the above process is taken as a boundary position. What is described above is the average image comparison method.” Takabayashi teaches a positional calculation using the captured light patterns. The average is analogous to the “S21 = (S11 + S31)/2” recited in the claim element.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Takabayashi into the combined teachings of Blayvas and Thirion in order to have simple and stable interpolation of stripe position.

Regarding claim 11, the combination of Blayvas, Thirion and Takabayashi discloses all the elements of claim 9 as discussed above. The combination also discloses wherein the coordinate position of the stripe of the first image is S12, the coordinate position of the stripe of the third image is S31, the coordinate position of the stripe of the second image is S22, and S22 = S12 + (S12 - S31) (Takabayashi para. [0014]: “It is necessary to accurately determine a horizontal x-coordinate position (hereinafter referred to as a boundary position) between the light and dark portions from the captured image data to improve accuracy in the three dimensional measurement of the spatial coding method.” Here Takabayashi teaches the coordinate positions of a stripe in a structured-light 3D context. Further, Takabayashi in para. [0096]: “More specifically, the boundary position of a first captured image of a first light pattern (the light pattern 20) can be calculated based on a second captured image of a second light pattern (the binary coded light pattern 21) and a third captured image of a third light pattern (the reversed binary coded light pattern 22).” Here Takabayashi teaches calculating each of the first, second and third image positions based on the other images. Further, as mentioned in the rejection of claim 4, Takabayashi teaches using averages or simple mathematical operations to derive a positional result. Further, Thirion discloses in para. [0101]: “In the case of a discretized representation D−1,2(G) of the deformation field, it is also possible to use a stochastic distribution of points Pl, the co-ordinates of which are floating in the space E. It is then sufficient, in order to evaluate D1,2(Pl), to use the discretized field D−1,2 and an n-linear type interpolation of the discrete field, within the mesh G in which the point Pl falls. In this mesh i, the point Pl has co-ordinates αl, βl and γl, all between 0 and 1 (inclusive values). The interpolation will be 2-linear in the case of a 2D image and 3-linear in the case of a 3D image.” Thirion discloses interpolation in the context of registration and re-sampling of image data. Since the formula used in the claim element is a form of interpolation, this is an obvious adjustment.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Takabayashi and Thirion into the teachings of Blayvas in order to efficiently estimate later stripe positioning and have more predictable results.

Claims 12 and 18 are similar in scope to claim 6 and are thus rejected under the same rationale. Claim 16 is similar in scope to claim 10 and is thus rejected under the same rationale. Claim 17 is similar in scope to claim 11 and is thus rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRIS ALEJANDRO PUNTIER, whose telephone number is (703) 756-1893. The examiner can normally be reached M-F 7:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRIS ALEJANDRO PUNTIER/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616

Prosecution Timeline

Sep 03, 2024
Application Filed
Mar 20, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586298: CONTROLLED ILLUMINATION FOR IMPROVED 3D MODEL RECONSTRUCTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586291: Fast Large-Scale Radiance Field Reconstruction (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573103: ENVIRONMENT MAP UPSCALING FOR DIGITAL IMAGE GENERATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12548226: SYSTEMS AND METHODS FOR A THREE-DIMENSIONAL DIGITAL PET REPRESENTATION PLATFORM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12536679: APPLICATION MATCHING METHOD AND APPLICATION MATCHING DEVICE (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 94%
With Interview: 99% (+10.0%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
