Prosecution Insights
Last updated: April 19, 2026
Application No. 18/645,346

DETERMINING 3D DATA FOR 2D POINTS IN INTRAORAL SCANS

Non-Final OA (§102, §103)
Filed: Apr 24, 2024
Examiner: NIRJHAR, NASIM NAZRUL
Art Unit: 2896
Tech Center: 2800 (Semiconductors & Electrical Systems)
Assignee: Align Technology, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (above average; +6.0% vs TC avg; 379 granted / 512 resolved)
Interview Lift: +18.7% (strong; allowance rate among resolved cases with vs. without an interview)
Avg Prosecution: 2y 6m (typical timeline); 37 applications currently pending
Career History: 549 total applications across all art units

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§103: 75.4% (+35.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 512 resolved cases
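As a sketch of how an interview-lift figure like the +18.7% above can be derived: the lift is simply the allowance rate among resolved cases that had an examiner interview minus the rate among those that did not. The counts below are illustrative assumptions only; the examiner's actual with/without interview split is not published in this report.

```python
# Illustrative (granted, resolved) counts; NOT the examiner's actual split.
with_interview = (93, 100)
without_interview = (306, 412)

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100 * granted / resolved

# Interview lift = rate with interview minus rate without, in percentage points.
lift = allow_rate_pct(*with_interview) - allow_rate_pct(*without_interview)
print(round(lift, 1))
```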

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is responsive to the correspondence filed on 12/29/25. Claims 88-101 are presented for examination. On 12/29/25, applicant stated that claims 82-87 were cancelled. The claim sheet needs to reflect that status of the claims.

IDS Considerations

The information disclosure statement (IDS) submitted on 6/28/24 is being considered by the examiner as the submission is in compliance with the provisions of 37 CFR 1.97.

Election/Restriction

Applicant has elected Group II, claims 88-101, in response to the restriction requirement sent on 11/5/25. Applicant cancelled claims 82-87.

Claim Objections

Claim 100 is objected to because of the following informality: claim 100 ends with a semicolon followed by "and". Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of 35 U.S.C. 102(a)(1)/(a)(2) which forms the basis for the anticipation rejections set forth in this Office action:

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 88-89, 92, 94 and 98 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mottelson (U.S. Pub. No. 20240398519 A1).

Regarding claims 88 and 98:
Mottelson teaches an intraoral scanning system, comprising: an intraoral scanner comprising: (Mottelson [0022] The two or more images of the three-dimensional (3D) object may be acquired using a scanning device such as an intraoral 3D scanning device) one or more structured light projectors (Mottelson [0104] The desired output for a given input depends on what the neural network is required to do. In some embodiments, the desired outputs include pixel level annotation of where the features of a structured light pattern are located, i.e. the location of pattern features, a pixel level depth map of the corresponding 3D surface or labels for classifying pixels into light and dark checkers.) configured to project a light pattern comprising a plurality of projector rays (Mottelson [0023] The present disclosure further relates to a dental scanning system for generating a digital representation of a three-dimensional (3D) object, the scanning system comprising: [0024] a scanning device comprising: [0025] one or more projector units configured for projecting a predefined pattern onto the 3D object, wherein the pattern comprises a plurality of pattern features, wherein each projector unit is associated with a projector plane; [0026] one or more cameras, preferably two or more cameras, for each projector unit, wherein each camera is configured for acquiring images of at least a part of the object, wherein each of the cameras has a predefined fixed position relative to the one or more projector units) onto a dental site; and a plurality of cameras configured to capture a plurality of images of at least a portion of the light pattern projected onto the dental site, (Mottelson [0030] the two or more images will be acquired by a scanning device, such as an intraoral 3D scanning device, comprising one or more cameras for acquiring images. The images within a set of images may be acquired simultaneously, i.e. at the same moment in time. 
The scanning device further comprises one or more projector units for projecting a predefined pattern onto a surface of the 3D object. The projected pattern comprises a plurality of pattern features, such that the acquired images similarly comprise a plurality of image features. ) wherein each camera of the plurality of cameras is configured to capture an image of the plurality of images, (Mottelson [0030] the two or more images will be acquired by a scanning device, such as an intraoral 3D scanning device, comprising one or more cameras for acquiring images. The images within a set of images may be acquired simultaneously, i.e. at the same moment in time. The scanning device further comprises one or more projector units for projecting a predefined pattern onto a surface of the 3D object. The projected pattern comprises a plurality of pattern features, such that the acquired images similarly comprise a plurality of image features.) the image comprising a plurality of points of at least the portion of the light pattern projected onto the dental site; (Mottelson [0097] The scanning device may be an intraoral scanning device for acquiring images inside the oral cavity of a subject. The projector unit(s) of the scanning device are configured for projecting a predefined illumination pattern, such as a static pattern, onto a surface, e.g. onto the surface of the three-dimensional object. 
Once projected on the surface, some light will be reflected from the surface, which may then enter the camera(s) of the scanning device, whereby images of the 3D object can be acquired) and a computing device configured to: determine, for each projector ray of the plurality of projector rays, one or more candidate points of the plurality of points that might have been caused by the projector ray; (Mottelson [0115] As an example, to associate a given image feature with a pattern feature, a camera ray emanating from a first camera passing through its focal center and the image feature in the image plane of that camera is considered. This camera ray will appear as a point in the image plane of the first camera. However, the projector unit (or a second camera) will see the ray emanating from the first camera as a line/curve in the projector plane (in a realistic scenario where lens distortion is present, the line will not be a straight line), since the projector unit and camera views the object from two different views. The projected line/curve may also be referred to as an epipolar line. Accordingly, when camera rays associated with image features are projected onto the image plane of other cameras or onto projector planes, said rays form curves in those planes, and those curves are referred to herein as epipolar lines. It should be noted that only in the ideal case (i.e. with no lens distortion present) are the epipolar lines straight lines. In general, when a 3D line, such as a camera ray or a projector ray, is projected onto a plane, said line forms a curve in the plane.) 
process information for each projector ray using a trained machine learning model, wherein the trained machine learning model generates one or more outputs comprising, for each projector ray, and for each candidate point associated with the projector ray, (Mottelson [0183] The neural network may be trained using supervised learning with a large number of input-output pairs, where an input-output pair is understood to be the aforementioned image (W×H×3 tensor) and at least one of the desired types of output data. Preferably, all 7 types of output data are used in each input-output pair during the supervised learning. As an example, a training dataset may comprise 15000 of such input-output pairs, but larger sets are likely to increase the performance and reliability of the network. [0184] The training data may comprise rendered images of the 3D object, such as teeth. FIG. 10 shows an example of a suitable image for training purposes. The following data may be used to generate training data: [0185] a reference 3D model of a jaw or part of a jaw; [0186] a color map for the 3D model; [0187] a trajectory that the scanning device moved along when capturing the reference scan; [0188] the intrinsic parameters for the camera to be used to render the images; and [0189] the intrinsic parameters for the projector used to generate the pattern; and [0190] the relative positions of the cameras and the projector [0191] The above-mentioned training data may be given as input, which may be used to generate ground truth (desired output data) and input data for each pose along the trajectory by ray-tracing. FIGS. 11-14 show different types of output data. FIG. 11 shows a checkerboard pattern, wherein a plurality of checkers (black/white) have been identified by the neural network) a probability that the candidate point (Mottelson [0175] As training input, the neural network may be given a rendering of a 3D object, such as a dental object. 
A suitable image for training purposes is illustrated in FIG. 10. The objective is for the neural network to deliver one or more types of data as output. Examples are given below: [0176] a channel containing the likelihood [probability] that the pixel is in a part of the pattern [projector ray] with phase 0 (corresponding to a black checker)) corresponds to the projector ray; and (Mottelson [0154] FIG. 5 It should be noted that within a given projector ray corresponding to a pattern feature [projector ray], each image feature can only have one depth. The reason why an image feature may have several depths assigned, is that the image feature can be assigned to several different pattern features. Therefore, there is no ambiguity when ordering the image features associated with a given pattern feature according to depth. Then in step 502, a sliding window technique is employed to look through the ordered list of image features within a given projector ray/pattern feature.) determine three-dimensional (3D) coordinates for at least some of the plurality of points in the plurality of images (Mottelson [0136] A seed proposal may be understood as a collection of image feature groups, wherein each image feature group comprises at least one image feature. At least one attribute of an image feature group may be a point in 3D space, e.g. described by a set of Cartesian coordinates (x, y, z), wherein z denotes the aforementioned depth assigned to the image feature group. The image feature groups may preferably comprise more attributes such as: projector ray index, and/or the indices of the individual image features within the group.) based on the one or more outputs of the trained machine learning model. (Mottelson [0149] FIG. 1 shows a flowchart of an embodiment of the presently disclosed method. The first step 101 of the method is to receive or acquire images of a three-dimensional (3D) object. 
The images may be acquired/provided by a scanning device such as an intraoral 3D scanning device. The images may be acquired by projecting a predefined pattern onto a surface of the 3D object, wherein the pattern comprises a plurality of pattern features, such as corners in a checkerboard pattern. The acquired images will similarly comprise a plurality of image features. In step 102, said image features are determined. The image features may be determined automatically by a neural network trained to determine the image features.)

Regarding claim 89:

Mottelson teaches the intraoral scanning system of claim 88, wherein the computing device is further configured to: determine, for each projector ray, and for each candidate point of the one or more candidate points that might have been caused by the projector ray, (Mottelson [0032] for each pattern feature/projector ray, to associate a number of image features among the determined image features to the pattern feature/projector ray. Each pattern feature may be associated with one or more image features, such as a plurality of image features. A next step of the method may be to determine, for each image feature, one or more possible depths of the image feature. Accordingly, several depths may be assigned to each image feature, since at this point it is not known which depth is the true solution to the correspondence problem for that particular image feature. The depth(s) may be determined by triangulation) a distance at which the candidate point intersects with the projector ray, (Mottelson [0032] A depth may be understood as the distance from the projector location along a given projector ray to a point in 3D, where said projector ray intersects a camera ray within a given tolerance/distance. In particular, a triangulation [distance at which the candidate point intersects] approach may be utilized in case each camera of the scanning device has a predefined fixed position relative to one or more projector units.) wherein the information for the projector ray that is input into the trained machine learning model comprises the distance. (Mottelson [0107] Some parts of the training dataset can be made to simulate diffuse surfaces such as gypsum, e.g., by suppressing specular reflection and subsurface scattering. This can ensure that the network will also perform well on other materials than enamel and gingiva. The training dataset may be chosen to overrepresent challenging geometries such as scan flags, preparations and margin lines [distance]. This makes it more likely that the network will deliver desired outputs for such situations in the field. [0041] As mentioned previously, the step of determining one or more image features within each set of images may be performed by a neural network, such as a convolutional neural network (CNN), trained to perform said determination. A connected subset may be understood as a subset of pattern features, wherein any two pattern features are connected via a path going through adjacent features in the regular grid. A technical effect of utilizing connected subset of pattern features is that it improves the reliability of the solution to the correspondence problem, i.e. it provides for a more robust tracking algorithm. All solutions to the correspondence problem may be consistent within a group of pattern features/projector rays.)

Regarding claim 92:

Mottelson teaches the intraoral scanning system of claim 89, wherein the one or more structured light projectors comprise a plurality of structured light projectors, and wherein the computing device is further configured to: determine, for each projector ray, an index of a structured light projector of the plurality of structured light projectors that generated the projector ray, (Mottelson [0136] A seed proposal may be understood as a collection of image feature groups, wherein each image feature group comprises at least one image feature. At least one attribute of an image feature group may be a point in 3D space, e.g. described by a set of Cartesian coordinates (x, y, z), wherein z denotes the aforementioned depth assigned to the image feature group. The image feature groups may preferably comprise more attributes such as: projector ray index, and/or the indices of the individual image features within the group. The projector ray index may be understood as an index denoting which projector ray is associated to the image feature group.) wherein the information for the projector ray that is input into the trained machine learning model comprises the index. (Mottelson [0160] FIG. 11 shows a rendered image of a 3D object, here some teeth of a subject. An illumination pattern (here a checkerboard pattern) is visible in the image. Such an image is an example of a suitable image to be used as training input to the neural network described herein.)

Regarding claim 94:

Mottelson teaches the intraoral scanning system of claim 89, wherein the computing device is further configured to: determine, for each projector ray, and for one or more candidate points associated with the projector ray, one or more features associated with the projector ray and the one or more candidate points, wherein the information for the projector ray that is input into the trained machine learning model comprises the one or more features, and (Rejected for the same reason as claim 89) wherein the one or more features comprise at least one of: a distance from an epipolar line associated with the projector ray; for an image associated with a candidate point, a triangulation error that is determined based on a distance between a camera that captured the image and an origin of the projector ray; a spot size of the captured point; or a color of the dental site at the intersection of a candidate point with the projector ray as determined from one or more color images captured at least one of before or after capture of the plurality of images. (These limitations are alternatives in an OR condition; rejection of every alternative is not required.) an intensity associated with a captured point; (Mottelson [0061] In a further implementation of the second method, the processor sets a threshold, such that a detected feature that is below the threshold is not considered by the correspondence algorithm, and to search for the feature corresponding to projector ray r1 in the identified search space, the processor lowers the threshold in order to consider features that were not considered by the correspondence algorithm. For some implementations, the threshold is an intensity threshold.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 97 and 99 are rejected under 35 U.S.C. 103 as being unpatentable over Mottelson (U.S. Pub. No. 20240398519 A1), in view of Saphier (U.S. Pub. No. 20200404243 A1).

Regarding claim 97:

Mottelson teaches the intraoral scanning system of claim 88. Mottelson does not explicitly teach wherein the computing device is further configured to: use a second trained machine learning model to select candidate points for a plurality of projector rays based on one or more inputs comprising probabilities of candidate points corresponding to projector rays, wherein the 3D coordinates are determined based on the selected candidate points.
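The triangulation described in the cited Mottelson passages (a depth along a projector ray at the point where it passes a camera ray within a given tolerance) can be sketched with the standard closest-approach formula for two 3D lines. This is a minimal illustration, not code from either reference; the function name and tolerance value are assumptions.

```python
def triangulated_depth(p0, d1, c0, d2, tol=1e-3):
    """Closest approach of projector ray p0 + t*d1 and camera ray c0 + s*d2.

    Returns (t, gap): t is the depth along the projector ray (in units of
    |d1|) and gap is the miss distance; returns None if the rays are
    parallel or miss each other by more than tol.
    """
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = sub(p0, c0)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    den = a * c - b * b
    if abs(den) < 1e-12:
        return None  # parallel rays: no unique closest point
    t = (b * e - c * d) / den   # parameter along the projector ray
    s = (a * e - b * d) / den   # parameter along the camera ray
    pp = tuple(p + t * x for p, x in zip(p0, d1))
    pc = tuple(q + s * x for q, x in zip(c0, d2))
    gap = dot(sub(pp, pc), sub(pp, pc)) ** 0.5
    return (t, gap) if gap <= tol else None
```

For example, a projector ray from the origin along +z and a camera ray from (1, 0, 0) toward (0, 0, 5) meet exactly at depth 5.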
However, Saphier teaches wherein the computing device is further configured to: use a second trained machine learning model to select candidate points for a plurality of projector rays based on one or more inputs comprising probabilities of candidate points corresponding to projector rays, wherein the 3D coordinates are determined based on the selected candidate points. (Saphier [0698] For some applications, when (a) first and second neural network modules 434 and 436 are part of the same neural network 400 and (b) different sets of training-stage two-dimensional images are used for the training of first neural network module 434 and the training of second neural network module 436, respectively, then, concurrently with the training of second neural network module 436, first neural network module 434 can continue to optimize its depth map learning based on the new confidence-training-stage two-dimensional images 412t′. This is depicted in stage 2; phase (ii) of FIG. 52E. For each input of the confidence-training-stage two-dimensional images 412t′ neural network 400 (including both modules) determines an estimated depth map 414, 414s, and/or 414r. Each of these estimated depth maps is compared to a corresponding true depth map 418, using comparator 448, and based on the comparison first neural network module 434 is optimized to better estimate a subsequent estimated depth map 414, 414s, and/or 414r, e.g., by updating the model weights, as represented by arrow 450.)

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Mottelson by incorporating the teachings of Saphier in video/camera technology. One would be motivated to do so in order to use a second trained machine learning model to select candidate points for a plurality of projector rays based on one or more inputs comprising probabilities of candidate points corresponding to projector rays, wherein the 3D coordinates are determined based on the selected candidate points. This functionality would improve quality with predictable results.

Regarding claim 99:

Mottelson teaches the intraoral scanning system of claim 98, wherein the computing device is further configured to: generate an input comprising a candidate point for a projector ray, (Mottelson [0032] several depths may be assigned to each image feature, since at this point it is not known which depth is the true solution to the correspondence problem for that particular image feature. The depth(s) may be determined by triangulation. A depth may be understood as the distance from the projector location along a given projector ray to a point in 3D, where said projector ray intersects a camera ray within a given tolerance/distance. In particular, a triangulation approach may be utilized in case each camera of the scanning device has a predefined fixed position relative to one or more projector units. In other words, the scanning system may be configured to determine points in 3D space based on a triangulation technique, wherein said points correspond to projected pattern features.)

Mottelson does not explicitly teach one or more additional candidate points for the projector ray, and one or more additional projector rays for the candidate point; provide the input to the trained machine learning model, wherein the trained machine learning model outputs a selection of the candidate point or one of the one or more additional candidate points for the projector ray; and remove the selected candidate point from association with the one or more additional projector rays that were associated with the selected candidate point.
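For orientation, the select-and-remove procedure recited in claims 99-100 can be illustrated as a greedy assignment loop: repeatedly take the highest-probability (ray, candidate) pair above a threshold, then strike that point from every other ray's candidate list. This is a hypothetical sketch; the data layout, names, and threshold are assumptions, not taken from Mottelson or Saphier.

```python
def assign_points(candidates, threshold=0.5):
    """candidates: {ray_id: [(point_id, probability), ...]}.

    Greedily assigns each projector ray its best candidate point and
    removes that point from every other ray's candidates, repeating until
    no unassigned ray has a candidate at or above the threshold.
    """
    remaining = {ray: dict(pts) for ray, pts in candidates.items()}
    assigned = {}
    while True:
        best = None
        for ray, pts in remaining.items():
            if ray in assigned:
                continue
            for point, prob in pts.items():
                if prob >= threshold and (best is None or prob > best[2]):
                    best = (ray, point, prob)
        if best is None:
            break  # no remaining ray has a candidate above threshold
        ray, point, _ = best
        assigned[ray] = point
        for other, pts in remaining.items():
            if other != ray:
                pts.pop(point, None)  # the claimed removal step
    return assigned
```

With two rays that both see point "a", the higher-probability ray wins "a" and the other ray falls back to its next candidate.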
However, Saphier teaches one or more additional candidate points for the projector ray, and one or more additional projector rays for the candidate point; (Saphier [0020] The process is repeated for the additional features (e.g., spots) along a camera sensor path, and the feature (e.g., spot) for which the highest number of cameras “agree” is identified as the feature (e.g., spot) that is being projected onto the surface from the given projector ray. A three-dimensional position on the surface is thus computed for that feature of the pattern (e.g., that spot). [0093]) provide the input to the trained machine learning model, (Saphier [0721] This method is based on the realization that if neural network 400 is trained using training-stage cameras from only one handheld wand 20, and that training-stage handheld wand 20 has some specific characteristic and/or peculiarity, then neural network 400 learns that characteristic and/or peculiarity. However, if training-stage images are input to neural network 400 from a plurality, e.g., many, different handheld wands 20, each perhaps with its own slightly differing characteristics and/or peculiarities, then neural network 400 will learn to output the estimated maps regardless of these differing characteristics and/or peculiarities, i.e., neural network 400 will learn to ignore these types of small differences between each intraoral scanner.) wherein the trained machine learning model outputs a selection of the candidate point or one of the one or more additional candidate points for the projector ray; (Saphier [0020] The process is repeated for the additional features (e.g., spots) along a camera sensor path, and the feature (e.g., spot) for which the highest number of cameras “agree” is identified as the feature (e.g., spot) that is being projected onto the surface from the given projector ray. A three-dimensional position on the surface is thus computed for that feature of the pattern (e.g., that spot). [0093]) and remove the selected candidate point from association with the one or more additional projector rays that were associated with the selected candidate point. (Saphier [0021] In some embodiments, once a position on the surface is determined for a specific feature of the pattern (e.g., a specific spot), the projector ray that projected that feature (e.g., spot), as well as all camera rays corresponding to that feature (e.g., spot), may be removed from consideration and the correspondence algorithm is run again for a next projector ray. [0093])

Allowable Subject Matter

Regarding claims 90-91, 93, 95-96 and 100-101:

Claims 90-91, 93, 95-96 and 100-101 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the limitations of these dependent claims are not obvious from the prior art search when all the limitations of the independent and intervening claims are taken into account.

Regarding claim 90:

Mottelson teaches the intraoral scanning system of claim 89, wherein the computing device is further configured to: wherein the candidate intersection comprises an intersection of the one or more candidate points with the projector ray, (Mottelson [0123] As an example, a scanning device comprising a projector unit and four cameras is considered. In this example, at least one image is acquired by each camera, whereby at least four images are acquired. A neighborhood graph may be generated for each image and the image features may be associated to pattern features as previously explained. Then, for each pattern feature in the projector plane, a projector ray is projected in 3D together with up to four camera rays, one from each camera, wherein each camera ray pass through an image feature associated with the considered projector ray. 
All intersections (3D points), wherein the camera rays cross a given projector ray within the predefined distance, are then determined, whereby potentially one or more depths may be assigned to the image features for the projector ray)

The prior art does not teach: for each projector ray, group one or more candidate points from different images of the plurality of images for which the distance matches into a candidate intersection, and wherein the distance for candidate points match if the distance varies by less than a threshold amount.

Regarding claim 93:

Mottelson teaches the intraoral scanning system of claim 92, wherein a first subset of the plurality of structured light projectors produces light having a first wavelength, and wherein a second subset of the plurality of structured light projectors produces light having a second wavelength, and wherein the computing device is further configured to: determine the 3D coordinates for one or more points of a first subset of points of the plurality of points having the first wavelength; and independently determine the 3D coordinates for one or more additional points of a second subset of points of the plurality of points having the second wavelength; (Mottelson [0077] The light source(s) may be configured to generate light of a single wavelength or a combination of wavelengths (mono- or polychromatic). The combination of wavelengths may be produced by a light source configured to produce light comprising different wavelengths (such as white light). In preferred embodiments, each projector unit comprises a light source for generating white light. The light source may be a multichromatic light source. Alternatively, the projector unit(s) may comprise multiple light sources such as LEDs individually producing light of different wavelengths (such as red, green, and blue) that may be combined to form light comprising different wavelengths. 
Thus, the light produced by the light source(s) may be defined by a wavelength defining a specific color, or a range of different wavelengths defining a combination of colors such as white light. In an embodiment, the projector unit(s) comprise a light source configured for exciting fluorescent material to obtain fluorescence data from the dental object such as from teeth. Such a light source may be configured to produce a narrow range of wavelengths. In other embodiments, the scanning device comprises one or more infrared light sources configured to emit infrared light, which is capable of penetrating dental tissue.)

The prior art does not teach: identify one or more projector rays for which candidate points from at least one of the first subset of points or the second subset of points have not been selected; combine information for the first subset of points and the second subset of points; and determine the 3D coordinates for one or more additional points of the first subset of points and the 3D coordinates for one or more additional points of the second subset of points after combining the information.

Regarding claim 95:

Mottelson teaches the intraoral scanning system of claim 88, wherein the computing device is further configured to: (Mottelson [0032] several depths may be assigned to each image feature, since at this point it is not known which depth is the true solution to the correspondence problem for that particular image feature. The depth(s) may be determined by triangulation. A depth may be understood as the distance from the projector location along a given projector ray to a point in 3D, where said projector ray intersects a camera ray within a given tolerance/distance. In particular, a triangulation approach may be utilized in case each camera of the scanning device has a predefined fixed position relative to one or more projector units. 
In other words, the scanning system may be configured to determine points in 3D space based on a triangulation technique, wherein said points correspond to projected pattern features.)

The prior art does not teach: generate a tuple for a projector ray comprising: distances and probabilities for one or more top candidate points for the projector ray; and distances and probabilities for one or more top candidate points for one or more additional projector rays that are proximate to the projector ray; and input the tuple into a second trained machine learning model, wherein the second trained machine learning model outputs an updated probability for one or more candidate points for the projector ray.

Regarding claim 96:

Mottelson teaches the intraoral scanning system of claim 88, (Mottelson [0175] As training input, the neural network may be given a rendering of a 3D object, such as a dental object. A suitable image for training purposes is illustrated in FIG. 10. The objective is for the neural network to deliver one or more types of data as output. Examples are given below: [0176] a channel containing the likelihood [probability] that the pixel is in a part of the pattern [projector ray] with phase 0 (corresponding to a black checker))

The prior art does not teach: wherein the plurality of images are associated with a current frame, wherein a previous plurality of images was generated at a prior frame prior to generation of the plurality of images, and wherein the computing device is further configured to: determine, for a projector ray, a 3D coordinate associated with the projector ray for the prior frame; and update, for a candidate point for the projector ray, the probability that the candidate point corresponds to the projector ray based on the 3D coordinate associated with the projector ray for the prior frame.

Regarding claim 100:
Mottelson teaches the intraoral scanning system of claim 99, wherein the computing device is further configured to: repeatedly perform the following until no remaining projector rays have an associated candidate point with at least a threshold probability: (Mottelson [0033] A next step of the method may be to sort the image features associated to a given pattern feature/projector ray according to depth. This sorting is preferably performed/repeated for all pattern features)

The prior art does not teach: generate a next input comprising a next candidate point for a next projector ray, one or more next additional candidate points for the next projector ray, and one or more next additional projector rays for the next candidate point; provide the next input to the trained machine learning model, wherein the trained machine learning model outputs a selection of the next candidate point or one of the one or more next additional candidate points for the next projector ray; and remove the selected next candidate point from association with the one or more next additional projector rays that were associated with the selected next candidate point.

Regarding claim 101:
Mottelson teaches the intraoral scanning system of claim 98, wherein the computing device is further configured to: generate a first list associating projector rays with candidate intersections, wherein each candidate intersection comprises an intersection of a projector ray of the plurality of projector rays and a candidate point of the one or more candidate points that might have been caused by the projector ray (these limitations are rejected for the same reasons as claim 88).

The prior art does not teach: the first list comprising, for each projector ray, one or more candidate intersections associated with the projector ray; and generate a second list of the plurality of points, the second list comprising, for each point of the plurality of points, one or more candidate intersections associated with the point; wherein at least one of the first list or the second list is used to generate the input.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NASIM N NIRJHAR, whose telephone number is (571) 272-3792. The examiner can normally be reached Monday through Friday, 8 am to 5 pm ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William F. Kraig, can be reached at (571) 272-8660. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NASIM N NIRJHAR/
Primary Examiner, Art Unit 2896
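For readers unfamiliar with the triangulation step the rejection quotes from Mottelson [0032] (depth as the distance along a projector ray to the point where it intersects a camera ray within a given tolerance), the geometry can be sketched as the closest approach of two 3D rays. This is an illustrative sketch only; the function name, the tolerance value, and the unit-direction assumption are ours, not the application's:

```python
import numpy as np

def triangulate(proj_origin, proj_dir, cam_origin, cam_dir, tol=0.1):
    """Closest approach of a projector ray and a camera ray.

    Returns the depth along the projector ray if the two rays pass
    within `tol` of each other, else None. Direction vectors are
    assumed to be unit length.
    """
    # Solve for t (projector) and s (camera) minimising
    # |proj_origin + t*proj_dir - (cam_origin + s*cam_dir)|.
    w = proj_origin - cam_origin
    a = np.dot(proj_dir, proj_dir)
    b = np.dot(proj_dir, cam_dir)
    c = np.dot(cam_dir, cam_dir)
    d = np.dot(proj_dir, w)
    e = np.dot(cam_dir, w)
    denom = a * c - b * b
    if abs(denom) < 1e-12:           # rays are (nearly) parallel
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = proj_origin + t * proj_dir  # closest point on projector ray
    p2 = cam_origin + s * cam_dir    # closest point on camera ray
    if np.linalg.norm(p1 - p2) > tol:
        return None                  # no intersection within tolerance
    return t                         # depth along the projector ray
```

Because one projector ray can pass near many camera rays, this is why Mottelson assigns several candidate depths per image feature before resolving the correspondence problem.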
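Claim 100, as characterized in the rejection, loops until no remaining projector ray has a candidate point at or above a threshold probability: select a candidate for a ray, then remove that candidate from association with other rays. A minimal greedy sketch follows, with a probability argmax standing in for the claim's trained machine learning model; all names and the data layout are hypothetical:

```python
def resolve_correspondences(candidates, threshold=0.5):
    """Greedy sketch of the claim-100 style selection loop.

    `candidates` maps each projector-ray id to a list of
    (point_id, probability) candidate points. The highest-probability
    remaining candidate is selected for its ray, then removed from
    every other ray's list; this repeats until no unassigned ray has
    a candidate at or above `threshold`.
    """
    assignment = {}
    while True:
        # Find the unassigned ray whose best remaining candidate is strongest.
        best_ray, best_pt, best_p = None, None, threshold
        for ray, cands in candidates.items():
            if ray in assignment:
                continue
            for pt, p in cands:
                if p >= best_p:
                    best_ray, best_pt, best_p = ray, pt, p
        if best_ray is None:        # no candidate meets the threshold
            break
        assignment[best_ray] = best_pt
        # Remove the chosen point from all other rays' candidate lists.
        for ray, cands in candidates.items():
            if ray != best_ray:
                candidates[ray] = [(pt, p) for pt, p in cands if pt != best_pt]
    return assignment
```

The removal step is what the claim language captures with "remove the selected next candidate point from association with the one or more next additional projector rays": once a point is claimed by one ray, it can no longer resolve any other ray's correspondence.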

Prosecution Timeline

Apr 24, 2024
Application Filed
Jan 25, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598324
DEPTH DIFFERENCES IN PLACE OF MOTION VECTORS
2y 5m to grant · Granted Apr 07, 2026
Patent 12593131
VELOCITY MATCHING IMAGING OF A TARGET ELEMENT
2y 5m to grant · Granted Mar 31, 2026
Patent 12593074
SYSTEMS AND METHODS OF BUFFERING IMAGE DATA BETWEEN A PIXEL PROCESSOR AND AN ENTROPY CODER
2y 5m to grant · Granted Mar 31, 2026
Patent 12587662
METHOD, APPARATUS AND STORAGE MEDIUM FOR IMAGE ENCODING/DECODING
2y 5m to grant · Granted Mar 24, 2026
Patent 12587628
DISPLAY DEVICE AND METHOD OF DRIVING THE SAME
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
93%
With Interview (+18.7%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 512 resolved cases by this examiner. Grant probability derived from career allow rate.
