Prosecution Insights
Last updated: April 19, 2026
Application No. 18/610,126

3D DENTAL ARCH MODEL TO DENTITION VIDEO CORRELATION

Final Rejection §103
Filed: Mar 19, 2024
Examiner: FOSTER, THOMAS JOHN
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Align Technology, Inc.
OA Round: 2 (Final)
Grant Probability: 95% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 95% (above average; 19 granted / 20 resolved; +33.0% vs TC avg)
Interview Lift: +7.1% (moderate lift in resolved cases with an interview)
Avg Prosecution: 2y 5m (typical timeline)
Career History: 37 total applications across all art units; 17 currently pending
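The headline figures above reduce to simple arithmetic over the examiner's resolved cases. A minimal sketch, assuming a hypothetical per-case record format (the real data source and field names are not specified here):

```python
def examiner_stats(resolved_cases):
    """Compute career allow rate and interview lift.

    Each case is a dict with boolean keys 'granted' and 'had_interview'.
    These field names are illustrative, not a real dataset schema.
    """
    total = len(resolved_cases)
    allow_rate = sum(c["granted"] for c in resolved_cases) / total

    # Interview lift: grant rate with an interview minus grant rate without.
    with_iv = [c for c in resolved_cases if c["had_interview"]]
    without_iv = [c for c in resolved_cases if not c["had_interview"]]

    def rate(cases):
        return sum(c["granted"] for c in cases) / len(cases) if cases else 0.0

    lift = rate(with_iv) - rate(without_iv)
    return allow_rate, lift


# Illustrative: 20 resolved cases, 19 granted -> 95% allow rate.
cases = [{"granted": True, "had_interview": True} for _ in range(10)]
cases += [{"granted": True, "had_interview": False} for _ in range(9)]
cases += [{"granted": False, "had_interview": False}]
allow, lift = examiner_stats(cases)
```

With the sample counts above, `allow` is 0.95; the lift depends entirely on how grants split across the interview/no-interview groups, which this page does not disclose.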

Statute-Specific Performance

§101: 0.8% (-39.2% vs TC avg)
§103: 72.7% (+32.7% vs TC avg)
§102: 22.7% (-17.3% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 20 resolved cases.
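The statute shares and TC deltas above can be reproduced from raw rejection counts. A hedged sketch with illustrative numbers (not the examiner's actual counts):

```python
def statute_mix(rejection_counts):
    """Each statute's share of all rejections issued (fractions summing to 1)."""
    total = sum(rejection_counts.values())
    return {s: n / total for s, n in rejection_counts.items()}


def vs_tc_average(examiner_mix, tc_mix):
    """Delta in percentage points between the examiner's mix and the TC average."""
    return {s: (examiner_mix[s] - tc_mix.get(s, 0.0)) * 100 for s in examiner_mix}


# Illustrative counts only: 3 of 4 rejections cite §103.
mix = statute_mix({"§103": 3, "§102": 1})
deltas = vs_tc_average(mix, {"§103": 0.40, "§102": 0.40})
```

With these toy inputs, the §103 share is 0.75 and its delta vs the hypothetical 40% TC average is +35.0 points; the dashboard presumably runs the same computation over the examiner's 20 resolved cases.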

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Arguments

Applicant’s arguments, filed 02/05/2026, with respect to how the newly amended claim features differ from the prior art cited in the last Office action have been fully considered. The following portion of amended claim 1 can be rejected using the original grounds of rejection: “dividing a video comprising a face of an individual into a plurality of time segments, wherein each time segment of the plurality of time segments comprises a sequence of frames in which a jaw of the individual has an orientation that deviates by less than a threshold receiving a selection of a time segment from the plurality of time segments”. However, upon further consideration, a new ground of rejection is made for the following limitation: “automatically selecting an individual frame of the video representative of the selected time segment”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 8, 10, 12-15, 18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sachs (Pub No. US 20190122411 A1) in view of Herz (Pub No. US 20130169834 A1).

As per claim 1, Sachs teaches the claimed: 1.
A non-transitory computer readable medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations comprising: (Sachs claim 13: “13. A non-transitory machine readable medium containing processor instructions for generating a three dimensional (3D) head model from a captured image, where execution of the instructions by a processor causes the processor to perform a process that comprises: receiving a captured image; identifying a set of taxonomy attributes from the captured image; selecting a template model for the captured image; and performing a shape solve for the selected template model based on the identified taxonomy attributes.” Sachs also teaches determining the orientation of the two dental arches and the updating of a 3D model as described below.). dividing a video comprising a face of an individual into a plurality of time segments, wherein each time segment of the plurality of time segments comprises a sequence of frames in which a jaw of the individual has an orientation that deviates by less than a threshold receiving a selection of a time segment from the plurality of time segments; (Sachs [0011]: “the tracking of the 3D surface to at least one of generating the set of rig parameters and mapping to the audio data is performed using a temporal model including a recurrent neutral network. In accordance a number of embodiments, the tracking of the 3D surface to generate the set of rig parameters and the mapping to the audio data is performed using a time series model including a convolutional neural network.” The time series with the time segments is used for the neural network. Each step of the neural network involves calculating the motion of the jaw until it reaches a certain threshold angle. It is one of the landmarks. 
Sachs [0009]: “In accordance with some embodiments, the one or more processes determine a position for each of a plurality of facial landmarks in the image by performing a Mnemonic Descent Method (MDM) using a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) that are jointly trained. In accordance with many of these embodiments, the determining of the position of each of a plurality of landmarks is performed by: aligning each of the landmarks at positions aligned to a center of the face in the image, and iteratively re-calculating the position of each of the plurality of landmarks until a threshold value is met. The re-calculating being performed by: obtaining a patch of pixels of a predetermined size surrounding the position of each of the plurality of landmarks, applying the patch for each of the plurality of descriptors to the CNN to generate an N length descriptor describing each patch, concatenating the N length descriptors of each patch to generate a descriptor encapsulating all of the patches, projecting the descriptor encapsulating all of the patches through the RNN to determine an adjustment amount for each of the plurality of landmarks, and updating the landmarks based on the current position of each of the plurality of landmarks and the adjustment amount of each of the plurality of landmarks. In accordance with some embodiments, the CNN includes a global aver” The neural network is used to set the shape of the 3D model and calculate the position of the landmark of the jaw angle over the time series until it reaches a certain threshold angle. This is the sequence of frames that are used for this part of the 3D model.).

Sachs alone does not explicitly teach the remaining claim limitations. However, Sachs in combination with Herz teaches the claimed: automatically selecting an individual frame of the video representative of the selected time segment (Herz teaches selecting one frame to represent a full time-segment of frames.
Herz [0023]: “For example, a first selection may be a frame sequence in which the general composition of the image spatially within the frame has a quality or characteristic of interest, and a second selection may be a single frame within the frame sequence based on a momentary event displayed in the video frame. Alternatively, the tiered decision may be based on temporal aspects before spatial aspects. For example, the first selection may be a frame sequence based on a particular time segment, and the second selection may be a single frame within the frame sequence based on the size, position and/or orientation of the image content within the frame. The spatial aspect decision may also include input from the user having selected a region of interest 452 within the frame, as described above in step 203. Alternatively, the decision may be tiered based on various spatial aspects alone, or based on various temporal aspects alone.” Herz teaches a frame of interest selector: Herz [0011]: “FIG. 4B shows a graphical user interface display showing a frames of interest selector and extracted photos;” This is an autonomous process and involves an image of a user’s face. This is the automatic selection. Herz [0029]: “Using this autonomous process, the processing unit 101 may select a "best" choice from the optimized photo extraction. The user may then select the extracted photo based on the initial center-of-mass frame or the optimized result according to user preference by comparing the displayed results on the GUI 104 as shown in FIG. 4B as an initial center-of-mass extracted photo 452 and an optimized extracted photo 453. Alternatively, the processing unit 101 may present the user as a display on the GUI 104 with a set of extracted photos resulting from the multiple iterations, from which the user may select a still photo. 
For example, in a first extracted photo, the object of interest may include a person, where the person's face is in focus, while in the second extracted photo, the person's feet may be in focus. The user may select from the first and the second extracted photos depending on preference for the area in focus. The GUI 104 may also display a selection option for printing the extracted photo thereby enabling the user print the extracted photo on a connected printer device.” Processing unit 101 of Herz is autonomous. It can select another frame on its own to better represent the scene being observed, and improve quality by helping to detect artifacts in the image. This would also be autonomous. This can be the selected frame that represents the time segment. Herz [0027]: “In step 207, the processing unit 101 may optionally select another center-of-mass frame temporally offset to the initial center-of-mass frame (for example, if necessitated by an unsatisfactory quality of the processed center-of-mass frame) and may repeat steps 205 and 206 to correct detected artifacts while generating a histogram of the results. Using an image quality assessment of the results, an optimized photo extraction is achieved, and the optimized extracted photo 453 is displayed on a display of the GUI 104 as shown in FIG. 4B.” The process is used for the selection of a best choice for optimized photo extraction.).

determining an orientation of a jaw of the individual in the selected frame; (Sachs teaches one of the traits being tracked is the jaw angle, which is the orientation of the jaw. Sachs [0163]: “Local attributes in accordance with a number of embodiments of the invention can include (but are not limited to) head shape, eyelid turn, eyelid height, nose width, nose turn, mustache, beard, sideburns, eye rotation, jaw angle, lip thickness, and chin shape.
In some embodiments, local attributes are identified using terminals that are trained to identify particular attributes and/or features for a face. Terminals in accordance with some embodiments of the invention can include separate terminals for attributes of different features (such as, but not limited to, eyes, ears, nose, lips, etc.). In some embodiments, multiple terminals can be used to identify multiple attributes of a single feature”).

and updating an orientation of a three-dimensional (3D) model of a dental arch of the individual to match the orientation of the jaw of the individual in the selected frame. (Sachs [0079]: “In accordance with some embodiments, a rig is generated for the static 3D model. The rig can be generated by applying a standard set FACS blend shapes to a mesh of the static 3D model of the head. The motion of one or more landmarks and/or 3D shapes in visible video can be tracked and the blend shapes of the static 3D model video recomputed based on the tracked landmarks and/or to provide a customized rig for the 3D model.” The part of the video that shows the motion of the jaw is the selected frame. It is used to customize a rig for the 3D model.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the selection of a frame representative of a full time-segment as taught by Herz with the system of Sachs in order to track the movement of the face in the scene in a more memory-efficient way and create keyframes for the movement of the facial features.

As per claim 13, this claim is similar in scope to limitations recited in claim 1, and thus is rejected under the same rationale.

As per claim 2, Sachs teaches the claimed: 2.
The non-transitory computer readable medium of claim 1, the operations further comprising: receiving a selection of a second time segment of the plurality of time segment; (Sachs [0011]: “In accordance with some embodiments, the one or more processes receive video images synchronized to audio data, track a 3D surface of the face in the frames of the video images to generate a set of rig parameters for a portion of the video images, and map the set of rig parameters for each portion of the video images to a corresponding synchronized portion of the audio data. In accordance with many embodiments, the tracking of the 3D surface to at least one of generating the set of rig parameters and mapping to the audio data is performed using a temporal model including a recurrent neutral network. In accordance a number of embodiments, the tracking of the 3D surface to generate the set of rig parameters and the mapping to the audio data is performed using a time series model including a convolutional neural network.” Sachs is based on a sequence of frames of a video, and the different positions of the jaw are found on different frames. Sachs teaches a time-series model that can be divided into time segments.).

Sachs alone does not explicitly teach the remaining claim limitations. However, Sachs in combination with Herz teaches the claimed: automatically selecting a second frame of the video representative of the selected second time segment determining an orientation of an opposing jaw of the individual in the second frame; (Herz teaches automatic selection of a frame of a selected time segment from a plurality of time segments, as taught above in the rejection to claim 1.). and updating an orientation of a second 3D model of a second dental arch of the individual to match the orientation of the opposing jaw of the individual in the second frame.
(Sachs [0141]: “In accordance with some embodiments, the rig parameters are nonlinearly related to a rigged models shape changes. The relationship is nonlinear because nonlinear corrective shapes account for interactions between blend shapes. As such, some groups of rig parameters are mutually exclusive in order to provide plausible animation of the model. Thus, processes for determining the rig parameters in accordance with some embodiments are nonlinear guided optimizations of the parameters. In accordance with many embodiments, the determining of the rig parameters may be performed in stages over different subsets of rig parameters where each subset explains a different fraction of variation in the image. The stages in accordance with several embodiments may include, but are not limited to a rigid solve stage, a mouth solve stage, and a upper face solve stage. The rigid solve stage may determine rig parameters explaining motion of non-deforming head features as a rigid motion of the head and may be used to stabilize the geometry for other stages. The mouth solve stage may determine rig parameters for the jaw opening to match a chin position and for mouth shape networks including, but not limited to, moving of mouth corners inward and/or outwards, and rolling of the lips. In accordance to a few embodiments, complementary or exclusive shape groups may be coupled using optimizer constraints. The upper face solve stage determines the rig parameters for facial features independently of the mouth shape networks. Example of facial features that may have movement determined in the upper face solve stage include, but are not limited to, eyes, eyebrows, and nostrils.” The parameters for the mouth opening involve the first and second dental arches and determining an angle between the upper jaw and the opposing jaw. The rig parameters for the jaw opening to match the chin position are the updating of the 3D model. Sachs fig. 
23 shows a rig with a before position and an after structure, based on the solve stage cited above. Sachs [0023]: “FIG. 23 illustrates a comparison between an image having a final texture solved with a single tone and an image having a determined skin tone with smooth variations generated by a hole filling process in accordance with an embodiment of the invention.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the automated selection of a frame from a selected time segment as taught by Herz with the system of Sachs in order to choose the best time segment to show the motion of the second dental arch and compare the movement of the two of them relative to each other.

As per claim 14, this claim is similar in scope to limitations recited in claim 2, and thus is rejected under the same rationale.

As per claim 3, Sachs teaches the claimed: 3. The non-transitory computer readable medium of claim 1, the operations further comprising: determining an orientation of an opposing jaw of the individual in the selected frame; and updating an orientation of a second 3D model of a second dental arch of the individual to match the orientation of the opposing jaw of the individual in the selected frame. (This claim has limitations similar to claim 2, and is rejected for similar reasons as the claim stated above. However, claim 3 pertains to the selected image of claim 1, and would be applied to a different frame of the video than claim 2. Sachs teaches a process that is used for multiple frames representing multiple time segments of an image of a face, and those different frames are used to update the 3D model. Thus, Sachs teaches these limitations regarding the image of claim 1.).

As per claim 15, this claim is similar in scope to limitations recited in claim 3, and thus is rejected under the same rationale.

As per claim 8, Sachs alone does not explicitly teach the remaining claim limitations.
However, Sachs in combination with Herz teaches the claimed: 8. The non-transitory computer readable medium of claim 1, the operations further comprising: receiving a selection of a new time segment of the plurality of time segments automatically selecting a new frame of the video representative of the selected new time segment; determining a new orientation of the jaw of the individual in the new frame; and updating an orientation of the 3D model of a dental arch of the individual to match the orientation of the jaw of the individual in the new frame. (Sachs [0121]: “The identification of certain features or landmarks of a face can be useful in a variety of applications including (but not limited to) face detection, face recognition, and computer animation. In many embodiments, the identification of landmarks may aid in animation and/or modification of a face in a 3D model. In accordance with some embodiments of an invention, a Mnemonic Descent Method (MDM) is used for facial landmark tracking. The goal of tracking the facial landmarks is to predict a set of points on an image of a face that locate salient features (such as eyes, lip corners, jawline, etc.). The MDM performed in accordance with many embodiments of the invention predicts facial landmarks by jointly training a convolutional neural network (CNN) and a recurrent neural network (RNN) to refine an initial set of landmarks on the image until the landmarks are in the correct positions on the face of the subject. … The process determines whether the determined positions of the landmarks are satisfactory. In accordance with some embodiments, this may be determined by passing through the iterative process a predetermined number of times. However, other thresholds may be used to make this determination. If the landmarks are satisfactory, the landmarks are stored in memory (3045) for future use. 
Otherwise, the iterative process is repeated.” The facial landmark tracking is the updating of the 3D model, as the jaw angle is one of the facial landmarks. The model is updated with new images which are frames automatically selected to represent the time segments, as taught by Herz above.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the selection of a frame representative of a full time-segment as taught by Herz with the system of Sachs in order to track the movement of the face in the scene in a more space-conscious way and create keyframes for the movement of the jaw opening and closing to determine orientation.

As per claim 10, Sachs alone does not explicitly teach the remaining claim limitations. However, Sachs in combination with Herz teaches the claimed: 10. The non-transitory computer readable medium of claim 1, the operations further comprising: and presenting the plurality of time segments. (Herz [0013]: “A system and method are provided for extracting a still photo from video. The system and method allow a user to select from among various available settings as presented on a display of a graphical user interface. Categories of settings include, but are not limited to, entering input pertaining to known types of defects for the video data, selecting a real-time or a playback mode, video data sample size to be analyzed (e.g., number of frames), identifying blur contributors (e.g., velocity of a moving camera), and selecting various color adjustments. A user interface may also include selection of frames of interest within a video segment from which a still photo is desired.” The video segments are related to the time segments. These are displayed by the interface.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the presentation of the time segments as taught by Herz with the system of Sachs in order to show the time segments and how their respective frames will be used to track the movement of the face.

As per claim 18, this claim is similar in scope to limitations recited in claim 10, and thus is rejected under the same rationale.

As per claim 12, Sachs teaches the claimed: 12. The non-transitory computer readable medium of claim 1, the operations further comprising: outputting an indication of other time segments of the video for which the orientation of the jaw approximately corresponds to the orientation of the jaw in the selected frame (Sachs [0079]: “In accordance with some embodiments, a rig is generated for the static 3D model. The rig can be generated by applying a standard set FACS blend shapes to a mesh of the static 3D model of the head. The motion of one or more landmarks and/or 3D shapes in visible video can be tracked and the blend shapes of the static 3D model video recomputed based on the tracked landmarks and/or to provide a customized rig for the 3D model.” Sachs [0080]: “In many embodiments, a process for determining rig parameters, which may also be referred to as control curves that can be used to animate a 3D model of a face in response to sounds in audio samples is performed…. In accordance with a number of embodiments, the input image may be compared to a 3D static geometry and surface retargeting performed on the 3D static geometry to generate a customized rig for the static 3D model. The process can track the shape position in each frame of video associated with a particular sound to determine the specific parameters (i.e., motion data) that can move the 3D model of the face in a similar motion to the face visible on the video sequence.
The rig parameters associated with each particular sound can be stored and may be used to generate computer animations using the 3D model based on received audio and/or text.” These parameters are output to be used in a model. These include the jaw orientation. The position of the face, including the jaw, is tracked on each frame of the video. The frames are used to track the face movement of the 3D model. Because the angle of the jaw oscillates between being open and being closed, future frames will have similar jaw positions, whether open or shut. The 3D model with different jaw positions is the indication of which frames of the video have similar jaw positions, since the model is informed by the frames with different movement. “In many embodiments, facial features in an image (e.g., eye shape, nose width, lip thickness) correspond to model parameters to be solved. Deep learning estimates initial values for these parameters directly from the image, and they are subsequently refined by a geometric solver. In numerous embodiments, facial landmark detection is performed to localize known landmarks L on the face such as corners of the eyes, jaw line, and lip contour. Camera parameters and face shape can be solved in accordance with various embodiments of the invention. Shape solves in accordance with numerous embodiments of the invention can involve fitting the parameterized model template to a set of sparse 2D facial landmarks. In a number of embodiments, fitting can involve computing camera parameters R, t, f as well as shape parameters θ which minimize some energy function:”).

As per claim 20, this claim is similar in scope to limitations recited in claim 12, and thus is rejected under the same rationale.

Claims 4-7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sachs in view of Herz and further in view of Mednikov (Pub No. US 20200000552 A1).

As per claim 4, Sachs alone does not explicitly teach the claimed limitations.
However, Sachs in combination with Mednikov teaches the claimed: 4. The non-transitory computer readable medium of claim 1, wherein the selected frame is output to a first region of a display and the 3D model is output to a second region of the display. (Mednikov teaches both showing an image of a patient’s dentition being displayed along with a 3D model based on it to show correction. Mednikov in [0014]: “In some implementations, sending instructions to display at least one of the plurality of modified images includes sending instructions to simultaneously display a three-dimensional model corresponding to the at least one image's treatment stage. In some implementations, the modified image and the three-dimensional model are displayed adjacent to one another. In some implementations, the displayed three dimensional model includes one or more labels indicating a degree of malocclusion for one or more malocclusions identified in the patient's dentition. In some implementations, the displayed three dimensional model includes one or more labels indicating an amount of orthodontic correction to be applied to parts of the patient's dentition.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the display of both an image and a 3D model representation of that image as taught by Mednikov with the system of Sachs in order to display an image and its application to its 3D model for better data visualization and to see the accuracy of the 3D model, as well as to use that model to inform other images of the same subject.

As per claim 5, Sachs alone does not explicitly teach the claimed limitations. However, Sachs in combination with Mednikov teaches the claimed: 5. The non-transitory computer readable medium of claim 1, wherein at least a portion of the 3D model is overlaid on the image. (Mednikov fig. 11-12 show elements of a 3D model overlaid onto 2D images of the subject.
Mednikov in [0038]-[0039]: “FIG. 11 depicts a close up of a composite image of the 3D bite model and the 2D image of the patient with overlaid contours, in accordance with one or more embodiments herein; FIG. 12 depicts a composite image of the 3D bite model in a final position and the 2D image of the patient, in accordance with one or more embodiments herein;”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the overlay of the 3D model onto the image as taught by Mednikov with the system of Sachs in order to allow the 3D model to be modified or extrapolated and then have its results used to correct or compare the model with the original image.

As per claim 6, Sachs alone does not explicitly teach the claimed limitations. However, Sachs in combination with Mednikov teaches the claimed: 6. The non-transitory computer readable medium of claim 1, the operations further comprising: modifying the selected frame by replacing a dental site in the selected frame with at least a portion of the 3D model. (Mednikov fig. 30 shows overlaid teeth area of a 3D model over an original image, specifically the lower dental arch.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the replacing of a dental site in an image with a portion of the 3D model as taught by Mednikov with the system of Sachs in order to fill in gaps in that image.

As per claim 7, Sachs alone does not explicitly teach the claimed limitations. However, Sachs in combination with Mednikov teaches the claimed: 7. The non-transitory computer readable medium of claim 6, the operations further comprising: determining an area between lips of the individual in the selected frame; and replacing image data in the area with image data from the 3D model of the dental arc. (Mednikov in [0035]-[0036]: “FIG.
8 depicts the selection of reference points on the 2D image of a patent, in accordance with one or more embodiments herein; FIG. 9 depicts the selection of reference points on the 3D bite model of the patient's teeth, in accordance with one or more embodiments herein;” The patient’s teeth require knowledge of the location of the area between the lips of the individual in the image. The bite model coming from the 3D model is integrated with a 2D image. Mednikov [0037]: “FIG. 10 depicts the integration of the 3D bite model into the 2D image of the patient to create a composite image, in accordance with one or more embodiments herein;”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the replacement of image data with data from the 3D model as taught by Mednikov with the system of Sachs in order to replace missing parts of other images with the positioning calculated for the 3D model, as well as to compare the accuracy of the 3D model with the original image.

As per claim 16, this claim is similar in scope to limitations recited in claim 7, and thus is rejected under the same rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS JOHN FOSTER whose telephone number is (571)272-5053. The examiner can normally be reached Mon, Fri 8:30-6. Tues-Thurs 7:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /THOMAS JOHN FOSTER/Examiner, Art Unit 2616 /HAI TAO SUN/Primary Examiner, Art Unit 2616
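The limitation at the center of the new ground of rejection, splitting a video into segments in which jaw orientation stays within a threshold and then automatically picking a representative frame, can be sketched as follows. This is one reading of the claim language with invented function names and a per-frame jaw angle assumed as input; it is not Align's implementation or the method of the cited references:

```python
def segment_by_jaw_orientation(angles, threshold):
    """Group consecutive frame indices whose jaw angle deviates from the
    segment's first frame by less than `threshold` degrees (one reading of
    the 'deviates by less than a threshold' claim language)."""
    segments, current = [], [0]
    for i in range(1, len(angles)):
        if abs(angles[i] - angles[current[0]]) < threshold:
            current.append(i)
        else:
            segments.append(current)
            current = [i]
    segments.append(current)
    return segments


def representative_frame(segment, angles):
    """Pick the frame whose jaw angle is closest to the segment's median,
    a hypothetical stand-in for 'automatically selecting' a frame."""
    mid = sorted(angles[i] for i in segment)[len(segment) // 2]
    return min(segment, key=lambda i: abs(angles[i] - mid))


# Toy per-frame jaw angles (degrees): two stable poses, then an outlier.
angles = [0.0, 1.0, 2.0, 10.0, 11.0, 30.0]
segments = segment_by_jaw_orientation(angles, threshold=5.0)  # [[0,1,2],[3,4],[5]]
```

A real pipeline would first estimate the jaw angle per frame (e.g., via facial landmark tracking, as the cited Sachs reference does with its CNN/RNN landmark method) before this segmentation step.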

Prosecution Timeline

Mar 19, 2024
Application Filed
Nov 14, 2025
Non-Final Rejection — §103
Feb 02, 2026
Applicant Interview (Telephonic)
Feb 02, 2026
Examiner Interview Summary
Feb 05, 2026
Response Filed
Mar 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597097
INFORMATION PROCESSING DEVICE, MEASUREMENT SYSTEM, IMAGE PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12592031
IMAGE PROCESSING METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12586272
Methods and Systems for Transferring Hair Characteristics from a Reference Image to a Digital Image
2y 5m to grant Granted Mar 24, 2026
Patent 12586158
IMAGE SIGNAL PROCESSOR FOR A COMPOSITE CHROMINANCE IMAGE AND A COMPOSITE WHITE IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12586143
METHOD, DEVICE, AND PRODUCT FOR GPU CLUSTER
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 95%
With Interview: 99% (+7.1%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate

Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
