Prosecution Insights
Last updated: April 19, 2026
Application No. 18/373,102

THREE-DIMENSIONAL MODEL GENERATION FOR TUMOR TREATING FIELDS TRANSDUCER LAYOUT

Current Office Action: Non-Final, §103

Filed: Sep 26, 2023
Examiner: WELCH, DAVID T
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Novocure GmbH
OA Round: 3 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average) — 247 granted / 303 resolved; +19.5% vs Tech Center average
Interview Lift: +27.2% across resolved cases with an interview
Typical Timeline: 3y 2m average prosecution; 29 applications currently pending
Career History: 332 total applications across all art units

Statute-Specific Performance

§101: 11.6% (-28.4% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 303 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7, 9-12, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Juppe et al. (U.S. Patent Application Publication No. 2022/0237880), referred to herein as Juppe, in view of Aleem et al. (U.S. Patent Application Publication No. 2023/0252655), referred to herein as Aleem, and further in view of Parthasarathy et al. (U.S. Patent Application Publication No. 2016/0143621), referred to herein as Parthasarathy.
Regarding claim 1, Juppe teaches a computer-implemented method for generating a three-dimensional (3D) composite model of a region of a subject, the method comprising: generating a 3D clinical model of the region of the subject based on one or more images of the region of the subject (paragraph 71, lines 1-12; 3D point cloud representation generated based on images/video of a subject); obtaining a 3D generic model of the region of a generic subject (paragraph 72, lines 1-10; 3D model template of the generic version of the subject); combining the 3D clinical model and the 3D generic model using an affine transformation and a squeezing transformation of the 3D generic model to obtain the 3D composite model of the subject (paragraph 74; the 3D generic template is combined with the 3D point cloud representation using an affine transformation [e.g. the translation and/or rotation] and a squeezing transformation [e.g. the alignment – see applicant’s specification, page 18, which states that squeezing is a transformation to match the 3D generic model to the 3D clinical model]); displaying the 3D composite model on a display (paragraph 59; smartphones such as iPhone™ and Galaxy™ comprise a display and generate the resulting 3D representation; this is also further expanded upon in the discussion of Aleem, below).

Juppe further teaches transformations that could be reasonably interpreted as “bending” (see, for example, the transformations disclosed in paragraph 77), although a bending transformation is not explicitly discussed.
However, in a similar field of endeavor, Aleem teaches a method for generating a 3D composite model of a region of a subject by combining a 3D clinical model of the subject and a 3D generic model of a region of the subject (fig 4D and figs 5; fig 6; paragraph 44, lines 1-15; paragraph 45, lines 3-10; paragraph 51), and displaying the combined image on a display (figure 7, display 716, 754, etc.; paragraph 59, lines 1-9; paragraph 67, lines 1-8), wherein combining the 3D clinical and generic models comprises using a bending transformation (paragraph 46, lines 1-22). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the bending transformation with the transformations performed in Juppe, because as taught by Aleem, this helps to more accurately align the models such that a higher quality combined image may be generated (see, for example, Aleem, paragraph 46, the last 14 lines). Juppe in view of Aleem does not teach displaying at least one transducer array position on the 3D composite model. 
However, in a similar field of endeavor, Parthasarathy teaches a method for generating a 3D composite model of a region of a subject, comprising generating a 3D clinical model of the region based on images of the region, and combining the 3D clinical model with additional generic 3D model information to obtain a 3D composite model (paragraph 44, lines 1-3; paragraph 46, lines 1-12; paragraph 59), and further comprising displaying at least one transducer array position on the 3D composite model (paragraph 60, the last 13 lines; paragraphs 62 and 71). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the transducer array display with the 3D anatomical modeling of Juppe in view of Aleem because this helps capture very precise 3D images in an easy, efficient, optimal manner, which can be especially helpful when obtaining 3D models in Juppe’s physical condition assessment application (see, for example, Parthasarathy, paragraphs 11 and 14; paragraph 54).

Regarding claim 2, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 1, wherein the affine transformation comprises: translating the 3D generic model to the 3D clinical model; rotating the 3D generic model to align with the 3D clinical model; and scaling the 3D generic model to align with the 3D clinical model (Juppe, paragraph 74, lines 6-16; paragraph 77; aligning the 3D model template and 3D point cloud representation using translation, rotation, and scaling).
Regarding claim 3, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 2, wherein translating the 3D generic model to the 3D clinical model comprises: identifying a center of the 3D clinical model; identifying a center of the 3D generic model; and translating the 3D generic model so that the center of the 3D generic model overlaps the center of the 3D clinical model (Juppe, paragraph 74, lines 9-12; translating the center of the 3D model template to align with the center of the 3D point cloud representation).

Regarding claim 7, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 1, wherein the squeezing transformation comprises: transforming the 3D generic model to match the 3D clinical model (Juppe, paragraph 74; alignment of the 3D generic template to match the 3D point cloud representation).

Regarding claim 9, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 1, further comprising generating one or more recommended transducer array positions for one or more transducer arrays on the 3D clinical model for applying tumor treating fields, wherein displaying at least one transducer array position on the 3D composite model comprises displaying at least one of the recommended transducer array positions on the 3D composite model on the display (Parthasarathy, paragraph 60, the last 13 lines; paragraph 61, the last 8 lines; paragraphs 62 and 68; paragraph 71; the motivation to combine is substantially similar to that discussed in the rejection of claim 1, above).

Regarding claim 10, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 1, wherein the region of the subject is a head of the subject (Aleem, figs 4 and 5; paragraph 44; the motivation to combine Aleem is substantially similar to that discussed in the rejection of claim 1, above).
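The affine pipeline recited in claims 2, 3, and 7 (translate centers to overlap, rotate, scale, then squeeze to match) can be sketched with NumPy. This is an illustrative reconstruction, not code from Juppe or any other cited reference; the function name, the externally supplied rotation angle, and the per-axis squeeze factors are assumptions.

```python
import numpy as np

def align_generic_to_clinical(generic_pts, clinical_pts, angle_z=0.0,
                              squeeze=(1.0, 1.0, 1.0)):
    """Translate, rotate, scale, then squeeze a generic point set toward a clinical one."""
    # Translate: overlap the center of the generic model with the clinical center (claim 3).
    g = generic_pts - generic_pts.mean(axis=0)
    c_center = clinical_pts.mean(axis=0)

    # Rotate about the Z-axis by a supplied angle (in practice solved from landmarks).
    cos_a, sin_a = np.cos(angle_z), np.sin(angle_z)
    Rz = np.array([[cos_a, -sin_a, 0.0],
                   [sin_a,  cos_a, 0.0],
                   [0.0,    0.0,   1.0]])
    g = g @ Rz.T

    # Scale: match the overall extent of the clinical model (isotropic).
    scale = np.linalg.norm(clinical_pts - c_center) / np.linalg.norm(g)
    g = g * scale

    # Squeeze: per-axis (anisotropic) scaling to match the clinical proportions (claim 7).
    g = g * np.asarray(squeeze)

    return g + c_center
```

Because every step after centering preserves a zero mean, the returned points are centered exactly on the clinical model regardless of the rotation or squeeze chosen.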
Regarding claim 11, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 1, wherein the region of the subject is a torso of the subject (Juppe, fig 16; paragraph 61, lines 5-12; paragraph 71, lines 1-12 and the last 4 lines; paragraph 87, lines 1-9).

Regarding claim 12, the limitations of this claim substantially correspond to the limitations of claim 1 (except for the apparatus comprising processors executing instructions stored in memory, which is taught by Juppe, paragraph 54); thus they are rejected on similar grounds as those articulated in the rejection of claim 1.

Regarding claim 18, the limitations of this claim substantially correspond to the limitations of claims 1 and 9 (except for the computer and medium comprising instructions to perform a method, which is taught by Juppe, paragraph 54); thus they are rejected on similar grounds as claims 1 and 9, above.

Regarding claim 19, Juppe in view of Aleem, further in view of Parthasarathy teaches the non-transitory computer-readable medium of claim 18, wherein a surface of the 3D clinical model comprises a plurality of meshes, wherein a surface of the 3D generic model comprises a plurality of meshes (Juppe, paragraphs 72 and 73; see also Aleem, paragraph 48, the last 11 lines), wherein combining the 3D clinical model and the 3D generic model comprises deforming the meshes of the 3D generic model in accordance with the meshes of the 3D clinical model (Juppe, paragraphs 74 and 77; the mesh 3D models are deformed to combine the models).

Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Juppe, in view of Aleem, further in view of Parthasarathy, and further in view of Berlin et al. (U.S. Patent No. 11,308,657), referred to herein as Berlin.
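The claim 19 limitation of deforming the generic model's mesh in accordance with the clinical model's mesh can be illustrated with a minimal nearest-neighbor blend. This is a hypothetical sketch, not an implementation from any cited reference; a real system would use a spatial index (e.g. a k-d tree) and regularization rather than a brute-force distance matrix.

```python
import numpy as np

def deform_toward_clinical(generic_verts, clinical_verts, strength=0.5):
    """Move each generic mesh vertex part-way toward its nearest clinical vertex."""
    # Pairwise distances between the two vertex sets (fine for small meshes only).
    d = np.linalg.norm(generic_verts[:, None, :] - clinical_verts[None, :, :], axis=2)
    nearest = clinical_verts[d.argmin(axis=1)]
    # Blend: strength=0 keeps the generic mesh, strength=1 snaps onto the clinical one.
    return (1.0 - strength) * generic_verts + strength * nearest
```

The `strength` parameter makes the deformation gradual, which is one common way to keep the combined mesh smooth.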
Regarding claim 4, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 2, wherein rotating the 3D generic model to align with the 3D clinical model comprises: identifying a location in the 3D clinical model; identifying a location in the 3D generic model; and rotating the 3D generic model so that the location in the 3D generic model overlaps the location in the 3D clinical model (Juppe, paragraph 74, lines 1-16; aligning the 3D model with the point cloud representation by rotating so that they overlap).

As shown above, Juppe in view of Aleem, further in view of Parthasarathy teaches rotation of the 3D models. Juppe in view of Aleem, further in view of Parthasarathy further teaches aligning the 3D models in consideration of eye position/location (see, for example, Aleem, figs 4 and 5; paragraph 45, lines 1-11; paragraph 46, the last 14 lines; the motivation to combine Aleem is substantially similar to that discussed in the rejection of claim 1, above). However, Juppe in view of Aleem, further in view of Parthasarathy does not explicitly teach identifying an eye location in the clinical model and in the generic model, and rotating the generic model so that the eye locations overlap.

However, in a similar field of endeavor, Berlin teaches obtaining generic and clinical model images and applying transformation to align the models (column 14, lines 11-21; the source image may be the generic model, and the destination image may be the clinical model), wherein the alignment comprises identifying an eye location in the clinical model and in the generic model, and rotating the generic model so that the eye locations overlap (column 14, lines 21-32; the source and destination models are aligned/overlapped by rotating to match the respective eye positions).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the eye location alignment with the alignment of Juppe in view of Aleem, further in view of Parthasarathy because this enables a better analysis of the differences between the models such that the resulting alignment is more accurate (see, for example, Berlin, column 14, lines 17-21).

Regarding claim 6, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 1, wherein the bending transformation comprises: transforming a location of the 3D generic model to match a location of the 3D clinical model (Juppe, paragraph 74, lines 1-16; aligning the 3D model with the point cloud representation).

As shown above, Juppe in view of Aleem, further in view of Parthasarathy teaches transforming the location of the 3D models. Juppe in view of Aleem, further in view of Parthasarathy further teaches transforming the 3D models in consideration of eye position/location (see, for example, Aleem, figs 4 and 5; paragraph 45, lines 1-11; paragraph 46, the last 14 lines; the motivation to combine Aleem is substantially similar to that discussed in the rejection of claim 1, above). However, Juppe in view of Aleem, further in view of Parthasarathy does not explicitly teach transforming an eye location of the generic model to match an eye location of the clinical model without moving ear positions of the generic model.
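Such a transformation, moving an eye landmark while leaving the ears essentially fixed, can be sketched as a falloff-weighted displacement field. This is a hypothetical illustration, not drawn from Berlin or any other cited reference; the function name, the Gaussian falloff, and the `radius` parameter are assumptions.

```python
import numpy as np

def bend_feature(verts, feature_src, feature_dst, radius):
    """Pull the region around feature_src toward feature_dst with a Gaussian falloff,
    so vertices more than a few radii away (e.g. the ears) barely move."""
    feature_src = np.asarray(feature_src, dtype=float)
    offset = np.asarray(feature_dst, dtype=float) - feature_src
    dist = np.linalg.norm(verts - feature_src, axis=1)
    weight = np.exp(-(dist / radius) ** 2)   # ~1 at the feature, ~0 far away
    return verts + weight[:, None] * offset
```

A vertex at the feature moves by the full offset, while a vertex ten radii away moves by a factor of roughly exp(-100), i.e. not at all in practice.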
However, in a similar field of endeavor, Berlin teaches obtaining generic and clinical model images and applying transformation to align the models (column 14, lines 11-21; the source image may be the generic model, and the destination image may be the clinical model), wherein the transformation comprises transforming an eye location of the generic model to match an eye location of the clinical model (column 14, lines 21-32; the source and destination models are aligned/overlapped by rotating to match the respective eye positions) without moving ear positions of the generic model (column 17, lines 14-21; ears and other non-eye parts may be excluded from the transformation).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the eye location transformation with the transformation of Juppe in view of Aleem, further in view of Parthasarathy because as taught by Berlin, this enables a better analysis of the differences between the models such that the resulting alignment is more accurate (see, for example, Berlin, column 14, lines 17-21).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Juppe, in view of Aleem, further in view of Parthasarathy, and further in view of Gold et al. (U.S. Patent No. 11,983,834), referred to herein as Gold.
Regarding claim 5, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 2, wherein scaling the 3D generic model to align with the 3D clinical model comprises: scaling the 3D generic model so that one region of the 3D generic model aligns with one region of the 3D clinical model (Juppe, paragraph 74, lines 6-16; paragraph 77; aligning the 3D model template and 3D point cloud representation using translation, rotation, and scaling); and scaling the 3D generic model so that an eye region of the 3D generic model aligns with an eye region of the 3D clinical model (Juppe, paragraph 74, lines 6-16; paragraph 77; aligning the 3D model template and 3D point cloud representation using translation, rotation, and scaling; Aleem, figs 4 and 5; paragraph 45, lines 1-11; paragraph 46, the last 14 lines; the motivation to combine Aleem is substantially similar to that discussed in the rejection of claim 1, above).

Juppe in view of Aleem, further in view of Parthasarathy teaches that consideration for ear position may be taken into account (see, for example, Aleem, paragraph 34) but does not teach scaling the 3D generic model so that an ear region of the 3D generic model aligns with an ear region of the 3D clinical model.

However, in a similar field of endeavor, Gold teaches a method for aligning a 3D generic model with a 3D clinical model using affine transformation (column 9, lines 49-67; column 37, lines 9-13; a 3D feature model and a 3D root model are aligned), wherein the transformation comprises scaling the 3D generic model so that an ear region of the 3D generic model aligns with an ear region of the 3D clinical model (figures 20 and 22; column 10, lines 25-36 and 41-43; column 37, lines 9-13, 20-23, 31-40, and 48-54; identified regions, such as the ear, are scaled to align the feature and root 3D models).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the ear alignment of Gold with the alignment of Juppe in view of Aleem, further in view of Parthasarathy because as taught by Gold, this can simplify the alignment process while maintaining or increasing the high quality of the resulting composite image (see, for example, Gold, column 1, lines 36-44 and 54-60; column 2, lines 4-9).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Juppe, in view of Aleem, further in view of Parthasarathy, and further in view of Saphier et al. (U.S. Patent Application Publication No. 2023/0068727), referred to herein as Saphier.

Regarding claim 8, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 1, further comprising: performing surface fitting on the 3D composite model, wherein the surface fitting comprises a transformation (Juppe, paragraph 58; paragraph 74, lines 6-16). Juppe in view of Aleem, further in view of Parthasarathy does not explicitly teach that the surface fitting comprises at least one of interpolation or extrapolation.

However, in a similar field of endeavor, Saphier teaches a method for aligning clinical and generic 3D models using transformations to generate a composite 3D model (paragraph 199, lines 1-8; paragraph 200) and displaying the 3D models to the user (paragraph 201), wherein the method comprises surface fitting on the 3D composite model, wherein the surface fitting comprises at least one of interpolation or extrapolation (paragraph 199, the last 10 lines).
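The interpolation component of surface fitting can be illustrated on a single 1D scan line of a surface using NumPy's `np.interp`. The sample values are invented for illustration; note that `np.interp` only interpolates, and outside the known range it clamps to the endpoint values rather than extrapolating.

```python
import numpy as np

# Heights sampled along one scan line of a surface, with gaps at x = 2.0 and x = 3.5.
x_known = np.array([0.0, 1.0, 3.0, 4.0])
z_known = np.array([0.0, 1.0, 3.0, 4.0])   # known surface heights at those positions

x_query = np.array([2.0, 3.5])
z_filled = np.interp(x_query, x_known, z_known)   # linear interpolation of the gaps
```

A full 3D surface fit would interpolate over two parameters (or over mesh coordinates), but the principle of filling unsampled positions from neighboring samples is the same.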
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the interpolation of Saphier with the surface fitting of Juppe in view of Aleem, further in view of Parthasarathy because as taught by Saphier, this helps to create a more complete and accurate composite 3D model, thus improving the end result of the alignment process (see, for example, Saphier, paragraph 7).

Claims 13, 14, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Juppe, in view of Aleem, further in view of Parthasarathy, and further in view of Chen (U.S. Patent No. 12,183,035), referred to herein as Chen.

Regarding claim 13, Juppe in view of Aleem, further in view of Parthasarathy teaches the apparatus of claim 12, wherein the 3D clinical model and the 3D generic model each comprise: a center; an X-axis; a Y-axis orthogonal to the X-axis, intersecting the center; and a Z-axis orthogonal to the X-axis and the Y-axis and intersecting the center (Juppe, paragraphs 75 and 76; each model inherently has a center; the three main axes are used for alignment in Juppe, and as known in the art, are inherently orthogonal to one another and meet at the center).

Juppe in view of Aleem, further in view of Parthasarathy teaches that consideration for ear position may be taken into account, and processes models that include the head and ears (see, for example, Aleem, figs 4 and 5; paragraph 34). Juppe in view of Aleem, further in view of Parthasarathy does not explicitly teach that the 3D models comprise an X-axis intersecting a left ear fiducial position and a right ear fiducial position; and a Y-axis between a front and a back of the head.
However, in a similar field of endeavor, Chen teaches an apparatus for obtaining a generic 3D model and clinical 3D model, and performing affine transformation operations to combine the models into a composite 3D model (column 3, lines 3-15; column 7, lines 23-27; column 9, lines 29-43), wherein the 3D models comprise an X-axis intersecting a left ear fiducial position and a right ear fiducial position, and a Y-axis between a front and a back of the head (fig 3A; column 8, lines 18-24).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the axis alignment of Chen with the alignment of Juppe in view of Aleem, further in view of Parthasarathy because as taught by Chen, this provides a simple, fast, and accurate way to generate the 3D model, without sacrificing realism and quality of the resulting model (see, for example, Chen, column 1, lines 28-39).

Regarding claim 14, Juppe in view of Aleem, further in view of Parthasarathy, and further in view of Chen teaches the apparatus of claim 13, wherein the affine transformation of the 3D generic model comprises: overlapping the center of the 3D generic model with the center of the 3D clinical model (Juppe, paragraph 74, lines 6-12); and rotating the 3D generic model around the X-axis to place a position equidistant between a left eye fiducial position and a right eye fiducial position of the 3D generic model on an x-y plane (Juppe, paragraph 74, lines 12-16; Chen, column 10, lines 4-11, 17-25, and 37-47; the motivation to combine Chen is substantially similar to that discussed in the rejection of claim 13).
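The coordinate frame recited in claim 13, with an X-axis through the ear fiducials, a Y-axis running front-to-back, and a Z-axis orthogonal to both, can be constructed from four fiducial points. This is an illustrative sketch under the assumption that left/right ear and front/back fiducials are available; the function and parameter names are hypothetical.

```python
import numpy as np

def head_axes(left_ear, right_ear, front, back):
    """Build an orthonormal, right-handed head frame: X through the ear fiducials,
    Y front-to-back, Z orthogonal to both."""
    left_ear = np.asarray(left_ear, dtype=float)
    right_ear = np.asarray(right_ear, dtype=float)
    center = (left_ear + right_ear) / 2.0

    x = right_ear - left_ear
    x /= np.linalg.norm(x)                       # unit X-axis through the ears

    y = np.asarray(front, dtype=float) - np.asarray(back, dtype=float)
    y -= y.dot(x) * x                            # remove any component along X
    y /= np.linalg.norm(y)                       # unit Y-axis, front-to-back

    z = np.cross(x, y)                           # orthogonal to both by construction
    return center, x, y, z
```

Projecting out the X component before normalizing Y guarantees the axes are exactly orthogonal even when the fiducials are noisy.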
Regarding claim 16, Juppe in view of Aleem, further in view of Parthasarathy, and further in view of Chen teaches the apparatus of claim 13, wherein the bending transformation of the 3D generic model comprises: bending the 3D generic model in accordance with the 3D clinical model at the X-axis (Aleem, paragraph 46, lines 1-22; the motivation to combine Aleem is substantially similar to that discussed in the rejection of claim 1), wherein after bending the 3D generic model, a front position of the 3D generic model is on the Y-axis, wherein the front position of the 3D generic model is a position equidistant between a left eye fiducial position and a right eye fiducial position of the 3D generic model (Aleem, paragraph 42, bending transformation along the [inherent] x-axis, among others; the motivation to combine Aleem is substantially similar to that discussed in the rejection of claim 1; Chen, fig 3A; column 8, lines 18-24 and 29-39; column 10, lines 17-25 and 30-47; after transformation the front position of the 3D model is a position equidistant between the left and right eye positions; the motivation to combine Chen is substantially similar to that discussed in the rejection of claim 13).

Regarding claim 17, Juppe in view of Aleem, further in view of Parthasarathy, and further in view of Chen teaches the apparatus of claim 13, wherein the squeezing transformation of the 3D generic model comprises: squeezing the 3D generic model in accordance with the 3D clinical model at the X-axis (Juppe, paragraph 74; the 3D generic template is combined with the 3D point cloud representation using a squeezing transformation along the [inherent] x-axis, among others [e.g. the alignment – see applicant’s specification, page 18, which states that squeezing is a transformation to match the 3D generic model to the 3D clinical model]).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Juppe, in view of Aleem, further in view of Parthasarathy, and further in view of Averbuch (U.S. Patent Application Publication No. 2009/0156951), referred to herein as Averbuch.

Regarding claim 21, Juppe in view of Aleem, further in view of Parthasarathy teaches the computer-implemented method of claim 1, wherein the bending transformation is an affine transformation, wherein the squeezing transformation is an affine transformation (Juppe, paragraph 74; the 3D generic template is combined with the 3D point cloud representation using an affine transformation [e.g. the translation and/or rotation] and a squeezing transformation [e.g. the alignment – see applicant’s specification, page 18, which states that squeezing is a transformation to match the 3D generic model to the 3D clinical model]; Aleem, paragraph 46, lines 1-22; the motivation to combine Aleem is substantially similar to that discussed in the rejection of claim 1). Juppe in view of Aleem, further in view of Parthasarathy does not teach that the transformations are second order transformations.

However, in a similar field of endeavor, Averbuch teaches a medium storing instructions to perform a method, comprising generating a 3D clinical model of a subject based on images of the subject, obtaining a generic model of a generic subject, and matching the models by performing transformations (paragraph 34; paragraph 35, lines 1-8; paragraph 37; paragraphs 46-49), wherein the transformations are second order transformations (paragraph 53; paragraphs 56 and 57).
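A second-order transformation of the kind Averbuch is cited for can be sketched as a least-squares fit of a quadratic polynomial mapping between point sets. This is illustrative only; Averbuch's actual formulation may differ, and the function names and the ten-term design matrix are assumptions.

```python
import numpy as np

def _design(pts):
    """Second-order design matrix: [1, x, y, z, x^2, y^2, z^2, xy, xz, yz] per point."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    return np.column_stack([np.ones_like(x), x, y, z,
                            x * x, y * y, z * z, x * y, x * z, y * z])

def fit_second_order(src, dst):
    """Least-squares fit of a quadratic polynomial mapping src points onto dst points."""
    coef, *_ = np.linalg.lstsq(_design(src), dst, rcond=None)  # (10, 3) coefficients
    return coef

def apply_second_order(coef, pts):
    return _design(pts) @ coef
```

The extra quadratic terms are what give a second-order fit more degrees of freedom than an affine one, matching the stated motivation of a better fit between the models.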
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the second order transformations of Averbuch with the transformations of Juppe in view of Aleem, further in view of Parthasarathy because this helps improve the accuracy of the models by increasing the degrees of freedom of the transformations, thereby providing a better fit for the models (see, for example, Averbuch, paragraph 16; paragraph 51, lines 1-3).

Allowable Subject Matter

Claim 15 remains objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant’s arguments with respect to the claim objection (Remarks at 9) have been fully considered and are persuasive. The amendments have overcome this objection; thus it is withdrawn. Applicant’s arguments with respect to the prior art rejections (Remarks at 10-14) have been fully considered, but they are moot in view of the new grounds of rejection presented above. Although Juppe and Aleem do not appear to teach the amended limitations regarding displaying transducer array positions, it is respectfully submitted that these limitations are disclosed by Parthasarathy, as discussed above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID T WELCH whose telephone number is (571)270-5364. The examiner can normally be reached Monday-Thursday, 8:30-5:30 EST, and alternate Fridays, 9:00-2:30 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

DAVID T. WELCH
Primary Examiner
Art Unit 2613

/DAVID T WELCH/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Sep 26, 2023 — Application Filed
May 21, 2025 — Non-Final Rejection — §103
Nov 12, 2025 — Response Filed
Nov 26, 2025 — Final Rejection — §103
Feb 13, 2026 — Response after Non-Final Action
Feb 27, 2026 — Request for Continued Examination
Mar 02, 2026 — Response after Non-Final Action
Mar 18, 2026 — Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602742 — IMAGE PROCESSING APPARATUS, BINARIZATION METHOD, AND NON-TRANSITORY RECORDING MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602842 — TEXTURE GENERATION USING MULTIMODAL EMBEDDINGS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12592048 — System and Method for Creating Anchors in Augmented or Mixed Reality
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12579734 — METHOD FOR RENDERING VIEWPOINTS AND ELECTRONIC DEVICE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12573119 — APPARATUS AND METHOD FOR GENERATING SPEECH SYNTHESIS IMAGE
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+27.2%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 303 resolved cases by this examiner. Grant probability derived from career allow rate.
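As the footnote states, the headline grant probability is simply the examiner's career allow rate. A quick check of the arithmetic, assuming the dashboard rounds to the nearest whole percent:

```python
granted, resolved = 247, 303            # from the examiner's career history above
career_allow_rate = granted / resolved  # 0.8152..., reported as 82%
print(round(career_allow_rate * 100))   # prints 82
```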
